Richard,
thank you for inviting me to explain more about the S-C-S mantra. I hope that Pedro and the readers
of this chat are not bored.
Your question:
> Karl, this is interesting to me. Maybe the principle I seek is your
> "sequential-commutative-sequential mantra." I'm sure you know that
> it is not friendly to Crick's central dogma, but that may not mean it
> is necessarily wrong. Just exactly how this principle is implemented
> by a "liquid-crystalline micelle" (Stan) is not made clear to me by
> invoking the S-C-S mantra. It merely says that genetic information
> can be written and rewritten. But how? By natural selection or
> random genetic drift? Maybe. But this does not explain the origin of
> a coded genetic language.
The points you raise:
* That I am not friendly to Crick.
Not that I know of. I don't speak about the why, the motivation, etc. I just happen to say, here in
congruence with Arno, that what we can discuss are our pictures of reality. Our pictures are
half-pictures. We still don't know reality itself (and Arne rightly points out that in an exact
sense, we cannot anyway), but adjusting the visor may help to approximate reality better.
To my knowledge, Crick's discovery was THAT it is the DNA and that it is built thus. I never knew
whether or what he speculated about a why. If he did, I can be neither friendly nor unfriendly to
his views, because I don't discuss these questions. I just talk about the grammar of the logical
language, as Mr. W. said it is polite to do.
* How this principle is implemented
Well, this is a long and boring story about certainties. You can only be certain about a conclusion
if you write down the premises. The premises we use to generate certainties are the properties of
two or more concurrently existing additions. This is like what we call here "Rasterfahndung"
(dragnet screening), and what you might call filtered targeting. You lay categories over the
population and check who falls within that raster (e.g. young, unemployed, angry, dissatisfied,
talented, etc.). Here, we transfer the properties from the margins onto the center of the net:
whatever we fish out is like this. This is a type.
Now we generalise the method and use Rasterfahndung with arbitrary target groups (possessing
whatever properties). We see that the category structure is identical with the method of writing up
a sum.
Now we use the cuts and not the stitch, and categorise the most common piece cut by method 1
together with the most common piece cut by method 2.
In an example:
We search for the most typical inhabitant of a city. We know that the group he will be in is the
strongest with respect to age, income and weight. In whichever way we dissect the categories of
weight, there will be one most frequent /most probable/ class among them. The same with income and
age.
We are sure that if we pick a random person, the single most likely outcome is a person who carries
the label "most frequent" in weight, income and age. This is the only thing we can be certain of.
If we manipulate the selection criteria, we gain certainty over the results of the sample. The
selection categories are like the summands of an addition. Somehow I always arrive at n, even if I
categorise the age groups as 0-5, 6-59, 60+. It is always an addition, and this addition will
always have a summand that is the biggest. This is the most common category.
So I can make certainties without end, for personal use, by finding the results of my searches. I
am absolutely sure that the whole population is distributed over the cells of the matrix (age
categories times income categories times weight categories); I just don't know how many of them are
in which cell. This technique is widely used in applied statistics and by the software package SPSS.
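A minimal sketch of this raster cross-tabulation in Python. The category labels and the uniform
random population are illustrative assumptions, not taken from any real city:

```python
from collections import Counter
import random

random.seed(1)

# Hypothetical category schemes; the labels are illustrative only.
ages = ["0-5", "6-59", "60+"]
incomes = ["low", "mid", "high"]
weights = ["light", "average", "heavy"]

# A made-up population of 10,000 inhabitants.
population = [
    (random.choice(ages), random.choice(incomes), random.choice(weights))
    for _ in range(10_000)
]

# Each marginal categorisation is an "addition": its summands add up to n.
age_counts = Counter(p[0] for p in population)
assert sum(age_counts.values()) == len(population)

# The full raster: every inhabitant falls into exactly one cell.
cells = Counter(population)
most_typical, count = cells.most_common(1)[0]
print("most typical inhabitant:", most_typical, "in a cell of", count)
```

The whole population always lands somewhere in the 3 x 3 x 3 matrix; only the per-cell counts are
unknown in advance, which is exactly the point of the passage above.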
We now generalise the concurrent filtering and speak of carriers of symbols whom we pick out of a
multitude. The person we pick will surely carry k symbols, and i of these will say that he belongs
to the most common category.
The number k being the result of a surprisingly slow but steep, highly asymmetric function f(n),
with n growing having almost no effect on k growing, we can be sure that the individual we pick
will lie within a given range of numerosity.
I have to add, as an excursus, that as we co-count the cuts with the stitches, we only have some
E96 logical statements, and not E97, to play with when we look into the statistics of 67 objects
carrying symbols. The requirement that the sentence be true in both logical languages, the
continuity-based and the diversity-based algebras, brings forth that we do not count above 136,
because the two become too much desynchronised and the numeric object-stability is no longer given.
(One needs it to be able to count back from the number of logical relations to the minimal number
of objects these are realised on.) So we operate within a very restricted set.
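The parenthetical counting-back step can be made concrete if one reads "logical relations" as
unordered pairs among n objects. That reading is my assumption, not stated above; under it, the
minimal object count is just the inverse of n(n-1)/2:

```python
import math

def min_objects_for_relations(r):
    """Smallest n with n*(n-1)/2 >= r, i.e. the minimal number of
    objects on which r pairwise relations can be realised.
    (Reading 'logical relations' as unordered pairs is an assumption.)"""
    if r <= 0:
        return 0
    # ceil of the positive root of n^2 - n - 2r = 0
    n = math.ceil((1 + math.sqrt(1 + 8 * r)) / 2)
    while (n - 1) * (n - 2) // 2 >= r:  # step back if the ceiling overshot
        n -= 1
    return n

# 67 objects realise 67 * 66 / 2 = 2211 pairwise relations,
# and 2211 relations need at least 67 objects:
print(min_objects_for_relations(2211))  # -> 67
```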
We can fabricate a simple cross-tabulation where we write on the rows the summands that the logical
statements (which are like the inhabitants) are a part of, and on the columns likewise. So we are
most positively sure that the cell in the most numerous row and in the most numerous column will
have the most entries. This is the surest result we have in this tabulation. These will carry
symbols that say "I belong to the most usual summand" in i cases.
The next best populated cell will be that of the most frequent summand in the rows and the
second-most frequent summand in the columns. These carry the symbols "I am concurrently in the
most frequent summand in addition A and in the second-most frequent summand in addition B".
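A toy version of this cell ranking. Note that the claim holds under an independence assumption
(expected cell count proportional to the product of the marginals); the two marginal frequency
tables below are made-up illustrations:

```python
from itertools import product

# Two marginal "additions" A and B: summand label -> frequency.
# The numbers are illustrative; any two frequency tables would do.
addition_a = {"a1": 40, "a2": 30, "a3": 20, "a4": 10}
addition_b = {"b1": 50, "b2": 35, "b3": 15}

# Under independence, the expected count in a cell is proportional
# to the product of its row and column marginals.
expected = {
    (ra, rb): addition_a[ra] * addition_b[rb]
    for ra, rb in product(addition_a, addition_b)
}

ranking = sorted(expected, key=expected.get, reverse=True)
print(ranking[0])  # ('a1', 'b1'): most frequent row x most frequent column
print(ranking[1])  # ('a2', 'b1'): one marginal mode paired with a runner-up
```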
Now comes the numeric surprise. It is easier than you thought, folks, said the accountant, and
turned to the numbers themselves.
It just happens (by using the n! and the n? functions concurrently) that a mass overhang appears
with center 67 on N. 67 again connects to /if I remember correctly/ 16 as by far the most probable
number of its summands. So the average summand is about 4, which again agrees with 4 being the
basic unit /the difference between two consecutive units/ on M, the number line of diversity. But
not every summand is 4, so we have types of symbol carriers, of whom some say "I am a perfectly
regular 4,4,4,4,3,5, etc." and some say "I am a somewhat over-wide, under-long 5,6,5,4,5, etc.".
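Whether 16 really is the most probable number of summands of 67 can be checked with the standard
partition recurrence p(n, k) = p(n-1, k-1) + p(n-k, k). The sketch below counts partitions of n by
their number of parts and reports the mode; I leave the value itself to the computation rather than
vouch for the remembered 16:

```python
def parts_distribution(n):
    """dist[k] = number of partitions of n into exactly k summands,
    via the recurrence p(m, k) = p(m-1, k-1) + p(m-k, k)."""
    table = [[0] * (n + 1) for _ in range(n + 1)]
    table[0][0] = 1  # the empty partition of 0 has 0 parts
    for m in range(1, n + 1):
        for k in range(1, m + 1):
            # either remove 1 from every part (smallest part was 1),
            # or subtract 1 from each of the k parts
            table[m][k] = table[m - 1][k - 1] + table[m - k][k]
    return table[n]

dist = parts_distribution(67)
mode_k = max(range(1, 68), key=lambda k: dist[k])
print("most probable number of summands of 67:", mode_k)
print("average summand in that case:", 67 / mode_k)
```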
The actual reasoning is quite different from this and has to do with increasing redundancy of the
symbols and so on, but the general idea is that there exist types: the absolutely mainstream, the
second-most absolutely mainstream, and so forth.
The enumeration of this is no small task, but it is conceptually no big deal, as we have a perfect
link to how many logical statements make up a logical object. We know how many logical statements
we have altogether in a maximally structured set, and we look for types of certainties.
These, which I'd like to call logical archetypes, come in varieties. There can be as many varieties
as there are cuts. Indeed, after repeated filtering you arrive at a density threshold below or
above which you do or do not find certainly existing subsets. The logical categories imposed by the
cuts bind the behaviour of the types. If the symbol a carrier bears says that it is in the most
common category, it cannot be next /in the same subset/ to one that says that IT is in the most
common category /maybe in a different respect/. The multidimensional summands have
attachment-repulsion properties like a Nintendo game's actors, tools and situations, just clearer
and simpler, because they obey only the numeric system's whims as their programmer.
After this, it is only Lego.
The main thing to understand, Richard, is:
* logical statements densify into logical objects;
* these densifications have properties of rarity or commonness;
* there is but a limited number of types of these;
* and their numeric properties are a bit shadowy but can be figured and counted out.
Then you get a logical Lego to build organisms with. Always check that the cut-algebra matches the
stitch-algebra and all will be fine and exciting.
Hope to have made a nice contribution to your Xmas.
>
> Best regards to all,
>
> Richard
_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis
Received on Wed Nov 8 20:12:53 2006