[Fis] Re: about meaning (FIS)

From: <[email protected]>
Date: Wed 15 Oct 2003 - 08:09:23 CEST

Hi Sakari:
Some comments

> >
> >> >CM: More precisely, I would say that matter is S/I (presence vs lack of
> >> >matter is energy variation). And information can be meaningful or not.
> >> >Meaningfulness being understood relatively to the constraint of a system.
> >> >(small flying insect is meaningful information for a frog, not for a
> >> >mouse - meaning is there vs survival constraint, finding food -).
> >>
> >> SA: Would you say that constraint = a (perceived) difference that has
> >> meaning for the entity?
> >>
> >CM: I'm afraid I don't understand what you mean here.
> >Could you please reword?
>
> SA: Can we say that a constraint is a situation where there is a
> difference that has a meaning for the system involved?
>
CM: Yes, one can consider the constraint as meaningful information
for other elements of the MGS. But this generates a regression of
meaning generation to levels of complexity lower than that of the
system; i.e. in the case of basic life as the MGS, we would have
to consider meaning generation for inert elements.
This will of course have to be done, but I just don't feel ready to
deal with this aspect of the subject right now.
>
>
> >> >CM: Regarding "meaning", I am careful to separate humans (and
> >> >human-built artefacts) from non-human life. This is because we do not
> >> >know the nature of the human (the "hard problem"). So I feel it may be
> >> >misleading to define a meaning vs constraints we do not know (constraints
> >> >for humans are very interesting subjects on which psychology and
> >> >philosophy have been working hard. But I feel more is to be done).
> >>
> >> SA: Separation of human/other in terms like "meaning" is very important.
> >> We have learned this the hard way.
> >>
> >CM: Have you noted that today the philosophy of mind concentrates the
> >analysis of "meaning" on humans, more precisely on linguistics?
> >(a possible consequence of the "linguistic turn")
> >How do you plan to handle this point in your approach ?
> >(regarding my Entropy publication, some readers have asked me to
> >position my systemic approach vs the works of R. Millikan and
> >F. Dretske. I'm working on it. It's interesting.)
>
> SA: Yes we have. It is very difficult to overcome. And no, we do not
> have a plan for this. For now we just emphasise that "meaning" does not
> have such a strong human context; it is context without consciousness.
> By the way, I read your Entropy paper and found it shared many of our
> perceptions.
>
> >> >> 3. CM: No theory is available as covering the concept of
> >> >> meaning, whatever the system.
> >> >> ENSI: No in ENSI either. But it is a try to formulate an evolutionary,
> >> >> universal, information theory.
>
> >> >CM: We are on the same track, but defining appropriate hypotheses
> >> >is very important. I choose to begin with basic life because
> >> >we understand its constraints. And I do not want to begin with
> >> >humans because we do not understand their constraints. Understanding
> >> >the constraints of humans and correspondingly applying the MGS
> >> >will be the second step (bottom-up or evolutionary approach).
> >>
> >> SA: Right. ENSI is also a bottom-up, evolutionary approach. We have a
> >> hint, a guess, at how complex human information processing happens.
> >> It is a multilevel system, with an emergent transformation at every
> >> new level. This emergent transformation, a new information phenomenon
> >> (ref. data, information, knowledge, understanding, wisdom), makes it
> >> hard and complex to understand human information processing in detail
> >> and as a whole.
> >> A fact: the human brain has 100,000,000,000 neurons. A hypothesis: a
> >> new emergent feature needs 300 networked subentities (first 300
> >> neurons, then 300 bunches of 300 neurons, and so on) to form a new
> >> emergent unit. The number 300 comes from the game "Life", where the
> >> network shows no new behaviour below this number of units. So just a
> >> guess.
> >> So if this 300 (approximately) is the complexity level needed to form
> >> a new phenomenon, there are about 5 levels of different information
> >> phenomena in a man's brain. Do we know any of them?
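[Ed.: The "300 subentities per emergent level" guess above can be checked with a quick calculation: grouping 10^11 neurons into nested bunches of 300 gives log base 300 of 10^11, i.e. between 4 and 5 hierarchical levels, consistent with the "about 5 levels" stated. A minimal sketch; the level count is the poster's hypothesis, not an established result.]

```python
import math

NEURONS = 100_000_000_000   # ~1e11 neurons in a human brain (the fact cited above)
GROUP_SIZE = 300            # hypothesised subentities needed per emergent level

# How many times can 300 units be grouped into one new emergent unit
# before exceeding the neuron count?  log base 300 of 1e11.
levels = math.log(NEURONS) / math.log(GROUP_SIZE)
print(f"{levels:.2f}")      # ~4.44, i.e. between 4 and 5 levels
```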
> >>
> >> >CM: By the same token, I consider all human constructions
> >> >(books, computers, ...) as capable of expressing only
> >> >derived meaning. Derived from human constraints.
> >> >Consequently, I will look at the meaningfulness of S/I in books
> >> >or computers only after having covered the case of humans.
> >>
> >> SA: OK. But computers are a special case, first because they process
> >> information while e.g. books do not, and secondly because one can see
> >> a possibility of a technological consciousness based on silicon, as
> >> there has been a biological consciousness based on carbon. And why?
> >> Because of the multilevel and complex structure of computers and
> >> computer networks. Ref. for instance 100,000,000,000 neurons in a
> >> brain vs 50,000,000 transistors in a microprocessor, and signal
> >> speeds: neuron 0.1 km/s, computer 300,000 km/s. Ref. also the
> >> internet, agent-based software, computer neural networks, the Grid...
> >>
> >CM: Yes, perhaps. But I'm more cautious than you are on this point.
> >I don't see clearly enough the nature of the human mind.
>
> SA: Well, I understand your point. It is not so clear. But according to our
> "theory"...
>
> >> >> 6. CM: A meaning is meaningful information that is created
> >> >> by a system submitted to a constraint when it receives
> >> >> external information that has a connection with the constraint.
> >> >> The meaning is formed of the connection existing between the
> >> >> received information and the constraint of the system.
> >> >> The function of the meaningful information is to participate
> >> >> in the determination of an action that will be implemented in
> >> >> order to satisfy the constraint of the system.
> >> >> ENSI: A meaning is a difference that causes something in a system,
> >> >> that has an effect.
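[Ed.: CM's definition above maps naturally onto a small sketch: a system holds a constraint, and incoming information becomes meaningful only if it connects to that constraint (the frog/insect example from earlier in the thread). The class and method names here are illustrative assumptions, not CM's formal model.]

```python
# Minimal sketch of a Meaning Generation System (MGS) per CM's definition:
# a meaning is the connection between received information and the system's
# constraint, and it serves to determine a constraint-satisfying action.
# All names below are illustrative, not from a published formalisation.

class MGS:
    def __init__(self, constraint):
        self.constraint = constraint          # e.g. "find food" (survival constraint)

    def connects(self, information):
        """Return the connection to the constraint, if any (toy rule for a frog)."""
        return self.constraint if information == "small flying insect" else None

    def receive(self, information):
        connection = self.connects(information)
        if connection:
            # The meaning IS the connection; its function is to help
            # determine an action that satisfies the constraint.
            return {"meaning": connection, "action": "catch"}
        return None                           # meaningless for this system


frog = MGS("find food")
print(frog.receive("small flying insect"))   # meaningful: determines an action
print(frog.receive("large rock"))            # None: no connection to the constraint
```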
> >>
> >> >CM: Yes. More precisely, I see the difference as being the
> >> >relation between the received information and the constraint
> >> >of the system. But the effect is for me outside the MGS.
> >> >The effect is a consequence of the meaningful information
> >> >generated by the system.
> >>
> >> SA: Yes. In an act there is first an information phase (IP) and then
> >> the act.
> >> IP = (1) info to be interpreted, (2) interpreting info (the system and
> >> its structure) and (3) the result of the previous two, interpreted
> >> info. And still a fourth, (4) know-how info, to be able to do the
> >> interpreted thing.
> >> This is actually Colonel John Boyd's OODA loop, which I have expanded
> >> to doing in general. So the chain goes on:
> >> (5) ability to do the interpreted thing (raw material, tools,
> >> organization...)
> >> (6) courage to do
> >> (7) will to do
> >> (8) endurance to do and
> >> (9) time and energy to do.
> >> Will and courage are of course only human features.
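[Ed.: SA's expanded chain is an ordered checklist where any missing link blocks the act. That reading can be sketched as a function that reports the first missing prerequisite; the link names are paraphrased from the list above, everything else is illustrative.]

```python
# SA's expanded OODA-style chain as an ordered checklist (illustrative sketch).
CHAIN = [
    "info to be interpreted",       # (1)
    "interpreting system",          # (2)
    "interpreted info",             # (3)
    "know-how",                     # (4)
    "ability (material, tools)",    # (5)
    "courage",                      # (6)  human-only, per SA
    "will",                         # (7)  human-only, per SA
    "endurance",                    # (8)
    "time and energy",              # (9)
]

def first_missing(have):
    """Return the first link the actor lacks, or None if the act can proceed."""
    for link in CHAIN:
        if link not in have:
            return link
    return None

print(first_missing(set(CHAIN)))                 # None: nothing blocks the act
print(first_missing(set(CHAIN) - {"courage"}))   # "courage" blocks the act
```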
> >>
> >Did you have a look at the Peircean theory of signs? It introduces
> >the notion of the interpreter.
>
> SA: No I have not myself read anything from Peirce, but I know he has
> something to offer for our approach. Have to do some homework.
>
> >PS: Why don't you post your answers to the FIS Forum as well?
> >You may get more comments, and the subject is interesting to other
> >people.
>
> SA: Maybe I should. Thanks for the advice.
>
CM: I have taken the liberty of putting our exchanges back on the FIS Forum.

Cheers

Christophe

_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis
Received on Wed Oct 15 08:10:39 2003

This archive was generated by hypermail 2.1.8 : Mon 07 Mar 2005 - 10:24:46 CET