SV: SV: [Fis] Re: What is the definition of information ?

From: Søren Brier <[email protected]>
Date: Thu 01 Sep 2005 - 14:25:56 CEST

What we need to do is for each of you to offer your best possible description of your favorite information concept: its basis, subject area, use, method of measurement, and its relation to a concept of meaning or to thermodynamics. I suggest we collect them in a sort of Wikipedia system and continue to work on them.

    Søren

-----Original Message-----
From: Srinandan Dasmahapatra [mailto:sd@ecs.soton.ac.uk]
Sent: 1 September 2005 08:50
To: Julio Stern
Cc: Søren Brier; lin@mdpi.org; Dupagement@aol.com; fis@listas.unizar.es
Subject: RE: SV: [Fis] Re: What is the definition of information ?

I agree that both these formulations -- as an update of the distribution on X,
or as the appearance of the effects of further observation, or association, or
anything at all, in the distance function -- capture the essence of how the
statistical description of the system is affected. However, when the
resulting entropy, variance, or some other measure of a particular
variable's spread increases, it is quite clear that when we sample from the
resulting distribution, we can have less faith in the mean value.
This rephrases the amount of information about a variable as the degree
of surprise that samples from it bring.
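
To make that concrete, here is a minimal numerical sketch (assuming Gaussian
variables, purely for illustration): the wider the distribution, the less
precisely the samples pin down the mean, and the more "surprising" -- in the
sense of -log p(x) -- the samples are on average.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  n = 1000
  for sigma in (0.1, 1.0, 10.0):
      x = rng.normal(0.0, sigma, n)                  # samples with spread sigma
      sem = x.std(ddof=1) / np.sqrt(n)               # standard error of the mean
      surprisal = -stats.norm(0.0, sigma).logpdf(x)  # "surprise" of each sample
      print(f"sigma={sigma:5.1f}  std. error of mean={sem:.4f}  "
            f"mean surprisal={surprisal.mean():.2f}")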

All this can be viewed in its mathematical essence, as you have done, or in
specific applications where the mathematics can be interpreted. A really
crude example would be bunching up a lot of particles in one spot.
Looking at the bunch allows you to quantify its location in a fairly specific
way. If the set is subject to diffusion, the particles form a cloud, with
the specification of its mean providing a less informative description of the
location of the collection. Here the observation is merely the interactions
of the particles with the medium in which they are suspended.
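
A minimal simulation of this (with an unbiased Gaussian random walk standing
in for the medium, and arbitrary illustrative numbers):

  import numpy as np

  rng = np.random.default_rng(1)
  n_particles, step = 10_000, 0.1
  x = np.zeros(n_particles)                    # everything bunched at x = 0
  for t in range(1, 101):
      x += rng.normal(0.0, step, n_particles)  # kicks from the surrounding medium
      if t in (1, 10, 100):
          print(f"t={t:3d}  mean={x.mean():+.3f}  variance={x.var():.3f}")

The variance grows roughly linearly in time, so quoting the mean says less and
less about where any particular particle actually is.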

A similar case can be made for the dissipation of any localised signal. For
instance, an action potential (a spike) travelling down the axon of a neuron
will also show an increasing variance as it propagates.

In all of these examples, information is closely tied to the specification
of all the possible outcomes of measurement, or the description of all the
possible degrees of freedom. The details of how the spread of the system's
actual occupation probabilities is measured, and what constitutes information,
are already circumscribed by this initial setting of the boundaries, in the
choice of variables and an understanding of the possible values they can
take. The rest is handle churning. What intrigues me most is the fact that
there are loads of situations -- and let me conjecture wildly that these
distinguish living from non-living systems -- in which the possibilities
themselves can get reinvented midstream, depending on what is required. Sure,
we can reconstitute the analysis by taking the sum total of all the variables
and their possible values at different stages in time and track these changes
to have a meta-description of changes in entropy, etc., and that *is* a very
valuable exercise. But the mere fact that we can actively make these changes
in possibilities is fascinating to me.

-- Sri

Quoting Julio Stern <jmstern@hotmail.com>:

>
> I think I CAN agree with this one:
>
> >information is 'a difference that makes a difference'.
> (e-mail from: Søren Brier)
>
> In fact, using the notation below,
> the amount of information in observation X,
> relative to the prior p_0, is the prior-posterior
> ``distance'' D(p_0, p_n)
>
> There are several choices for the ``distance'' D( , ).
> A good one is the relative entropy
> (or Kullback-Leibler divergence):
> I(p_n,p_0)= \int_\Theta \log(p_n/p_0) p_n d\theta
>
> OBS1: I(p_n,p_0)=0 ONLY IF p_n=p_0
>
> OBS2: One common critique of this point of view
> is that this measure of information in the observation X
> is RELATIVE to the prior, p_0 .
> As a good Bayesian, however,
> I think that is exactly as it should be...
>
> *****************************************
>
> I work with Bayesian Statistics, where the main operation is of the form
>
> p_n(\theta) \propto p_{0}(\theta) L(\theta | X) ,
>
> where:
> - \theta is the (vector) parameter of the statistical model,
> - p_0 and p_n are the prior and posterior distributions
> of the parameter \theta, and
> - L(\theta | X) is the likelihood function of \theta,
> given the observed data X=[x_1,x_2,...x_n]
>
> Now: It is possible that the observation X is such that
> the uncertainty* of p_n is greater than that of p_0
> (* entropy, variance, or whatever you want)
> Still, the information in X is statistically relevant!
>
> -- Julio Stern

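Putting the quoted update rule and prior-posterior ``distance'' together, here
is a minimal numerical sketch (the Bernoulli likelihood, the Beta(100, 1)
prior and the single observation are assumptions chosen purely for
illustration): the prior is nearly certain that \theta is close to 1, one
contrary observation x = 0 gives the posterior Beta(100, 2), and both the
variance and the entropy of the posterior exceed those of the prior -- yet
I(p_n, p_0) > 0, so the observation is informative all the same.

  from scipy import stats
  from scipy.special import betaln, digamma

  a0, b0 = 100.0, 1.0        # prior p_0 = Beta(a0, b0), nearly sure theta ~ 1
  a1, b1 = a0, b0 + 1.0      # posterior p_n after one failure: p_n \propto p_0 L

  p0, pn = stats.beta(a0, b0), stats.beta(a1, b1)

  # closed-form Kullback-Leibler divergence I(p_n, p_0) between Beta densities
  kl = (betaln(a0, b0) - betaln(a1, b1)
        + (a1 - a0) * digamma(a1) + (b1 - b0) * digamma(b1)
        + (a0 + b0 - a1 - b1) * digamma(a1 + b1))

  print(f"prior     variance={float(p0.var()):.2e}  entropy={float(p0.entropy()):+.3f}")
  print(f"posterior variance={float(pn.var()):.2e}  entropy={float(pn.entropy()):+.3f}")
  print(f"I(p_n, p_0) = {float(kl):.3f}")
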
_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis
Received on Thu Sep 1 14:33:14 2005


This archive was generated by hypermail 2.1.8 on Thu 01 Sep 2005 - 14:33:14 CEST