Nature of Complexity

From: <[email protected]>
Date: Sun 13 Oct 2002 - 18:26:42 CEST

Dear Colleagues:

The ongoing discussions of WESSbook have stimulated evaluations of
basic aspects of metaphysics and science. The following essay by
Steven Weinberg overlaps some aspects of our WESSbook discussion and
provides an articulate view of relations between physics and
computation. Has Weinberg captured the essence of the distinction
between mathematics, theoretical physics and computation?

This email also serves as a preliminary notice of the upcoming
WESSbook meeting. The next WESSbook discussion is being organized for
Nov. 16, 2002. We are seeking volunteers to address the issue of the
Nature of Mind. Please call me at 703-790-1651 if you would like
further information.

Cheers

Jerry

The New York Review of Books
October 24, 2002

Review

Is the Universe a Computer?

By Steven Weinberg
A New Kind of Science
by Stephen Wolfram
Wolfram Media, 1,197 pp., $44.95

1.

Everyone knows that electronic computers have enormously helped the
work of science. Some scientists have had a grander vision of the
importance of the computer. They expect that it will change our view
of science itself, of what it is that scientific theories are
supposed to accomplish, and of the kinds of theories that might
achieve these goals.

I have never shared this vision. For me, the modern computer is only
a faster, cheaper, and more reliable version of the teams of clerical
workers (then called "computers") that were programmed at Los Alamos
during World War II to do numerical calculations. But neither I nor
most of the other theoretical physicists of my generation learned as
students to use electronic computers. That skill was mostly needed
for number crunching by experimentalists who had to process huge
quantities of numerical data, and by theorists who worked on problems
like stellar structure or bomb design. Computers generally weren't
needed by theorists like me, whose work aimed at inventing theories
and calculating what these theories predict only in the easiest cases.

Still, from time to time I have needed to find the numerical solution
of a differential equation,[1] and with some embarrassment I would
have to recruit a colleague or a graduate student to do the job for
me. It was therefore a happy day for me when I learned to use a
program called Mathematica, written for personal computers under the
direction of Stephen Wolfram. All one had to do was to type out the
equations to be solved in the prescribed code, press shift-enter,
and, presto, the answer would pop up on the monitor screen. The
Mathematica user's manual now sits on my desk, so fat and heavy that
it does double duty as a bookend for the other books I keep close at
hand.

Now Wolfram has written another book that is almost as heavy as the
Mathematica user's manual, and that has attracted much attention in
the press. A New Kind of Science describes a radical vision of the
future of science, based on Wolfram's long love affair with
computers. The book's publisher, Wolfram Media, announces "a whole
new way of looking at the operation of our universe" and "a series of
dramatic discoveries never before made public." Wolfram claims to
offer a revolution in the nature of science, again and again
distancing his work from what he calls traditional science, with
remarks like "If traditional science was our only guide, then at this
point we would probably be quite stuck." He stakes his claim in the
first few lines of the book: "Three centuries ago science was
transformed by the dramatic new idea that rules based on mathematical
equations could be used to describe the natural world. My purpose in
this book is to initiate another such transformation...."

Usually I put books that make claims like these on the crackpot shelf
of my office bookcase. In the case of Wolfram's book, that would be a
mistake. Wolfram is smart, winner of a MacArthur Fellowship at age
twenty-two, and the progenitor of the invaluable Mathematica, and he
has lots of stimulating things to say about computers and science. I
don't think that his book comes close to meeting his goals or
justifying his claims, but if it is a failure it is an interesting
one.
------------------------------------------------------------------------

The central theme of the book is easily stated. It is that many
simple rules can lead to complex behavior. The example that is used
repeatedly to illustrate this theme is a favorite toy of complexity
theorists known as the cellular automaton, so I will have to say a
bit about what cellular automata are.

Take a piece of white graph paper that has been cross-hatched into
little squares. These are the "cells." Blacken one or more of the
cells in the top row, chosen any way you like, leaving all the others
white. This is your input. Now blacken some cells in the second row,
according to some fixed rule that tells you to make any cell black or
leave it white depending on the colors of its three neighboring cells
in the first row (that is, the cells in the first row that are either
immediately above the cell in the second row or one cell over to the
right or left). Then use the same rule, whatever it is, automatically
to color each cell in the third row according to the colors of its
three neighboring cells in the second row, and keep going
automatically in the same way to the rows below. The coloring rule
used in this way is an elementary cellular automaton.
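
For readers who want to see the procedure spelled out, here is a
minimal sketch, in Python rather than Mathematica, of one step of such
an automaton. The function name and the details are my own
illustration; the only convention taken for granted is the standard
numbering of the 256 rules, in which the rule number, written in
binary, lists the output color for each of the eight possible patterns
of the three cells above (see note [2]).

    # One step of an elementary cellular automaton (illustrative sketch).
    # 'rule' is an integer 0..255; its binary digits give the output color
    # for each of the 8 possible patterns of the three cells in the row above.
    def step(row, rule):
        new = []
        for i in range(len(row)):
            left = row[i - 1] if i > 0 else 0              # cells beyond the
            center = row[i]                                # edge are treated
            right = row[i + 1] if i < len(row) - 1 else 0  # as white (0)
            pattern = (left << 2) | (center << 1) | right  # a number 0..7
            new.append((rule >> pattern) & 1)              # read that bit of the rule
        return new

Rule 254, for instance, has binary digits 11111110, so every pattern
except all-white produces a black cell, which is why the black
triangle described below keeps spreading.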

This may seem like a solitaire variation on tick-tack-toe, only not
as exciting. Indeed, most of the 256 possible[2] elementary cellular
automata of this sort are pretty boring. For instance, consider rule
254, which dictates that a cell is made black if the cell immediately
above it, or above it and one space over to the left or right, is
black, and otherwise it is left white. Whatever the input pattern of
black cells in the top row, the black cells will spread in the rows
below, eventually filling out an expanding black triangle, so that
the cells in any given column will all be black once you get to a
low-enough row.

But wait. Wolfram's prize automaton is number 110 in his list of 256.
Rule 110 dictates that a cell in one row is left white if the three
neighboring cells in the row above are all black or all white or
black-white-white, and otherwise it is made black. Figure A shows the
result of applying this rule twenty times with a very simple input,
in which just one cell is made black in the top row. Not much
happening here. Wolfram programmed a computer to run this automaton,
and he ran it for millions of steps. After a few hundred steps
something surprising happened: the rule began to produce a remarkably
rich structure, neither regular nor completely random. The result
after 700 steps is shown in Figure B. A pattern of black cells
spreads to the left, with a foamy strip furthest to the left, then a
periodic alternation of regions of greater and lesser density of
black cells which moves to the right, followed by a jumble of black
and white cells. It is a dramatic demonstration of Wolfram's
conclusion, that even quite simple rules and inputs can produce
complex behavior.
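
With the step function sketched above, the experiment Weinberg
describes can be repeated on any personal computer. The width, number
of steps, and display characters below are arbitrary choices of mine;
the point is only that a few lines suffice.

    # Run rule 110 from a single black cell; the pattern grows to the left,
    # so the starting cell is placed near the right edge of the row.
    width, steps = 80, 60
    row = [0] * width
    row[width - 2] = 1
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = step(row, 110)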
------------------------------------------------------------------------

Wolfram is not the first to have worked with cellular automata. They
had been studied for decades by a group headed by Edward Fredkin at
MIT, following the ground-breaking work of John von Neumann and
Stanislaw Ulam in the 1950s. Wolfram is also not the first to have
seen complexity coming out of simple rules in automata or elsewhere.
Around 1970 the Princeton mathematician John Horton Conway invented
"The Game of Life," a two-dimensional cellular automaton in which
cells are blackened according to a rule depending on the colors of
all the surrounding cells, not just the cells in the row above.
Running the game produces a variety of proliferating structures
reminiscent of microorganisms seen under a microscope. For a while
the Game of Life was dangerously addictive for graduate students in
physics. A decade later another mathematician, Benoit Mandelbrot, the
inventor of fractals, gave a simple algebraic prescription for
constructing the famous Mandelbrot set, a connected two-dimensional
figure that shows an unbelievable richness of complex detail when
examined at smaller and smaller scales.

There are also well-known examples of complexity emerging from simple
rules in the real world. Suppose that a uniform stream of air is
flowing in a wind tunnel past some simply shaped obstacle, like a
smooth solid ball. If the air speed is sufficiently low then the air
flows in a simple smooth pattern over the surface of the ball.
Aerodynamicists call this laminar flow. If the air speed is increased
beyond a certain point, vortices of air appear behind the ball,
eventually forming a regular trail of vortices called a "von Kármán
street." Then as the air speed is increased further the regularity of
the pattern of vortices is lost, and the flow begins to be turbulent.
The air flow is then truly complex, yet it emerges from the simple
differential equations of aerodynamics and the simple setup of wind
flowing past a ball.

What Wolfram has done that seems to be new is to study a huge number
of simple automata of all types, looking specifically for those that
produce complex structures. There are cellular automata with more
than two colors, or with coloring rules like the Game of Life that
change the colors of cells in more than one row at a time, or with
cells in more than two dimensions. Beyond cellular automata, there
are also automata with extra features like memory, including the
Turing machine, about which more later. From his explorations of
these various automata, Wolfram has found that the patterns they
produce fall into four classes. Some are very simple, like the
spreading black triangle in the rule 254 elementary cellular
automaton that I mentioned first. Other patterns are repetitive, such
as nested patterns that repeat themselves endlessly at larger and
larger scales. Still others seem entirely random. Most interesting
are automata of the fourth class, of which rule 110 is a paradigm.
These automata produce truly complex patterns, neither repetitive nor
fully random, with complicated structures appearing here and there in
an unpredictable way.

So what does this do for science? The answer depends on why one is
interested in complexity, and that depends in turn on why one is
interested in science.

2.

Some complex phenomena are studied by scientists because the
phenomena themselves are interesting. They may be important to
technology, like the turbulent flow of air past an airplane, or
directly relevant to our own lives, like memory, or just so lovely or
strange that we can't help being interested in them, like snowflakes.
Unfortunately, as far as I can tell, there is not one real-world
complex phenomenon that has been convincingly explained by Wolfram's
computer experiments.

Take snowflakes. Wolfram has found cellular automata in which each
step corresponds to the gain or loss of water molecules on the
circumference of a growing snowflake. After adding a few hundred
molecules some of these automata produce patterns that do look like
real snowflakes. The trouble is that real snowflakes don't contain a
few hundred water molecules, but more than a thousand billion billion
molecules. If Wolfram knows what pattern his cellular automaton would
produce if it ran long enough to add that many water molecules, he
does not say so.

Or take complex systems in biology, like the human nervous or immune
systems. Wolfram proposes that the complexity of such systems is not
built up gradually in a complicated evolutionary history, but is
rather a consequence of some unknown simple rules, more or less in
the way that the complex behavior of the pattern produced by cellular
automaton 110 is a consequence of its simple rules. Maybe so, but
there is no evidence for this. In any case, even if Wolfram's
speculation were correct it would not mean that the complexity of
biological systems has little to do with Darwinian evolution, as
Wolfram contends. We would still have to ask why organisms obey some
simple rules and not other rules, and the only possible answer would
be that natural selection favors those rules that generate the kind
of complex systems that improve reproductive fitness.

Wolfram even tackles the old conflict between belief in a
deterministic view of nature and in the existence of free will. He
suggests that free will is an illusion that arises from the apparent
unpredictability of the complex behavior produced by those simple
rules of biology that he imagines to govern the human organism. This
is odd, because we certainly don't attribute free will to other
unpredictable complex phenomena like earthquakes or thunderstorms.
Surely the impression of free will arises instead from our personal
experience of actually deciding what to do, an experience that we are
unwilling to suppose is enjoyed by earthquakes or thunderstorms. This
is not to say that I have any enlightenment to offer about free will
either, because I have never been able to understand the
inconsistency that other people find between free will and a
completely deterministic view of nature. Free will to me means only
that we sometimes decide what we do, and we know that this is true by
the same sort of mental experience that convinced Descartes that he
existed, but we have no mental experience that tells us that our
decisions are not inevitable consequences of past conditions and the
laws of nature.

3.

Other scientists like myself study phenomena that may not be
intrinsically very interesting, because they think that studying
these phenomena will help them to understand the laws of nature, the
rules that govern all phenomena. In such work, we tend to study the
simplest possible phenomena, because it is in these cases that we can
most easily calculate what our theories predict, and compare the
results with experimental data to decide whether our theories are
right or wrong. Wolfram makes it seem that physicists choose simple
rather than complex phenomena to study because of long habit or
mathematical flabbiness, but in seeking the laws of nature it is the
essence of the art of science to avoid complexity.

My own work has been mostly on the theory of elementary particles,
but I have never found these particles very interesting in
themselves. An electron is featureless, and much like every other
electron. Most of those of us who study elementary particles do so
because we think that at this moment in the history of science it is
the best way to discover the laws that govern all natural phenomena.
Because of this special motivation, we don't generally care whether
we can calculate everything that happens to elementary particles in
complicated situations, only whether we can calculate enough to check
the validity of our theories. In collisions of elementary particles
at moderate energy the energy of the collision goes into complex
showers of particles. No one can predict the details of these
showers, even where the underlying theory is known, and hardly anyone
cares. At higher energies things are simpler: the energy goes into
well-defined jets, each containing particles that travel in the same
direction, in a way that can be calculated theoretically and compared
with experiment to test our theories of elementary particles. It is
the laws, not the phenomena, that interest us.

Unlike elementary particles, planets have historically seemed
interesting for religious and astrological reasons. But it was the
simplicity of planetary motions that allowed Newton to discover the
laws of motion and gravitation. The planets move in empty space and
to a good approximation under the influence of a single motionless
body, the sun. Newton would never have discovered his laws by
studying turbulence or snowflakes.

Wolfram himself is a lapsed elementary particle physicist, and I
suppose he can't resist trying to apply his experience with digital
computer programs to the laws of nature. This has led him to the view
(also considered in a 1981 article by Richard Feynman) that nature is
discrete rather than continuous. He suggests that space consists of a
network of isolated points, like cells in a cellular automaton, and
that even time flows in discrete steps. Following an idea of Edward
Fredkin, he concludes that the universe itself would then be an
automaton, like a giant computer. It's possible, but I can't see any
motivation for these speculations, except that this is the sort of
system that Wolfram and others have become used to in their work on
computers. So might a carpenter, looking at the moon, suppose that it
is made of wood.

 

There is another reason for studying complex phenomena - not because
the phenomena are interesting, which they sometimes are, or because
studying complex phenomena is a good way to learn the laws of nature,
which it isn't, but because complexity itself is interesting. Maybe
there is a theory of complexity waiting to be discovered, that says
simple things about complex behavior in general, not just about the
rule 110 cellular automaton or turbulent air flow or the human
nervous system.

There are other examples of what I like to call free-floating
theories, theories that are applicable in a wide (though not
unlimited) variety of very different contexts. The theory of chaos,
which has captured the public imagination, deals with systems, from
the weather to the pebbles in the rings of Saturn, whose behavior
exhibits an exquisite sensitivity to initial conditions.
Thermodynamics, the science of heat, is a less trendy example.
Concepts of thermodynamics like temperature and entropy are
applicable to black holes as well as to steam boilers. A less
familiar example is the theory of broken symmetry. Many very
different substances, including superconductors, magnetized iron, and
liquid helium, are governed by equations that have some symmetry, in
the sense that the equations look the same from certain different
points of view, and yet the substances exhibit phenomena that do not
respect this symmetry.

There is a low-intensity culture war going on between scientists who
specialize in free-floating theories of this sort and those (mostly
particle physicists) who pursue the old reductionist dream of finding
laws of nature that are not explained by anything else, but that lie
at the roots of all chains of explanation. The conflict usually comes
to public attention only when particle physicists are trying to get
funding for a large new accelerator. Their opponents are exasperated
when they hear talk about particle physicists searching for the
fundamental laws of nature. They argue that the theories of heat or
chaos or complexity or broken symmetry are equally fundamental,
because the general principles of these theories do not depend on
what kind of particles make up the systems to which they are applied.
In return, particle physicists like me point out that, although these
free-floating theories are interesting and important, they are not
truly fundamental, because they may or may not apply to a given
system; to justify applying one of these theories in a given context
you have to be able to deduce the axioms of the theory in that
context from the really fundamental laws of nature.

This debate is unfortunate, for both kinds of science are valuable,
and they often have much to teach each other. My own work in
elementary particle physics has benefited tremendously from the idea
of broken symmetry, which originated in the study of the solid state
but turned out to be the key both to understanding reactions
involving particles called pi mesons at low energy and to the
unification of some of the forces acting on elementary particles. The
theory of complexity might also have lessons for elementary particle
theory (or vice versa) but it is not likely to be fundamental in the
same sense as elementary particle physics.

Lately particle physicists have been having trouble holding up their
end of this debate. Progress toward a fundamental theory has been
painfully slow for decades, largely because the great success of the
"Standard Model" developed in the 1960s and 1970s has left us with
fewer puzzles that could point to our next step. Scientists studying
chaos and complexity also like to emphasize that their work is
applicable to the rich variety of everyday life, where elementary
particle physics has no direct relevance.

Scientists studying complexity are particularly exuberant these days.
Some of them discover surprising similarities in the properties of
very different complex phenomena, including stock market
fluctuations, collapsing sand piles, and earthquakes. Often these
phenomena are studied by simulating them with cellular automata, such
as Conway's Game of Life. This work is typically done in university
physics departments and in the interdisciplinary Santa Fe Institute.
Other scientists who call themselves complexity theorists work in
university departments of computer science and mathematics and study
the way that the number of steps in a computer calculation of the
behavior of various systems increases with the size of the systems,
often using automata like the Turing machine as specific examples of
computers. Some of the systems they study, such as the World Wide
Web, are quite complex. But all this work has not come together in a
general theory of complexity. No one knows how to judge which complex
systems share the properties of other systems, or how in general to
characterize what kinds of complexity make it extremely difficult to
calculate the behavior of some large systems and not others. The
scientists who work on these two different types of problem don't
even seem to communicate very well with each other. Particle
physicists like to say that the theory of complexity is the most
exciting new thing in science in a generation, except that it has the
one disadvantage of not existing.
------------------------------------------------------------------------

It is here I think that Wolfram's book may make a useful
contribution. Wolfram and his co-workers have been able to show that
numerous simple "class four" automata that produce complex behavior,
like the rule 110 cellular automaton, are able to emulate each other.
That is, by setting up a suitable input pattern of black and white
cells in the rule 110 cellular automaton, one can produce the same
complex pattern that would be produced by other class four automata,
and vice versa. (In this emulation, blocks of cells in one automaton
represent a single cell in the automaton being emulated.) What makes
this particularly interesting is that one of the automata that can be
emulated in this way is the universal Turing machine.[3]

The Turing machine is the most important automaton in the history of
computer science, and the forerunner of today's digital computers. It
was invented in 1936 by Alan Turing, who in World War II became one
of Britain's ace cipher-breakers and was later the hero of Hugh
Whitemore's play Breaking the Code. Turing's purpose was to answer a
classic question of mathematical logic known as the Decision Problem:
given some deductive mathematical system like arithmetic or Euclidean
geometry or symbolic logic, is there any logical method that, when
applied mechanically to any statement of that system, is guaranteed
to decide whether that statement can be proved by following the rules
of that system?[4]

To answer this question, the Turing machine was designed to capture
the essence of mechanical logical methods. Just as a person going
through a mathematical proof works with a string of symbols, focusing
on just one at a time, the Turing machine works on a one-dimensional
sequence of cells, each containing a symbol taken from some finite
list, with only one "active" cell that can be read and possibly
changed at each step. Also, to correspond to the fact that a person
working out a proof would keep some memory of previous steps, Turing
gave his machine a memory register, which can be in any one of a
finite number of "conditions."

Each type of Turing machine obeys a fixed rule that tells it at each
step how to change the symbol in the active cell, how to change the
condition of the memory register, and whether to move the active cell
one step to the left or right, according to the symbol in the active
cell and the condition of the memory register. It makes no difference
what symbols we choose to use or what conditions are possible for the
memory register; their significance arises only from the rules of the
machine. The problem to be solved and the data to be used are fed
into the machine as an initial string of symbols, and the answer
appears as the string of symbols found when the memory register
reaches a condition that tells the machine to stop.

Turing never actually built such a machine (though he did go on to
build some special-purpose computers), but if you like you can think
of the cells in a Turing machine as forming a paper tape, with the
symbols just a sequence of colored dots on the tape, read and written
by a scanning device that moves up or down the tape from one active
cell to another. In this example the memory register is a simple
mechanical pointer which can take any of two or more positions. The
decision how to change the color of the active cell and the position
of the memory register and how to move the tape is made according to
the color of the cell being read and the position of the pointer,
following rules that are hard-wired into the machine. A specific
Turing machine is entirely characterized by the number of possible
colors on the cells of the tape, the number of possible positions of
the pointer, and by the rules wired into the machine.
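
The description above translates almost line for line into a program.
The following is a minimal sketch of a Turing machine simulator; the
rule table at the end is a toy example of my own, not one of Turing's
machines and not anything taken from Wolfram's book.

    # Minimal Turing machine simulator (illustrative sketch only).
    # 'rules' maps (condition of the memory register, symbol in the active cell)
    # to (new symbol, new condition, move one cell Left or Right).
    def run(tape, rules, condition="start", pos=0, max_steps=10000):
        cells = dict(enumerate(tape))          # the active cell is cells[pos]
        for _ in range(max_steps):
            if condition == "halt":
                break
            symbol = cells.get(pos, 0)         # unvisited cells read as 0
            new_symbol, condition, move = rules[(condition, symbol)]
            cells[pos] = new_symbol
            pos += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    # Toy rule table: flip 0s and 1s until the marker symbol 2 is reached.
    rules = {
        ("start", 0): (1, "start", "R"),
        ("start", 1): (0, "start", "R"),
        ("start", 2): (2, "halt", "R"),
    }
    print(run([1, 0, 1, 1, 2], rules))         # prints [0, 1, 0, 0, 2]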
------------------------------------------------------------------------

The important thing about Turing machines is that some of them are
universal. Turing was able to prove that any of these universal
Turing machines could calculate or prove anything that could be
calculated or proved by any other Turing machine. For instance, at
least one of the huge number (96 to the power 48) of the possible
Turing machines that use two colors and have a memory register with
twenty-four positions is universal. Further, from the way that Turing
machines were designed to imitate the way humans mechanically do
calculations, Turing argued that universal Turing machines could
calculate or prove anything that could be calculated or proved by any
purely mechanical procedure. This is often called the Church-Turing
thesis, because at about the same time the Princeton mathematician
Alonzo Church reached similar conclusions about a more abstract but
equivalent mathematical method of his own. Incidentally, Turing and
Church found that the answer to the Decision Problem in most
mathematical or logical systems is "no": there is no mechanical
procedure that is guaranteed to decide whether any given statement
can be proved by following the rules of that system.

Since universal Turing machines can be emulated by the rule 110
cellular automaton, it follows that this cellular automaton, and all
the other automata that can emulate it, are also universal - they can
do any computation that can be done by any computer. The program for
the calculation and the data to be used would be fed into a rule 110
cellular automaton as a pattern of black cells in the top row, and
the answer would appear as a pattern on a lower row. Wolfram says
that all these automata are computationally equivalent, but that is
only true in a limited sense. The simpler the design of a universal
computer, the more steps it takes to emulate each single step of a
practical computer. This is why Dell and Compaq do not sell Turing
machines or rule 110 cellular automata.

Wolfram goes on to make a far-reaching conjecture, that almost all
automata of any sort that produce complex structures can be emulated
by any one of them, so they are all equivalent in Wolfram's sense,
and they are all universal. This doesn't mean that these automata are
computationally equivalent (even in Wolfram's sense) to systems
involving quantities that vary continuously. Only if Wolfram were
right that neither space nor time nor anything else is truly
continuous (which is a separate issue) would the Turing machine or
the rule 110 cellular automaton be computationally equivalent to an
analog computer or a quantum computer or a brain or the universe. But
even without this far-reaching (and far-out) assumption, Wolfram's
conjecture about the computational equivalence of automata would at
least provide a starting point for a theory of any sort of complexity
that can be produced by any kind of automaton.
------------------------------------------------------------------------

The trouble with Wolfram's conjecture is not only that it has not
been proved - a deeper trouble is that it has not even been stated in a
form that could be proved. What does Wolfram mean by complex? If his
conjecture is not to be a tautology, then we must have some
definition of complex behavior independent of the notion of
universality. The pattern produced by the rule 110 cellular automaton
certainly looks complex, but what criterion for complexity can we use
that would tell us that it is complex enough for Wolfram's conjecture
to apply?

There is a well-known parallel problem in defining randomness. The
most common precise definition of the randomness of a string of
digits or of a sequence of black and white cells on a tape is that it
is random if there is no way of describing it with a string of
shorter length. The trouble is that according to this definition the
string of digits in a number like the square root of two would not
qualify as random, because it can be described very simply - it is the
square root of two - even though it surely looks random. (Mathematica
gives the first thirty digits as 1.41421356237309504880168872420.) In
the same way, it wouldn't do to define the output (shown in Figure B)
of a cellular automaton like rule 110 with a single black cell in the
top row as complex only if it can't be described in simple terms,
because it can be described in simple terms - it is the output of rule
110 with a single black cell in the top row. There are other
definitions of randomness, such as the absence of correlations: the
digits in the square root of two can be said to be random because, as
far as is known, being given one digit at an unidentified decimal
place tells you nothing at all about what the next digit is likely to
be. Wolfram has not even begun to formulate a similar definition of
complexity.
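
The correlation test can at least be made concrete. The following
sketch, which is my own illustration and not anything proposed in the
review or the book, computes several thousand digits of the square
root of two and tabulates how often each digit is followed by each
other digit; for a sequence with no correlations, each of the hundred
possible digit pairs should turn up roughly one percent of the time.

    # Tabulate successive-digit pairs in the decimal expansion of sqrt(2).
    from decimal import Decimal, getcontext
    from collections import Counter

    getcontext().prec = 10000                       # work with 10,000 digits
    digits = str(Decimal(2).sqrt()).replace(".", "")
    pairs = Counter(digits[i:i + 2] for i in range(len(digits) - 1))
    print(pairs.most_common(3))                     # most frequent pairs
    print(pairs.most_common()[-3:])                 # least frequent pairs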

In fact, as he admits, for Wolfram the real test of the complexity of
a pattern is that it should look complex. Much of his discussion of
complexity is anecdotal, relying on pictures of the patterns produced
by specific automata that he has known. In this, Wolfram is allying
himself with one side in the ancient struggle between what (with much
oversimplification) one might call cultures of the image and cultures
of the word. In our own time it has surfaced in the competition
between television and newspapers and between graphical user
interfaces and command line interfaces in computer operating systems.

The culture of images has had the better of it lately. For a while
the culture of the word had seemed to have scored a victory with the
introduction of sound into motion pictures. In Sunset Boulevard,
Norma Desmond recalls that in silent films, "We didn't need dialogue.
We had faces." But now movies can go on for long stretches with no
words, only the thunk of cars running into each other and the sizzle
of light sabers. The ascendancy of the culture of the image has been
abetted by computers and the study of complexity, which have made
possible the simulation of complex visual images.

I am an unreconstructed believer in the importance of the word, or
its mathematical analogue, the equation. After looking at hundreds of
Wolfram's pictures, I felt like the coal miner in one of the comic
sketches in Beyond the Fringe, who finds the conversation down in the
mines unsatisfying: "It's always just 'Hallo, 'ere's a lump of coal.'"

Wolfram's classification of the patterns produced by cellular
automata dates from the early 1980s, and the discovery that the rule
110 elementary cellular automaton is a universal computer was made in
the early 1990s. Since then, none of this work has had much of an
impact on the research of other scientists, aside from Wolfram's
employees. The strongest reaction I have seen by scientists to this
new book has been outrage at Wolfram's exaggeration of the importance
of his own contributions to the study of complexity. Wolfram's survey
of the complex patterns produced by automata may yet attract the
attention of other scientists if it leads to some clear and simple
mathematical statement about complexity. I doubt if even Wolfram
cares what picture is produced by the rule 110 cellular automaton
after a billion steps. But if Wolfram can give a precise statement of
his conjecture about the computational equivalence of almost all
automata that produce complex patterns and prove that it is true,
then he will have found a simple common feature of complexity, which
would be of real interest. In the study of anything outside human
affairs, including the study of complexity, it is only simplicity
that can be interesting.

Notes

[1] A differential equation gives a relation between the value of
some varying quantity and the rate at which that quantity is
changing, and perhaps the rate at which that rate is changing, and so
on. The numerical solution of a differential equation is a table of
values of the varying quantity, that to a good approximation satisfy
both the differential equation and some given conditions on the
initial values of this quantity and of its rates of change.
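
As a concrete illustration of what such a table of values looks like,
here is a sketch using the crudest possible method (Euler's method);
the equation dy/dt = -y and the step size are my own choices and have
nothing to do with how Mathematica actually solves equations.

    # Tabulate an approximate solution of dy/dt = -y with y(0) = 1.
    t, y, dt = 0.0, 1.0, 0.01
    table = []
    for _ in range(500):
        table.append((t, y))
        y += dt * (-y)          # advance y by its rate of change times dt
        t += dt
    print(table[::100])         # the exact solution would be y = exp(-t)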

[2] The automaton must tell you the color of a cell in one row for
each of the 2 x 2 x 2 = 8 possible color patterns of the three
neighboring cells in the row above, and the number of ways of making
these eight independent decisions between two colors is 2^8 = 256. In
the same way, if there were 3 possible colors, then the number of
coloring decisions that would have to be specified by an elementary
cellular automaton would be 3 x 3 x 3 = 27, and the number of
automata (calculated using Mathematica) would be 3^27 = 7625597484987.
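
The counting can be checked in one line (ordinary Python here, though
the footnote mentions Mathematica):

    # k-color elementary automata: one color choice for each of the k**3
    # possible neighborhood patterns, hence k**(k**3) distinct rules.
    print(2 ** (2 ** 3), 3 ** (3 ** 3))    # 256 7625597484987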

[3] Wolfram says that the main elements of the proof were found in
1994 by one of his assistants, Matthew Cook, and he gives an
unreadable updated version in this book, along with an admission that
a few errors may still remain. I gather that the proof has not been
published in a refereed journal. An article in Nature by Jim Giles
titled "What Kind of Science Is This?" (May 16, 2002) reports that
when Cook left his job with Wolfram in 1998, he gave a talk on his
work at the Santa Fe Institute, but the talk did not appear in the
conference proceedings; Wolfram took legal action against Cook,
arguing that Cook was in breach of agreements that prevented him from
publishing until after the publication of Wolfram's book.

[4] Turing took pains to point out that this issue had not been
settled by the famous 1931 theorem of Kurt Gödel, which states that
there are statements in the general system of mathematics presented
in the Principia Mathematica of Bertrand Russell and Alfred North
Whitehead that can be neither proved nor disproved by following the
rules of that system.
