[Fis] Consilience and Understanding

From: Malcolm Forster <mforster@wisc.edu>
Date: Mon 25 Oct 2004 - 04:53:04 CEST

Dear All,

Aleks suggests that the goal of consilience (in all its various forms) is
not to capture complexity, but to get rid of unnecessary complexity.

I'd like to run with that idea a little. When one thinks of getting rid of
unnecessary complexity, one thinks of the ubiquitous use of idealization in
science, and in describing science. So, running with Aleks's idea, that
suggests that deliberate oversimplification might be useful because it helps
uncover and expose underlying consiliences. Here are three putative
examples: (1) The number of individuals in a biological population is
obviously a discrete number, but it is commonplace in population genetics to
treat it as a continuous quantity (because it uncovers similarities with
non-biological phenomena?). (2) Modeling the planets as point masses helps
expose the fact that all motions obey the same underlying laws. (3)
Philosophers of science treat science as if it were a non-social enterprise
because it exposes the fact that scientific inference is similar to
perceptual information processing in the brain (an instance of "associative
learning" as Pedro suggests).

If this idea is correct, then getting the broad relational facts right may
be more important than getting all the details right. Why might this be so?
One conjecture is that consilience is the key to explanation and human
understanding, and these goals are more important than the goal of "fitting
all the facts".

Putnam's peg example is worth mentioning here. If we want to explain why a
round peg can't fit through a square hole, we point to macroscopic features
of the peg and board, such as rigidity and shape, rather than solving the
equations for the detailed motion of all the atoms in the peg and the board.
Even if you could actually write down those solutions, such a
"micro-explanation" would include a lot of details that are irrelevant to
the question, and it would fail to provide any meaningful kind of
understanding.

Macroproperties, such as rigidity and shape, abstract away from the details
in a way that makes them broadly applicable to a wide range of otherwise
disparate phenomena. That is, they help provide a useful kind of
consilience of separate inductions.

Perhaps that is why the concept of consilience is so elusive: Nobody
understands consilience because nobody understands the nature of human
understanding.

Presumably, we all agree that consilience is important (for otherwise we
wouldn't be discussing it). But I wonder whether we agree on why
consilience is important. I say consilience (in its various forms) is
important because it's an essential ingredient in human understanding. Are
there other reasons why consilience is important?

Malcolm

_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis
Received on Mon Oct 25 05:02:37 2004

This archive was generated by hypermail 2.1.8 : Mon 07 Mar 2005 - 10:24:47 CET