This post is by Joe Felsenstein and Tom English
Back in October, one of us (JF) commented at Panda's Thumb on William Dembski's seminar presentation at the University of Chicago, Conservation of Information in Evolutionary Search. In his reply at the Discovery Institute's Evolution News and Views blog, Dembski pointed out that he had referred to three of his own papers, and that Joe had mentioned only two. He generously characterized Joe's post as an "argument by misdirection", the sort of thing magicians do when they are deliberately trying to fool you. (Thanks, how kind.)
Dembski is right that Joe did not cite his most recent paper, and that he should have. The paper, "A General Theory of Information Cost Incurred by Successful Search", by Dembski, Winston Ewert, and Robert J. Marks II (henceforth DEM), defines search differently than do the other papers. However, that definition does not jibe with the "Seven Components of Search" slide of the presentation (details here). One of us (TE) asked Dembski for technical clarification. He responded only that he had simplified for the talk, and that he stands by the approach of DEM.
Whatever our skills at prestidigitation, we will not try to untangle the differences between the talk and the DEM paper. Rather than guess how Dembski simplified, we will regard the DEM paper as his authoritative source. Studying that paper, we found that:
They address "search" in a space of points. To make this less abstract, and to have an example for discussing evolution, we assume a space of possible genotypes. For example, we may have a stretch of 1000 bases of DNA in a haploid organism, so that the points in the space are all 4^1000 possible sequences.
A "search" generates a sequence of genotypes, and then chooses one of them as the final result. The process is random to some degree, so each genotype has a probability of being the outcome. DEM ultimately describe the search in terms of its results, as a probability distribution on the space of genotypes.
A set of genotypes is designated the "target". A "search" is said to succeed when its outcome is in the target. Because the outcome is random, the search has some probability of success.
DEM assume that there is a baseline "search" that does not favor any particular "target". For our space of genotypes, the baseline search generates all outcomes with equal probability. DEM in fact note that on average over all possible searches, the probability of success is the same as if we simply drew randomly (uniformly) from the space of genotypes.
They calculate the "active information" of a "search" by taking the ratio of its probability of success to that of the baseline search, and then taking the logarithm of the ratio. The logarithm is not essential to their argument.
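To make the calculation concrete, here is a minimal sketch in Python. The function name and the numbers (a target of 10 genotypes in a space of a million, and a search that succeeds 1% of the time) are our own made-up illustration, not anything from DEM:

```python
import math

def active_information(p_success, p_baseline):
    """Active information in bits: the log (base 2) of the ratio of the
    search's probability of success to the baseline probability."""
    return math.log2(p_success / p_baseline)

# Hypothetical numbers: a target of 10 genotypes in a space of 1,000,000
# genotypes, and a search that hits the target 1% of the time.
p_baseline = 10 / 1_000_000        # uniform random draw: 1e-5
p_search = 0.01
print(active_information(p_search, p_baseline))  # about 9.97 bits
```

A search no better than the baseline has zero active information, and the logarithm merely rescales the ratio into bits.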
Contrary to what Joe said in his previous post, DEM do not explicitly consider all possible fitness surfaces. He was certainly wrong about that. But as we will show, the situation is even worse than he thought. There are "searches" that go downhill on the fitness surface, ones that go sideways, and ones that pay no attention at all to fitnesses.
If we make a simplified model of a "greedy" uphill-climbing algorithm that looks at the neighboring genotypes in the space, and prefers to move to a nearby genotype when that genotype has higher fitness than the current one, its search will do a lot better than the baseline search, and thus a lot better than the average over all possible searches. Such processes fall in an extremely small fraction of all of DEM's possible searches: the small fraction that does a lot better than picking a genotype at random.
So just by having genotypes that have different fitnesses, evolutionary processes will do considerably better than random choice, and will be considered by DEM to use substantial values of Active Information. That is simply a result of having fitnesses, and does not require that a Designer choose the fitness surface. This shows that even a search which is evolution on a white-noise fitness surface is very special by DEM's standards.
Searches that are like real evolutionary processes do have fitness surfaces. Furthermore, these fitness surfaces are smoother than white-noise surfaces "because physics". That too increases the probability of success, and by a large amount.
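The effect of smoothness can also be seen in a toy comparison of our own devising (a one-dimensional "genotype" line, a single smooth peak versus a white-noise surface, and an arbitrary target around the peak), searched by the same kind of simple hill climber:

```python
import random

random.seed(2)
N = 1000                            # genotypes 0..N-1 on a line; neighbors differ by 1
target_set = set(range(495, 506))   # arbitrary target: genotypes near 500

smooth = [-abs(x - 500) for x in range(N)]   # one smooth peak at 500
noise = [random.random() for _ in range(N)]  # white-noise surface

def climb(fitness, start):
    """Greedy hill climbing: step to the fitter neighbor until stuck."""
    x = start
    while True:
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n < N]
        best = max(nbrs, key=lambda n: fitness[n])
        if fitness[best] <= fitness[x]:
            return x
        x = best

trials = 1000
rates = {}
for name, f in [("smooth", smooth), ("white noise", noise)]:
    hits = sum(climb(f, random.randrange(N)) in target_set for _ in range(trials))
    rates[name] = hits / trials
    print(name, rates[name])
```

On the smooth surface every climb reaches the peak, so the success rate is 1; on the white-noise surface the climber is trapped at the nearest local optimum and almost never lands in the target. Smoothness alone supplies an enormous boost.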
Arguing whether a Designer has acted by setting up the laws of physics themselves is an argument one should have with cosmologists, not with biologists. Evolutionary biologists are concerned with how an evolving system will behave in our present universe, with the laws of physics that we have now. These predispose to fitness surfaces substantially smoother than white-noise surfaces.
Although moving uphill on a fitness surface is helpful to the organism, evolution is not actually a search for a particular small set of target genotypes; it is not only successful when it finds the absolutely most-fit genotypes in the space. We almost certainly do not reach optimal genotypes or phenotypes, and that's OK. Evolution may not have made us optimal, but it has at least made us fit enough to survive and flourish, and smart enough to be capable of evaluating DEM's arguments, and seeing that they do not make a case that evolution is a search actively chosen by a Designer.
This is the essence of our argument. It is a lot to consider, so let's explain this in more detail below:
As usual I will pa-troll the comments, and send off-topic stuff by our usual trolls, and replies to their off-topic stuff, to the Bathroom Wall.