**Introduction**. In the beginning of March 2005 William Dembski sent an email message to several critics of his output, including me. Dembski wrote:

“Dear Critics,

Attached is a paper that fills in the details of chapter 4 of No Free Lunch, which David Wolpert referred to as “written in jello.” The key result is a displacement theorem. Along the way I prove and (sic) measure-theoretic variant of the No Free Lunch theorems.”

Dembski concluded his message as follows:

”… I expect that Ken Miller’s public remarks about intelligent design being a ‘total, dismal failure scientifically’ will become increasingly difficult to sustain. This paper, and subsequent revisions, can be found on my website www.designinference.com. I welcome any constructive comments about it.”

Dembski’s new paper (in PDF format) is found at http://www.designinference.com/docu[…]e_Spaces.pdf, where its text has already undergone some modifications compared to its initial version. Perhaps these modifications were prompted by critical comments that appeared on the internet, in particular those made by a contributor to the ARN website who signs his posts as RBH, as well as by Tom English and David Wolpert (whose remarks appeared on a certain internet forum).

In his essay at http://www.designinference.com/docu[…]Backlash.htm Dembski wrote,

“I’m not going to give away all my secrets, but one thing I sometimes do is post on the web a chapter or section from a forthcoming book, let the critics descend, and then revise it so that what appears in book form preempts the critics’ objections. An additional advantage with this approach is that I can cite the website on which the objections appear, which typically gives me the last word in the exchange. And even if the critics choose to revise the objections on their website, books are far more permanent and influential than webpages.”

While Dembski’s frank admission of the tricks he resorts to in order to “win” the culture war may sound commendable, the tricks themselves are hardly in tune with what is normally considered intellectual integrity. He should have added that, when making the described revisions in his texts, he usually does not acknowledge the input from critics.

While I take note of Dembski’s invitation to offer “constructive comments about it,” it seems proper to point out that Dembski’s earlier renditions of the topics covered in his new paper have been extensively discussed and critiqued, but he has not deemed it necessary to respond to that critique. In particular, in a chapter I authored in the anthology *Why Intelligent Design Fails* (editors Matt Young and Taner Edis, Rutgers Univ. Press, 2004) I specifically addressed Dembski’s [mis]interpretation of the No Free Lunch theorems and his “displacement problem” as rendered in chapter 4 of his book *No Free Lunch*. Dembski’s new paper in no way answers my critique of his earlier output, where he discussed the same notions in a less mathematical rendition. None of my earlier critical comments regarding Dembski’s misinterpretation and misuse of the NFL theorems and his displacement problem (in its original presentation) is deflected by anything in his new paper.

I’ll try to answer the question - does Dembski’s new paper justify his assertion that

“Ken Miller’s public remark about intelligent design being a ‘total, dismal failure scientifically’ will become increasingly difficult to sustain”?

**Not quite consistent.** In the same vein as my chapter in the anthology *Why Intelligent Design Fails*, I’ll discuss Dembski’s new paper without delving into mathematical symbolism as this essay is addressing a general audience rather than only mathematically prepared readers.

Dembski states in his new paper that it mathematically formalizes the ideas previously outlined in a less rigorous form in chapter 4 of his book *No Free Lunch* (in his words, his new paper “fills in the details of chapter 4 in No Free Lunch”).

This statement seems to be aimed, first, at asserting the supposed consistency of Dembski’s discourse, and, second, at providing a sort of answer to the well known characterization (by David Wolpert) of Dembski’s treatment of the No Free Lunch theorems as “written in jello” (www.talkreason.org/articles/jello.cfm). Wolpert is a co-originator of the No Free Lunch theorems, hence his opinion of Dembski’s treatment of these theorems carried considerable weight. Some of Dembski’s admirers tried to play down the significance of Wolpert’s critique by posting letters to that effect on various internet fora. However, Dembski himself has maintained a deafening silence, as if Wolpert’s critique did not exist. It seems that the statement in his new paper which points to the supposed consistency between chapter 4 in his 2002 book and his new paper is a device designed to blunt the sharpness of Wolpert’s characterization.

However, the comparison of Dembski’s new paper with chapter 4 in his book shows that the assertion of consistency is not quite true. In fact, Dembski’s new paper introduces certain substantial modifications of the basic concepts suggested in the “jello” chapter in his earlier book. The displacement problem No 1 (as rendered in the No Free Lunch book) and the displacement problem No 2 (as discussed in the new paper) seem to be not quite the same problem (as I’ll discuss below), although Dembski’s position is that the two displacement problems are identical.

Dembski’s new paper is heavily mathematical and is obviously designed to impress readers with his mathematical sophistication. Dembski’s colleagues (some of whom may not even have the proper background to comprehend his mathematical ruminations) have promptly acclaimed this new paper as “splendid” and as allegedly “displacing” that perfidious offshoot of materialistic philosophy, “Darwinism.”

I believe the delight of Dembski and his colleagues is premature.

**Fallacious assumptions**. First, a very general observation. Dembski’s delight is based on the implicit assumption that fundamental concepts of biological science can be “proved” or “disproved” mathematically. Dembski has adhered to similar notions previously, for example, suggesting in his book *The Design Inference* that representing certain notions in a mathematically symbolic form somehow “proves” them. In my book *Unintelligent Design* (chapter 1, pages 26-28) I demonstrated the fallacy of such a supposition, using as an example Dembski’s presentation of an argument for design in two versions - once expressed in plain words, and once in a mathematically symbolic form. As is evident from that juxtaposition of the two renditions of the same argument, using mathematical symbolism does not provide any additional insight and, in Dembski’s case, serves only to embellish his discourse. In his recent papers, including the paper I am discussing here, Dembski takes a further step down the same road. Now his overall approach seems to be implicitly based on the idea that a purely mathematical discourse is capable of “displacing Darwinism.”
Of course, this is wishful thinking. “Mathematics is a language,” said the great American physicist Josiah Willard Gibbs. Indeed, mathematics is an extremely powerful tool. However, no mathematical theorem or equation “proves” or “disproves” anything beyond the logical connection between a premise and a conclusion. If a premise is false, the conclusion is worthless, regardless of how sophisticated and impeccably correct the applied mathematical apparatus is.
Since Dembski’s proclaimed goal is to prove “Darwinism” false, all of his mathematical exercise is irrelevant, as it in principle cannot achieve such a goal. Evolutionary biology is an experimental science, and “Darwinian” mechanisms of evolution have been supported by an immense body of empirical material. No mathematical theorems or equations can “displace” evolutionary biology. Its successes and failures can only stem from empirical research and observations bolstered by proper theorizing, wherein mathematics, however important and enlightening, is always only a tool.

Closely connected to this fallacious approach to mathematics as allegedly capable of “disproving” evolution, there is another serious (I would say fatal) drawback to Dembski’s approach. He seems to first implicitly define his goal (in this case to prove that “Darwinian” mechanisms cannot explain evolution) and then apply what is an analog of “reverse engineering” to find a premise from which his already chosen conclusion can be mathematically derived. The premise deliberately chosen to lead to a pre-determined conclusion has little chance to be true.

**Dembski’s premise.** To be more specific, let us see what Dembski’s premise implies. Here there is indeed a certain consistency between his earlier discourse in his *No Free Lunch* book and his new paper. He considers biological evolution as the search for a certain small target within a very large search space (which in his book was referred to as the “phase space”).

As I pointed out in *Why Intelligent Design Fails*, in his *No Free Lunch* book Dembski did not suggest a definition of a target. In his new paper this lacuna has been filled: now Dembski provides a definition. Without delving into his mathematical formalism, he defines the target T as a particular small region somewhere within the very large search space Q. He next asserts that finding the target via a blind search is an endeavor whose success has a very small probability. For the search to be successful, it has to be “assisted,” which means the search algorithm needs to get information about the structure of the search space. For example, the search algorithm may be assisted by feedback letting it know whether each step brings it closer to the target or pushes it farther away. The source of such information lies in a “higher-order” information space. In Dembski’s *No Free Lunch* this higher-order space was denoted J. From that earlier discourse it seemed to follow that space J contained (perhaps besides some other things) all possible fitness functions. In his new paper the “higher-order” space is denoted M and now seems to contain not the fitness functions but rather all possible “searches,” that is, all possible search algorithms. This seems to be a substantial change in the entire concept of the “displacement problem,” which Dembski claims to be the main element of his discourse. It seems to be at odds with Dembski’s claim that his new paper is just “filling in details” of his earlier rendition of his ideas, this time on a more rigorous level, so that it is no longer “written in jello.”
In fact, his new rendition is substantially different from the original version found in his book.

**Dembski’s assisted searches and NFL regress.** Let us recall once again the problem Dembski discusses in his new paper. It is a search for a small target within a large search space. Mathematically analyzing this problem, Dembski concludes that only an “assisted search” has a reasonable chance of success, and such an “assisted search” is only possible if a “search for a search” is first conducted in a higher-order information-resource space. The latter, however, in turn requires information from an even higher-order space, and so on. Dembski calls this situation the “No Free Lunch Regress.” He maintains that “stochastic processes” (and biological evolution belongs in this category) are incapable of getting out of the regress in question, so for the search to be successful, input from intelligence is necessary (which seems in fact to be his *a priori* conviction, just not expressed explicitly).
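A quick count conveys why the “search for a search” only compounds the difficulty in Dembski’s schema: even counting only deterministic, non-repeating search orders, the space of searches is combinatorially far larger than the space being searched. The numbers below are my own illustration, not a calculation from Dembski’s paper:

```python
import math

n = 20                          # points in a toy search space
orders = math.factorial(n)      # deterministic non-repeating search orders over it
print(n, orders)                # 20 points, but 2432902008176640000 possible orders
```

So a blind pick in the higher-order space faces far worse odds than a blind pick in the original space, which is the shape of the regress being described.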
Dembski’s claim that the “assistance” can only come from an intelligent source reflects his antecedent belief but is not supported by any argument, either mathematical or heuristic. A detailed discussion of this point is not necessary, however, because, as I will show, Dembski’s entire schema is irrelevant for real-life searches.
Another comment that immediately comes to mind is that if a search is assisted by information from a higher-order space, the search algorithm that has acquired such information is not a “black-box” algorithm any more, so the No Free Lunch theorems, at least in the form they were proven by Wolpert and Macready, are invalid for such algorithms. (Wolpert-Macready’s proof was valid for black-box algorithms. A black-box algorithm has no advance knowledge of the fitness landscape and acquires such knowledge step-by-step, extracting it from the fitness landscape in such a way that it accumulates information about the already visited points in the landscape but still has no knowledge of any points not yet visited; it possesses no knowledge of a target either, if the search is target-directed.)
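To make the “black-box” notion concrete, here is a minimal sketch (my own illustration, not code from Dembski’s paper or from Wolpert and Macready) of a black-box hill climber: it learns the landscape only by evaluating the points it visits and knows nothing of unvisited points or of any target.

```python
import random

def blackbox_hillclimb(fitness, neighbors, start, steps, seed=0):
    """A black-box search: it queries `fitness` only at points it visits
    and has no advance knowledge of the landscape or of any target."""
    rng = random.Random(seed)
    current, f_current = start, fitness(start)
    for _ in range(steps):
        candidate = rng.choice(neighbors(current))
        f_candidate = fitness(candidate)   # knowledge of visited points only
        if f_candidate >= f_current:       # step-by-step accumulated feedback
            current, f_current = candidate, f_candidate
    return f_current, current

# Toy landscape: 8-bit integers, fitness = number of 1-bits. The algorithm
# never "sees" this definition; it only receives values at queried points.
def bit_neighbors(x):
    return [x ^ (1 << i) for i in range(8)]

score, point = blackbox_hillclimb(lambda x: bin(x).count("1"), bit_neighbors, 0, 1000)
print(score, point)
```

The moment such an algorithm is handed information about unvisited points (“assistance” from a higher-order space), it stops being black-box in this sense.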

Although this simple consideration casts doubt on Dembski’s entire discourse (unless he can prove that the requirement for an algorithm to be of the “black-box” variety can be dropped), it is of secondary importance, because the No Free Lunch theorems are only about the average “performance” of search algorithms and are irrelevant to the actual problem of a *specific* search algorithm facing a *specific* fitness landscape. In that respect, Dembski’s new paper is not an improvement over his earlier discourse and fails to account for the irrelevance of the NFL theorems to biological evolution.
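The “average performance” clause of the NFL theorems can be checked directly on a toy case. The sketch below is my own illustration of the theorems’ content (not anything from Dembski’s paper): it enumerates every possible fitness function on a three-point space and shows that two different search orders locate the maximum, averaged over all functions, equally fast, even though on any *specific* function one order may beat the other.

```python
from itertools import product

def steps_to_max(order, f):
    """Number of evaluations a fixed search order needs before it first
    hits the maximum value of fitness function f (a tuple of values)."""
    best = max(f)
    for k, p in enumerate(order, start=1):
        if f[p] == best:
            return k

# Two different deterministic, non-repeating search orders over 3 points.
for order in ([0, 1, 2], [2, 0, 1]):
    # Average over ALL fitness functions f: {0,1,2} -> {0,1}; there are 8.
    total = sum(steps_to_max(order, f) for f in product([0, 1], repeat=3))
    print(order, total / 8)   # both orders average 1.5 - the NFL result
```

On the single function (0, 0, 1) the first order needs 3 evaluations and the second needs 1, which is exactly why the averaged statement says nothing about any specific landscape.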

Moreover, regardless of the NFL theorems, Dembski’s discourse seems to be, again, irrelevant to real-life problems for a more universal reason. Here is why.

Do we need to analyze all of Dembski’s convoluted mathematics in order to see whether his conclusion is substantiated? No. There are several reasons to ignore Dembski’s mathematical exercise but I will now point to only one such reason which, I believe, is fully sufficient to reject Dembski’s conclusion.

**Is biological evolution a search for a target?** Biological evolution has nothing to do with the problem Dembski analyzes in his new paper - the problem of a search for a small target in a large search space.

Let us grant Dembski the assumptions and derivations he offered in his mathematical exercise. They may be perfectly correct or partially defective, but either way it will not affect our general conclusion.

Biological evolution is not a search for a target in a search space. It knows of no target. It is blind and its results are not predetermined, unlike the results of a targeted search employed in certain artificially designed evolutionary algorithms (such as in Dawkins’s “weasel” algorithm).
Look at Dembski’s example of a “search” for a particular protein. He calculates that the probability of “finding” a particular protein 100 amino acids in length via a random search in the space of all possible proteins of that length, assuming a uniform distribution of probabilities in this space, is so small (about 10^{-130}) as to make the search practically hopeless. The arithmetic here may be perfectly correct, but it has no relevance to real biological evolution. Evolution does not search for a particular protein determined in advance as a target. It conducts a variety of blind “searches,” the number of which is immense, and some of them result in the spontaneous emergence of certain biologically useful proteins whose biological role was not foreseen. The probability of any single such occurrence is beside the point: because of the very large number of such “searches,” the overall likelihood that some useful proteins will emerge is by many orders of magnitude larger than the number Dembski calculates. Moreover, imposing upon “random searches” a non-random factor (natural selection is an example) drastically accelerates the process. There are other natural factors, ignored in Dembski’s schema, which naturally “assist” the “search,” so it is “assisted” without input from intelligence and without any need to search a “higher-order” space. Dembski’s model of a protein’s components randomly assembling all at once is very far from the realistic scenarios discussed in evolutionary biology.
Dembski’s schema is utterly arbitrary insofar as it purports to relate to a natural biological process.
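For the record, the arithmetic behind the 10^{-130} figure is easy to reproduce, and the same few lines show how the odds change once any of many functional sequences counts as success and immense numbers of parallel “searches” run at once. The counts of functional sequences and of trials below are invented purely for illustration; nobody knows the real numbers:

```python
import math

# One preassigned 100-residue protein, 20 amino acid types, uniform draws:
space = 20 ** 100
p_single = 1 / space
print(f"p(single fixed target) ~ 10^{math.log10(p_single):.0f}")  # ~ 10^-130

# If ANY of n_functional sequences counts as a success and n_trials
# independent draws occur (both numbers hypothetical, for illustration
# only), the chance of at least one success is 1 - (1 - p_hit)^n_trials.
n_functional = 10 ** 90    # invented count of "useful" sequences
n_trials = 10 ** 45        # invented number of parallel blind trials
p_hit = n_functional / space
p_any = -math.expm1(n_trials * math.log1p(-p_hit))
print(p_any)               # effectively certain under these assumptions
```

The point is not these particular numbers but the structure of the calculation: the vanishing probability belongs to the single-fixed-target problem, which is not the problem evolution faces.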

Therefore all Dembski’s theorems and equations, as well as his conclusions, have no relevance to evolutionary biology.

**Conclusion: is Dembski’s mathematics relevant to intelligent design?** Are Dembski’s mathematical exercises relevant to intelligent design in general? I don’t think so. Indeed, let us assume that Dembski’s thesis is valid for targeted genetic algorithms like Dawkins’s “weasel” algorithm. Even if this is true, it has no relevance to the question of the validity of intelligent design. We know anyway that such artificially designed algorithms receive input from an intelligent agent, a human programmer, who supplies “assistance” to the algorithm in the form of feedback: at every step of the search it tells the algorithm whether it has come closer to the target, stayed at the same distance, or moved farther away. The same may be true of many other artificially programmed genetic algorithms.
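Dawkins’s “weasel” makes the nature of that “assistance” easy to see in code. The sketch below is a common reconstruction (Dawkins never published his original program, so the details here are assumptions): the programmer supplies the target string, and the “feedback” at every generation is simply closeness to that target.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
rng = random.Random(1)

def score(s):
    # The "assistance": the fitness function knows the target in advance.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in s)

parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
gen = 0
while score(parent) < len(TARGET):
    gen += 1
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)  # keep whichever is closest
print(gen, parent)
```

Remove the programmer-supplied TARGET from score() and the algorithm has nothing to climb toward; the intelligent input is built into the fitness function itself.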

However, this in no way means that an analogous situation exists in the biosphere where the search is not target-oriented and where therefore no input from an “assisting” agent is required. In fact, in biological evolution no “assistance” from a “higher-order” information space is possible because the outcome of a search is not known in advance, so the “search” (if we agree to apply this term, which is in fact a misnomer) is in all cases spontaneous and undirected.
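For contrast, here is an equally minimal sketch (again my own illustration, with an arbitrary randomly generated “environment”) of the same mutate-and-select loop with no target at all: the surviving genotype is whatever the environment happened to favor, and it is written nowhere in the program.

```python
import random

rng = random.Random(3)
N = 10                                            # genotypes: 10-bit strings
env = {g: rng.random() for g in range(2 ** N)}    # arbitrary "environment"

genotype = 0
for _ in range(500):
    mutant = genotype ^ (1 << rng.randrange(N))   # flip one random bit
    if env[mutant] >= env[genotype]:              # selection keeps the fitter
        genotype = mutant
# No target appears anywhere above; the outcome was not foreseen.
print(genotype, env[genotype])
```

The loop is the same as in a targeted search; what is missing is precisely the prespecified outcome, which is the element Dembski’s schema requires and the biosphere lacks.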
All this has no relation to the No Free Lunch theorems, either those for fixed landscapes or those for co-evolution, all of which are irrelevant to the actual encounters of *specific* natural genetic algorithms with *specific* fitness landscapes, whether fixed or co-evolving in the course of the search. Hence the question of whether Dembski’s mathematical exercise is formally correct or contains errors is irrelevant to the question of intelligent design’s validity. Even if artificial genetic computer programs indeed require input from intelligence (although some such algorithms work without it), this is not a question of concern when intelligent design is discussed. The latter’s validity is predicated upon whether or not intelligent input is required for natural, non-targeted searches.

Neither Dembski nor anybody else has so far suggested any evidence that such input is necessary for non-targeted searches. Dembski’s new paper has not done anything like that by a long shot. In my view this paper is an exercise whose heavy mathematical embellishment serves no other purpose than showing once more that Dembski, on the one hand, knows a lot of mathematical symbols, and on the other hand has problems with overall consistency and logic.

As the matter now stands, Ken Miller’s statement quoted by Dembski remains fully valid. (Besides Miller, it is shared by most scientists who have come across Dembski’s numerous publications - a good example is perhaps the anthology *Why Intelligent Design Fails*, which Dembski fails even to mention, let alone reply to.) Dembski’s new mathematical exercise does nothing to make the statement about the abject scientific futility of intelligent design any less true than it has been until now.

*PS. I have a surgery scheduled for tomorrow morning, so I’ll be unable to respond to critical comments, if any, at least for a while. MP*

Pretty bad logic from Dembski. He assumes that the target is extraordinarily unlikely, and the conclusion he wants follows.

In a way, his argument is orthogonal to the cosmological ID arguments. He calculates the probability of getting a specific protein, but has no way of estimating which other ones are functional. The cosmological IDiots can (in some cases) tell you what other results would be functional, but they have no way of estimating the probability of getting the observed result. So each type of IDist just assumes the missing information to be such that they get the desired result.

I think these guys are having crises of faith. Their exposure to science tells them that faith isn’t good enough, and they’re looking for evidence to justify their beliefs to themselves.

I have to agree, I believe that fundies in general [those that have to find “evidence” for the “inerrant Bible”] have a weak faith; and the science-bashing is done as a result in order to justify emotionally what cannot be supported rationally.

I think part of the problem, from Dembski’s point of view, is that he really *does* believe that narrow targets exist - that mankind, for example, is a target, or that the bacterial flagellum is a target. If the ‘bull’s-eyes’ he uses in support of his non-mathematical arguments for CSI are sufficiently large, then the probabilities drop off tremendously. But he envisions very tight targets. Why? Is it purely derived from his religious convictions? Is there any way to actually determine the target size given a specific ecological niche?

I’m going to reprise my comment #20301 from yesterday, since I hope Mark, PZ, or someone else really can tell me if I’m on the wrong track. Yesterday’s collected only the usual suspect, and the interchange got moved to the Bathroom Wall. My bad. :(

Thoughts?

I like the way H. Allen Orr put it in his review of “No Free Lunch”, originally published in the Summer 2002 issue of Boston Review (over 2 and 1/2 years ago):

How many times will people need to point this out to Dembski before he finally gets it?

Tim commented,

I suspect never. Given Dembski’s theological mindset, it is *impossible* for him to envision a universe without prespecified targets, just as it is impossible for me to envision a universe in which A != A.

The problem with Dembski is that he cannot seem to admit that he is or was wrong. The observation that NFL was written in jello, coming from a fellow mathematician, must have been a shock to Dembski. Thus the follow-up argument, based on many of the same errors as the original, shows how the mathematics can be correct and the conclusion still incorrect. That Dembski believes in a target goal for evolution shows how a mathematician’s viewpoint can be clouded by a lack of understanding of the underlying biology. Neither the NFL theorems nor his displacement problem has much relevance to evolution, but ID proponents are likely to be swayed by Dembski’s mathematics, which is not dissimilar from his flagellar protein calculations: garbage in, garbage out.

Dembski’s article is garbage.

Don’t make the mistake of publicly critiquing this trash until after it appears in some permanent form.

That’s just what Dembski wants so that he “can have the last word” – he has publicly articulated this very dishonest strategy of “putting critics to use” http://www.designinference.com/docu[…]Backlash.htm, http://www.pandasthumb.org/pt-archi[…]/000409.html. He describes this, in part, as follows:

In other words, Dembski has no interest in the truth, just his Christian right agenda.

Don’t play along.

Re: comment 20526 by Tim Tesar: I agree that Allen Orr’s critique of Dembski’s NFL book, which Tim quotes, was fine, but perhaps Perakh’s post at http://www.talkreason.org/articles/orr.cfm sheds some additional light on it.

Don’t worry–these critiques aren’t trivial details he can fix. They’re fundamental errors which, like IC, decades of patching can’t repair.

In answer to RGD, I suggest that for the creationist, everything that exists does so through God’s Will. If God had willed something different, we’d have that instead. As a result, it is absolutely necessary and required that everything that exists must have been the specific target to begin with. And so Dembski can’t address the general question “What is the probability that evolutionary processes can happen upon something of any utility whatsoever?” For Dembski, the only possible question is “What is the probability of evolution blundering onto God’s Will in every single instance?” And of course, these odds are beyond calculation to the point where even trying to describe them stretches Dembski’s math to the breaking point.

And so Dembski’s problem is one creationists suffer in general. When you start with an answer that is both wrong and rigidly unmodifiable, you are highly unlikely to happen on the right questions to ask. When better questions are pointed out, they are instantly absurd within the creationist context. To deny that observation only shows us God’s Will and nothing else, is to deny that God actually DOES anything – a thought more incomprehensible than the canonical sound of one hand clapping.

steve

I agree with your conclusion. I would urge the Panda’s Thumb contributors to make certain those conclusions are up front and as plain as paint.

Otherwise one without the skills to understand the arguments in detail sees just a complicated argument. That’s the whole point of the “creationism in a cheap tuxedo” strategy – if Dembski can pretend that his critics “need” to engage in lengthy (a relative term, of course) arguments to find “holes” in his theory, then Big Bill gets some satisfaction for his efforts.

Mark’s post is excellent and Dembski is obviously a deluded hack, but I might spend more time emphasizing the latter and describing in the plainest ways possible why that is the case. It seems apparent that Dembski could just as easily choose to show that the moon or the Grand Canyon must have been intelligently designed using his bizarre “definitions” and warped view of reality. Of course, it’s obvious why he chooses to spit at evolutionary biologists rather than geologists or astronomers. But surely they are next in line according to the Grand Fundie Plan.

Correct me if I’m wrong, but don’t most molecular biologists at this point agree that early life almost certainly didn’t have the current 22 “letter” vocabulary of amino acids? If so, why is Dembski still using 22 possible amino acids in his search space for proteins?

In just a second of googling, I’ve found all sorts of recent articles discussing this sort of thing. Does Dembski not have PubMed access?

http://www.ncbi.nlm.nih.gov/entrez/[…]ids=15214800 From the abstract: “It reveals two important features: the amino acids synthesized in imitation experiments of S. Miller appeared first, while the amino acids associated with codon capture events came last. The reconstruction of codon chronology is based on the above consensus temporal order of amino acids, supplemented by the stability and complementarity rules first suggested by M. Eigen and P. Schuster, and on the earlier established processivity rule. At no point in the reconstruction the consensus amino-acid chronology was in conflict with these three rules. The derived genealogy of all 64 codons suggested several important predictions that are confirmed. The reconstruction of the origin and evolutionary history of the triplet code becomes, thus, a powerful research tool for molecular evolution studies, especially in its early stages.”

I have argued for years that casting biological evolution as a target oriented search process is a lethally misleading metaphor.

Mark wrote

But as Mark remarks in an aside later in his essay, it is not true of *all* artificially programmed GAs. My company uses genetic algorithms, but they are not search algorithms in the sense in which Dembski uses that term, single-target-seeking algorithms. If we used them as single-target-seeking algorithms in Dembski’s sense we would have been out of business years ago. The GAs my company uses employ a fitness function (that we write), but we have no idea at all whether there is one ‘target’, many ‘targets’, or no ‘targets’ in the space(s) in which our GAs evolve.

The evolving artificial agents in our GAs are limited to purely local information about their relative fitnesses, and they know nothing of targets. We don’t know the topography of the fitness landscape(s), and we don’t know whether, where, or how many ‘targets’ there might be on those landscapes. What’s more, we have no way of even knowing if one of our artificial agents has reached a ‘target’! And, like biological evolution, we don’t care.

We evolve populations of *satisficing* solutions - good enough solutions - in our GAs, and those *populations* form the application for which we get paid. Those populations have to operate in a vicious negative-sum game in the real world (they autonomously control tens of millions of dollars of risk in the derivatives markets), and we dare not fool ourselves into believing that there is a single magic bullet ‘target’. We truly have no idea whether a ‘target’ was even found in the search space. A good enough population of solutions, where “good enough” is measured in terms of a functional performance metric that has no information about what the structure of a good solution might be, is fine with us and with our clients, regardless of where the members of the population are in the space of interest. As long as we can write a fitness function that identifies a desirable (to us and our clients) performance measure for the task our artificial agents perform, we neither know nor care where a specific target might be in the space in which our entities evolve.

One other note: Dembski’s last stand for the force of his mathematical analysis must be on the sparseness and distribution of potentially functional entities in the space(s) of interest. His misrepresentations of Douglas Axe’s 2000 paper on mutagenesis experiments are intended to suggest that there is in fact just one tiny target space out there, that biological evolution must somehow search for it, and that the search must fail (according to Dembski) in the absence of intelligent guidance. Matt Inlay dissected that misrepresentation here on PT last month.

RBH

What an interesting thread, from top to bottom!

RBH

And thus we glimpse Dembski’s other non-scientific shoe that never seems to drop: why did the mysterious aliens whom Dembski alleges designed life on earth have such a fascination and appreciation for necrophilic and waste-eating microorganisms? Perhaps Dembski’s alleged alien designers might have been working for some microscopic clients who enjoyed a good creepshow.

They [the organisms] can be seen to make functional sense within the context of a larger system. But this moves on to the point I’ve been unable to discern, namely the *Whole-System Design Intent* of the hypothetical Designers. In other words, granting for the sake of argument that various individual organs are “designed”, and the organisms that possess them are “designed”, *what is the whole system itself intended to accomplish?* In any of the engineering environments in which I have worked, you’d never get past the first stages of Design Review without revealing the Design Intent. We can discern the design intent of a designed object such as the Antikythera Device even when substantial portions thereof have been destroyed by the ravages of time and salt water.

So what, exactly, is Life On Earth, considered as a complete designed system, supposed to accomplish?

If we can “recognise design” in the world, we should also be able to discern Design Intent.

God’s Will. What else?

Nice article. Just out of curiosity, does Dembski acknowledge that evolutionary analogues to the landscapes described herein are non-targeted? I’m guessing the answer is either “no” or that he conflates his answer in such a way as to try and snow us, but I don’t know.

Oh, but you see, creating a non-targeted landscape for biological life would take a feat of even GREATER intelligence! :)

Nice point, RBH. I think it’s generally acknowledged that GAs measure up poorly against intelligence in that they are often much slower to reach a particular, very specific solution or target. On the other hand, they are well known for finding surprising and unexpected solutions to problems no one was even aware of, or for finding a wide range of different traits that help achieve a very broad, non-specific goal.

Surprise, surprise: what is life? What is the history of life? A long, roundabout walk that’s ended up with a wide range of solutions to a VERY broad target indeed (the prolonged existence in time of various patterns).

I can kinda see how Dembski’s work, as a general-case description of an unbounded, near-infinite, absolutely random combinatorial space for the emergence of organic proteins, may be a useful starting point for understanding biogenesis if it can be debugged. Any mapping of real-world, bounded, finite, symmetrical or partially symmetrical conditions onto that space could theoretically point to potentially useful lines of research. I dunno.

I can, though, envision RGD’s universe where A != A. It contains no more than 2 discrete quanta and is spatially and temporally constrained to a point singularity. It happens virtually all the time.

The Cat only grinned when it saw Alice. It looked good-natured, she thought: still it had very long claws and a great many teeth, so she felt that it ought to be treated with respect.

‘Cheshire Puss,’ she began, rather timidly, as she did not at all know whether it would like the name: however, it only grinned a little wider. ‘Come, it’s pleased so far,’ thought Alice, and she went on. ‘Would you tell me, please, which way I ought to go from here?’

‘That depends a good deal on where you want to get to,’ said the Cat.

‘I don’t much care where–‘ said Alice.

‘Then it doesn’t matter which way you go,’ said the Cat.

‘–so long as I get somewhere,’ Alice added as an explanation.

‘Oh, you’re sure to do that,’ said the Cat, ‘if you only walk long enough.’

Next question.

Good luck with your surgery, Mark.

Jim Harrison, excellent post. It left me grinning, just like a .… well, you know!

First post by a Lurker.. hi all!

RBH Wrote:

Speaking of GAs: in computer science, algorithms are classified by their complexity (or “order”) as logarithmic, linear, quadratic, exponential, etc.; in other words, by how efficiently the algorithm achieves its objective given N entities to operate on. Different algorithms can produce the same results yet have radically different performance.

An example in search algorithms is a linear search vs. a binary search. In the former, performance degrades in direct proportion to the number of entities searched. In the latter, the cost of each additional entity grows only logarithmically, because half of the ordered search space is thrown away on each comparison until the match is found.
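The linear-vs-binary comparison is easy to sketch in a few lines of Python (my own toy illustration, not from the comment above):

```python
# Linear search costs O(n) comparisons; binary search on a sorted list
# costs O(log n). Both return the same answer.
from bisect import bisect_left

def linear_search(items, target):
    """Scan every element until a match; cost grows linearly with len(items)."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    """Repeatedly halve a sorted list; cost grows logarithmically."""
    i = bisect_left(items, target)
    return i if i < len(items) and items[i] == target else -1

data = list(range(1_000_000))
assert linear_search(data, 765432) == binary_search(data, 765432) == 765432
# The linear scan makes ~765,000 comparisons here; binary search needs ~20.
```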

Evolution has no target, but even GAs without a target should be classifiable by their order of complexity.

In addition to creationist assertions about evolution having a “target”, the math I’ve seen used to “disprove” evolution typically asserts that the process of evolution is as inefficient as they can make it: an exponential-order algorithm. A typical example is to calculate the chance of the human genome being achieved at random from a series of die rolls (4 values per base pair, raised to the power of the number of base pairs in the human genome; beyond astronomical).

Of course this doesn’t accurately model the process of evolution at all, and my suspicion is that evolution is closer to logarithmic or linear than exponential. My question is: have GAs given us a good idea of what the “order” of the actual biological process of evolution is, or some outer limits on it?
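For what it’s worth, a toy cumulative-selection model (a “weasel”-style sketch of my own devising, with parameters chosen for illustration, and not a claim about biology) suggests the commenter’s suspicion is on the right track: the generations needed grow far more slowly than the 4^n cost of blind sampling.

```python
# Mutation plus keep-if-not-worse selection reaches an n-letter target in a
# number of generations that grows modestly with n, whereas pure random
# sampling would need on the order of 4**n trials.
import random

ALPHABET = "ACGT"

def score(seq, target):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(seq, target))

def generations_to_match(target, rate=0.05):
    """Evolve a random genome toward target; return generations used."""
    genome = [random.choice(ALPHABET) for _ in target]
    gens = 0
    while "".join(genome) != target:
        child = [random.choice(ALPHABET) if random.random() < rate else g
                 for g in genome]
        if score(child, target) >= score(genome, target):
            genome = child
        gens += 1
    return gens

for n in (8, 16, 32):
    target = "".join(random.choice(ALPHABET) for _ in range(n))
    print(n, generations_to_match(target))
```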

Thanks, MSM Software Engineer

Let’s say I flip a coin one million times and get the following sequence:

H T T H H T T T H etc…

According to Dembski, I must not have flipped a coin to obtain that sequence because the odds of getting that particular sequence are extremely small.
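The coin-flip point is easy to make concrete. In this sketch (my own illustration), any specific sequence of n fair flips has probability 2^-n, yet one of them certainly occurs:

```python
# Every specific sequence of n fair-coin flips has probability 2**-n,
# yet some sequence certainly occurs -- improbability alone implies nothing.
import random

n = 20  # a small n keeps the numbers readable
sequence = [random.choice("HT") for _ in range(n)]
prob = 0.5 ** n
print("".join(sequence), "had probability", prob)  # ~9.5e-07, but it happened
```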

That lances the boil effectively, I think.

What I find most disconcerting about ID is the idea of a target, whether it is a whole organism (H. sapiens, etc) or a macromolecular sequence (protein or DNA). This seemingly necessary end product of the processes of a creative intelligence smacks of one of the most contentious issues in theology - predestination. As an opposite viewpoint, evolution is free will.

The single biggest fallacy in Dembski’s paper is the idea that evolution is searching for a target, T. T is a sequence of DNA that can build and operate a new organism, which will go on to build and operate another organism and so on.

But evolution only works when you already have a population of organisms that self-reproduce with heredity. Any such organism doesn’t have to search for T; it’s already inside it.

To make it worse for Dembski’s theory, when an organism reproduces with mutations, it does search a new space, which may or may not be in T; but since the vast majority of the DNA will be the same in parent and offspring, the space searched will be very close to the parent’s position in T. It will NOT be doing a random search of the entire search space of the human genome, which Dembski labels Omega.

Dembski should re-title his paper “Missing the Point”, or “Misunderstanding the Most Basic Things About How Evolution Works, With Mathematics.”

Dembski presented his theory on www.arn.org in the Intelligent Design Forum thread “Fundamental Theorem of Intelligent Design”. It’s also being discussed in the “Displacing Darwinism” thread. Read them for more comments.

He also discusses this thread and the one by Wesley Elsberry in the “Displacement at the Panda’s Thumb” thread.

P.S. RBH gets credit for first pointing out that living organisms don’t have to search for T because they are already in it.

Let me restate part of my last message. Dembski mentions the threads on Panda’s Thumb and gives links to them. He doesn’t discuss them.

It’s like pouring sand out of a bucket. You’ll get a pile of sand with this particular grain at the top and those particular ones at the bottom. How amazingly unlikely is that! You could pour the same bucket out until the heat death of the universe and you’ll never get the same configuration again. But you’ll always get a pile of sand in the same shape, more or less.

Dembski’s trying to show that because you can’t predict what grain of sand will go where, God is placing each grain to ensure that the pile is roughly conical.

Doesn’t wash, does it?

R

Dembski represents assisted searches as the set of probability distributions over a search space such that the density at the target is greater than it would be under a uniform distribution. He starts with a finite approximation, in which the search space is represented as a collection of targets and non-targets. Then he simulates a probability distribution by choosing any M of those elements (allowing for repetition) such that the ratio of targets to M is at least q. Now do this infinitely many times (i.e., let M range from 0 to infinity), and add up all the successes. Then find the average rate of success by dividing this number by the total number of ways one can choose any M elements with repetition.

All of this is elementary combinatorial analysis (balls into boxes). Say you have 20 black and red balls in an urn. You’re really interested in the red ones. So I let you pick a ball; you record its color and put it back. Shake up the urn, and repeat the process M times. Now keep two tallies. One is the number of times you have done these experiments (this number carries over as you vary M). The other is the number of times you successfully pick M balls such that the ratio of red balls to M exceeds a prespecified threshold. This number also carries over as you vary M from 0 to infinity.

Any time you pick red balls at a rate greater than q (which in turn is greater than the uniform probability), Dembski claims that the trial represents an assisted search. In the actual formalism, the balls are probability distributions concentrated on points (Dirac measures). So by picking balls, you’re in essence creating a probability distribution in which at least a ratio q of those points are targets. According to Dembski’s displacement theorem, then, the ratio of successful trials (i.e., assisted searches) to the total number of trials (i.e., the space of all probability distributions) approaches zero as the number of balls in the urn approaches infinity (i.e., as the search space becomes infinite in size). This is the strongest conclusion; everything else seems to me to be philosophy disguised as mathematics.
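Dembski’s formalism aside, the urn picture itself is easy to simulate. In this rough Monte Carlo sketch (the parameters M = 50, q = 0.1, and the one-target-per-urn setup are mine, chosen for illustration), the fraction of draws that would count as “assisted” collapses as the urn grows:

```python
# One "target" ball among N in an urn; draw M balls with replacement and
# count draws whose target fraction exceeds q. As N grows (the "search
# space" gets bigger), the success rate plunges toward zero.
import random

def success_rate(n_balls, m_draws=50, q=0.1, trials=20_000):
    hits = 0
    for _ in range(trials):
        reds = sum(random.randrange(n_balls) == 0 for _ in range(m_draws))
        if reds / m_draws > q:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, success_rate(n))  # the rate falls as the urn grows
```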

Well, if you made it this far through the hand-waving analogies, the obvious question arises – can you represent the space of assisted searches the way Dembski proposes? In other words, is there a good one-to-one mapping of all probability distributions to algorithms which accomplish the task? Methinks this is a suspect analogy. I have seen algorithms represented by a string that collects all the points visited in a function space along some path. So for instance, a steepest-descent search would be represented as the list of points that lie on the path of steepest descent during each iteration until termination. For each target, then, there are potentially infinitely many paths that can reach that target at a specified rate higher than blind search. However, by treating the searches probabilistically (or rather combinatorially), I think some accounting is amiss. In other words, what if there are many algorithms, not just one, that can induce a particular probability distribution with the specified target probability better than uniform?

Working out the Ultimate Question to Life, the Universe, and Everything?

Not expecting to help,

Grey Wolf

I didn’t mean at all that things living today are better adapted to their present environment, no. But to the degree that the present environment has commonalities with the past environment, the fact that everything living today comes from the small percentage of living things that prospered and reproduced in the previous generation(s) does show an advantage.

I was using ‘best’ only in the sense that all of our ancestors were members of that very small percentage of living things that successfully reproduced. Lots of the business of living has to do with how well we are internally constituted, though, and those demands aren’t typically abated by changes in environment.

Just to reassure myself that I’m not entirely off with my maths.

The chance you strike a target T in a huge Ω, if you choose with the uniform distribution U, is some minuscule p. And the chance you strike T by first randomly and uniformly choosing a distribution out of all distributions, and then choosing in Ω according to this distribution, is still the same: p.

This holds regardless of the fact that the chance that the distribution you pick first has a success probability higher than q is (as Dembski has proven) virtually nil.

So, even if we grant him his postulated ultimate target T, the thing he has proven is almost completely useless crap.
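The commenter’s claim is a standard averaging (tower-property) fact, and a quick simulation bears it out. This is my own sketch; I use a flat Dirichlet distribution (normalized exponentials) to stand in for “uniformly choosing a distribution” over a finite Ω:

```python
# Pick a distribution uniformly at random from the simplex, then sample a
# point from it. The overall chance of hitting the target T equals the
# plain uniform-sampling probability p = |T| / |Omega|.
import random

N, target = 20, {0, 1}          # |Omega| = 20, |T| = 2, so p = 0.1
p = len(target) / N

def random_simplex_point(n):
    """Uniform point on the n-simplex via normalized exponentials."""
    w = [random.expovariate(1.0) for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

hits, trials = 0, 100_000
for _ in range(trials):
    dist = random_simplex_point(N)
    x = random.choices(range(N), weights=dist)[0]
    hits += x in target

print(hits / trials, "vs p =", p)  # the two agree up to sampling noise
```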

Isn’t it about time Dembski applied the same level of scrutiny to the Bible that he applies to “Darwinism”? Oh wait, unless one resorts to Bible codes the hammer of maths can’t easily be applied to the nail of 2000 year old literature.

Re “suggesting in his book The Design Inference that representing certain notions in a mathematically symbolic form somehow “proves” them”

Hmm. If he thinks that works, why don’t we write the basics of evolution theory in mathematical symbols, thus “proving” it. Think that would work?

Henry

I find Dembski’s whole line of argumentation amusing.

Among the most successful search algorithms intelligent designers (the ones we know for sure to exist) use are ones that are based on concepts borrowed from biological evolution.

Perhaps instead of cloaking ID in symbols, Dembski should venture out into the real world and see how “searches” are actually done.

What target? Dembski presumes there’s a target, a measurable point in the “space”. No, each point in the space is the target; at that point. Just as each birth in a species establishes its own point in time. I am currently the end point for my little branch of the tree of life. I am target, hear me roar.

Thomas Aquinas, as I recall, and I confess it’s been a long time, started his proof of God with the assertion that God exists, therefore the proof will follow. It seems to me that Dembski is using the same false logic.

Data! Data! Data!

Mark Perakh posted

I fully agree that mathematics alone is not sufficient to learn something about biological evolution. We need data.

A very interesting example is a graphical representation of protein structure space by Jingtong Hou, Se-Ran Jun, Chao Zhang and Sung-Hou Kim (2005), “Global mapping of the protein structure space and application in structure-based inference of protein function”, PNAS 102(10):3651–3656 (published online before print February 10, 2005; doi:10.1073/pnas.0409772102; open-access article).

Two very intriguing features of the protein structure space are: (1) protein structures are centered around diverging axes that originate from one region, called “the origin of protein structure space”; (2) the structures of small proteins or peptides are mapped close to the origin. These features are very suggestive of descent with modification of all protein structures.

Knowing this, is it really that difficult for random mutation and non-random natural selection to discover all the proteins we find in nature?

“Well, if you made it this far through the hand-waving analogies, the obvious question arises — can you represent the space of assisted searches the way Dembski proposes? In other words, is there a good one-to-one mapping of all probability distributions to algorithms which accomplish the task? Methinks this is a suspect analogy.” — lurker

No. No one can “represent the space of assisted searches the way Dembski proposes.” There is more to it (intelligence). But there is no such thing as a “one-to-one mapping of all probability distributions to algorithms which accomplish the task.” And Dembski does not assume it. Uh, that’s what the paper is about, isn’t it? I mean most of it. If there were such a map … Whoa! You’ve got Creation! “Hand-waving analogies”? The first half of the paper is a trivial (“elementary”) refutation of the “Darwin algorithm” as “blind search”. The second half, anticipated in the first, is an argument that no purely mundane intelligence can do this either!

Not using these techniques! (The only ones we know!)

Dembski doesn’t use the term “Darwin algorithm”. Does Rock contend that “blind search”, as defined by Dembski in this paper, is a reasonable representation of evolutionary theory?

I think this is a mistaken argument. By an assisted search, Dembski means a pair (s, j) of what he calls a strategy function s and an information function j. The function j takes all previously visited points in the search space and returns a value. The function s takes all previously visited points and all previous j-values, and returns the next point to be visited in the search space. Any GA is an “assisted search” in this sense, including the ones used by your company.

As for targets, it is important to realize that the target is something imposed externally for our amusement. It need not have any relation or relevance whatsoever to the dynamics of the GA or the optimization problem to be solved. (At least no such constraints have been mentioned by Dembski.) Indeed, the less relation and relevance, the stronger Dembski’s case for the target being hard to find! A “target” in Dembski’s sense is whatever region of the search space Dembski chooses to refer to by the word “target”. It is therefore not possible to say that your GAs lack a target in Dembski’s unorthodox sense.

This unorthodox notion of a target comes at the price of departing from the usual concepts of search and optimization. In optimization, the j-function would be a function describing the optimization problem and the “target” would be the (probably unknown) region of the search space containing satisficing solutions. Hence, the target would be completely determined by the j-function, and there would be absolutely no freedom for Dembski to pick the target himself. In decoupling the target from the j-function and the j-function from the problem-to-be-solved, Dembski has made his notion of search so general that most of his searches lack sensible interpretations and only a tiny subset correspond to anything encountered in optimization or biological evolution.
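The (s, j) formalism described above can be written down directly. The following sketch is my own illustration (the toy fitness function and all names are invented); it shows that even a trivial hill-climber fits the template:

```python
# An "assisted search" as a pair (s, j): j scores the latest visited point,
# s uses the history of points and j-values to propose the next point.
import random

def j(history):
    """Information function: fitness of the latest visited point."""
    x = history[-1]
    return -(x - 7) ** 2          # toy problem: the optimum is at x = 7

def s(history, j_values):
    """Strategy function: mutate the best point seen so far by +/-1."""
    best = history[max(range(len(history)), key=j_values.__getitem__)]
    return best + random.choice((-1, 1))

history, j_values = [0], [j([0])]
for _ in range(200):
    history.append(s(history, j_values))
    j_values.append(j(history))

print(max(history, key=lambda x: -(x - 7) ** 2))  # the hill-climber finds 7
```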

Rock claims, “But there is no such thing as a “one-to-one mapping of all probability distributions to algorithms which accomplish the task.” And Dembski does not assume it.”

But Dembski writes, “Ω must therefore be embedded in a larger environment capable of delivering an assisted search that induces μ0, which then in turn is capable of delivering a solution from the target T. But if this larger environment is driven by stochastic mechanisms and if a stochastic mechanism within Ω0 is responsible for delivering μ0, then μ0 is itself the solution of a stochastically driven search…

But if μ0, in representing an assisted search that effectively locates the original target T, is a solution for some new search, to what new target qua solution space does μ0 belong? Clearly, the solutions in this new target will need to comprise all other probability measures on Ω that represent assisted searches at least as effective as the search for T represented by μ0 …

Since any probability measure ν on Ω for which ν(T ) > μ0(T ) represents an assisted search at least as effective in locating T as the assisted search that induced μ0, the new target is therefore properly defined as follows: T* =def {ν ∈M(Ω) : ν(T ) > μ0(T )}.

Here M(Ω) denotes the set of all Borel probability measures on Ω. Note that any probability measure within T is at least as effective as μ0 for locating T*, whereas any probability measure outside will be strictly less effective than μ0.”

So back to my question, to which Rock does not provide a satisfactory answer, what is the proper higher order search space: the space of all assisted searches or the space of all induced probability measures? If one argues they are equivalent spaces, then Dembski does implicitly argue that there is a good mapping from one space to another.

Rock then asserts, if such a mapping exists, then it implies Creation. Please elaborate.

=========================================================

Erik brings up an interesting point. Just how useful is Dembski’s result? Just as NFL does not speak to the utility of optimization searches in “practical” problems, why is Dembski’s result relevant to evolutionary biology? Or is this another facet of his mathematical turtles-all-the-way-down?

Erik12345:

Dembski has made his notion of search so general that most of his searches lack sensible interpretations and only a tiny subset correspond to anything encountered in optimization or biological evolution.

That’s what I thought. Has anyone come up with a more realistic mathematical model of biological evolution?

Another morning, another read of Dembski’s paper. I am more and more troubled by how he motivates his substitution of assisted searches for probability distributions. It turns out that most of my concerns voiced thus far in these blog commentaries are found in section 3, entitled appropriately enough “A Simplification”.

Here Dembski argues that there is only one good instance of a blind search. In other words, this is the “baseline”: all assisted searches presumably work better than this. In what sense better? In the sense that assisted searches may be thought of as sampling the search space with an induced probability distribution, and that the probability of finding the target T is q, which is greater than the p for a uniform distribution.

This seems at first glance a rather peculiar criterion. Consider for instance a simple assisted search – a Newton descent method. I have a complicated function within which a small target resides. What is the probability that a Newton descent method “finds” the target? According to Dembski, I can substitute each step of the Newton optimizer with an i.i.d. random variable, and calculate the probability that at any step the target is reached.

Let’s apply Dembski’s formulation. For instance, what is this induced probability function for the Newton optimizer operating on some function? I suppose one could partition the search space according to which zero-gradient target point an initial condition in that partition would deterministically converge to. Then, one would probabilistically select an initial point and lookup which partition that point falls within. So, for instance, for y = x^2, you are always guaranteed to find the target. But for some arbitrary polynomial with many zero-derivative points, you have to calculate the size of the interval on the real number line where the Newton optimizer will give you the zero-derivative point of interest.
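This partition-by-basin idea is easy to make concrete. The following sketch is my own (the function f(x) = x^3 - x and all parameters are chosen for illustration); it estimates the probability measure a Newton search “induces” on the roots by sampling initial points uniformly:

```python
# Sample starting points uniformly on [-2, 2], run Newton's method on
# f(x) = x**3 - x (roots at -1, 0, 1), and tally which root each start
# converges to. The tallies estimate the basin sizes, i.e. the probability
# distribution the deterministic search "induces" via its initial condition.
import random

def f(x):  return x**3 - x
def df(x): return 3 * x**2 - 1

def newton(x, steps=50):
    for _ in range(steps):
        d = df(x)
        if abs(d) < 1e-12:
            break  # avoid division by a (near-)zero derivative
        x -= f(x) / d
    return round(x, 6)

counts = {}
for _ in range(10_000):
    root = newton(random.uniform(-2, 2))
    counts[root] = counts.get(root, 0) + 1

print(counts)  # basin sizes for the roots -1, 0, and 1
```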

But after all this analysis, what is the probability function induced by a Newton optimizer on some space? Well, Dembski offers no guidance except to excuse himself from having to deal with it: “Does this simplification adequately capture the key features of blind and assisted searches that enable their relative effectiveness to be accurately gauged? To be sure, these simplifications dispense with a lot of the structure of U and focus on a particular type of blind search, namely, uniform random sampling. But representing blind search as uniform random sampling is, as argued at the start of this section, well warranted. Moreover, any information lost by substituting μ for U is irrelevant for gauging U’s effectiveness in locating T since its effectiveness coincides with the probability of U locating T in m steps, and this probability is captured by μ.”

The notion that one can calculate the probability of an assisted search at a particular step is strange. Especially for the model problem I suggested above, what is the probability of U (the Newton optimizer) locating a particular T (i.e. a zero-derivative point) in m steps? Well, it is deterministically 1 (sometimes not, if it takes more than m steps) whenever the initial condition lies within a partition that always converges to that target point. Otherwise it is zero. Strangely enough, with no a priori knowledge about my function space, the performance of a Newton-optimizer is equivalent to a blind search on my initial conditions. Yet, the probability of finding those initial conditions may as well be zero with a finite target space in an infinite search space. Is a Newton-optimizer then as effective as a blind search?? I guess this is where NFL rears its ugly head. Yet, with the recognition that Newton-optimizers are a mainstay of modern optimization techniques, one should realize that NFL really does not matter in practical problems.

This analysis emphasizes how ad hoc Dembski’s conclusions must be. With no a priori information about my search space, I really have no knowledge about how good my search result is. Equivalently, I really have no knowledge of what a target is. Yet Dembski merely dismisses this problem. He insists upon a target, and defines it in such an ad hoc manner that the induced probability measure is zero for his only instance of blind search – random uniform sampling. Using Dembski’s jargon, he has no rationale for relabeling a “candidate solution” as _the_ target solution. This echoes a lot of the criticisms already cited on this blog – namely, that the notion of a target must be defined post hoc. It seems to me that, in order to defeat Dembski’s argument, one merely considers a search process in which every point is a target (or even vice versa, a non-target). Then what does his analysis provide? Well, p = 1. Any assisted search would result in q = 1. And thus, suddenly there is no problem! Metaphysically, this is “everything is designed” turned into “everything is designed once we figure out its purpose”.

=========================================

Finally returning to a point made by Rock: “The first half of the paper is a trivial (“elementary”) refutation of the “Darwin algorithm” as “blind search.”

I will simply quote Dembski: “To be sure, these simplifications dispense with a lot of the structure of U and focus on a particular type of blind search, namely, uniform random sampling.”

Sure, the Darwinian algorithm is not a purely blind search; I don’t see this as a controversial point. But wouldn’t it be interesting if the structure of U (the assisted search) in the end rests upon a “blind” process, much like a Newton optimizer depends on a blind choice of initial conditions?

Round and round and round.

I asked this about 20 times some time ago, but got bored with waiting for an answer.

Forget looking for 1 in 20^100 sequences of proteins - that’s an arbitrary suggestion for a typical protein anyway. If you don’t like the specification of 1 in 20^100 for a 100 amino acid protein, then what *do* you like? Any base being one of four possible; only 50% of sites being significant? Well done. You have reduced the specification to 1 in 5^50. That represents a pretty wide range of possible sequences - but still not more than a probability of one corresponding protein in 8*10^34. The estimate for a protein with *any* functionality that was cited a few months ago was 1 in 10^11. What is your estimate for production of a random protein that has new functionality *useful* to an organism? And what is the engine for production and testing of random proteins anyway? How does it search the dominant space of useless proteins without drowning in rubbish strings of amino acid residues?
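For what it’s worth, the arithmetic in this comment checks out (a quick check of my own):

```python
# With 50 "significant" sites, each tolerating 4 of the 20 amino acids,
# the per-protein probability is (4/20)**50 = 5**-50,
# i.e. about 1 chance in 8.9e34 -- matching the "8*10^34" figure above.
odds = 5 ** 50
print(f"1 in {odds:.1e}")
```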

Now, knowing your propensity here to focus on irrelevant details to show how you have dealt with an argument, somebody will doubtless post a reply saying, “Actually, aCTa is wrong, because he only asked it 13 times.”

… come to think of it, human DNA is supposedly 99% similar to chimp DNA - a fact of which we are regularly reminded.

Which means that presumably there is significance in the 99% correspondence (if neutral change was widespread - as implied in the argument above relating to a specification of 100 amino acids being artificially restrictive - then why should the similarity be so high? How can it be possible to say that this is human DNA and that is chimp DNA when there is only 1% difference between them?).

And also there is significance in the 1% difference (if a specification of 1 in 100 is artificial, how is it possible to say that there is a consistent, identifiable 1% difference between chimp and human DNA?).

Just asking .…

Lurker,

What puzzles me about Dembski’s paper - apart from the content - is why he hasn’t submitted this article, minus the evolutionary speculation, to a mathematics or numerical methods (optimization) journal. For example, the SIAM Journal on Optimization.

I’m sure that if there is something new and correct in his ideas then it would be of general interest. By separating what is an abstract mathematical problem in searching for global minima from speculations as to the consequences of his conclusions, Dembski has the opportunity to get a reputable publication. If it holds up then he can certainly make hay.

So, anyone know if Dembski plans to do this? Perhaps contributors at ARN could ask him.

DF

My “estimate for production…”? I’m going to assume you mean my estimate of the “probability of production…”. (But really: if you’re going to complain that folks are avoiding your questions, it behooves you to state them more clearly.)

The probability that an organism will “randomly” produce any particular protein is negligibly small. Organisms generally produce the proteins that are encoded in their genomes.

The engine of production and testing? Why, that’s random mutation and natural selection. Surely you’ve heard of it. But a key point that your question seems to be missing is that “random” proteins do not spring into being to be sampled for some, any, functionality. Proteins, which generally have some functionality in the first place, are continuously sampled for any additional survival/reproduction-enhancing functionality they might possess.

Hope that answers your question. Thanks for asking.

Why would one drown? For Dembski and Behe’s blabbering to become real science, they’d have to have a reliable way to estimate the percentage of strings which are biological rubbish. Since they can’t do that, they use Creationist Statistics, which is to say, they sneak extreme improbability in through their assumptions, then discover it.

Steve: Conversely, if random mutation and natural selection are to become real science, you also need to have a reliable way to estimate the percentage of strings that aren’t biological rubbish. If you can’t do that, then your argument is just handwaving - you don’t actually have a mechanism, just a story. This is what I am (still) waiting for. It would also be nice to know where you see this trial and error going on in organisms (to make new proteins, not in something completely different like the immune system), but we need to start somewhere. Incidentally, I suspect that in (say) humans, if the engine is new DNA sequences at the time of germ cell production, with the exception of drastic small changes, like the almost universally quoted sickle-cell anaemia mutation, the functionality of any new proteins would be irrelevant compared to all the other traits that are present in the organism, and environmental factors.

Russell: Sorry if I’m not making my argument clear. I tend to come here when I ought to have already gone to bed. I have already said that I am not interested in any particular protein. I have offered very generous terms - only 50% of amino acids in the protein sequence being significant, and each of those being one of four possible amino acids. Note that if it is true that human DNA is only 1% different from chimp DNA, then this lack of specificity doesn’t correspond to real life - if changes are “that neutral” then I would have thought you would expect much greater than 1% diversity within a species (which was the thrust of my second post).

However, put that to one side for now. The point I am making is, what level of specificity do you want before a new protein expresses functionality? If you don’t like the creationist or ID estimates of improbability of specification, then what level of specification is more accurate? If you don’t have an estimate of this, again, you don’t have a theory that is open to falsification/demonstration, so it doesn’t merit being called science yet.

Also, you can’t keep saying “proteins that already exist” - I know that from scientific papers on new protein functionality, most times new functionality is derived from already existing proteins. But at some stage, new proteins have to appear - again, if you can’t explain (for example) where proto-bodgase comes from, then it is almost irrelevant that you can get from chimp-bodgase to human-bodgase, or that gdobase looks like bodgase with a transposed section.

Tell us what part of “Gene duplication and subsequent divergence is a common occurrence” you have trouble with.

DavidF,

I can only speculate about Dembski’s motivations. After Shalizi recently tore into Dembski’s first article (of 7) about some supposedly ‘novel’ measure of information, I remember Dembski complaining that he had become disillusioned with rigorous mathematical research, and that his primary passion now was in the philosophical (and some may say, apologetic) aspects of it. Having said that, I doubt that even SIAM J Opt would be an appropriate place for Dembski’s article, since it really does not have any practical applications. That is, I cannot think of a good practical problem that involves searching for search methods (a metasearch) with the peculiar properties that Dembski cares about. Dembski would likely argue that such an example of a practical problem is the materialist conspiracy to treat the origin of the universe as a blind process. In my view, that would be a philosophical strawman. In terms of NFL-type analysis regarding general properties of search algorithms, Dembski brushes off, in my opinion, significant criticisms of his premises (e.g. English) and his conclusions (e.g. Wolpert) with simple assertions and without proof. This hardly makes for good mathematics, but as we can see, it is great for polemics.

Still, underlying all of his articles are these peculiar premises. Dembski’s main failure is to treat a ‘target’ as a static, ahistorical, context-independent concept. Dembski’s game is simply that once he has your agreement on a prespecified target, stripped of any context, he shows you that it is nearly impossible to find it by just doing random walks. I think Dembski himself sees the fallacy of this argument. In many ways, Dembski has already conceded the ground by formulating his thesis as a regress. Note that his argument isn’t really that evolutionary mechanisms cannot create complicated designs. Rather, it is that those mechanisms (or assisted searches) that can create complicated designs can’t be found by blind processes. Regresses always seem to me to be desperate arguments.

Let me give another example that hopefully people can relate to. Let’s say that you’ve created a specification that is your spouse. Now obviously you’ve found this person. But what were the chances that you were successful in acquiring this target? Did you in fact always have this specification for all of your existence, such that it is an unambiguous match to your current spouse? If we applied Dembski’s methodology, your chances are pretty grim: 1 in say, roughly, 3 billion people. Not only that, you have to be at the right place at the right time, i.e. chances are inversely proportional to the surface area of the Earth and the amount of time you’ve been alive. According to Dembski, if you had your specification of a spouse before you met your spouse, then it was by Design that you found this person. In other words, you employed an assisted search by Design that allowed you single-handedly to tip the probabilities so that the induced probability space is very high for your spouse compared to all the other targets in the world.
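The spouse analogy above can be made concrete with a toy calculation. A sketch only, using the rough 3-billion figure from the comment (not a model of anything real): a target fixed in advance is astronomically unlikely, while a target declared after the fact is a certainty.

```python
import random

# Toy illustration of the prespecified-target fallacy.
# The population figure is the rough number used in the comment above.
population = 3_000_000_000

# Chance of matching one specific individual fixed in advance:
p_prespecified = 1 / population      # 1 in 3 billion

# But the search always ends on *somebody*, and whoever that is can be
# declared "the target" after the fact - a specification written to fit
# the outcome is matched with probability 1.
chosen = random.randrange(population)
p_posthoc = 1.0

print(p_prespecified, p_posthoc)
```

The trick in Dembski-style arguments is to quietly swap the second probability for the first: the outcome we observe is treated as if it had been specified before the search began.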

OK. For those of you who don’t believe this Pygmalionesque account of your Designing your own spouse since birth, I guess you have some soul searching to do on how you two eventually hooked up. =)

Simple biology would tell you there’s no reason to think, a priori, that the useful strings are problematically rare. So no, you don’t need this. The burden of showing a problem exists is on Dembski, much like the burden of Dorothy’s house was on the Wicked Witch of the East.

Lurker,

Thanks! So it’s the typical Creationist argument that the life we actually see is the only target possible, and what are the odds of that happening? Equally, what are the odds that any of us is alive today - infinitesimally small, given the chances of all the events that have had to occur, all the way down from Adam and Eve, to produce us :-)

My suggestion that Dembski try to get his work published in a mathematical journal was a bit tongue in cheek in that I assumed he wouldn’t be able to get it published even sans the evolutionary speculations.

So it’s basically using fancy language as smoke and mirrors to hide the same ol’ same ol’ creationist arguments.

There is no evidence that God exists but there is no question whatsoever that a God or Gods must have existed. Otherwise we would not be here.

“Science commits suicide when she adopts a creed.” Thomas Henry Huxley

John A. Davison

Update