**Introduction**. In the beginning of March 2005 William Dembski sent an email message to several critics of his output, including me. Dembski wrote:

"Dear Critics,

Attached is a paper that fills in the details of chapter 4 of No Free Lunch, which David Wolpert referred to as "written in jello." The key result is a displacement theorem. Along the way I prove and (sic) measure-theoretic variant of the No Free Lunch theorems."

Dembski concluded his message as follows:

"… I expect that Ken Miller's public remarks about intelligent design being a 'total, dismal failure scientifically' will become increasingly difficult to sustain. This paper, and subsequent revisions, can be found on my website www.designinference.com. I welcome any constructive comments about it."

Dembski's new paper (in PDF format) is found at http://www.designinference.com/docu[…]e_Spaces.pdf, where its text has already undergone some modifications compared to its initial version. Perhaps these modifications were prompted by critical comments that appeared on the internet, in particular those made by a contributor to the ARN website who signs his posts as RBH, as well as by Tom English and David Wolpert (whose remarks appeared on a certain internet forum).

In his essay at http://www.designinference.com/docu[…]Backlash.htm Dembski wrote,

"I'm not going to give away all my secrets, but one thing I sometimes do is post on the web a chapter or section from a forthcoming book, let the critics descend, and then revise it so that what appears in book form preempts the critics' objections. An additional advantage with this approach is that I can cite the website on which the objections appear, which typically gives me the last word in the exchange. And even if the critics choose to revise the objections on their website, books are far more permanent and influential than webpages."

While Dembski's frank admission of the tricks he resorts to in order to "win" the cultural war may sound commendable, the tricks themselves are hardly in tune with what is normally considered intellectual integrity. He should have added that, when making the described revisions to his texts, he usually does not acknowledge the input from critics.

While I make a note of Dembski's invitation to offer "constructive comments about it," it seems proper to point out that Dembski's earlier rendition of the topics covered in his new paper has been extensively discussed and critiqued, but he has not deemed it necessary to respond to that critique. In particular, in a chapter I authored in the anthology *Why Intelligent Design Fails* (editors Matt Young and Taner Edis, Rutgers Univ. Press, 2004) I specifically addressed Dembski's [mis]interpretation of the No Free Lunch theorems and his "displacement problem" as it was rendered in chapter 4 of his *No Free Lunch* book. Dembski's new paper in no way answers my critique of his earlier output, where he discussed the same notions in a less mathematical rendition. None of my earlier critical comments regarding Dembski's misinterpretation and misuse of the NFL theorems and his displacement problem (in its original presentation) is deflected by anything in his new paper.

I'll try to answer the question: does Dembski's new paper justify his assertion that

"Ken Miller's public remark about intelligent design being a 'total, dismal failure scientifically' will become increasingly difficult to sustain"?

**Not quite consistent.** In the same vein as my chapter in the anthology *Why Intelligent Design Fails*, I'll discuss Dembski's new paper without delving into mathematical symbolism, as this essay addresses a general audience rather than only mathematically prepared readers.

Dembski states in his new paper that it mathematically formalizes the ideas previously outlined in a less rigorous form in chapter 4 of his book *No Free Lunch* (in his words, his new paper "fills in the details of chapter 4 in No Free Lunch").

This statement seems to be aimed, first, at asserting the supposed consistency of Dembski's discourse and, second, at providing a sort of answer to the well known characterization (by David Wolpert) of Dembski's treatment of the No Free Lunch theorems as "written in jello" (www.talkreason.org/articles/jello.cfm). Wolpert is a co-originator of the No Free Lunch theorems, hence his opinion of Dembski's treatment of these theorems carries considerable weight. Some of Dembski's admirers tried to play down the significance of Wolpert's critique by posting letters to that effect on various internet fora. However, Dembski himself has maintained a deafening silence, as if Wolpert's critique did not exist. It seems that the statement in his new paper which points to the supposed consistency between chapter 4 of his 2002 book and his new paper is a device designed to blunt the sharpness of Wolpert's characterization.

However, a comparison of Dembski's new paper with chapter 4 of his book shows that the assertion of consistency is not quite true. In fact, Dembski's new paper introduces certain substantial modifications of the basic concepts suggested in the "jello" chapter of his earlier book. The displacement problem No. 1 (as rendered in the *No Free Lunch* book) and the displacement problem No. 2 (as discussed in the new paper) seem not to be quite the same problem (as I'll discuss below), although Dembski's position is that the two displacement problems are identical.

Dembski's new paper is heavily mathematical and is obviously designed to impress readers with his mathematical sophistication. Dembski's colleagues (some of whom may not even have a proper background to comprehend his mathematical ruminations) have promptly acclaimed this new paper as "splendid" and as allegedly "displacing" that perfidious offshoot of materialistic philosophy, "Darwinism."

I believe the delight of Dembski and his colleagues is premature.

**Fallacious assumptions**. First, a very general observation. Dembski's delight is based on the implicit assumption that fundamental concepts of biological science can be "proved" or "disproved" mathematically. Dembski has adhered to similar notions previously, for example, suggesting in his book *The Design Inference* that representing certain notions in a mathematically symbolic form somehow "proves" them. In my book *Unintelligent Design* (chapter 1, pages 26-28) I have demonstrated the fallacy of such a supposition, using as an example Dembski's presentation of an argument for design in two versions - once expressed in plain words, and once in a mathematically symbolic form. As is evident from that juxtaposition of two renditions of the same argument, using mathematical symbolism does not provide any additional insight and, in Dembski's case, only serves to embellish his discourse. In his recent papers, including the paper I am discussing here, Dembski takes a further step down the same road. Now his overall approach seems to be implicitly based on the idea that a purely mathematical discourse is capable of "displacing Darwinism."
Of course, this is wishful thinking. "Mathematics is a language," said the great American physicist Josiah Willard Gibbs. Indeed. Mathematics is an extremely powerful tool. However, no mathematical theorem or equation "proves" or "disproves" anything beyond the logical connection between a premise and a conclusion. If a premise is false, the conclusion is unwarranted, regardless of how sophisticated and impeccably correct the applied mathematical apparatus is.
Since Dembski's proclaimed goal is to prove "Darwinism" false, all of his mathematical exercise is irrelevant, as it in principle cannot achieve such a goal. Evolutionary biology is an experimental science, and "Darwinian" mechanisms of evolution have been supported by an immense body of empirical material. No mathematical theorems or equations can "displace" evolutionary biology. Its successes and failures can only stem from empirical research and observations bolstered by proper theorizing, wherein mathematics, however important and enlightening, is always only a tool.

Closely connected to this fallacious approach to mathematics as allegedly capable of "disproving" evolution, there is another serious (I would say fatal) drawback to Dembski's approach. He seems to first implicitly define his goal (in this case, to prove that "Darwinian" mechanisms cannot explain evolution) and then apply an analog of "reverse engineering" to find a premise from which his already chosen conclusion can be mathematically derived. A premise deliberately chosen to lead to a predetermined conclusion has little chance of being true.

**Dembski's premise.** To be more specific, let us see what Dembski's premise implies. Here there is indeed a certain consistency between his earlier discourse in his *No Free Lunch* book and his new paper. He considers biological evolution as a search for a certain small target within a very large search space (which in his book was referred to as the "phase space").

As I pointed out in *Why Intelligent Design Fails*, in his *No Free Lunch* book Dembski did not suggest a definition of a target. In his new paper this lacuna has been filled. Now Dembski provides a definition of a target. Without delving into Dembski's mathematical formalism, he defines target T as a particular small region somewhere within the very large search space Q. He asserts next that finding the target using a blind search is an endeavor whose success has a very small probability. For the search to be successful, it has to be "assisted," which means the search algorithm needs to get information about the structure of the search space. For example, the search algorithm may be assisted by feedback, letting the algorithm know whether each step brings it closer to the target or pushes it farther from the target. The source of such information lies in a "higher-order" information space. In Dembski's *No Free Lunch* this higher-order space was denoted J. From that earlier discourse it seemed to follow that space J contained (perhaps besides some other things) all possible fitness functions. In his new paper the "higher-order" space is denoted M and seems now to contain not the fitness functions but rather all possible "searches," that is, all possible search algorithms. This seems to be a substantial change of the entire concept of the "displacement problem" which Dembski claims to be the main element of his discourse. It is at odds with Dembski's claim that his new paper is just "filling in details" of his earlier rendition of his ideas, this time on a more rigorous level, so that it is no longer "written in jello."
In fact, his new rendition is substantially different from the original version found in his book.

**Dembski's assisted searches and NFL regress.** Let us recall once again the problem Dembski discusses in his new paper. It is a search for a small target within a large search space. Mathematically analyzing this problem, Dembski concludes that only an "assisted search" has a reasonable chance of success, and such an "assisted search" is only possible if a "search for a search" is first conducted in a higher-order information-resource space. The latter, however, in turn requires information from an even higher-order space, etc. Dembski calls this situation the "No Free Lunch Regress." He maintains that "stochastic processes" (and biological evolution belongs in this category) are incapable of getting out of the regress in question, so for the search to be successful, input from intelligence is necessary (which seems in fact to be his *a priori* conviction, just not expressed explicitly).
Dembski's claim that the "assistance" can only come from an intelligent source reflects his antecedent belief but is not supported by any argument, either mathematical or heuristic. A detailed discussion of this point is not necessary, however, because, as I will show, Dembski's entire schema is irrelevant for real-life searches.
Another comment that immediately comes to mind is that if a search is assisted by information from a higher-order space, the search algorithm that has acquired such information is not a "black-box" algorithm any more, so the No Free Lunch theorems, at least in the form in which they were proven by Wolpert and Macready, are invalid for such algorithms. (Wolpert and Macready's proof was valid for black-box algorithms. A black-box algorithm has no advance knowledge of the fitness landscape and acquires such knowledge step by step, extracting it from the fitness landscape in such a way that it accumulates information about the already visited points in the landscape but still has no knowledge of any points not yet visited; it possesses no knowledge of a target either, if the search is target-directed.)
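To make the "black-box" notion concrete, here is a minimal Python sketch (the toy landscape, its peak at 73, and all parameter values are invented for illustration and appear nowhere in Dembski's paper): the algorithm learns the landscape only by querying the fitness of points it actually visits, and it knows nothing about unvisited points or about any target.

```python
import random

def blackbox_search(fitness, neighbors, start, steps, seed=0):
    """A black-box search: all it ever knows about the landscape is
    the fitness of the points it has already visited."""
    rng = random.Random(seed)
    current = start
    best = current
    visited = {current: fitness(current)}  # accumulated knowledge
    for _ in range(steps):
        step = rng.choice(neighbors(current))
        if step not in visited:
            visited[step] = fitness(step)  # f(step) is learned only now
        # greedy acceptance: move if the newly queried point is no worse
        if visited[step] >= visited[current]:
            current = step
        if visited[current] > visited[best]:
            best = current
    return best, visited[best]

# toy landscape on the integers 0..100 with a single peak at 73
f = lambda x: -abs(x - 73)
nbrs = lambda x: [max(0, x - 1), min(100, x + 1)]
best, score = blackbox_search(f, nbrs, start=10, steps=500)
```

Note that the loop never inspects `f` itself, only its values at visited points; giving the algorithm the formula for `f` (or the location of the peak) is exactly the kind of "assistance" that would take it out of the black-box class.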

Although this simple consideration casts doubt on Dembski's entire discourse (unless he can show that the requirement that an algorithm be of the "black-box" variety can be dispensed with), it is of secondary importance, because the No Free Lunch theorems are only about the average "performance" of search algorithms and are irrelevant to the actual problem of a *specific* search algorithm facing a *specific* fitness landscape. In that respect, Dembski's new paper is not an improvement over his earlier discourse and fails to account for the irrelevance of the NFL theorems for biological evolution.
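The "average performance" point can be checked directly on a toy example. On a space small enough to enumerate every possible fitness function, two different search orders have exactly the same average performance over all landscapes, which says nothing about how either fares on any one specific landscape. (The three-point space and the two fixed query orders below are my own illustrative choices; the NFL theorems cover a much broader class of non-revisiting black-box algorithms.)

```python
from itertools import product

X = [0, 1, 2]            # a tiny search space
Y = [0, 1]               # possible fitness values
# all 2**3 = 8 fitness functions f: X -> Y
all_functions = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]

def performance(order, f, m):
    """Best fitness value seen after m distinct queries."""
    return max(f[x] for x in order[:m])

# two different "algorithms": fixed, non-revisiting query orders
alg_a = [0, 1, 2]
alg_b = [2, 0, 1]

for m in (1, 2, 3):
    avg_a = sum(performance(alg_a, f, m) for f in all_functions) / len(all_functions)
    avg_b = sum(performance(alg_b, f, m) for f in all_functions) / len(all_functions)
    assert avg_a == avg_b   # identical averages over ALL landscapes
```

On any *single* function from `all_functions`, of course, one order may beat the other badly; the equality holds only for the average over the whole set, which is precisely why the theorems say nothing about a specific algorithm on a specific landscape.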

Moreover, regardless of the NFL theorems, Dembskiâs discourse seems to be, again, irrelevant to real-life problems for a more universal reason. Here is why.

Do we need to analyze all of Dembski's convoluted mathematics in order to see whether his conclusion is substantiated? No. There are several reasons to ignore Dembski's mathematical exercise, but I will now point to only one such reason which, I believe, is fully sufficient to reject Dembski's conclusion.

**Is biological evolution a search for a target?** Biological evolution has nothing to do with the problem Dembski analyzes in his new paper - the problem of a search for a small target in a large search space.

Let us grant Dembski the assumptions and derivations he offered in his mathematical exercise. They may be perfectly correct or partially defective, but either way it will not affect our general conclusion.

Biological evolution is not a search for a target in a search space. It knows of no target. It is blind, and its results are not predetermined, unlike the results of a targeted search employed in certain artificially designed evolutionary algorithms (such as Dawkins's "weasel" algorithm).
Look at Dembski's example of a "search" for a particular protein. He calculates that the probability of "finding" a particular protein 100 amino acids in length via a random search in the space of all possible proteins of that length, assuming a uniform distribution of probabilities over this space, is so small (about 1 in 10^130) as to be practically hopeless. The arithmetic here may be perfectly correct, but it has no relevance to real biological evolution. Evolution does not search for a particular protein determined in advance as a target. It conducts a variety of blind "searches," the number of which is immense, and some of them result in the spontaneous emergence of certain biologically useful proteins whose biological role was not foreseen. The probability Dembski calculates is beside the point: because of the very large number of such "searches," and because many different proteins can turn out to be useful, the overall likelihood of the emergence of some useful proteins is larger by many orders of magnitude. Moreover, imposing upon "random searches" a non-random factor (natural selection is an example of such a non-random factor) drastically accelerates the process. There are other natural factors ignored in Dembski's schema which naturally "assist" the "search," so it is "assisted" without input from intelligence and without a need to search a "higher-order" space. Dembski's model of a protein's components randomly assembling all at once is very far from the realistic scenarios discussed in evolutionary biology.
Dembski's schema is utterly arbitrary insofar as it relates to a natural biological process.
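The gap between "the probability of hitting one prespecified target" and "the probability that many independent trials hit any of many acceptable outcomes" is elementary arithmetic. A toy Python calculation makes the contrast visible (the 10-letter peptide space and both counts below are invented numbers chosen only for illustration; they are not estimates of any real biological quantity):

```python
# chance that at least one of n independent uniform draws lands in a
# "functional" set of k sequences out of s total: 1 - (1 - k/s)**n
def hit_prob(k, s, n):
    return 1 - (1 - k / s) ** n

s = 20 ** 10  # space of 10-letter peptides over 20 amino acids, ~1e13

# one prespecified target, a million independent trials: hopeless
p_single = hit_prob(1, s, 10**6)

# ten million acceptable outcomes, the same million trials: quite likely
p_many = hit_prob(10**7, s, 10**6)
```

With one fixed target the million trials still leave a vanishing chance of success, while allowing any of many "useful" outcomes pushes the same number of trials past even odds; and this is before any non-random factor such as selection enters the picture.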

Therefore all of Dembski's theorems and equations, as well as his conclusions, have no relevance to evolutionary biology.

**Conclusion: is Dembski's mathematics relevant to intelligent design?** Are Dembski's mathematical exercises relevant to intelligent design in general? I don't think so. Indeed, let us assume that Dembski's thesis is valid for targeted genetic algorithms like Dawkins's "weasel" algorithm. Even if this is true, it has no relevance to the question of the validity of intelligent design. We know anyway that such artificially designed algorithms receive input from an intelligent agent - a human programmer who supplies "assistance" to the algorithm in the form of feedback: it tells the algorithm at every step of the search whether it has come closer to the target, stayed the same distance from it, or moved farther from the target. The same may be true for many other artificially programmed genetic algorithms.
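For readers who have not seen it, Dawkins's "weasel" illustrates exactly this programmer-supplied feedback: the fitness function compares every candidate against a target string the programmer wrote down in advance. A minimal sketch (the population size and mutation rate are my own choices, not Dawkins's original parameters):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def matches(s):
    # the "assistance": how many positions already agree with the
    # target known in advance to the programmer
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop_size=100, mut_rate=0.05, max_gens=5000, seed=1):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for gen in range(max_gens):
        if parent == TARGET:
            return parent, gen
        # each child copies the parent with a per-letter mutation chance
        children = ["".join(c if rng.random() >= mut_rate else rng.choice(ALPHABET)
                            for c in parent)
                    for _ in range(pop_size)]
        parent = max(children, key=matches)  # selection consults the target
    return parent, max_gens

result, generations = weasel()
```

Remove the `TARGET` constant and the algorithm collapses, which is the whole point: its rapid convergence is bought with intelligence-supplied information that blind biological "searches" do not have.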

However, this in no way means that an analogous situation exists in the biosphere, where the search is not target-oriented and where therefore no input from an "assisting" agent is required. In fact, in biological evolution no "assistance" from a "higher-order" information space is possible, because the outcome of a search is not known in advance, so the "search" (if we agree to apply this term, which is in fact a misnomer) is in all cases spontaneous and undirected.
All this has no relation to the No Free Lunch theorems, either those for fixed landscapes or those for co-evolution, all of which are irrelevant to the actual encounters of *specific* natural genetic algorithms with *specific* fitness landscapes, whether fixed or co-evolving in the course of the search. Hence the question of whether Dembski's mathematical exercise is formally correct or contains errors is irrelevant to the question of intelligent design's validity. Even if artificial genetic computer programs indeed require input from intelligence (although some such algorithms work without it), this is of no concern when intelligent design is discussed. The latter's validity is predicated upon whether or not intelligent input is required for natural, non-targeted searches.

Neither Dembski nor anybody else has so far suggested any evidence that such input is necessary for non-targeted searches. Dembski's new paper has not done anything like that by a long shot. In my view this paper is an exercise whose heavy mathematical embellishment serves no other purpose than to show once more that Dembski, on the one hand, knows a lot of mathematical symbols, and on the other hand has problems with overall consistency and logic.

As the matter now stands, Ken Miller's statement quoted by Dembski remains fully valid. (Besides Miller, it is shared by most scientists who have come across Dembski's numerous publications - a good example is perhaps the anthology *Why Intelligent Design Fails*, which Dembski fails even to mention, let alone reply to.) Dembski's new mathematical exercise does nothing to make the statement about the abject scientific futility of intelligent design any less true than it has been until now.

*PS. I have a surgery scheduled for tomorrow morning, so I'll be unable to respond to critical comments, if any, at least for a while. MP*

Pretty bad logic from Dembski. He assumes that the target is extraordinarily unlikely, and the conclusion he wants follows.

In a way, his argument is orthogonal to the cosmological ID arguments. He calculates the probability of getting a specific protein, but has no way of estimating which other ones are functional. The cosmological IDiots can (in some cases) tell you what other results would be functional, but they have no way of estimating the probability of getting the observed result. So each type of IDist just assumes the missing information to be such that they get the desired result.

I think these guys are having crises of faith. Their exposure to science tells them that faith isn't good enough, and they're looking for evidence to justify their beliefs to themselves.

I have to agree. I believe that fundies in general [those that have to find "evidence" for the "inerrant Bible"] have a weak faith, and the science-bashing is done, as a result, in order to justify emotionally what cannot be supported rationally.

I think part of the problem, from Dembski's point of view, is that he really *does* believe that narrow targets exist - that mankind, for example, is a target, or the bacterial flagellum is a target. If the "bull's-eyes" he uses in support of his non-mathematical arguments for CSI are sufficiently large, then the probabilities drop off tremendously. But he envisions very tight targets. Why? Is it purely derived from his religious convictions? Is there any way to actually determine the target size given a specific ecological niche?

I'm going to reprise my comment #20301 from yesterday, since I hope Mark, PZ, or someone else really can tell me if I'm on the wrong track. Yesterday's collected only the usual suspect, and the interchange got moved to the Bathroom Wall. My bad. :(

Thoughts?

I like the way H. Allen Orr put it in his review of "No Free Lunch", originally published in the Summer 2002 issue of Boston Review (over 2 and 1/2 years ago):

How many times will people need to point this out to Dembski before he finally gets it?

Tim commented,

I suspect never. Given Dembski's theological mindset, it is *impossible* for him to envision a universe without prespecified targets, just as it is impossible for me to envision a universe in which A != A.

The problem with Dembski is that he cannot seem to admit that he is/was wrong. The observation that NFL was written in jello by a fellow mathematician must have been a shock to Dembski. Thus the follow-up argument, which is based on many of the same errors as the original argument, shows how mathematics can be correct but the conclusion incorrect. That Dembski believes in a target goal for evolution shows how a mathematician's viewpoint can be clouded by lack of understanding of the underlying biology. Neither the NFL theorems nor his displacement problem has much relevance to evolution, but ID proponents are likely to be swayed by Dembski's mathematics, which are not dissimilar from his flagellar protein calculations: garbage in, garbage out.

Dembski's article is garbage.

Don't make the mistake of publicly critiquing this trash until after it appears in some permanent form.

That's just what Dembski wants so that he "can have the last word" - he has publicly articulated this very dishonest strategy of "putting critics to use": http://www.designinference.com/docu[…]Backlash.htm, http://www.pandasthumb.org/pt-archi[…]/000409.html. He describes this, in part, as follows:

In other words, Dembski has no interest in the truth, just his Christian right agenda.

Donât play along.

Re: comment 20526 by Tim Tesar: I agree that Allen Orr's critique of Dembski's NFL book, which Tim quotes, was fine, but perhaps Perakh's post at http://www.talkreason.org/articles/orr.cfm sheds some additional light upon it.

Don't worry - these critiques aren't trivial details he can fix. They're fundamental errors which, like IC, decades of patching can't repair.

In answer to RGD, I suggest that for the creationist, everything that exists does so through God's Will. If God had willed something different, we'd have that instead. As a result, it is absolutely necessary and required that everything that exists must have been the specific target to begin with. And so Dembski can't address the general question "What is the probability that evolutionary processes can happen on something of any utility whatsoever?" For Dembski, the only possible question is "What is the probability of evolution blundering onto God's Will in every single instance?" And of course, these odds are beyond calculation to the point where even trying to describe them stretches Dembski's math to the breaking point.

And so Dembski's problem is one creationists suffer in general. When you start with an answer that is both wrong and rigidly unmodifiable, you are highly unlikely to happen on the right questions to ask. When better questions are pointed out, they are instantly absurd within the creationist context. To deny that observation only shows us God's Will and nothing else is to deny that God actually DOES anything - a thought more incomprehensible than the canonical sound of one hand clapping.

steve

I agree with your conclusion. I would urge the Panda's Thumb contributors to make certain those conclusions are up front and plain as paint.

Otherwise, one without the skills to understand the arguments in detail sees just a complicated argument. That's the whole point of the "creationism in a cheap tuxedo" strategy - if Dembski can pretend that his critics "need" to engage in lengthy (a relative term, of course) arguments to find "holes" in his theory, then Big Bill gets some satisfaction for his efforts.

Mark's post is excellent and Dembski is obviously a deluded hack, but I might spend more time emphasizing the latter and describing in the plainest ways possible why that is the case. It seems apparent that Dembski could just as easily choose to show that the moon or the Grand Canyon must have been intelligently designed using his bizarre "definitions" and warped view of reality. Of course, it's obvious why he chooses to spit at evolutionary biologists rather than geologists or astronomers. But surely they are next in line according to the Grand Fundie Plan.

Correct me if I'm wrong, but don't most molecular biologists at this point agree that early life almost certainly didn't have the current 22-"letter" vocabulary of amino acids? If so, why is Dembski still using 22 possible amino acids in his search space for proteins?

In just a second of googling, I've found all sorts of recent articles discussing this sort of thing. Does Dembski not have PubMed access?

http://www.ncbi.nlm.nih.gov/entrez/[…]ids=15214800 From the abstract: "It reveals two important features: the amino acids synthesized in imitation experiments of S. Miller appeared first, while the amino acids associated with codon capture events came last. The reconstruction of codon chronology is based on the above consensus temporal order of amino acids, supplemented by the stability and complementarity rules first suggested by M. Eigen and P. Schuster, and on the earlier established processivity rule. At no point in the reconstruction the consensus amino-acid chronology was in conflict with these three rules. The derived genealogy of all 64 codons suggested several important predictions that are confirmed. The reconstruction of the origin and evolutionary history of the triplet code becomes, thus, a powerful research tool for molecular evolution studies, especially in its early stages."

I have argued for years that casting biological evolution as a target oriented search process is a lethally misleading metaphor.

Mark wrote

But as Mark remarks in an aside later in his essay, it is not true of *all* artificially programmed GAs. My company uses genetic algorithms, but they are not search algorithms in the sense in which Dembski uses that term, single-target-seeking algorithms. If we used them as single-target-seeking algorithms in Dembski's sense we would have been out of business years ago. The GAs my company uses employ a fitness function (that we write), but we have no idea at all whether there is one "target", many "targets", or no "targets" in the space(s) in which our GAs evolve. The evolving artificial agents in our GAs are limited to purely local information about their relative fitnesses, and they know nothing of targets. We don't know the topography of the fitness landscape(s), and we don't know whether, where, or how many "targets" there might be on those landscapes. And what's more, we have no way of even knowing if one of our artificial agents has reached a "target"! And, like biological evolution, we don't care.

We evolve populations of "satisficing" solutions - good enough solutions - in our GAs, and those *populations* form the application for which we get paid. Those populations have to operate in a vicious negative-sum game in the real world (they autonomously control tens of millions of dollars of risk in the derivatives markets), and we dare not fool ourselves into believing that there is a single magic bullet "target". We truly have no idea whether a "target" was even found in the search space. A good enough population of solutions, where "good enough" is measured in terms of a functional performance metric that has no information about what the structure of a good solution might be, is fine with us and with our clients, regardless of where the members of the population are in the space of interest. As long as we can write a fitness function that identifies a desirable (to us and our clients) performance measure for the task our artificial agents perform, we neither know nor care where a specific target might be in the space in which our entities evolve.

One other note: Dembski's last stand for the force of his mathematical analysis must be on the sparseness and distribution of potentially functional entities in the space(s) of interest. His misrepresentations of Douglas Axe's 2000 paper on mutagenesis experiments are intended by Dembski to suggest that there is in fact just one tiny target space out there, that biological evolution must somehow search for it, and that the search must fail (according to Dembski) in the absence of intelligent guidance. Matt Inlay dissected that misrepresentation here on PT last month.
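[A target-free, "satisficing" GA of the general kind described above can be sketched in a few lines of Python. The toy bit-counting problem and every parameter below are invented for illustration, not RBH's actual application: the loop stops as soon as some member scores "good enough" on a performance metric, and no target genome appears anywhere in the code.]

```python
import random

def evolve_satisficing(fitness, random_genome, mutate, threshold,
                       pop_size=50, max_gens=300, seed=0):
    """Evolve until some member is 'good enough' (fitness >= threshold).
    The fitness function scores performance, not distance to any
    prespecified target."""
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) >= threshold:      # satisficing stop rule
            return pop[0], gen
        parents = pop[: pop_size // 2]        # truncation selection
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return pop[0], max_gens

# toy performance metric: count of 1-bits in a 20-bit genome; we stop
# at 18 ("good enough") rather than insisting on the optimum of 20
def rand_bits(rng):
    return [rng.randint(0, 1) for _ in range(20)]

def flip_one(genome, rng):
    child = genome[:]
    child[rng.randrange(len(child))] ^= 1
    return child

best, gens = evolve_satisficing(sum, rand_bits, flip_one, threshold=18)
```

Nothing in `evolve_satisficing` could tell a "target" from any other good-enough genome, which is the point: a fitness function is a performance measure, not a bull's-eye.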

RBH

What an interesting thread, from top to bottom!

RBH

And thus we glimpse Dembski's other non-scientific shoe that never seems to drop: why did the mysterious aliens whom Dembski alleges designed life on earth have such a fascination and appreciation for necrophilic and waste-eating microorganisms? Perhaps Dembski's alleged alien designers might have been working for some microscopic clients who enjoyed a good creepshow.

They [the organisms] can be seen to make functional sense within the context of a larger system. But this moves on to the point I've been unable to discern, namely the *Whole-System Design Intent* of the hypothetical Designers. In other words, granting for the sake of argument that various individual organs are "designed", and the organisms that possess them are "designed", *what is the whole system itself intended to accomplish?* In any of the engineering environments in which I have worked, you'd never get past the first stages of Design Review without revealing the Design Intent. We can discern the design intent of a designed object such as the Antikythera Device even when substantial portions thereof have been destroyed by the ravages of time and salt water.

So what, exactly, is Life On Earth, considered as a complete designed system, supposed to accomplish?

If we can "recognise design" in the world, we should also be able to discern Design Intent.

God's Will. What else?

Nice article. Just out of curiosity, does Dembski acknowledge that evolutionary analogues to the landscapes described herein are non-targeted? I'm guessing the answer is either "no" or that he conflates his answer in such a way as to try and snow us, but I don't know.

Oh, but you see, creating a non-targeted landscape for biological life would take a feat of even GREATER intelligence! :)

Nice point, RBH. I think it's generally acknowledged that GAs measure up poorly against intelligence in the respect that they are often much slower to reach a particular, very specific solution or target. On the other hand, they are well known for often finding surprising and unexpected solutions to problems no one was even aware of, or for finding a wide range of different traits that help achieve a very broad, non-specific goal.

Surprise surprise: what is life? What is the history of life? A long, roundabout walk that's ended up with a wide range of solutions to a VERY broad target indeed (the prolonged existence in time of various patterns).

I can kinda see how Dembski's work, as a general-case description of an unbound, near-infinite, absolutely random combinatorial space for the emergence of organic proteins, may be a useful starting point for understanding biogenesis if it can be de-bugged. Any mapping of real-world, bound, finite, symmetrical or partially symmetrical conditions onto that space could theoretically point to potentially useful lines of research. I dunno.

I can, though, envision RGD's universe where A != A. It contains no more than 2 discrete quanta and is spatially and temporally constrained to a point singularity. It happens virtually all the time.

The Cat only grinned when it saw Alice. It looked good-natured, she thought: still it had very long claws and a great many teeth, so she felt that it ought to be treated with respect.

"Cheshire Puss," she began, rather timidly, as she did not at all know whether it would like the name: however, it only grinned a little wider. "Come, it's pleased so far," thought Alice, and she went on. "Would you tell me, please, which way I ought to go from here?"

"That depends a good deal on where you want to get to," said the Cat.

"I don't much care where—" said Alice.

"Then it doesn't matter which way you go," said the Cat.

"—so long as I get somewhere," Alice added as an explanation.

"Oh, you're sure to do that," said the Cat, "if you only walk long enough."

Next question.

Good luck with your surgery, Mark.

Jim Harrison, excellent post. It left me grinning, just like a … well, you know!

First post by a Lurker.. hi all!

RBH Wrote:

Speaking of GAs, in computer science, algorithms are classified by their complexity (or "order") into logarithmic, linear, quadratic, exponential, etc. In other words, how efficiently the algorithm achieves its objective given N entities to operate on. Different algorithms can produce the same results but have radically different performance.

An example among search algorithms is a linear search vs. a binary search. In the former, the cost grows in direct proportion to the number of entities searched. The latter scales far better as you add entities, because the "cost" of each additional entity grows only logarithmically (half of the ordered search space is repeatedly thrown away on each test until the match is found).
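The linear-vs-binary contrast above is easy to demonstrate by counting comparisons; a minimal Python sketch (the million-element list and the chosen target are illustrative):

```python
def linear_search(items, target):
    """O(n): check each element in turn; cost grows in direct proportion to n."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps
    return None

def binary_search(items, target):
    """O(log n): halve the sorted search space on every comparison."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

items = list(range(1_000_000))  # already sorted
target = 765_432
print(linear_search(items, target))  # 765433 comparisons
print(binary_search(items, target))  # at most ~20 comparisons, since log2(1e6) is about 20
```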

Evolution has no target, but even GAs without a target should be classifiable by their order of complexity.

In addition to creationist assertions about evolution having a "target", the math I've seen used to "disprove" evolution typically asserts that the process of evolution is as inefficient as they can make it: an exponential-order algorithm. A typical example is to calculate the chances of the human genome being achieved at random from a series of die rolls (4 values per base pair to the power of the number of base pairs in the human genome … beyond astronomical).

Of course this doesn't accurately model the process of evolution at all, and my suspicion is that evolution is closer to logarithmic or linear than exponential. My question is: have GAs given us a good idea what the "order" of the actual biological process of evolution is, or some outer limits on it?

Thanks, MSM Software Engineer

Let's say I flip a coin one million times and get the following sequence:

H T T H H T T T H etc…

According to Dembski, I must not have flipped a coin to obtain that sequence because the odds of getting that particular sequence are extremely small.
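The point can be made concrete: under a fair coin, every specific sequence of n flips has exactly the same probability, so the tiny probability of the observed sequence cannot by itself rule out chance. A small Python sketch (20 flips instead of a million, purely for readability):

```python
import random

random.seed(1)
n = 20  # 20 flips keeps the numbers readable; the argument is identical at a million
sequence = ''.join(random.choice('HT') for _ in range(n))

# The probability of this exact sequence under fair flips — and of ANY exact
# sequence of the same length — is the same:
p = 0.5 ** n
print(sequence, p)  # every specific sequence is equally (im)probable: 1/2^20
```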

That lances the boil effectively, I think.

What I find most disconcerting about ID is the idea of a target, whether it is a whole organism (H. sapiens, etc) or a macromolecular sequence (protein or DNA). This seemingly necessary end product of the processes of a creative intelligence smacks of one of the most contentious issues in theology - predestination. As an opposite viewpoint, evolution is free will.

The single biggest fallacy in Dembski's paper is the idea that evolution is searching for a target, T. T is a sequence of DNA that can build and operate a new organism, which will go on to build and operate another organism, and so on.

But evolution only works when you already have a population of organisms that self-reproduce with heredity. Any such organism doesn't have to search for T; it's already inside it.

To make it worse for Dembski's theory, when an organism reproduces with mutations, it *does* search a new space, which may or may not be in T, but since the vast majority of the DNA will be the same in parent and offspring, the space searched will be very close to the parent's position in T. It will NOT be doing a random search of the entire search space of the human genome, which Dembski labels Omega.

Dembski should re-title his paper, "Missing the Point" or "Misunderstanding the Most Basic Things About How Evolution Works, With Mathematics."
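The local-search point above is easy to illustrate. In this hedged Python sketch, the genome length and per-base mutation rate are made-up illustrative numbers, not biological estimates:

```python
import random

random.seed(0)
BASES = 'ACGT'
genome_len = 10_000
mutation_rate = 1e-3  # per-base chance of a point mutation (illustrative only)

parent = [random.choice(BASES) for _ in range(genome_len)]

def reproduce(genome):
    """Copy with rare point mutations: the offspring lands close to the parent,
    not at a uniformly random point of the whole sequence space."""
    return [random.choice(BASES) if random.random() < mutation_rate else base
            for base in genome]

child = reproduce(parent)
differences = sum(p != c for p, c in zip(parent, child))
# Expected difference fraction is roughly mutation_rate * 3/4 (a "mutation" can
# redraw the same base) — nowhere near a uniform draw from Omega.
print(differences / genome_len)
```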

Dembski presented his theory on www.arn.org in the Intelligent Design Forum thread "Fundamental Theorem of Intelligent Design". It's also being discussed in the "Displacing Darwinism" thread. Read them for more comments.

He also discusses this thread and the one by Wesley Elsberry in the "Displacement at the Panda's Thumb" thread.

P.S. RBH gets credit for first pointing out that living organisms don't have to search for T because they are already in it.

Let me restate part of my last message. Dembski *mentions* the threads on Panda's Thumb and gives links to them. He doesn't discuss them.

It's like pouring sand out of a bucket. You'll get a pile of sand with this particular grain at the top and those particular ones at the bottom. How amazingly unlikely is that! You could pour the same bucket out until the heat death of the universe and you'll never get the same configuration again. But you'll always get a pile of sand in the same shape, more or less.

Dembski's trying to show that because you can't predict which grain of sand will go where, God is placing each grain to ensure that the pile is roughly conical.

Doesn't wash, does it?

R

Dembski represents assisted searches as the set of probability distributions over a search space such that the density at the target is greater than it would be under a uniform probability. He starts with a finite approximation, in which the search space is represented as a collection of targets and non-targets. Then he simulates a probability distribution by choosing any M of those elements (allowing for repetition) such that the ratio of targets to M is at least q. Now do this for every M (i.e., let M range from 0 to infinity), and add up all the successes. Then find the average rate of success by dividing this number by the total number of ways one can choose any M elements with repetition.

All of this is elementary combinatorial analysis (balls into boxes). Say you have 20 black and red balls in an urn. You're really interested in the red ones. So I let you pick a ball, you record which color it is and put it back. Shake up the urn, and repeat the process M times. Now keep two tallies. One is the number of times you have done these experiments (this number carries over as you vary M). The other is the number of times you successfully pick M balls such that the ratio of red balls to M exceeds a prespecified threshold. This number also carries over as you vary M from 0 to infinity.

Any time you pick red balls at a rate greater than q (which in turn is greater than the uniform probability), Dembski claims that the trial represents an assisted search. In the actual formalism, the balls are actually probability distributions with point-like densities (Dirac deltas). So by picking balls, you're in essence creating a probability distribution in which at least a ratio q of those points are targets. According to Dembski's displacement theorem, then, the ratio of successful trials (i.e., assisted searches) to the total number of trials (i.e., the space of all probability distributions) approaches zero as the number of balls in the urn approaches infinity (i.e., as the search space becomes infinite in size). This is the strongest conclusion — everything else seems to me to be philosophy disguised as mathematics.
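The urn experiment described above can be simulated directly. A rough Monte Carlo sketch, where the urn composition, sample size M, and threshold q are all illustrative choices of mine:

```python
import random

random.seed(42)

def success_rate(n_red, n_total, m, q, trials=10_000):
    """Fraction of M-draw (with replacement) experiments whose red ratio exceeds q."""
    p = n_red / n_total
    hits = 0
    for _ in range(trials):
        reds = sum(random.random() < p for _ in range(m))
        if reds / m > q:
            hits += 1
    return hits / trials

# Red balls are 10% of the urn; ask how often a 50-draw sample comes out
# more than 30% red — i.e. how often blind sampling mimics an "assisted search".
rate = success_rate(n_red=2, n_total=20, m=50, q=0.3)
print(rate)  # rare: the binomial tail P(X > 15) for n=50, p=0.1
```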

Well, if you made it this far through the hand-waving analogies, the obvious question arises — can you represent the space of assisted searches the way Dembski proposes? In other words, is there a good one-to-one mapping of all probability distributions to algorithms which accomplish the task? Methinks this is a suspect analogy. I have seen algorithms represented by a string that collects all the points visited in a function space along some path. So, for instance, a steepest-descent search would be represented as the list of points that lie on the path of steepest descent during each iteration until termination. For each target, then, there are potentially infinitely many paths that can reach that target at a specified rate higher than blind search. However, by treating the searches probabilistically (or rather combinatorially), I think some accounting is amiss. In other words, what if there is more than one algorithm that can induce a particular probability distribution with the specified target probability that is better than uniform?

Working out the Ultimate Question to Life, the Universe, and Everything?

Not expecting to help,

Grey Wolf

I didnât mean at all that things living today are better adapted to their present environment, no. But to the degree that the present environment has commonalities with the past environment, the fact that everything living today comes from the small percentage of living things that prospered and reproduced in the previous generation(s) does show an advantage.

I was using "best" only in the sense that all of our ancestors were members of that very small percentage of living things that successfully reproduced. Lots of the business of living has to do with how well we are internally constituted, though, and those demands aren't typically abated by changes in environment.

Just to reassure myself that I'm not entirely off with my maths.

The chance you strike a target T in a huge Ω, if you choose with uniform distribution U, is some minuscule p. And the chance you strike T by first randomly-uniformly choosing a distribution out of all distributions, and then choosing in Ω according to this distribution, is still the same: p.

This holds regardless of the fact that the chance that the distribution you pick first has a success probability higher than q is (as Dembski has proven) virtually nil.

So, even if we grant him his postulated ultimate target T, the thing he has proven is almost completely useless crap.
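For a finite Ω, the "marginal is still p" claim above has a one-line symmetry argument. The sketch below is my own and assumes |Ω| = n and that "randomly-uniformly choosing a distribution" means drawing μ from any exchangeable (permutation-symmetric) measure on the probability simplex:

```latex
% By symmetry, E[mu({x})] is the same for every x in Omega, and these n
% expectations sum to 1, so each equals 1/n. Hence:
P(\text{hit } T)
  = \mathbb{E}\big[\mu(T)\big]
  = \sum_{x \in T} \mathbb{E}\big[\mu(\{x\})\big]
  = \sum_{x \in T} \frac{1}{n}
  = \frac{|T|}{n} = p .
```

So averaging over all choices of distribution recovers exactly the uniform-search probability, which is the commenter's point.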

Isn't it about time Dembski applied the same level of scrutiny to the Bible that he applies to "Darwinism"? Oh wait, unless one resorts to Bible codes, the hammer of maths can't easily be applied to the nail of 2000-year-old literature.

Re "suggesting in his book The Design Inference that representing certain notions in a mathematically symbolic form somehow 'proves' them"

Hmm. If he thinks that works, why don't we write the basics of evolutionary theory in mathematical symbols, thus "proving" it. Think that would work?

Henry

I find Dembski's whole line of argumentation amusing.

Among the most successful search algorithms intelligent designers (the ones we know for sure to exist) use are ones that are based on concepts borrowed from biological evolution.

Perhaps instead of cloaking ID in symbols, Dembski should venture out into the real world and see how "searches" are actually done.

What target? Dembski presumes there's a target, a measurable point in the "space". No, each point in the space is the target, at that point. Just as each birth in a species establishes its own point in time. I am currently the end point for my little branch of the tree of life. I am target, hear me roar.

Thomas Aquinas, as I recall, and I confess it's been a long time, started his proof of God with the assertion that God exists, therefore the proof will follow. It seems to me that Dembski is using the same false logic.

Data! Data! Data!

Mark Perakh posted

I fully agree that mathematics alone is not sufficient to learn something about biological evolution. We need data.

A very interesting example is a graphical representation of protein structure space by Jingtong Hou, Se-Ran Jun, Chao Zhang and Sung-Hou Kim (2005), "Global mapping of the protein structure space and application in structure-based inference of protein function", published online before print February 10, 2005, 10.1073/pnas.0409772102, PNAS March 8, 2005, vol. 102, no. 10, 3651-3656 (open access article).

Two very intriguing features of the protein structure space are: (1) protein structures are centered around diverging axes that originate from one region, called "the origin of protein structure space"; (2) the structures of small proteins or peptides map close to the origin. These features are very suggestive of descent with modification of all protein structures.

Knowing this, is it really that difficult for random mutation and non-random natural selection to discover all proteins we find in nature?

"Well, if you made it this far through the hand-waving analogies, the obvious question arises — can you represent the space of assisted searches the way Dembski proposes? In other words, is there a good one-to-one mapping of all probability distributions to algorithms which accomplish the task? Methinks this is a suspect analogy." — lurker

No. No one can "represent the space of assisted searches the way Dembski proposes." There is more to it (intelligence). But there is no such thing as a "one-to-one mapping of all probability distributions to algorithms which accomplish the task." And Dembski does not assume it. Uh, that's what the paper is about, isn't it? I mean most of it. If there were such a map … Whoa! You've got Creation! "Hand-waving analogies"? The first half of the paper is a trivial ("elementary") refutation of the "Darwin algorithm" as "blind search". The second half, anticipated in the first, is an argument that no purely mundane intelligence can do this either!

Not using these techniques! (The only ones we know!)

Dembski doesn't use the term "Darwin algorithm". Does Rock contend that "blind search", as defined by Dembski in this paper, is a reasonable representation of evolutionary theory?

I think this is a mistaken argument. By an *assisted search*, Dembski means a pair (s, j) of what he calls a *strategy function* s and an *information function* j. The function j takes all previously visited points in the search space and returns a value. The function s takes all previously visited points and all previous j-values, and returns the next point to be visited in the search space. Any GA is an "assisted search" in this sense, including the ones used by your company.

As for targets, it is important to realize that the target is something imposed externally for our amusement. It need not have any relation or relevance whatsoever to the dynamics of the GA or the optimization problem to be solved. (At least no such constraints have been mentioned by Dembski.) Indeed, the less relation and relevance, the stronger Dembski's case for the target being hard to find! A "target" in Dembski's sense is whatever region of the search space Dembski chooses to refer to by the word "target". It is therefore not possible to say that your GAs lack a target in Dembski's unorthodox sense.
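A toy version of the (s, j) pair may make the definition concrete. Everything in this sketch — the one-dimensional integer search space, the particular information function j, and the hill-climbing strategy s — is an illustrative assumption of mine, not anything taken from Dembski's paper:

```python
import random

random.seed(3)

def j(point):
    """Information function: assigns a value to a visited point (peak at 37)."""
    return -abs(point - 37)

def s(points, j_values):
    """Strategy function: picks the next point from the full history of
    visited points and their j-values, by perturbing the best-so-far."""
    best = points[j_values.index(max(j_values))]
    return best + random.choice([-2, -1, 1, 2])

def assisted_search(start, steps):
    points, j_values = [start], [j(start)]
    for _ in range(steps):
        nxt = s(points, j_values)
        points.append(nxt)
        j_values.append(j(nxt))
    return points[j_values.index(max(j_values))]

result = assisted_search(start=0, steps=200)
print(result)  # climbs to (or very near) the peak at 37
```

Note that nothing about j needs to reflect any "real" problem; relabel the peak anywhere and the same (s, j) machinery applies, which is exactly the arbitrariness of "targets" the comment above describes.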

This unorthodox notion of a target comes at the price of departing from the usual concepts of search and optimization. In optimization, the j-function would be a function describing the optimization problem and the âtargetâ would be the (probably unknown) region of the search space containing satisficing solutions. Hence, the target would be completely determined by the j-function, and there would be absolutely no freedom for Dembski to pick the target himself. In decoupling the target from the j-function and the j-function from the problem-to-be-solved, Dembski has made his notion of search so general that most of his searches lack sensible interpretations and only a tiny subset correspond to anything encountered in optimization or biological evolution.

Rock claims, "But there is no such thing as a 'one-to-one mapping of all probability distributions to algorithms which accomplish the task.' And Dembski does not assume it."

But Dembski writes, "Ω must therefore be embedded in a larger environment capable of delivering an assisted search that induces μ0, which then in turn is capable of delivering a solution from the target T. But if this larger environment is driven by stochastic mechanisms and if a stochastic mechanism within Ω0 is responsible for delivering μ0, then μ0 is itself the solution of a stochastically driven search…

But if μ0, in representing an assisted search that effectively locates the original target T, is a solution for some new search, to what new target qua solution space does μ0 belong? Clearly, the solutions in this new target will need to comprise all other probability measures on Ω that represent assisted searches at least as effective as the search for T represented by μ0 …

Since any probability measure ν on Ω for which ν(T) > μ0(T) represents an assisted search at least as effective in locating T as the assisted search that induced μ0, the new target is therefore properly defined as follows: T* =def {ν ∈ M(Ω) : ν(T) > μ0(T)}.

Here M(Ω) denotes the set of all Borel probability measures on Ω. Note that any probability measure within T* is at least as effective as μ0 for locating T, whereas any probability measure outside will be strictly less effective than μ0."

So back to my question, to which Rock does not provide a satisfactory answer, what is the proper higher order search space: the space of all assisted searches or the space of all induced probability measures? If one argues they are equivalent spaces, then Dembski does implicitly argue that there is a good mapping from one space to another.

Rock then asserts, if such a mapping exists, then it implies Creation. Please elaborate.

=========================================================

Erik brings up an interesting point. Just how useful is Dembski's result? Just as NFL does not speak to the utility of optimization searches in "practical" problems, why is Dembski's result relevant to evolutionary biology? Or is this another facet of his mathematical turtles-all-the-way-down?

Erik12345:

Dembski has made his notion of search so general that most of his searches lack sensible interpretations and only a tiny subset correspond to anything encountered in optimization or biological evolution.

That's what I thought. Has anyone come up with a more realistic mathematical model of biological evolution?

Another morning, another read of Dembski's paper. I am more and more troubled by how he motivates his substitution of probability distributions for assisted searches. It turns out that most of the concerns I have voiced thus far in these blog commentaries are addressed in section 3, entitled appropriately enough "A Simplification".

Here Dembski argues that there is a single canonical instance of blind search: uniform random sampling. In other words, this is the "baseline" — all assisted searches presumably work better than this. In what sense better? In the sense that assisted searches may be thought of as sampling the search space with an induced probability distribution, and that the probability of finding the target T is q, which is greater than p for a uniform distribution.

This seems at first glance a rather peculiar criterion. Consider for instance a simple assisted search: a Newton descent method. I have a complicated function within which a small target resides. What is the probability that a Newton descent method "finds" the target? According to Dembski, I can substitute each step of the Newton optimizer with an i.i.d. random variable, and calculate the probability that at any step the target is reached.

Let's apply Dembski's formulation. For instance, what is this induced probability function for the Newton optimizer operating on some function? I suppose one could partition the search space according to which zero-gradient target point an initial condition in that partition would deterministically converge to. Then one would probabilistically select an initial point and look up which partition that point falls within. So, for instance, for y = x^2, you are always guaranteed to find the target. But for some arbitrary polynomial with many zero-derivative points, you have to calculate the size of the interval on the real number line where the Newton optimizer will give you the zero-derivative point of interest.
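That "size of the interval" idea can be estimated numerically. The sketch below is an illustration of mine, not from the paper: it runs Newton's method on g(x) = f'(x) for f(x) = x^4 - 2x^2 (zero-derivative points at -1, 0, 1), draws initial conditions uniformly from [-3, 3], and estimates the measure of the basin that converges to the critical point at 0:

```python
import random

random.seed(7)

def g(x):
    """Derivative of f(x) = x^4 - 2x^2; its roots are f's critical points."""
    return 4 * x**3 - 4 * x

def g_prime(x):
    return 12 * x**2 - 4

def newton(x, iters=50):
    """Newton iteration on g; from a given start it deterministically lands
    on -1, 0, or 1 (or fails near a zero of g_prime)."""
    for _ in range(iters):
        d = g_prime(x)
        if abs(d) < 1e-12:
            return None
        x = x - g(x) / d
    return round(x, 6)

# "Induced probability" of hitting the target critical point x = 0 when the
# initial condition is drawn uniformly from [-3, 3]:
samples = [newton(random.uniform(-3, 3)) for _ in range(10_000)]
p_hat = sum(r == 0.0 for r in samples) / len(samples)
print(p_hat)  # estimated fraction of [-3, 3] lying in the basin of attraction of 0
```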

But after all this analysis, what is the probability function induced by a Newton optimizer on some space? Well, Dembski offers no guidance except to excuse himself from having to deal with it: "Does this simplification adequately capture the key features of blind and assisted searches that enable their relative effectiveness to be accurately gauged? To be sure, these simplifications dispense with a lot of the structure of U and focus on a particular type of blind search, namely, uniform random sampling. But representing blind search as uniform random sampling is, as argued at the start of this section, well warranted. Moreover, any information lost by substituting μ for U is irrelevant for gauging U's effectiveness in locating T since its effectiveness coincides with the probability of U locating T in m steps, and this probability is captured by μ."

The notion that one can calculate the probability of an assisted search at a particular step is strange. Especially for the model problem I suggested above, what is the probability of U (the Newton optimizer) locating a particular T (i.e. a zero-derivative point) in m steps? Well, it is deterministically 1 (sometimes not, if it takes more than m steps) whenever the initial condition lies within a partition that always converges to that target point. Otherwise it is zero. Strangely enough, with no a priori knowledge about my function space, the performance of a Newton-optimizer is equivalent to a blind search on my initial conditions. Yet, the probability of finding those initial conditions may as well be zero with a finite target space in an infinite search space. Is a Newton-optimizer then as effective as a blind search?? I guess this is where NFL rears its ugly head. Yet, with the recognition that Newton-optimizers are a mainstay of modern optimization techniques, one should realize that NFL really does not matter in practical problems.

This analysis emphasizes how ad hoc Dembski's conclusions must be. With no a priori information about my search space, I really have no knowledge about how good my search result is. Equivalently, I really have no knowledge of what a target is. Yet Dembski merely dismisses this problem. He insists upon a target, and defines it in such an ad hoc manner that the induced probability measure is zero for his only instance of blind search — random uniform sampling. Using Dembski's jargon, he has no rationale for relabeling a "candidate solution" as _the_ target solution. This echoes a lot of the criticisms already cited on this blog — namely, that the notion of a target must be defined post hoc. It seems to me that, in order to defeat Dembski's argument, one merely considers a search process in which every point is a target (or even vice versa, a non-target). Then what does his analysis provide? Well, p = 1. Any assisted search would result in q = 1. And thus, suddenly there is no problem! Metaphysically, this is "everything is designed" turned into "everything is designed once we figure out its purpose".

=========================================

Finally, returning to a point made by Rock: "The first half of the paper is a trivial ('elementary') refutation of the 'Darwin algorithm' as 'blind search'."

I will simply quote Dembski: "To be sure, these simplifications dispense with a lot of the structure of U and focus on a particular type of blind search, namely, uniform random sampling."

Sure, the Darwinian algorithm is not a purely blind search; I don't see this as a controversial point. But wouldn't it be interesting if the structures of U (the assisted search) in the end rest upon a "blind" process, much like a Newton optimizer depending on a blind choice of initial conditions?

Round and round and round.

I asked this about 20 times some time ago, but got bored with waiting for an answer.

Forget looking for 1 in 20^100 sequences of proteins - that's an arbitrary suggestion for a typical protein anyway. If you don't like the specification of 1 in 20^100 for a 100-amino-acid protein, then what *do* you like? Any base being one of four possible; only 50% of sites being significant? Well done. You have reduced the specification to 1 in 5^50. That represents a pretty wide range of possible sequences - but still no more than a probability of one corresponding protein in 8*10^34. The estimate for a protein with *any* functionality that was cited a few months ago was 1 in 10^11. What is your estimate for production of a random protein that has new functionality *useful* to an organism? And what is the engine for production and testing of random proteins anyway? How does it search the dominant space of useless proteins without drowning in rubbish strings of amino acid residues?

Now, knowing your propensity here to focus on irrelevant details to show how you have dealt with an argument, somebody will doubtless post a reply saying, "Actually, aCTa is wrong, because he only asked it 13 times."

… come to think of it, human DNA is supposedly 99% similar to chimp DNA - a fact of which we are regularly reminded.

Which means that presumably there is significance in the 99% correspondence (if neutral change were widespread - as implied in the argument above relating to a specification of 100 amino acids being artificially restrictive - then why should the similarity be so high? How can it be possible to say that this is human DNA and that is chimp DNA when there is only 1% difference between them?).

And also there is significance in the 1% difference (if a specification of 1 in 100 is artificial, how is it possible to say that there is a consistent, identifiable 1% difference between chimp and human DNA?).

Just asking …

Lurker,

What puzzles me about Dembski's paper - apart from the content - is why he hasn't submitted this article, minus the evolutionary speculation, to a mathematics or numerical methods (optimization) journal. For example, the *SIAM Journal on Optimization*. I'm sure that if there is something new and correct in his ideas then it would be of general interest. By separating what is an abstract mathematical problem in searching for global minima from speculations as to the consequences of his conclusions, Dembski has the opportunity to get a reputable publication. If it holds up then he can certainly make hay.

So, anyone know if Dembski plans to do this? Perhaps contributors at ARN could ask him.

DF

My "estimate for production…"? I'm going to assume you mean my estimate of the "*probability* of production…". (But really: if you're going to complain that folks are avoiding your questions, it behooves you to state them more clearly.)

The probability that an organism will "randomly" produce *any* particular protein is negligibly small. Organisms generally produce the proteins that are encoded in their genomes.

The engine of production and testing? Why, that's random mutation and natural selection. Surely you've heard of it. But a key point that your question seems to be missing is that "random" proteins do not spring into being to be sampled for some, any, functionality. Proteins, which generally have some functionality in the first place, are continuously sampled for any additional survival/reproduction-enhancing functionality they might possess.

Hope that answers your question. Thanks for asking.

Why would one drown? For Dembski and Behe's blabbering to become real science, they'd have to have a reliable way to estimate the percentage of strings which are biological rubbish. Since they can't do that, they use Creationist Statistics, which is to say, they sneak extreme improbability in through their assumptions, then discover it.

Steve: Conversely, if random mutation and natural selection are to become real science, you also need to have a reliable way to estimate the percentage of strings that aren't biological rubbish. If you can't do that, then your argument is just handwaving - you don't actually have a mechanism, just a story. This is what I am (still) waiting for. It would also be nice to know where you see this trial and error going on in organisms (to make new proteins, not in something completely different like the immune system), but we need to start somewhere. Incidentally, I suspect that in (say) humans, if the engine is new DNA sequences at the time of germ cell production, then with the exception of drastic small changes, like the almost universally quoted sickle-cell anaemia mutation, the functionality of any new proteins would be irrelevant compared to all the other traits present in the organism, and environmental factors.

Russell: Sorry if I'm not making my argument clear. I tend to come here when I ought to have already gone to bed. I have already said that I am not interested in any particular protein. I have offered very generous terms - only 50% of amino acids in the protein sequence being significant, and each of those being one of four possible amino acids. Note that if it is true that human DNA is only 1% different from chimp DNA, then this lack of specificity doesn't correspond to real life - if changes are "that neutral", then I would have thought you would expect a much greater diversity than 1% within each member of a species (which was the thrust of my second post).

However, put that to one side for now. The point I am making is: what level of specificity do you want before a new protein expresses functionality? If you don't like the creationist or ID estimates of the improbability of specification, then what level of specification is more accurate? If you don't have an estimate of this, then again, you don't have a theory that is open to falsification/demonstration, so it doesn't merit being called science yet.

Also, you can't keep saying "proteins that already exist" - I know from scientific papers on new protein functionality that most times new functionality is derived from already existing proteins. But at some stage, new proteins have to appear - again, if you can't explain (for example) where proto-bodgase comes from, then it is almost irrelevant that you can get from chimp-bodgase to human-bodgase, or that gdobase looks like bodgase with a transposed section.

Tell us what part of "Gene duplication and subsequent divergence is a common occurrence" you have trouble with.

DavidF,

I can only speculate about Dembski's motivations. After Shalizi recently tore into Dembski's first article (of 7) about some supposedly "novel" measure of information, I remember Dembski complaining that he had become disillusioned with rigorous mathematical research, and that his primary passion now lay in the philosophical (and, some may say, apologetic) aspects of it. Having said that, I doubt that even SIAM J Opt would be an appropriate place for Dembski's article, since it really does not have any practical applications. That is, I cannot think of a good practical problem that involves searching for search methods (a metasearch) with the peculiar properties Dembski cares about. Dembski would likely argue that such an example of a practical problem is the materialist conspiracy to treat the origin of the universe as a blind process. In my view, that would be a philosophical strawman. In terms of NFL-type analysis of the general properties of search algorithms, Dembski brushes off, in my opinion, significant criticisms of his premises (e.g. English) and his conclusions (e.g. Wolpert) with simple assertions and without proof. This hardly makes for good mathematics, but as we can see, it is great for polemics.

Still, underlying all of his articles are these peculiar premises. Dembski's main failure is to treat a "target" as a static, ahistorical, context-independent concept. Dembski's game is simple: once he has your agreement on a prespecified target, stripped of any context, he shows you that it is nearly impossible to find it by just doing random walks. I think Dembski sees the fallacy of this argument. In many ways, Dembski has already conceded the ground by formulating his thesis as a regress. Note that his argument isn't really that evolutionary mechanisms cannot create complicated designs. Rather, it is that those mechanisms (or assisted searches) that can create complicated designs can't themselves be found by blind processes. Regresses have always seemed to me to be desperate arguments.

Let me give another example that hopefully people can relate to. Let's say that you've created a specification: your spouse. Now, obviously you've found this person. But what were the chances that you would succeed in acquiring this target? Did you in fact hold this specification for all of your existence, such that it is an unambiguous match to your current spouse? If we applied Dembski's methodology, your chances look pretty grim: 1 in, say, roughly 3 billion people. Not only that, you had to be at the right place at the right time, i.e. your chances are inversely proportional to the surface area of the Earth and the amount of time you've been alive. According to Dembski, if you had your specification of a spouse before you met your spouse, then it was by Design that you found this person. In other words, you employed an assisted search by Design that allowed you single-handedly to tip the probabilities, so that the induced probability is very high for your spouse compared to all the other targets in the world.

OK. For those of you who don't believe this Pygmalionesque account of your Designing your own spouse since birth, I guess you have some soul searching to do on how you two eventually hooked up. =)

Simple biology would tell you there's no reason to think, a priori, that the useful strings are problematically rare. So no, you don't need this. The burden of showing a problem exists is on Dembski, much like the burden of Dorothy's house was on the Wicked Witch of the East.

Lurker,

Thanks! So it's the typical creationist argument that the life we actually see is the only possible target, and what are the odds of that happening? Equally, what are the odds that any of us is alive today - infinitesimally small, given the chances of all the events that had to occur, all the way down from Adam and Eve, to produce us :-)

My suggestion that Dembski try to get his work published in a mathematical journal was a bit tongue-in-cheek, in that I assumed he wouldn't be able to get it published even sans the evolutionary speculations.

So it's basically using fancy language as smoke and mirrors to hide the same ol' same ol' creationist arguments.

There is no evidence that God exists but there is no question whatsoever that a God or Gods must have existed. Otherwise we would not be here.

"Science commits suicide when she adopts a creed." - Thomas Henry Huxley

John A. Davison

Update