# Durston’s devious distortions

A few people (actually, a lot of people) have written to me asking me to address Kirk Durston’s probability argument that supposedly makes evolution impossible. I’d love to. I actually prepared extensively to deal with it, since it’s the argument he almost always trots out to debate for intelligent design, but — and this is a key point — Durston didn’t discuss this stuff at all! He brought out a few of the slides very late in the debate when there was no time for me to refute them, but otherwise, he was relying entirely on vague arguments about a first cause, accusations of corruption against atheists, and very silly biblical nonsense about Jesus. So this really isn’t about revisiting the debate at all — this is the stuff Durston sensibly avoided bringing up in a confrontation with somebody who’d be able to see through his smokescreen.

If you want to see Durston’s argument, it’s on YouTube. I notice the clowns on Uncommon Descent are crowing that this is a triumphant victory, but note again — Durston did not give this argument at our debate. In a chance to confront a biologist with his claims, Durston tucked his tail between his legs and ran away.

Let’s start with his formula for functional complexity. He took this from a paper by Hazen, Griffin, Carothers, and Szostak; I know Hazen and Szostak’s work quite well, and one thing you have to understand right away is that both are well-known for their work on the origins of life. They are not creationists by any means, and would probably be very surprised to see this paper being touted by a creationist as evidence that evolution is nearly impossible.

Here’s the formula that Durston cites (in Hazen’s notation, I(Ex) = -log2[M(Ex)/N], where M(Ex) is the number of configurations that achieve degree Ex of function x, and N is the total number of possible configurations):

Doesn’t that look impressively sciencey? It’s a very simple equation, though, used to quantify the amount of what Szostak calls “functional information”. It’s calculated with respect to a specific degree of a particular function, x. If we’re looking at a function x like catalyzing a phosphorylation reaction, for instance, we might want to know how likely it is that a random protein could do that job. The rest of the equation, then, is very straightforward — we just count how many different protein sequences, M(Ex), meet the criterion of carrying out function x to some specified degree, and then we divide by the total number of possible protein sequences, N. N can easily be very large — if we ask how many possible protein sequences are 10 amino acids long, with 20 different possible amino acids at each position, the answer is 20^10, or about 1 x 10^13, a very big number. And it gets even bigger very rapidly if you use longer protein sequences.

This big number can be misleading, though. We also want to know what fraction of all those sequences can carry out our function of interest, x, to some degree. This is the value of M(Ex). In the trivial case, maybe catalyzing phosphorylation is incredibly easy, and any protein has a level of activity that meets our criterion. Then we’d say that 10^13 out of 10^13 proteins can do it, the sequence doesn’t matter, and any 10-amino-acid protein you show me has no functional information relative to the function we’re measuring. On the other hand, if there were one and only one sequence that could carry out that catalysis, the functional information of our 10-amino-acid sequence would be at a maximum.

To reduce the metric a little more, Hazen takes the negative log base 2 of this number, which simply specifies the number of bits necessary to specify the functional configuration of the system. In our example of any protein doing the job, the answer is -log2(10^13/10^13), or -log2(1), which is 0 — no information is required. If only one sequence works, the answer is -log2(1/10^13), which, if you plug that into your calculators, is a bit more than 43 bits.
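If you want to play with the formula yourself, it takes only a few lines of Python (the function name is mine; this just restates Hazen’s I(Ex) = -log2(M(Ex)/N)):

```python
import math

def functional_information(m_ex, n):
    """Hazen's I(Ex) = -log2(M(Ex)/N), in bits."""
    return -math.log2(m_ex / n)

N = 20 ** 10  # all possible 10-residue sequences built from 20 amino acids

print(functional_information(N, N))  # every sequence works: 0 bits
print(functional_information(1, N))  # exactly one sequence works: ~43.2 bits
```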

It’s very easy and cheesily fun to churn out big numbers with these kinds of calculations. For instance, here’s part of the first sentence of the Hazen paper:

Complex emergent systems, in which interactions among numerous components or agents produce patterns or behaviors not obtainable by individual components, are ubiquitous at every scale of the physical universe

If you strip out the punctuation and spaces from that sentence, there are a total of 181 alphabetic characters there. How many possible arrangements of 26 letters in a sequence 181 characters long can there be? 26^181, or 1.3 x 10^256. It’s huge! If we take the negative log base 2 of 1/26^181, we just produce something more manageable: you could encode that one specific sentence in 851 bits. But it still means the same thing: that this is a very large number, and any one specific sequence is very improbable.
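You can check that arithmetic in a couple of lines of Python:

```python
import math

# Arrangements of 26 letters in a 181-character string
N = 26 ** 181
print(f"{N:.1e}")                  # 1.3e+256

# Bits needed to pin down one specific 181-character sentence: -log2(1/N)
print(round(181 * math.log2(26)))  # 851
```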

What the Hazen equation does, though, is include that important parameter M(Ex). There are obviously other ways to communicate the meaning of the sentence besides that one specific arrangement of letters. I rewrote Hazen’s sentence a little less elegantly (the hard part was writing it so it came out to be exactly 181 characters long) here:

Complicated stuff that is built up by many smaller components interacting with each other to make novel arrangements, arrangements that cannot be seen in the single pieces, are common everywhere in the known universe.

How many sentences like that are there? I don’t know, but I’m sure there are a lot; it’s also the case that we don’t even need to be grammatical or elegant to get the basic message across. This works, too:

There xxxxxx arre l0ts of xxxx big thijngs xxx xxxxxxx xx made of xxxxxxx littler x thangs xx xxxxxx stuck togther xxxxxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxxxxxx

Hazen is making the point that all three of those 181-character sentences are functionally equivalent. To measure the functional complexity of the sequence, you need to at least estimate the number of functional variants and divide by the total number of possible arrangements of letters. This measurement is also only applicable in the context of a specific function, in this case getting across the message of the ubiquity of emergent complexity. This sentence fragment, for instance, would not satisfy the requirements of that function x, but you know, it might just carry a different functional message.

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was th

Keep this in mind. Hazen’s formula is used to calculate the information content of a specific function, not all possible functions.

Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree.

Get it? I know, it’s a lot of background and a lot of numbers being thrown around, but this is a computational tool they are using in artificial life simulations. Basically, they are asking, if we make a random letter sequence, what is the probability that it will say something about patterns? Or, if they make a random peptide, what is the probability that it will catalyze a particular reaction they are measuring? Or, if they create a random program in an artificial life simulator like Avida, how likely is it that they’ll get something that can add two numbers together?

I’m not going to try to give you the details of Hazen’s results, since they’re largely tangential to my point here — they look at the distribution of solutions, for instance. But they do observe that in Avida, with an instruction set of 26 commands, and randomly generating 100-instruction programs, they find programs that carry out one logic or arithmetic function in about 1 in a thousand cases. There are about 3 x 10^141 possible arrangements of 26 instructions taken 100 at a time; any one specific sequence has an I(Ex) of about 470 bits.
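Again, the arithmetic is easy to verify:

```python
import math

# Random 100-instruction Avida programs drawn from a 26-command set
N = 26 ** 100
print(f"{N:.0e}")                  # about 3e+141 possible programs
print(round(100 * math.log2(26)))  # one specific program: 470 bits
```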

Kirk Durston loves the Hazen paper. He has cited it many times in the various debates recorded on the web. It’s wonderful because it’s a real scientific citation, it talks about measuring the functional complexity of things, and it’s got math — simple math, but it’s enough to wow an uninformed crowd. Just watch how he abuses this simple formula!

Start here:

That’s it, three terms: N, M(Ex), and I(Ex). He misuses them all. We start with N, and here’s how he calculates it.

Hang on. N, as Hazen defines it, is the number of possible configurations of n possible elements. Durston doesn’t have a way to calculate that directly, so he invents a kind of proxy: an estimate of the total number of unique individuals that have ever existed. This is wrong. Here we have a simple metric that we could use, for instance, to calculate the number of different possible poker hands that can be dealt from a deck, and instead, Durston is trying to use the number of times a hand has been dealt. Right away, he’s deviated so far from the Hazen paper that his interpretations don’t matter.

Now you might say that this is actually a change in our favor. It makes the number N much smaller than it should be, which means the probability of a specific result out of N possibilities is improved. But that’s not even how Durston uses it! Suddenly, he tells us that N is a limit on an evolutionary search (again, that’s not at all how Hazen is using it).

Here’s the game he’s playing. Durston shows up with a deck of cards for a game of poker; he knows, and you know, that the odds of getting a specific sequence of cards in a five-card deal are really low (about 1 in 3 x 10^8). Then he tells you he only has time to deal out 100 hands to you, and wants to know if you want to just give him the money he’d win right now, since with only 10^2 trials to test over 3 x 10^8 possibilities, you are going to fall far short of exhausting the search space, and are highly unlikely to find the one specific hand he has in mind…which is true. Of course, none of that has any bearing on how poker is played.
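For the record, here’s where that 3 x 10^8 comes from (the number of ordered five-card deals):

```python
# Ordered five-card deals from a 52-card deck: 52 choices, then 51, 50, 49, 48
deals = 52 * 51 * 50 * 49 * 48
print(deals)  # 311875200, i.e. about 3 x 10^8
```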

So, he’s basically abandoned the Hazen paper altogether — it was a veneer of scientific respectability that he initially held up in front of us, and then ignored so that he could plug whatever numbers he wants into the equation. Then he lowballs his irrelevant version of the number N, and redefines it to be a limit on the number of trials. Sneaky.

What about the next parameter? M(Ex) is a rather important value in Hazen’s paper, defined as “the number of different configurations that achieves or exceeds the specified degree of function x”. One of the points in that work is that there are many different ways to accomplish function x, so this can be a fairly significant number. To continue our poker analogy, the goal of a hand is to beat the other hands — that’s our function x, to have a combination of cards that has a greater rarity than every other player’s hand. M(Ex) is actually rather large, since the average poker hand will beat half of all other poker hands (and need I add, every round of poker will have one hand that wins!). How does Durston handle M(Ex)?

He ignores it. He simply sets it to 1.

He slides right over this rather significant fact. The next thing we see is that he announces that 140 bits (which is the log base 2 of 10^42) is the upper bound of the information that can be generated by an evolutionary search, and suggests that anything above this magic number is unreachable by evolution, and anything below it could be reached by random processes.
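That “140 bits” is nothing more than this one-liner:

```python
import math
print(math.log2(10 ** 42))  # ~139.5, Durston's "140 bit" ceiling
```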

What that means is that he only accepts one possible solution in an evolutionary lineage. He is estimating the probability that an organism will have precisely the genetic sequence it has, as derived from a purely random sequence, within a limited amount of trials. No incremental approach is allowed, and worse, it is the one and only sequence that is functionally relevant. The only way he imagines a sequence can be reached is by randomization, and all he considers is the conclusion. It really is a gussied-up version of the ‘747 in a junkyard’ argument that the old school creationists still use.

To summarize, what we’re dealing with is a guy who drones on about basic mathematics and pretends that his conclusions have all the authority of fundamental math and physics behind them. He waves a paper or two around to claim the credibility of the scientific literature, and then ignores the content of that paper to willfully distort the parameters to reach his desired conclusion, in contradiction to the actual positions of the authors. And then he further ignores the actual explanations of evolutionary biology to use a hopelessly naive and fallacious model of his own invention to claim that evolution is false. He’s a pseudoscientific fraud.

I understand he’s actually in a doctoral program in Canada. I hope that, before his thesis defense, a few competent people look over his thesis and check his interpretations of his sources. I just looked over the Hazen paper and compared it to what he’s claiming about it, and his version is completely bogus.

Hazen RM, Griffin PL, Carothers JM, Szostak JW (2007) Functional information and the emergence of biocomplexity. Proc Natl Acad Sci U S A 104 Suppl 1:8574-81.

PZ Myers Wrote:

What that means is that he only accepts one possible solution in an evolutionary lineage. He is estimating the probability that an organism will have precisely the genetic sequence it has, as derived from a purely random sequence, within a limited amount of trials. No incremental approach is allowed, and worse, it is the one and only sequence that is functionally relevant. The only way he imagines a sequence can be reached is by randomization, and all he considers is the conclusion. It really is a gussied-up version of the ‘747 in a junkyard’ argument that the old school creationists still use.

Wow; I could see that coming even before I got to that paragraph. Whenever I see an ID/Creationist messing with probabilities, I already know where it is going no matter what the calculation.

This misconception, that the final outcome of a stochastic process is where the process was headed, is probably one of the most fundamental misconceptions of the ID/Creationists. It is amazing that as many times as this bogus argument is debunked, they continue to put more lipstick on it and try to pass it off in a new venue. Publication hijacking seems to be a favorite trick to make the rubes think the speaker is a knowledgeable insider to the science community.

In fact, I would suggest that this misconception is the fundamental misconception of ID/Creationism. It probably relates to the fact that we humans are embedded in the current products of evolution and we have a long history of projecting our inner selves onto the universe when we have tried to explain it in the past. Unfortunately, authoritarian religions lock this misconception into place, and then everything else must be bent to fit it no matter how grotesque the result.

This is why ID/Creationism is pseudo-science forever.

KIRK DURSTON, B.Sc. (Physics), B.Sc. (Mech. Eng.), M.A. (Philosophy), Ph.D. Candidate (Biophysics) at the University of Guelph. Kirk Durston is the National Director of the New Scholars Society. He is currently a Ph.D. candidate in Biophysics at the University of Guelph, specializing in the application of information to biopolymers.

It really is a gussied-up version of the ‘747 in a junkyard’ argument that the old school creationists still use.

It sounds very mysterious and technically sophisticated when you first hear of it. There is the implication that if you see a fair amount of “functional information” then this proves it could not come from natural selection. But it could. In William Dembski’s hands a similar argument involves observing Specified Complexity (in effect the same thing). He goes one better than Durston in also having a theorem (his Law of Conservation of Complex Specified Information) that is supposed to show that natural selection cannot improve the degree of adaptation of a species. In fact, Dembski’s LCCSI is incorrect, and even if it were mathematically correct it is stated in a way that makes it irrelevant to the question. It sounds as if Durston is not nearly as sophisticated as this, and just assumes natural selection carries out a random search, exactly as you say – the “tornado in a junkyard”.

Obviously you heard Durston speak and I only have the Youtube video plus his comments on UD, but I think you are being unfair. His presentation is erroneous - but not for the reasons you give. Peter Olafsson (and to a lesser extent myself) has commented on his presentation on UD and explained the statistical errors, which are severe but not the ones you describe.

(1) His 10^42 number appears to be an estimate of the total number of DNA/RNA mutations that have taken place since life began. He does not use this as a substitute for N in Hazen’s formula. He uses it to estimate the chances of nature stumbling across something that meets the criteria of Hazen’s formula. In one sense this is quite reasonable. If you want to estimate the chances of dealing a Straight Flush during a poker marathon - you need to know how many hands were dealt. Of course the rub here is the role of natural selection. See (3) below.

(2) In the example of Venter’s watermarks I think he does set M(Ex) to 1. However, in the next example, folding proteins, he clearly doesn’t. He considers all folding proteins.

(3) Initially he ducks the role of natural selection in reducing the odds but he does come back to it in two ways. One is a fleeting reference to the NFL theorem about fitness functions having to be more complex than the target they are leading to. This is absolute rubbish as in evolution the fitness function comes first and defines the target. But I don’t think he understood what he was talking about anyway. The second relies on a knowledge of biochemistry which I am unable to challenge. In the case of folding proteins he claims that:

a. Only folding proteins (plus a small amount of others) are any use in life.

b. These comprise a minuscule proportion of all possible proteins

c. The folding proteins are in clusters of similar proteins but these clusters have no relationship to each other - they are scattered across the space of possible proteins.

If all of these are true I think it presents an interesting problem for evolutionary biology. How did replication and mutation get from the original RNA/DNA across the wide open spaces to these clusters of folding proteins? There does not appear to be any fitness advantage in generating non-folding proteins, so natural selection cannot have driven replication that way. And the chances of stumbling on another cluster through genetic drift seem to be negligible.

However, I am not aware of this being considered a big problem in evolutionary biology so I am sure there is something wrong with this argument. But I think the fault lies in the biochemistry. Are a, b and c really true?

Even if this does present a genuine problem for evolutionary biology it is another giant step to deduce an intelligent source. It is really just a problem to be addressed.

Point b listed above has been shown experimentally with alpha-helix bundles to be nonsense. I don’t have the citation handy, but a reasonably robust number of random sequences were capable of folding into four-helix bundles.

Point c is guesswork because we do not actually have good data on the extent of possible overlap between different folding motifs. It’s a good question to investigate seriously.

“Probability calculations are the last refuge of a scoundrel.” – Jeff Shallit

Cheers – MrG / http://gvgpd.proboards.com

PZ Myers Wrote:

It really is a gussied-up version of the ‘747 in a junkyard’ argument that the old school creationists still use.

This is possibly better known as the “lottery winner fallacy”: the odds of winning the national lottery are so small that if somebody wins it, obviously the results had to have been rigged.

(Of course this is bogus, I’ve won national lotteries several times, or at least that’s what I’ve been told by emails sent to me by strangers on occasion.)

Incidentally, the odds of the formation of a salt crystal 100 atoms on a side are 2^500000 = 10^150,515 – or at least those are the odds if the laws of chemistry are ignored. It’s nice to cite this number back to Darwin-bashers: “What, a mere 10^42?! You think too small.”

I admit that I haven’t carefully read every word of the original article, so I might have missed it. But from a quick read I see no indication of what Durston proposes instead, other than “some designer did something at some time.” The obvious questions are: Did the designer - or a yet-undiscovered “naturalistic” mechanism that Durston did not rule out - operate in-vivo, as Michael Behe thinks? Or does Durston have an “in vitro” alternative that no major IDer has yet proposed, let alone tested? And when did those “blessed events” occur? Last Thursday? Over the course of billions of years? Once at the beginning of life ~4 billion years ago, as Behe once suggested? At the beginning of the Universe ~14 billion years ago, as Dembski once suggested?

Just because his misleading negative arguments need to be addressed is no reason to let him off the hook from providing anything meaningful about his own alternative.

Frank J said:

I admit that I haven’t carefully read every word of the original article, so I might have missed it. But from a quick read I see no indication of what Durston proposes instead, other than “some designer did something at some time.” The obvious questions are: Did the designer - or a yet-undiscovered “naturalistic” mechanism that Durston did not rule out - operate in-vivo, as Michael Behe thinks? Or does Durston have an “in vitro” alternative that no major IDer has yet proposed, let alone tested? And when did those “blessed events” occur? Last Thursday? Over the course of billions of years? Once at the beginning of life ~4 billion years ago, as Behe once suggested? At the beginning of the Universe ~14 billion years ago, as Dembski once suggested?

Just because his misleading negative arguments need to be addressed is no reason to let him off the hook from providing anything meaningful about his own alternative.

You are right. He is extremely confusing (and I suspect confused) about what the ID hypothesis is. I have an outstanding question to him on this very topic on UD.

Regarding the assertion that only folded proteins are of any utility: There is quite a bit of recent research that has revealed that many proteins (fesselin and synaptopodin, for example - involved in muscle contraction; several transcription factors; etc.) are in fact natively UNFOLDED. They adopt structure when interacting with a particular partner - an “induced fit” model of folding. In fact, it appears that this may be more the rule than the exception, for at least a portion of most proteins. What this appears to accomplish is that it allows for divergent functions of a single polypeptide depending on the substrate or interaction partners. This dramatically expands the functional utility and freedom of a single protein, and in my mind further diminishes these types of already erroneous probability arguments.

If interested, search “natively unfolded protein” on Medline or equivalent.

Doesn’t the cumulative nature of natural selection preclude a totally random search?

Dave Wisker said:

Doesn’t the cumulative nature of natural selection preclude a totally random search?

That’s kind of the point. It’s not searching for anything; it’s not trying to fulfill some predetermined spec; that’s just where it ended up. “But every step in its evolution had to be beneficial (or at least not harmful) – that’s impossible!”

No, silly, if there was a wrong step that branch died out. From the result of any good step in the evolutionary sequence, there are a wide number of next steps, some good, some indifferent, some bad. One might as well proclaim that a river can’t flow thousands of kilometers because it has to flow consistently downhill every step of the way. “What are the odds?”

Cheers – MrG / http://gvgpd.proboards.com

Mark Frank Wrote:

You are right. He is extremely confusing (and I suspect confused) about what the ID hypothesis is. I have an outstanding question to him on this very topic on UD.

You better check if it’s still there. You might know that UD regularly deletes comments and questions that are inconvenient to their propaganda.

So at best, even if all of his arbitrary and biologically nonsensical assumptions are considered to be correct, this guy has shown that evolution couldn’t possibly occur without natural selection. Great. Just 150 years behind the times. That should be good enough to get a PhD.

That’s kind of the point. It’s not searching for anything,

Well, that’s not exactly what I was getting at in my comment, but I agree that ‘search’ is a poor analogy. If anything, natural selection is more a matching algorithm, a matching of variation to environment.

Frank J said:

You better check if it’s still there. You might know that UD regularly deletes comments and questions that are inconvenient to their propaganda.

Actually that has improved hugely since Barry Arrington took over. I have been posting completely unmoderated under my own name for several weeks - nothing deleted or altered.

Mark Frank,

A large part of Durston’s argument relies on the isolation of new protein folds in sequence space. It seems to me that there remains a fair bit to learn about this aspect of protein evolution. In this sense he is taking advantage of a gap in understanding. Having said that, when he discusses this topic, for example here:

http://www.newscholars.com/papers/O[…]anations.pdf

he doesn’t present the full picture. The gap in our knowledge isn’t quite so extreme. For example:

Tuinstra, R.L., Peterson, F.C., Kutlesa, S., Elgin, E.S., Kron, M.A., and Volkman, B.F. (2008) Interconversion between two unrelated protein folds in the lymphotactin native state. Proc. Nat. Acad. Sci. (USA) 105:5057-5062

One thing that he uses to support his argument is work by Douglas Axe, research that IDers (Axe is one) have picked up on to support the notion of isolation in sequence space. However, Arthur Hunt does a very good job of showing how such an interpretation is flawed:

http://pandasthumb.org/archives/200[…]d-st-fa.html

Note that I’m not an expert and shouldn’t be considered as such!

but I agree that ‘search’ is a poor analogy. If anything, natural selection is more a matching algorithm, a matching of variation to environment.

I think “explore” is better than “search”. Evolution has no idea what it is going to find; it just explores opportunities.

The bit about NS seems to involve some rather furious handwaving. OK: he’s shown that random search can’t generate more than X bits of FI in the allowed number of trials. But he seems to simply assert without justification that no selection algorithm can make up the deficit. If we start with an initially low value for Ex (maybe we don’t much care about the substrate, as long as something gets catalyzed, a little bit), then M(Ex) might be quite large, and the FI correspondingly low. Then iterate using a slightly higher Ex threshold (or more specific x), and generate new candidates only from those with the highest Ex of the previous round (which is of course classically Darwinian…).
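That iterate-and-raise-the-threshold scheme is easy to demonstrate in a toy simulation (entirely my own construction: the bit-counting “function”, the population size, and the mutation rate are all invented for illustration). Cumulative selection, keeping only the candidates above each round’s threshold, reaches a 100-bit optimum in on the order of 10^4 trials, where a blind random search would face 2^100, about 10^30, possibilities:

```python
import random

random.seed(0)
L = 100  # bit-string length; a blind search faces 2**100 ~ 1e30 states

def degree_of_function(seq):
    # Toy Ex: the number of 1-bits stands in for "degree of function"
    return sum(seq)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(100)]
trials = len(pop)

for generation in range(5000):
    pop.sort(key=degree_of_function, reverse=True)
    if degree_of_function(pop[0]) == L:
        break
    parents = pop[:20]  # this round's threshold: the top fifth survive
    # Each new candidate is a lightly mutated copy of a surviving parent
    pop = parents + [[bit ^ (random.random() < 0.005) for bit in random.choice(parents)]
                     for _ in range(80)]
    trials += 80

print(degree_of_function(pop[0]), trials)  # climbs to the maximum after ~10**4 trials
```

Nothing here is a model of real biochemistry; the point is only that carrying survivors forward between rounds cuts the trial count by some 26 orders of magnitude relative to pure random search.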

(3) … The second relies on a knowledge of biochemistry which I am unable to challenge. In the case of folding proteins he claims that: a. Only folding proteins (plus a small amount of others) are any use in life. b. These comprise a minuscule proportion of all possible proteins c. The folding proteins are in clusters of similar proteins but these clusters have no relationship to each other - they are scattered across the space of possible proteins. …. However, I am not aware of this being considered a big problem in evolutionary biology so I am sure there is something wrong with this argument. But I think the fault lies in the biochemistry. Are a, b and c really true?

(a) As indicated above, not true. On UD, Durston mentions the fact that unstructured proteins are known to exist, and mentions that they may assume some semblance of structure when they bind other proteins. But he chooses to ignore the contribution of all of this to his arguments.

(b) If by minuscule, Durston means “too small to get to in 10^42 trials” (or whatever line in the sand Durston chooses to draw), he is wrong. This has also been mentioned above.

(c) Well, it’s possible to think of families of folds as being isolated in some senses in sequence space, but it’s also known that the gulfs that separate such families may be traversed by mutation without sampling all of the intervening, presumably non-functional or deleterious, space (something I analogize to myself with electron tunneling). Moreover, a very large amount of functional diversity can be had within single sequence and structural classes. So, all in all, there are no problems here for evolution.

Mark Frank said:

Frank J said:

You better check if it’s still there. You might know that UD regularly deletes comments and questions that are inconvenient to their propaganda.

Actually that has improved hugely since Barry Arrington took over. I have been posting completely unmoderated under my own name for several weeks - nothing deleted or altered.

Lucky you. Not only am I still in moderation, I had a long comment on that thread deleted without explanation, and the comment complaining about it also deleted. I was pointing out that ID would seem more honest and scientific if it admitted to and rejected its creationist leanings, you see. :p

Durston’s argument almost sounds similar to Behe’s assertion that he’s found the “mathematical limits to Darwinism”, as noted in his second pathetic exercise in mendacious intellectual pornography, “The Edge of Evolution”. Thankfully a few others, most notably Mark Chu-Carroll, were among the first to “deconstruct” Behe’s acute ignorance of probability theory, in his effort to explain why certain mutations had to occur simultaneously in the Plasmodium malarial parasite. Wonder when Durston and Behe will try calculating the probability of the existence of an Intelligent Designer (IMHO, such an entity was most likely a Klingon scientist who travelled backward in time to the primordial Earth and seeded it with life, billions and billions of years ago.).

Mark Frank said:

However, I am not aware of this being considered a big problem in evolutionary biology so I am sure there is something wrong with this argument. But I think the fault lies in the biochemistry. Are a, b and c really true?

Actually not. In fact, emergent phenomena are ubiquitous at every level of complexity in condensed matter, and this simply increases more rapidly in the systems leading to and involving life.

At even the most elementary level of atoms of a single element condensing into a solid, emergent phenomena come rapidly. For example atoms like copper form a metal, meaning that the electrons within it are free to move easily under the influence of an electric field. The reflectivity of the metal and its color are determined by energy states of electrons that weren’t there when the copper atoms were simply individuals.

Things like ductility, thermal conductivity, hardness, and all the mechanical properties we associate with copper the metal are not properties of the individual atoms.

This becomes even more important when we start viewing these emergent properties with respect to the environment in which the metal finds itself and look at these at various temperatures. Then we see things like a positive or negative meniscus in the presence of melted lead for example. These characteristics are dependent on the presence of oxides of copper and the temperature. They come from subtle electrical potentials, e.g., Van der Waals forces, between the metal and other elements, solid, liquid, or gaseous, in its environment.

It goes on and on; and this is just the simple stuff.

Get into organic structures, and the emergent properties become even more complicated. And the sensitivity of these emergent properties to the surrounding environment becomes even more important. And this is exactly the point of natural selection when it comes to deciding what properties are important at any given level of complexity and how the next stages of complexity will develop.

While I applaud the attempts to get a mathematical handle on the stepping stones to complexity, defining terms and enumerating the paths to these defined terms is extremely difficult. The difficulty arises precisely because of the rapidly emerging subtle phenomena immersed in a changing environment, most of which will become the major determiners in subsequent steps.

Stepping back a bit and taking a more distant view of Durston’s argument, the irony is that it is operating at a level of technical elaboration – I was tempted to say “sophistication” but concluded I would regret it – such that the only audience with enough background to seriously examine it consists of the people who know perfectly well it’s bogus.

Of course the real intent is to hand the charmless visitors who show up on PT to pick fights (and their like) yet another exercise in windy muddying of the waters.

Cheers – MrG / http://gvgpd.proboards.com

Mark Frank said:

but I agree that ‘search’ is a poor analogy. If anything, natural selection is more a matching algorithm, a matching of variation to environment.

I think “explore” is better than “search”. Evolution has no idea what it is going to find; it just explores opportunities.

I will repeat yet again what I’ve said innumerable times over the last half dozen years: Analyzing biological evolution as though it is a search process is a snare and a deception. Dembski’s latest papers with Marks are precisely in that class: snares and deceptions. Durston’s blathering is in the same class.

Venus Mousetrap Wrote:

I was pointing out that ID would seem more honest and scientific if it admitted to and rejected its creationist leanings, you see. :p

The irony is that the DI’s target audience mostly doesn’t care about its “creationist leanings,” either the “cdesign proponentsists” history, or the fact that even the designer-free “replacement scam” still effectively promotes Biblical literalism by exploiting public misconceptions. Plus they don’t even try that hard to hide it from the courts any more - if that’s even possible after Dover and “Expelled”.

That said, may I recommend a different approach? Specifically alerting any YEC and OEC followers desperately seeking validation of their childhood fairy tale that nothing in ID offers the slightest support of either a YEC account or even a progressive OEC one that denies common descent. Remind them that Behe and a few others (e.g. DaveScot) even conceded common descent outright, and ask how many major IDers have ever challenged them directly.

Why do we allow the ID-as-science crowd to continue to successfully conflate supernatural “intelligent design” with natural “intelligent design?”

The IDers (on purpose!) get away with this conflation all the time in front of the general public in their writings, their public presentations, and even in their public debates on the subject of what should be included in public science education curricula. They argue: “There are scientific ways and methods of discerning “intelligent design” that are employed in archaeology and even in the SETI program, and so what’s the big deal, we [IDers] merely want to do the same thing as applies to biology [biologically functional processes and structures].” And whenever they say things like that, somebody must promptly and emphatically point out that IDers want to employ such scientific “ways” to discern NOT natural “intelligent design” (the sort of “design” that archaeology and the SETI program seek to discern/discover), but rather supernatural “intelligent design.” When no one does, their purposeful conflation succeeds, and the public fails to recognize the subtle but vital difference between what scientific methods for discerning “intelligent design” can legitimately discern (namely, natural “intelligent design”) and what they cannot (namely, supernatural “intelligent design”).

And so I ask again: Why do we let the ID crowd continue to successfully conflate supernatural “intelligent design” with natural “intelligent design?”

Every time an IDer speaks of “intelligent design” we should interrupt and ask: “Excuse me, you mean supernatural “intelligent design,” right?” Make them confess and clarify (sorting-out and thwarting their hoped-for conflation), or make them lie. Each and every time.

Frank Lovell said:

Why do we allow the ID-as-science crowd to continue to successfully conflate supernatural “intelligent design” with natural “intelligent design?”

The real irony is that they’re hanged either way. “Well, possibly the Designer was a gang of alien visitors.”

“But that is proposing a very elaborate mechanism – an entire alien species, its culture, and its technology – and has no credibility unless you can provide some hard evidence for the existence of said aliens.”

“Well, OK, maybe it was a supernatural entity.”

“So you claim it JUST HAPPENED in an unexplained and unexplainable fashion … well, maybe it did, but it would be difficult to say that was an explanation.” It’s somewhat bizarre to see two equally bad proposals overlaid on each other in something of a state of “quantum indeterminacy” in hopes of the result being more than the sum of its defective parts. It’s Taking the Fifth: “I don’t see the need to specify the Designer, and I refuse to do so on the grounds that I might incriminate myself.”

Cheers – MrG / http://gvgpd.proboards.com

RBH said:

I will repeat yet again what I’ve said innumerable times over the last half dozen years: Analyzing biological evolution as though it is a search process is a snare and a deception.

The other interesting irony is that the critics complain about the absence of intent and determinism in Darwinian evolution – and then read intent and determinism into it to show why it won’t work.

Cheers – MrG / http://gvgpd.proboards.com

Nicely done, PZ.

My discovery of your blog came just in time! I am asking your permission to quote some things here; I need to show the irony, mathematically.