Does CSI enable us to detect Design? A reply to William Dembski


In response to our post and comments on “Stephen Meyer needs your help”, William Dembski has replied at Evolution News and Views. He is upset that we were “attempting to disparage” Meyer’s book without having seen it. (More on that below.)

In that post, I made a suggestion about content for Meyer’s book. I suggested that Meyer acknowledge in his book that Dembski’s Design Inference using Complex Specified Information (CSI) had failed, because the theorem that Dembski needed did not exist. Dembski disagrees:

Felsenstein’s request for clarification could just as well have been addressed to me, so let me respond, making clear why criticisms by Felsenstein, Shallit, et al. don’t hold water.

If Dembski has refuted these criticisms, that is worth careful attention; you would need to understand why Shallit and I were wrong. Were we? Even if we were right, has Dembski supplanted his earlier arguments with newer ones that do a better job of arguing against the effectiveness of natural selection?

As the argument needs more than a few lines, I will place most of it below the fold. There I will argue that

  • When Shallit and Elsberry found a hole in Dembski’s theorem, and when I pointed out that the theorem was unable to refute the effectiveness of natural selection (because its specification changed in midstream), we were right.
  • Dembski’s more recent reformulation of his CSI argument in 2005 adds a term that calculates the probability that natural selection and mutation could do the job; this simply made the rest of the Design Inference redundant, and
  • The more recent Search For a Search arguments of Dembski and Marks are arguments about a Designer being needed in order to make the pattern of fitnesses be one in which natural selection does work. Thus they are not arguments against the effectiveness of natural selection.

Let’s see …

CSI

Dembski’s argument depends on Complex Specified Information. In my 2007 article I accepted the validity of CSI (though many other critics of Dembski have argued that it is meaningless or unusable). In effect, it uses a scale – in my case I made this the ultimate scale, fitness. In Dembski’s original formulation in his books The Design Inference: Eliminating Chance through Small Probabilities (1998) and No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002), CSI is present if the population is far enough out on the fitness scale that there would be fewer than 1 individual in 10^150 there in the original population.

Seeing a population that is that fit would be astronomically unlikely if the process of evolution were random mutation – say, monkeys typing out genome sequences on 4-letter typewriters. And yet it is obvious that real organisms have CSI: you could type trillions of random genomes trillions of times, and never make a fish that could swim or a bird that could fly.

But what about natural selection? Is it unable to get the genome to contain Complex Specified Information? For the observation of CSI to imply that a process like Design is needed, we have to be able to rule out that natural selection could get the population to have CSI.

The Conservation Law

That is what Dembski’s Law of Conservation of Complex Specified Information (LCCSI) was supposed to do. It assumes that we are in a space of genomes, and models evolution as a 1-1 transformation in that space. Dembski then argues that the genome cannot come to have CSI unless it starts out having it – it cannot get into the extreme top tail of the distribution of possible fitnesses unless it started there.

If any theorem of this sort were valid, this would be a Big Problem for evolutionary biology. But is Dembski’s theorem valid? There are two problems. Dembski sketches a proof in No Free Lunch: For the case where the evolutionary process is deterministic, he argues that after the 1-1 evolutionary process has operated, the strength of the specification is the same afterwards as it was before. He does this by defining a new specification and showing that it is just as strong as the one we started with. [Actually, I erred here (and in my 2007 paper). Dembski does not restrict deterministic evolutionary causes to be 1-1 transforms. He allows many-to-one transforms as well. But the remainder of my critique still works for those. See below, at the end of this post, for the details of the correction.]

The method is simple: in place of “in the top 10^-150 of the original fitness distribution” he substitutes a new specification, “when transformed backwards through the 1-1 transformation, in the top 10^-150 of the original fitness distribution.” Thus after the evolutionary process operates, we just go backwards through the 1-1 mapping and the population finds itself back where it started, and thus it is in a region that is just as strongly specified.

That argument would work fine but for two problems:

  • Elsberry and Shallit have pointed out a problem: Dembski himself required that the specification be defined independently of the 1-1 transformation. Yet the new specification uses the transformation.
  • I pointed out (in my 2007 article) that even if Dembski’s theorem were proven, it would be of the wrong form to refute the effectiveness of natural selection. Recall that we would need a theorem that shows you cannot get into the set of high-fitness genotypes unless you start within it. That means we need a theorem that applies the same specification after evolution acts as it did before. Dembski’s version changes the specification before and after. He has no LCCSI theorem, proven, sketched, or otherwise, that uses the same specification (say “in the top 10^-150 of the original fitness distribution”) before and after.

It is very easy to come up with models of natural selection acting in populations that move the population to regions of higher fitness, and if this goes on long enough at enough sites, the population comes to be in the top 10^-150 of the original distribution of fitnesses. So no theorem like the LCCSI seems possible, once we require that the specification stay the same throughout the process.
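
To make that concrete, here is a deliberately crude toy sketch, added purely for illustration: genomes of 500 biallelic sites, fitness equal to the number of 1 alleles, one mutation per offspring, and truncation selection standing in as a stand-in for natural selection. Under the monkeys-at-typewriters null the all-ones genotype has probability 2^-500, below the 10^-150 cutoff, yet this simple selection scheme typically reaches it within a few hundred generations.

    import random

    L, POP, GENS = 500, 100, 1000   # sites, population size, generations

    def random_genome():
        return [random.randint(0, 1) for _ in range(L)]

    # Fitness = number of 1 alleles.  The all-ones genotype has probability
    # 2**-500 (about 10**-150.5) under pure random "typing" of genomes.
    pop = [(sum(g), g) for g in (random_genome() for _ in range(POP))]

    for gen in range(GENS):
        offspring = []
        for fit, g in pop:
            child = g[:]                      # copy the parent
            child[random.randrange(L)] ^= 1   # one random mutation per offspring
            offspring.append((sum(child), child))
        # truncation selection: keep the POP fittest of parents plus offspring
        pop = sorted(pop + offspring, key=lambda t: t[0], reverse=True)[:POP]
        if pop[0][0] == L:
            print("all-ones genotype reached at generation", gen)
            break

    print("best fitness:", pop[0][0], "out of", L)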

Dembski has nowhere argued that Elsberry and Shallit were wrong about the technical mistake, and he has nowhere argued that I was wrong about the problem of changing the specification. So does that mean that he now concedes that he was mistaken? He doesn’t seem to do that either. Instead he points to new and different arguments of his.

Dembski’s revised LCCSI argument

In 2005, in his paper “Specification: The Pattern That Signifies Intelligence”, Dembski put forward a new version of his measure of CSI. Admittedly, I did not deal with this revision in my 2007 paper, so let me comment on it now.

His formula includes the probability P(T|H) that a target region T is reached given a “chance hypothesis” H. In section 8 of the 2005 paper, Dembski makes it clear that, when Design is to be detected, the “chance hypothesis” should be one that includes all natural biological processes, including not only mutation but also natural selection.

So detection of Design from some adaptation using this new formula works like this:

  1. Work out the probability that this good an adaptation could arise by natural processes including mutation and natural selection.
  2. If this is small enough (in the new case, less than 10^-120), then we declare Design to have been detected.
  3. From that we can conclude that the adaptation could not have arisen by mutation and natural selection.

I think that the reader will see the problem: the new form of specified complexity cannot simply be determined by how improbable the adaptation would be in genomes produced by monkeys with four-letter typewriters. Now a calculation must also be made of the probability that such a mutational process together with natural selection could produce the adaptation. Simply showing that one is in the top 10^-120 of all the fitnesses in the original pool of genotypes is not enough to declare CSI.

Given that, the declaration that Specified Complexity is observed in nature is not obvious (it was obvious under the previous definition of CSI). To compute Dembski’s quantity we need to determine whether natural processes could produce the observed adaptations – which is the very thing we were trying to decide.

The Search For A Search arguments

Dembski points out in his reply that his CSI argument

has since been reconceptualized and significantly expanded in its scope and power through my subsequent joint work with Baylor engineer Robert Marks.

He states that Shallit and I think

that having seen my earlier work on conservation of information, they need only deal with it (meanwhile misrepresenting it) and can ignore anything I subsequently say or write on the topic.

He declares that

Felsenstein betrays a thoroughgoing ignorance of this literature.

Actually, my ignorance is not quite as thoroughgoing as that. I have commented on the Dembski/Marks papers, and done so at Panda’s Thumb, in two postings in August 2009 (here and here). I commend them to Dembski. Let me make the point again that I made there.

I am skeptical of the scientific usefulness of the measures that Dembski and Marks introduce in these SFS papers, but for the CSI/Design argument that question is mostly irrelevant – the issue is whether these papers provide us with a method for detection of Design, ruling out that the adaptations could be produced by natural selection. Very explicitly, they do not. For the whole point of these papers is to measure whether the fitness surface (the association of fitnesses with genotypes) is sufficiently smooth that natural selection is able to move uphill and effectively produce the adaptation.

If Dembski and Marks see such a fitness surface, they argue that it would be extremely unlikely in a universe where fitnesses are randomly associated with genotypes. Therefore, they argue, a Designer must have chosen that fitness surface. Chosen it to be one in which natural selection works.

I disagree. I think that ordinary physics, with its weakness of long-range interactions, predicts smoother-than-random fitness surfaces. But whether I am right about that or not, Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve. In their scheme, ordinary mutation and natural selection can bring about the adaptation. Far from reformulating the Design Inference, they have pushed it back to the formation of the universe.

Conclusion

  • Dembski has not dealt, anywhere, with Shallit and Elsberry’s criticism of the original LCCSI argument, nor with my criticism of it. Nor has he admitted that these criticisms were valid. They were valid and still are.
  • Dembski’s reformulated (2005) measure of Specified Complexity requires us to have already settled the question of whether natural evolutionary processes could produce the adaptation before the measure can be computed.
  • Dembski and Marks’s later papers do not contain any argument that rules out natural selection and mutation as the agents producing adaptation, nor any argument that requires a Designer to have intervened in the evolutionary process.

Attempting to disparage?

Was the post about Meyer’s forthcoming book unfair criticism of it, without any of us having read it? Neither I nor any of the commenters claim to have read the book. Perhaps Meyer will prove to have dealt with all the points we raised. Perhaps he will have brilliantly made, or brilliantly refuted, the arguments we made. But if he ignores important points we raised, then he cannot argue that no one pointed them out to him.

And if

we can expect Meyer’s 2013 book Darwin’s Doubt to show full cognizance of the conservation of information as it exists currently

(as defined by Dembski) then Meyer’s book will not contain any valid argument that Complex Specified Information can be used to detect the intervention of a Designer in the evolutionary process.

_______________________________________________________________

Correction (10 April 2013): commenter diogeneslamp0 asked where in NFL Dembski said that the evolutionary change is modeled by a 1-1 transform. On closer examination: nowhere. I was wrong about that. He allows many-to-one transforms as well. However, it is still true that his argument is, as I said in the 2007 paper and above, that the Before and After states must satisfy equivalently strong specifications, so that both have CSI or both don’t have CSI. And it is still true that these specifications are not required by him to be the same. The one before is still constructed from the one afterwards using the transform. And it is still true that if you require the specifications evaluated before and after to be the same, then there are lots of examples where a conservation law would not work.

359 Comments

Folks, I am going to patrol (or “patroll”) this thread aggressively. Our usual trolls and our usual troll-chasing will not be welcome and all that will be sent to the Wall. I hope that we can discuss the science and not spend time on denunciations of the motivation of our opponents.

Bravo, Joe!

Very succinct, very clear. You may be right that in his Specification paper he intended people to compute a probability for evolutionary processes. It didn’t occur to me that he meant that, as it would have undermined his entire argument. I still think that at that stage he was under the illusion that the NFL theorems meant that he didn’t need to worry about evolutionary processes - they wouldn’t increase the probability above blind search.

Either way, it makes no sense.

And, in any case, as you say, he has now moved design back to the origin of the universe. For which he doesn’t have a pdf, so he can’t say whether the universe we observe was inevitable or infinitesimally probable. And as he equates Information with probability under the null of random search (with a fancy -log transform) then he has no way of computing how much Information a Designer would have had to put there.

The real motive behind Dembski’s CSI criterion is and was to confuse biology with technology, thereby to claim design for the former without even addressing all of the decidedly undesign-like structure and function of life.

Why would we even have biology as a science if Dembski’s assumptions and presumptions were correct? Life could just be studied as engineering and styling, while in fact IDists almost don’t bother to study life at all. Dembski’s “conclusions” are only denial of thoroughgoing aspects of life, like its slavish derivation from ancestors, at least in most plants and animals.

Glen Davidson

It is obvious that Dembski is just trying desperately to come up with something that he thinks evolution cannot accomplish. But he has no idea how mutation and natural selection work; he has no idea what they are capable of. All he can do is misrepresent the science and beg the question until he has everyone confused enough to believe he might be right. It takes a special kind of dedication to pay enough attention to charlatans and posers to be able to call them on their shenanigans. Thanks for your diligence Joe.

And of course, even if someone could somehow prove that there was something that our current conception of mutation and natural selection could not accomplish that is actually observed, it would not in the slightest provide any kind of evidence for any kind of god. That would just be wishful thinking, or in this case, non thinking. It’s a solution in search of a problem.

Joe Felsenstein is correct about long range forces contributing to the smoothing of a fitness landscape.

There are also other reasons for the smoothing that have to do with the exploration process being seen as a “sampling” of that terrain. I mentioned this in a thread by Elizabeth Liddle over at The Skeptical Zone.

The field of signal and image processing has a nice mathematical description that explains the process of smoothing. It comes under the heading of “dithering” or, equivalently, under the concept of “convolution.” Look at convolution first.

If we are sampling a signal feature in either time or space, the width of the sampling window will place a limit on how much detail we will see in the feature we are sampling. If the window is narrow – either in time or in spatial extent – we will sample sharp features with sharp, distinct edges. If the window is wide, the features will have rounded and smoothed edges. Everything inside a sampling window is averaged; and then the window is moved and another sample takes place, and so on.

Here is a simple demonstration.

Take a piece of paper and make several sampling slots 1, 2, 3, 4, and 5 digits wide. Place a slot over the following set of numbers, average the numbers that appear within the slot window, and plot the results below the set of numbers.

The set is: {0,0,0,0,0,0,10,10,0,0,0,0,0,0}. With each slot width, you are plotting a running average that smoothes the “spike” represented by 10’s in the set. The wider the slot, the smoother the result.
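
For anyone who wants to try the slot exercise without paper, here is a quick version of the same running average (just a sketch; the slot is modeled as a uniform convolution window):

    import numpy as np

    signal = np.array([0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0], dtype=float)

    for width in (1, 2, 3, 4, 5):
        window = np.ones(width) / width                  # the "slot": a uniform averaging window
        smoothed = np.convolve(signal, window, mode="same")
        print(f"slot width {width}: {np.round(smoothed, 2)}")
    # The wider the slot, the lower and broader the smoothed "spike" becomes.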

From the “dithering” perspective, we are looking at the features of the signal in the Fourier transform domain; and the idea that is involved here is called the “shift theorem.” Let’s use a spatially distributed signal.

If the signal S(x) has a Fourier transform F{S(x)}, then the spatially shifted signal S(x - a) has a Fourier transform e^{ika} F{S(x)}, where k is a spatial frequency (e.g., in lines per mm).

In other words, the Fourier transform of the spatially shifted signal is the Fourier transform of the unshifted signal multiplied by a phase factor; and here is the trick: jiggling the sampling point back and forth randomly over the signal will produce larger phase shifts for the higher spatial frequencies for a given shift a. This “washes out” (they tend to phase cancel) the higher spatial frequencies, leaving only the lower frequencies. When we do the inverse Fourier transform of the dithered result, we get an image with smoothed edges and all the fine details (higher spatial frequencies) are gone.

So both perspectives give the same result.
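
Here is the dithering side of that, sketched numerically (my own toy example; the top-hat signal and the shift range are arbitrary choices): average many randomly shifted copies of a sharp-edged signal and compare the high-frequency content before and after.

    import numpy as np

    n = 256
    x = np.arange(n)
    signal = (np.abs(x - n // 2) < 20).astype(float)     # a sharp-edged "top hat"

    rng = np.random.default_rng(0)
    shifts = rng.integers(-8, 9, size=2000)              # random jitter of the sampling point
    dithered = np.mean([np.roll(signal, s) for s in shifts], axis=0)

    spec_before = np.abs(np.fft.rfft(signal))
    spec_after = np.abs(np.fft.rfft(dithered))
    hi = slice(n // 8, None)                              # the higher spatial frequencies
    print("high-frequency power, original:", round(float(np.sum(spec_before[hi] ** 2)), 1))
    print("high-frequency power, dithered:", round(float(np.sum(spec_after[hi] ** 2)), 1))
    # The random shifts scramble the phases e^{ika} of the high-k components, which
    # largely cancel in the average; the edges of the top hat come out smoothed.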

The “dithering” that appears in the sampling of the fitness landscape of a phenotype is provided by the random variations going on at the genetic level. If the landscape is also changing, the “dithering” folds in those changes as well.

Atoms and molecules interact. The more complex the system, the more that slight variations in the interactions smooth things out.

Joe Felsenstein: “Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve.”

—The quote pasted above (and others in the OP) assumes and implies Dembski and Marks accept the existence of natural (non-supernatural/Intelligent) causation operating in nature.

—Like Darwin and all original Darwinists, Joe Felsenstein and his colleagues completely reject supernatural or Intelligent causation operating in nature. The preceding fact means Darwinism accepts causation mutual exclusivity.

—Joe Felsenstein’s acceptance of causation mutual exclusivity should allow him to dispense with the claims of Dembski and Marks based solely on their acceptance of the existence of natural causation.

Ray Martinez said:

Joe Felsenstein: “Dembski and Marks have not provided any new argument that shows that a Designer intervenes after the population starts to evolve.”

—The quote pasted above (and others in the OP) assumes and implies Dembski and Marks accept the existence of natural (non-supernatural/Intelligent) causation operating in nature.

—Like Darwin and all original Darwinists, Joe Felsenstein and his colleagues completely reject supernatural or Intelligent causation operating in nature. The preceding fact means Darwinism accepts causation mutual exclusivity.

—Joe Felsenstein’s acceptance of causation mutual exclusivity should allow him to dispense with the claims of Dembski and Marks based solely on their acceptance of the existence of natural causation.

That is absurd. Mutual exclusivity? Ridiculous. Whatever I think of Dembski and Marks’s arguments, or YEC arguments for that matter, I know that whatever supernatural interventions in the real world they accept, they also know that ordinary gravity continues to operate. All those folks, and even Ray himself, accept that there is natural causation.

Let’s get back to Dembski’s actual arguments, not this absurd parody.

A better question to ask Dembski would be “What have you done with your CSI calculations?”

I’m going to be a contrarian on that one. It matters little whether CSI can be done in practice easily enough to be useable. We can define and calculate SI in simple population-genetic models of evolution. If Dembski had a conservation law for CSI for those models, that would be a Big Problem for evolutionary theory. So my question is whether he has that (he doesn’t).

Well, that Dembski never actually had a CSI conservation law to begin with perfectly explains why he’s never ever accomplished anything with his alleged CSI calculations.

Glen Davidson said:

The real motive behind Dembski’s CSI criterion is and was to confuse biology with technology, thereby to claim design for the former without even addressing all of the decidedly undesign-like structure and function of life.

Similar to Michael Behe’s dithering on about “Irreducible Complexity”?

Could we take a break from repetitively agreeing with each other how evil the other side is? I sometimes think the Internet’s purpose is to enable people to “vent” after a hard week at work.

Now about the science …

For example, have I misinterpreted Dembski? Missed an important argument or a major reply he has made? Missed a major critique of him (I did cite a bunch of them in my 2007 article)?

I’ve always thought these declarations of improbability were worthless, because the unspoken assumption is that the whole thing just appeared, fully assembled. That certainly is improbable, but it doesn’t happen like that, so all the philosophy in the world about the wonderment of it all is based on nonsense.

We all know that everything in biology is the result of a very long chain of events, and each tiny step along the way is perfectly natural. If the result survives it reproduces. The next step in the chain doesn’t have to start from the beginning, because it already has the previously accumulated steps. So at any place an observer steps in to marvel at the result, he’s looking at the end of a chain of natural events, with no intervening miracles, so the totality of the entire chain is therefore natural – albeit unpredictable in the beginning.

Joe Felsenstein said:

Could we take a break from repetitively agreeing with each other how evil the other side is?

Sorry, Joe. I was composing my comment and I didn’t see yours.

So, I believe Dembski’s “search for a search” papers all focus on evolutionary algorithms, rather than actual evolution in the field or any of the extensive examples of directed evolution. It’s easier to argue that the “default” fitness landscape is a random one when the landscapes in question are all arbitrary man-made constructs. But has he tried to argue that for a real fitness landscape of a protein, or a genome? Even Axe’s islands of function (or I think he’s also called them gemstones in a desert) view of protein sequence space (which, I doubt it’s necessary to point out, doesn’t exactly rest on strong evidence) is a huge deviation from the random landscape, as there are still pretty strong local correlations between sequence and function.

I’m also curious as to what would count as “active information” addition in a real-world example. Imagine putting a population of bacteria in a simple gradient of antibiotic, from low enough to have no effect to high enough to be 100% lethal, or a similar temperature gradient. Presumably you’re manipulating the fitness landscape pretty extensively by doing that, favoring some genomes, disfavoring others, etc. But the act of making a gradient seems pretty low-information relative to the global effects on the landscape; it doesn’t really match up with something like the Weasel program (Dembski’s favorite), where the experimenter is fine tuning each incremental step toward a pre-determined sequence. Is any evolution that happens in these experimental set-ups attributable to “active information” the scientist injected into the system?

Missed an important argument or a major reply he has made?

I haven’t kept up on Dembski lately, but I once noted that his 500-bit “limit” could easily be circumvented by duplication and mutation – 1 unit that has <500 bits splits into 2 units with >500 bits, and one of them mutates. Now by his standard we have a single unit with >500 bits. Let’s call it Combobulated Complexity. The Law of Conservation of Combobulated Complexity says that there is no Law of Conservation of Combobulated Complexity.
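
For what it is worth, the arithmetic behind that duplication loophole is tiny (a back-of-the-envelope sketch of my own, using the naive 2-bits-per-base counting these thresholds rest on):

    from math import log2

    bases = 240                                # a unit just under the 500-bit "limit"
    bits_per_base = log2(4)                    # naive counting: 2 bits per DNA base

    print("single unit:      ", bases * bits_per_base, "bits")       # 480  (< 500)
    print("after duplication:", 2 * bases * bits_per_base, "bits")   # 960  (> 500)
    # A tandem duplication followed by a point mutation in one copy carries the
    # sequence past the 500-bit threshold with no design step anywhere.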

Matt Young said:

I haven’t kept up on Dembski lately, but I once noted that his 500-bit “limit” could easily be circumvented by duplication and mutation – 1 unit that has <500 bits splits into 2 units with >500 bits, and one of them mutates. Now by his standard we have a single unit with >500 bits. Let’s call it Combobulated Complexity. The Law of Conservation of Combobulated Complexity says that there is no Law of Conservation of Combobulated Complexity.

I posted this somewhere over on The Skeptical Zone - I think; I don’t remember where I posted it. I borrowed the exact form of calculation from UD where it is purported to be exactly the way Dembski does it.

Suppose for example we find a rock weighing approximately 60 grams, that is a mixture of polycrystals of mostly SiO2 and some polycrystals of other compounds as well (about 3 atoms per molecule on average).

This allows an estimate of approximately 10^27 molecules in the rock with approximately 10^18 molecules per crystal on average.

Let N = 10^27, the number of molecules.

Let P = 10^9, the number of crystals.

There are P! permutations of all the crystals in the sample.

Each crystal has an orientation in a 3-dimensional space; so we choose three perpendicular axes about which rotations can be made. There are 360 degrees, 60 minutes, 60 seconds per complete rotation about each axis, therefore each crystal can have 1296000^3 orientations in 3-dimensional space.

Since there are P crystals, there are (1296000^3)^P ways to orient all the crystals.

The number of permutations of the individual atoms is conservatively (3N)!.

There is also the number of possible orientations of the original rock when it was first noted on the heath; and this is again 1296000^3.

Therefore, the number of possible arrangements and orientations of crystals and atoms and rock is

Ω = (3N)! × P! × (1296000^3)^(P + 1).

The amount of information in this particular rock is thus log_2 Ω, a number that far, far exceeds 500.
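
Just to check the size of that number, here is the arithmetic done with logarithms of factorials (a quick sketch, using the figures set up above):

    from math import lgamma, log, log2

    N = 10**27                 # molecules in the rock
    P = 10**9                  # crystals
    R = 1_296_000              # arc-seconds per full rotation about one axis (360*60*60)

    def log2_factorial(n):
        return lgamma(n + 1) / log(2)          # log2(n!) via the gamma function

    log2_omega = (log2_factorial(3 * N)        # permutations of the atoms
                  + log2_factorial(P)          # permutations of the crystals
                  + (P + 1) * 3 * log2(R))     # orientations of the crystals plus the rock

    print(f"log2(Omega) is roughly {log2_omega:.2g} bits")   # about 2.7e29 bits, versus a 500-bit cutoff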

Therefore we can conclude without hesitation that this particular rock was designed; and since this is an arbitrary rock picked up in an arbitrary location, we can say that any rock is designed after we examine it and carefully specify its structure.

But we haven’t even dealt with function yet. Suppose this rock was found with bird droppings on it. The rock therefore had the function of preventing the droppings from directly hitting the ground. It could also serve as shelter for insects and worms. It also can divert water; divert the path of a growing plant root. In fact there is literally no limit to the functions that a rock can perform.

So we can take that extremely large number of functions and raise it to a power equal to the number of rocks in the universe and conclude that there is specified functional complexity in rocks as well as specified complexity in each and every rock.

But rocks can also be organized into unified larger rocks and planets and moons; so there is specified organizational complexity in rocks as well. They don’t even have to be melded together, they can be disjoint clusters of rocks that prevent erosion or divert a river, or provide shelter. The organizational complexity of rocks is enormous.

Therefore ALL rocks are intelligently designed.

Joe,

I appreciate your desire to make this about the science, but the history of the ID movement shows that there is little science behind their work (and never has been), and that the key ID people are perpetually making things up, misapplying theorems they don’t understand (“written in jello” comes to mind), answering criticisms of their work with distractions and subject-changings that are really just more tedious versions of the Gish Gallop, and always promising to answer their critics with their next book (while refusing to address any of the errors in their published books).

I’d love to share a scientific discussion on this topic, but I just don’t think it’s possible beyond pointing out fallacies on the DI side.

Mike:

Love the rock calculation.

Chris Lawson said:

Joe,

I appreciate your desire to make this about the science, but the history of the ID movement shows that there is little science behind their work (and never has been), and that the key ID people are perpetually making things up, misapplying theorems they don’t understand (“written in jello” comes to mind), answering criticisms of their work with distractions and subject-changings that are really just more tedious versions of the Gish Gallop, and always promising to answer their critics with their next book (while refusing to address any of the errors in their published books).

I’d love to share a scientific discussion on this topic, but I just don’t think it’s possible beyond pointing out fallacies on the DI side.

Well, the object of this discussion is to consider what answers (if any) there are to their scientific arguments. There will be people confused by those, thinking that maybe they have some decisive refutation of natural selection. There will also be people who, while supporting evolutionary biology, are not sure how to explain the scientific points to others.

Then again, maybe they have some dramatic refutation of the last 150 years of evolutionary biology. I haven’t seen it yet, if they do.

Consoling ourselves about how iniquitous their motives are is less useful for the present discussion.

As far as I can see, what Dembski has done in his recent ENV postings is to finally draw lay attention to what was essentially a huge concession in his papers with Marks (and even, conceivably, in Specification, although I didn’t read it that way at the time): that his NFL argument doesn’t work as an argument against evolution.

So his blustering about how naughty Joe was not to have read his later work boils down to a clear concession that “Joe might have been right about my earlier argument, but now he needs to deal with my new one.” Which of course Joe has now done.

And which, as far as I can see, boils down to the ontological argument for God. You certainly can’t base a probability argument for God on a probability distribution you don’t actually have, limited as we are to one exemplar of universes.

Whenever one of these probability arguments arises, I wonder whether anyone has done a similar estimate for their alternative.

What is the probability that designer(s) which are capable of doing more than natural causes, and have no known limitations on what they might do, would do such-and-such?

My estimate is that, because the number of possible design outcomes is greater than the number of natural outcomes, the probability of design is less than the probability of natural causes.

Isn’t Dembski a little late on the stage with his brainchild, his redefinition of CSI? It may have had some appeal for creationists in 1998 when he published The Design Inference, but science has come a long way since then, has it not? It seems to me that there is much more to evolution than RM and NS to be taken into account.

But that is another topic.

liddle.elizabeth said:

As far as I can see, what Dembski has done in his recent ENV postings is to finally draw lay attention to what was essentially a huge concession in his papers with Marks (and even, conceivably, in Specification, although I didn’t read it that way at the time): that his NFL argument doesn’t work as an argument against evolution.

So his blustering about how naughty Joe was not to have read his later work boils down to a clear concession that “Joe might have been right about my earlier argument, but now he needs to deal with my new one.” Which of course Joe has now done.

And which, as far as I can see, boils down to the ontological argument for God. You certainly can’t base a probability argument for God on a probability distribution you don’t actually have, limited as we are to one exemplar of universes.

I didn’t just now finally deal with the Search For a Search argument; I dealt with it some in my 2007 article, where I ended up saying:

We live in a universe whose physics might be special, or might be designed — I wouldn’t know about that. But Dembski’s argument is not about other possible universes — it is about whether natural selection can work to create the adaptations that we see in the forms of life we observe here, in our own universe, on our own planet. And if our universe seems predisposed to smooth fitness functions, that is a big problem for Dembski’s argument.

(which, I should clarify, meant for Dembski’s argument about the ineffectiveness of natural selection.)

I also wrote about it in two 2009 PT postings (here and here) that I linked to in my post above. By the way, Mark Perakh also had an article attacking the SFS here in 2007.

Anyway, the issue of ontological arguments for God seems to be beside the point. Dembski was asserting that natural selection doesn’t work. Now Dembski and Marks say, well, even if it does work there has to be a God in the picture at the beginning, though not necessarily later. So what has happened to Dembski’s argument for the ineffectiveness of natural selection? It is not necessary to get involved in the ontology-and-God debate to see that Dembski’s LCCSI argument isn’t around anymore, at least not if the SFS has replaced his earlier argument. And if the LCCSI argument is still around, then it needs some defending.

TomS said:

Whenever one of these probability arguments arises, I wonder whether anyone has done a similar estimate for their alternative.

What is the probability that designer(s) which are capable of doing more than natural causes, and have no known limitations on what they might do, would do such-and-such?

My estimate is that, because the number of possible design outcomes is greater than the number of natural outcomes, the probability of design is less than the probability of natural causes.

I think they would answer, in effect, that we can’t predict what the Designer intended, that the Designer’s powers are infinite and his [it is a “he”, you know] knowledge is infinite, and therefore, whatever happened, that was what should have been predicted. Alas, the prediction is after the fact – the target is drawn on the side of the barn after the arrow hits it, and it is drawn right around the arrow.

Rolf said:

Isn’t Dembski a little late on the stage with his brainchild, his redefinition of CSI? It may have had some appeal for creationists in 1998 when he published The Design Inference, but science has come a long way since then, has it not? It seems to me that there is much more to evolution than RM and NS to be taken into account.

But that is another topic.

I’d acquit Dembski on this charge. If you want to make the argument that the other newer phenomena that have been discovered in genetics and genomics since 1998 rescue the assertion that natural selection and random mutation bring about the adaptations that we see, I think you have conceded a huge point. Are you really agreeing that the processes we knew about before 1998 can’t do the job, that Dembski’s CSI critique worked and refuted the effectiveness of natural selection and random mutation? Are you conceding that invoking newer exotica is necessary to save the Modern Synthesis from Dembski’s critique?

Sure, a full explanation of everything must invoke all phenomena that we know about (and some discovered after 2013, probably). But the fact that RM+NS leads to improved adaptation, in ordinary models of evolution, is not refuted by Dembski’s conservation law argument, nor by his No Free Lunch argument, nor by his and Marks’s Search For a Search argument. And his assertions were that these arguments showed that RM+NS would not lead to adaptation. They don’t. I don’t think we have to “give away the farm”.

We have to deal with the issue of whether the LCCSI refutes the effectiveness of RM+NS, and we don’t need to fall back on post-1998 phenomena to do that.

DS said: …even if someone could somehow prove that there was something that our current conception of mutation and natural selection could not accomplish that is actually observed, it would not in the slightest provide any kind of evidence for any kind of god.

…and even if it did, it would not in the slightest provide any kind of evidence for that god being the creator god of Genesis. Meyer’s previous book tried to prove that the “signature in the cell” was the “signature” of the creator god of Genesis - and failed miserably.

Mike Elzinga said:

Joe Felsenstein is correct about long range forces contributing to the smoothing of a fitness landscape.

[most of Mike’s explanation snipped]

The “dithering” that appears in the sampling of the fitness landscape of a phenotype is provided by the random variations going on at the genetic level. If the landscape is also changing, the “dithering” folds in those changes as well.

Atoms and molecules interact. The more complex the system, the more that slight variations in the interactions smooth things out.

Well, yes and no. All this will happen, but I think there is a bigger effect from even simpler physics. In a fitness surface that has random associations of possible fitnesses with genotypes, the surface is infinitely rough (a “white noise” fitness surface). That means that any nucleotide substitution moves you to a nearby DNA sequence that has a fitness drawn, in effect, randomly from all possibilities. In other words, one mutation has the same effect as mutating every site in the genome simultaneously. Now, mutation doesn’t work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional.
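
A toy way to see the contrast (just an illustration, with a hash standing in for a “random” fitness assignment): on a white-noise surface the fitness of a genotype is effectively an independent random draw, so one substitution lands you on an unrelated value; on a smoother (here, additive) surface one substitution barely moves you.

    import hashlib
    import random

    L = 1000
    genome = [random.randint(0, 1) for _ in range(L)]

    def whitenoise_fitness(g):
        # an independent pseudo-random fitness for every distinct genome
        return int(hashlib.sha256(bytes(g)).hexdigest(), 16) % 10**6

    def additive_fitness(g):
        return sum(g)                          # each site contributes independently

    mutant = genome[:]
    mutant[random.randrange(L)] ^= 1           # a single substitution

    print("white noise:", whitenoise_fitness(genome), "->", whitenoise_fitness(mutant))
    print("additive:   ", additive_fitness(genome), "->", additive_fitness(mutant))
    # On the white-noise surface one substitution behaves like rerolling the whole
    # genome; on the additive surface it changes fitness by at most 1.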

So the fitness surface is much smoother than the “white noise” surface. Why? I would point to the lack of totally tight interaction between, say, my enzymes that produce earwax and the photosensitive pigments in my eye. Why don’t they interact strongly?

For two reasons:

  1. Physics – they are not functioning at the same place and time, in the same cells. Weakness of long-range interactions.
  2. Evolution – if they start to get more interactive, evolution tends to move away from that. We stay in regions of the genome space that do not result in totally locked-in tightly-interacting organisms. This sounds teleological, but computer simulations and theory done by people such as Lee Altenberg have verified that individual selection can in fact do this.

Joe Felsenstein said: Now, mutation doesn’t work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional.

Non-functional, or differently functional - or do you mean neutral? (I would have said “more or less functional, but mostly neutral.”) You haven’t bought into the creationists’ “All mutations are harmful,” have you? :)

Paul Burnett said:

Joe Felsenstein said: Now, mutation doesn’t work that way. We all carry some mutants, but we are not totally destroyed, just a little nonfunctional.

Non-functional, or differently functional - or do you mean neutral? (I would have said “more or less functional, but mostly neutral.”) You haven’t bought into the creationists’ “All mutations are harmful,” have you? :)

I stand corrected, mostly. However, even if you just consider coding sequences, we all carry some mutant alleles (most of which are not brand-new mutations). Most of those have some tiny effect, so “a little nonfunctional” might be right for them.

Prem Isaac said:

Well, I haven’t mentioned God yet. I just said Designer, since ID proponents do NOT claim that the methods they employ will ever reveal the identity of the Designer.

Too bad, because you won’t be able to credit anything to a being without knowing what sort of cause it actually is, which means that you have to know a lot about it, and from that one could likely come up with a kind of ID, even if just a fossil or some such thing.

ID proponents are saying, that there are only 3 options: Randomness(Chance), Necessity(something deterministic about the laws of physics/chemistry) and Design(intentional product of a mind).

Fake categories, derived from scholasticism (the first two, anyhow), not from epistemology or evidence.

Especially bad is the claim that “Design” is something other than Chance or Necessity (if we choose to use such poor terminology), when all of the evidence is that it is more or less “necessity” in your poor terms, something that is essentially determined by genetics and development. At the least it would have to be a mix of necessity and chance in such terms. Intelligence evolved and is limited by a number of factors.

So if Randomness and Necessity are not able to explain features of the world, the only other option is Design. This is not God-of-the-Gaps.

No, it’s choosing the premises in order to come to the desired conclusion.

Glen Davidson

A discussion started by Elizabeth Liddle over on The Skeptical Zone prompted me to look more closely at that factor of 10^120 in Dembski’s CSI Specification paper,

Χ = -log_2[10^120 φ_S(T) P(T|H)].

That factor comes from a reference given by Dembski on page 23 to a paper by Seth Lloyd in Physical Review Letters.

Lloyd calculates that the universe can have performed no more than 10^120 elementary logical operations on 10^90 bits during the course of its evolution. On page 7, Lloyd says the following:

What is the universe computing? In the current matter-dominated universe most of the known energy is locked up in the mass of baryons. If one chooses to regard the universe as performing a computation, most of the elementary operations in that computation consists of protons, neutrons (and their constituent quarks and gluons), electrons and photons moving from place to place and interacting with each other according to the basic laws of physics. In other words, to the extent that most of the universe is performing a computation, it is ‘computing’ its own dynamical evolution. Only a small fraction of the universe is performing conventional digital computations.

It is important to note that during this process, matter has condensed into galaxies, stars, planets, and life on at least one planet. Lloyd is using the standard big bang model and some adjustments that take into consideration the uncertainties in that model. He doesn’t include dark matter or dark energy. It’s a pretty straight-forward estimate.

The result of Dembski’s use of this, in setting the threshold for his CSI, is that for a specified subset of that universe – e.g., a protein molecule – to count as designed, that subset must contain at least as much information and require at least as many elementary logical operations as the entire universe.

Let me repeat that. A small subset, already included in Lloyd’s calculation, is required to have at least as much information and require at least as many logical operations as the entire set itself.

Therefore the universe is required to have more logical operations than the universe requires.

Aside from the fact that Dembski doesn’t provide any justification for declaring something is designed, other than just hand waving, he and his followers are saying a finite subset must be as complex as the entire finite set that contains it. What kind of logic is that?

And Dembski rambles on for 41 pages while Lloyd gets right to the point in 17.

But if he didn’t include the dark matter and dark energy, then 10^120 is not the number of operations of the entire universe…

Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darw[…]iable-clock/

Bhakti Niskama Shanta said:

Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darw[…]iable-clock/

Current biology at least does not have to deal with deriving its theory from vedantic theology. This self-citation is to a “paper” by Bhakti Niskama Shanta on why complications in mitochondrial inheritance vindicate vedantic texts. The “Conclusion” section is an absolute classic.

I also like the part about how

evidence is forcing many biologists to conclude that, if Darwin had known some of what has been discovered since the publishing of his theory, he probably wouldn’t have believed in his own theory of evolution.

Also, the paper has nothing to do with Complex Specified Information or Dembski’s argument.

Bhakti Niskama Shanta said:

Does Current Biology have the Misfortune of Owning an Unreliable Clock? http://scienceandscientist.org/Darw[…]iable-clock/

Typical babble gook. It’s just the old “if you can’t explain everything to my satisfaction then I don’t have to believe anything you say, even though I have no evidence of my own” routine. Molecular clocks are extremely useful but not simple to calibrate. That doesn’t mean they are worthless.

Time for a dump to the bathroom wall.

I see that Bhakti Niskama Shanta posted a series of identical comments all at the same time, one to basically every ongoing thread.

That is not responsible behavior (and not good advertising for his views). If he shows up on any of my threads again he goes straight to the Bathroom Wall.

https://www.google.com/accounts/o8/[…]dsyRGHgoWvW8 said:

But if he didn’t include the dark matter and dark energy, then 10^120 is not the number of operations of the entire universe…

ID advocates choose different “upper probability bounds” depending on how different advocates calculate the number of operations required to make a universe.

The discussion is still going on over at TSZ also, where I added the following comment.

The CSI calculation of Dembski boils down to a complete obfuscation of a very simple notion from statistics. That kairosfocus character over at UD simply obfuscates even further.

As anyone can learn from probability and statistics, if an event has a probability p, and the number of trials attempting to get that event is N, then the mean number of successes in achieving that event is Np.

Dembski’s calculation of Χ – after many paragraphs of rationalizations and side tracks – boils down to

Χ = -log_2(Np) = log_2(1/p) – log_2(N).

with N = 10^120 φ_S(T) and p = P(T|H).

The 10^120 was taken from Seth Lloyd’s paper in Physical Review Letters, and is Lloyd’s estimate of the number of logical operations it took to make our universe. Including the φ_S allows for the possibility of multiple universes involved in the number of trials, with 10^120 logical operations per universe.

Assuming only one universe, the calculation comes down to

Χ = log_2(1/p) – log_2(10^120) ≈ log_2(1/p) – 400

So 1/p is the number of trials required to get one instance of the specified event, and taking the logarithm gives the amount of “information” supposedly contained in that sample space, and log base 2 gives that “information” in bits. Note that it assumes uniform, random sampling. I am guessing that this would be what Dembski and Marks call “endogenous information.” The 400 (500 in some of Dembski’s calculations) would be the “exogenous information,” and their difference would be the “active information.”
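
To see how completely the verdict hinges on what gets plugged in for p = P(T|H), here is the simplified form above as a two-line function (my own sketch, with φ_S set to 1; the two example probabilities are made up purely for illustration):

    from math import log2

    def chi(p, phi_s=1.0, n_ops=10**120):
        # the simplified form above:
        # chi = -log2(n_ops * phi_s * p) = log2(1/p) - log2(n_ops * phi_s)
        return -log2(n_ops * phi_s * p)

    print(chi(4.0 ** -250))   # "tornado" probability for a 250-base target: about +101, counted as design
    print(chi(1e-6))          # a probability selection might plausibly deliver: about -379, no CSI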

As Elizabeth and Joe Felsenstein point out over on The Skeptical Zone, the problem is coming up with a distribution for P(T|H).

In addition, Seth Lloyd’s calculation is based on the fact that the events being questioned by ID advocates – such as the origins of life and evolution – are already included in the 400 (or 500 in some of Dembski’s calculations). If Dembski wants to use Lloyd’s number in his CSI and apply it to events in the universe, it therefore makes no sense to assert that the “endogenous information” they contain is greater than the “endogenous information” in the universe.

Said more directly, the N trials to make the universe already produced the event in question; therefore the number of trials required to produce that particular event has to be less than the number of trials to produce all the events in the universe.

So here again we see the circularity contained in the assumption that such events do have more such information.

One can very easily enumerate permutations and combinations of things and get numbers far larger than all the operations in the universe; just as I did with my calculation of the “CSI” of a rock. It all depends on how one chooses to describe it. The ID descriptions are generally chosen to make events such as the origins of life look impossible.

Thanks, I get it now.

And, by the way: “Note that it assumes uniform, random sampling” - this sentence says everything. It’s the tornado probability again…

Mike Elzinga said:

[most of Mike’s comment snipped]

Χ = log_2(1/p) – log_2(10^120) ≈ log_2(1/p) – 400

Mike, didn’t you lose the Phi_S in your last equation for X?

I also think it’s important to point out that, while Dembski at the beginning of his 2005 paper says that P(T|H) is the actual probability given Darwinian evolution, by the end of the paper he just throws that out and sets P(T|H) equal to what I call the “tornado probability.”

This is a point that Joe F has also missed: Dembski swaps out the real P, that he can’t compute, for a fake P, the tornado probability.

That is, if the genetic sequence has letters of four types and length L then he sets p = (1/4)^L. This of course is far, far, far, astronomically far away from the probability of Darwinian evolution.

Near the end of the paper, repeating his shit calculation for the bacterial flagellum, he just counts up L = the # of amino acids in the whole flagellum, and since there are 20 kinds of amino acid, he sets p = (1/20)^L.

It’s sleight of hand, and it has nothing to do with evolution.

It’s essential to emphasize (as Joe F has not) that Dembski officially defines CSI as based on THE TORNADO PROBABILITY, which he can compute, in place of the probability of Darwinian evolution, which he cannot. That’s officially part of Dembski’s definition of CSI as of 2005.
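
To put numbers on that substitution (my own arithmetic; the 30,000-residue figure is just an illustrative order of magnitude for a multi-protein assembly, not Dembski’s count):

    from math import log10

    def tornado_log10_p(length, alphabet_size):
        # the "tornado" probability: every position drawn independently and
        # uniformly, which is not how mutation plus selection samples sequences
        return -length * log10(alphabet_size)

    print(tornado_log10_p(100, 20))      # a 100-residue protein: log10(p) = -130, already past 10^-120
    print(tornado_log10_p(30_000, 20))   # a 30,000-residue assembly: log10(p) is about -39000
    # Numbers like these only restate the size of sequence space; they say nothing
    # about P(T|H) under Darwinian evolution.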

diogeneslamp0 said:

Mike, didn’t you lose the Phi_S in your last equation for X?

[rest of the comment snipped]

I put φ_S equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.

That is also what those characters over at UD do. They like strings of characters because all they have to do is raise the number of ASCII characters to a power equal to the length of the string to get a probability as low as they want. That isn’t how molecules of any kind behave in the real world.

That is why I picked the rock example. The “CSI specification” was at least more realistic in specifying the exact configuration of the rock and all its crystals. Of course, I cheated just as they cheat over at UD. I took all permutations of identical atoms as different; which gives a nice over-count. But even if I didn’t do that, I still had enough in the permutations and orientations of the crystals.

But that is the key, isn’t it; declare the “impossible” winner and “prove” it by calculating all possible rearrangements of a string of characters that gives a very large number.

This also goes to another question I keep asking the ID/creationists but never get a response to; namely, “Where along the chain of complexity in atom/molecular assemblies do the laws of physics and chemistry stop working and ‘intelligence’ has to step in and do the job that physics and chemistry can no longer do?” I did it with rocks. Why can’t I declare rocks the winners in the evolutionary lottery?

Why do protein molecules count while rocks don’t? If I lower the temperature of a protein molecule until it becomes as rigid as a rock, can we then rule out intelligent design in protein molecules? If I lower the temperature of lead below 7.2 K where it becomes a superconductor, can we now declare lead to be intelligently designed?

Trying to make up other “probability killers” like “functionality” isn’t going to work either. There are lots of things going on in all sorts of condensed matter systems that can be specified with much more “information” than a string of letters enumerating the positions of molecules in a protein.

It is appalling just how doggedly dishonest ID has become in recent years. People without even a high school level of understanding of science declare they can “mathematically prove” that protein molecules and the stuff of life are “intelligently designed.” I don’t think these people realize just how ridiculous their attempts have become.

First, would you guys please learn to edit out less relevant parts of quotes? These comments are getting unnecessarily long.

Diogenes, which part of Dembski (2005) is where he reverts to the Tornado probability? The flagellum calculation?

Dembski’s collaborator Winston Ewert has attacked my arguments in a post at Evolution News and Views, arguing that Dembski never was using the Tornado probability, that even in his 2002 book he incorporated the P(T|H) term. As I had based my 2007 article on the interpretation that CSI was calculated from the Tornado probability and by using Dembski’s Conservation Law argument, this would seriously invalidate those parts of my argument. Right now I am reading through the 2002 book trying to determine whether Ewert is correct about this.

In 2005 he definitely has the P(T|H) term. The question is, did Dembski always have it? Determining whether it really made a reappearance in the 2005 paper is then of some importance.

If P(T|H) is bigger than, say, 10^(-6), then that means that whether the UPB is 10^(-150) or 10^(-120) or 10^(-300) really doesn’t matter, as natural selection can do the job.

Here are two key quotes from Ewert’s reply to Joe Felsenstein.

In all discussion in the book regarding the Law of Conservation of Information, Dembski is using Shannon information, which is defined as the negative logarithm of the probability. In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection.

Biological life clearly needs to be explained. Conservation of information does not exclude the possibility of a Darwinian process as the explanation. However, it does pose a challenge to Darwinian evolution as being incomplete. Darwinian evolution does not satisfactorily explain the information in the genome. It depends on an explanation that does not yet exist. Until such an explanation exists and is tested, Darwinian evolution does not explain biological life.

I_Shannon = - ∑ p_i log2 p_i,

where the index i goes from 1 to Ω, the number of states.

When all states are equally probable, with all p_i = 1/Ω, this becomes I_Shannon = log2 Ω.

If done properly, this kind of “information” is proportional to entropy because flipping bits takes energy. Given that fact, Shannon “information” is not conserved because entropy is not conserved.

It is not possible to compute CSI and make all the claims that ID/creationists make if they totally disregard basic physics and chemistry.

As long as they cannot do even simple high school level calculations in physics and chemistry, everything they “calculate” is bogus; endless wrangling over words doesn’t save them.

Mike Elzinga said:

I_Shannon = - ∑ p_i log2 p_i,

where the index i goes from 1 to Ω, the number of states.

Just a reminder for those encountering this for the first time: whenever you take a set of numbers, multiply each by the probability of its occurrence, and then sum, you get the average.

So the formula for Shannon “information” (uncertainty, entropy; it gets called a lot of things by different users) is just the negative of the average of the logarithms of the probabilities.

Note also that the probabilities all have to add up to 1.
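
For readers meeting this for the first time, the following is a minimal sketch in Python of the formula quoted above: Shannon “information” is the probability-weighted average of -log2(p_i), and it reaches log2(Ω) only when all Ω states are equally probable. The probability values below are made-up examples.

```python
import math

def shannon_information(probs):
    # Shannon "information" (entropy) in bits: the negative of the
    # probability-weighted average of log2(p_i).
    assert abs(sum(probs) - 1.0) < 1e-9, "the probabilities must add up to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_information([0.25, 0.25, 0.25, 0.25]))  # 2.0 = log2(4), the uniform case
print(shannon_information([0.7, 0.1, 0.1, 0.1]))      # about 1.36, less than log2(4)
```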

Mike Elzinga said:

I don’t think these people realize just how ridiculous their attempts have become.

Trouble is neither do politicians, who will pander to any group they think will give them votes and photo ops.

« In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection. » But I think Dembski used Michael Behe’s wrong idea that the flagellum couldn’t evolve by natural selection. Am I right?

Mike Elzinga said:

diogeneslamp0 said:

Mike, didn’t you lose the Phi_S in your last equation for X?

I put φS equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.

No, this is incorrect. φS is virtually never 1; in fact it is usually a huge number, and it is never equal to the number of universes.

In my two long previous comments, I gave detailed instructions on how to compute φS, although I didn’t use that notation. Nobody paid any attention then.

In his 2005 paper, Dembski gave two methods for computing φS, which I called “Guaranteed Success Methods #2 and #3”.

My description of Dembski’s 2005 “Semiotic String method”, based on counting the number of words in a verbal description of a system, and which I call “Guaranteed Success Method #3”, is here.

My description of Dembski’s 2005 “Simplicity of Description method”, based on counting the number of bit strings that are as simple or simpler (by Kolmogorov complexity) than the observed string, and which I call “Guaranteed Success Method #2”, is here.

These methods are ridiculous, but you can’t just set φS = 1.
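
For what it is worth, here is a rough sketch in Python of how the “semiotic string” bookkeeping (Guaranteed Success Method #3) appears to work, based on the figures quoted later in this thread from the 2005 paper: a lexicon of about 10^5 basic concepts and a four-word description like “bidirectional rotary motor-driven propeller” give φS on the order of (10^5)^4 = 10^20. This is one reading of the method, not Dembski’s own code.

```python
def phi_s_semiotic(n_basic_concepts, lexicon_size=100_000):
    # Rough estimate of phi_S under the "semiotic string" method: the number
    # of descriptions using at most this many basic concepts is on the order
    # of lexicon_size ** n_basic_concepts.
    return lexicon_size ** n_basic_concepts

# "bidirectional rotary motor-driven propeller" counted as 4 basic concepts:
print(phi_s_semiotic(4))   # 10**20, the figure quoted from the 2005 paper
```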

ERRATA:

In my two comments above, I made an error: I left out the factor 10^120 which must be multiplied with the probabilities inside the square brackets [] of the logarithms. This factor of 10^120 of course represents Dembski’s so-called Universal Probability Bound.

There is a simple way to fix the equations I wrote before. Because

log2[10^120] ≈ 400,

where before I said that the log2[stuff] should be more than 1, I should have said that the log2[stuff] should be more than 400. This corrects for my leaving out the UPB of 10^120.
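
For anyone checking the arithmetic of the correction: log2(10^120) = 120 × log2(10) ≈ 398.6, which is the ≈ 400 used above. A one-line check in Python:

```python
import math
print(120 * math.log2(10))   # about 398.6, i.e. log2(10**120) is roughly 400
```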

In addition, Dembski’s “semiotic string” method assumes the number of words in the English language is 100,000. I used 250,000, which is more accurate, but for consistency I should use Dembski’s value.

I will re-post both of my comments on Dembski’s 2005 methods with fixes to the math.

diogeneslamp0 said:

Mike Elzinga said:

diogeneslamp0 said:

Mike, didn’t you lose the Phi_S in your last equation for X?

I put φS equal to 1, for one universe. Dembski apparently wanted to fold in some other means of “minimal” specification to multiply up those probabilities he only calculates using those “tornado probabilities” that don’t apply to evolutionary processes.

No, this is incorrect. φS is virtually never 1; in fact it is usually a huge number, and it is never equal to the number of universes.

You are apparently talking about Dembski’s MNφS.

I don’t much care how Dembski chooses to obfuscate the number of trials. All he did was to lard up a simple notion from probability: that the mean number of successes in N trials on an event with probability p is just Np.

Dembski, Abel, and other “math whizzes” in the ID movement bamboozle using all sorts of “philosophy” and rationalizations to obscure what ultimately turns out to be simple math done wrong or applied inappropriately.

At bottom, whatever he pretends to use – Shannon “information” or whatever else he wants to call it – to “prove” Conservation of Information, he is wrong when he wants to apply such notions to the real universe where matter interacts with matter.

That kairosfocus character and others over at UD use Dembski’s CSI in exactly the same way I parodied it with a rock. They are counting arbitrary labels and specified characteristics that don’t interact among themselves. ASCII characters don’t interact among themselves. Therefore the probability of selecting from 20 amino acids to make a molecule with 500 positions in a chain is not the same as 20^(-500); that is a completely nonsensical calculation.

Whether the φ refers to some arbitrary description of the event - such as the number of English words needed to specify it - or whether it folds in the number of monkeys involved in making the trials, or any other thing that Dembski can think of to make the number of trials bigger and therefore “swamp the probability,” is completely irrelevant.

All Dembski has done by breaking down N into a bunch of factors is to allow the inclusion of a bunch of excuses to make the number of trials bigger so that no events in the building of the molecules of life can have a probability large enough to overcome it.

Mike Elzinga said:

All Dembski has done by breaking down N into a bunch of factors is to allow the inclusion of a bunch of excuses to make the number of trials bigger so that no events in the building of the molecules of life can have a probability large enough to overcome it.

In fact, the entire CSI obfuscation can be boiled down to the product Np. Take away the logarithms and the labels called “information,” because those completely obscure what is being discussed.

If Dembski, or anyone else in the ID movement, wants to tell us that there is a maximum number of trials possible in the universe to produce a given event, and he grabs a number like 10^120, then the minimum probability required for Np = 1 is p = 10^(-120).

So Dembski has to tell us why p for the occurrence of a given event in the universe is less than that.

Either he has to tell us that the probability has to be jacked up by the intrusion of intelligence in order for the event to occur, or that the number of trials to produce that event is not as large as he claims. He is not about to admit the latter.

What he has done instead is lard up and obfuscate this simple calculation with hundreds of pages of “philosophy.”
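
The Np bookkeeping Mike describes can be sketched in a few lines of Python, working in log10 so the numbers do not underflow; the 10^120 trial budget is Dembski’s claimed bound, and the other probabilities are illustrative.

```python
def expected_successes(log10_trials, log10_p):
    # The mean number of successes in N trials, each with probability p, is N*p.
    return 10 ** (log10_trials + log10_p)

print(expected_successes(120, -120))   # 1.0: the break-even probability
print(expected_successes(120, -150))   # 1e-30: "never happens" under this bookkeeping
print(expected_successes(120, -100))   # 1e+20: happens easily
```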

So what the Dembski (2005) formulation amounts to, after one has exponentiated the formula and multiplied through, is this:

1. Figure out a probability so small it can occur less than once, anywhere in the universe, in the whole history of the universe.

2. Ask the evolutionary biologist to compute the probability of an adaptation that good or better arising.

3. Compare it to the small probability.

4. If it is less, declare Design to have been detected.

And that’s it. No calculation of any kind of “information” necessary. Also notice who gets to do all the heavy lifting – and it’s not the “Design theorist”.
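
Here is a minimal sketch in Python of the decision rule those four steps boil down to once the logarithms are stripped away. The default threshold of 10^(-150) is the bound from the original formulation (the 2005 paper uses a 10^120 factor instead), and the evolutionary probability is the quantity the biologist, not the “Design theorist”, is asked to supply.

```python
def design_detected(log10_p_evolution, log10_threshold=-150):
    # Step 1: a threshold so small the event should occur less than once in
    # the history of the universe.  Step 2: the evolutionary biologist supplies
    # log10 of the probability of an adaptation that good or better arising.
    # Steps 3-4: compare, and declare "Design" if it is smaller.
    return log10_p_evolution < log10_threshold

print(design_detected(-6))     # False: natural selection can plausibly do the job
print(design_detected(-200))   # True under this rule; no "information" computed
```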

Joe Felsenstein said:

Dembski’s collaborator Winston Ewert has attacked my arguments in a post at Evolution News and Views, arguing that Dembski never was using the Tornado probability, that even in his 2002 book he incorporated the P(T|H) term. As I had based my 2007 article on the interpretation that CSI was calculated from the Tornado probability and by using Dembski’s Conservation Law argument, this would seriously invalidate those parts of my argument…

In 2005 he definitely has the P(T|H) term. The question is, did Dembski always have it? Determining whether it really made a reappearance in the 2005 paper is then of some importance.

In Winston Ewert’s post at Evolution News and Views he is of course lying about Dembski’s probability calculations, and also misrepresenting the point you made.

Ewert’s first argument is a misrepresentation of your post: Ewert repeatedly claims that Joe F claimed in his post that Dembski did not do probability calculations prior to his 2005 paper. This is obviously false.

Joe F has stated and consistently maintained that both Dembski’s pre-2005 and post-2005 work included probabilities, but that they were defined differently. Joe F said that pre-2005 Dembski defined probability = tornado probability, which he could compute but is irrelevant, but from 2005 onward he defined the probability as including Darwinian mechanisms, which are relevant but which Dembski can’t compute.

Winston Ewert misrepresents this by claiming that Joe F said there were no probabilities in Dembski’s pre-2005 work e.g. No Free Lunch. Joe F never said it. Ewert thinks he can refute Joe F by simply listing citations from No Free Lunch where Dembski computed his infantile tornado probabilities.

NO. It’s not that easy.

I assert that Joe F is wrong in claiming that Dembski in 2005 and thereafter asserts that the probability of Darwinian mechanisms is the necessary quantity. That’s not exactly right. Rather, post-2005 Dembski wants to have his cake and eat it too. Dembski continues to compute tornado probabilities, but he lies and says they’re probabilities of Darwinian mechanisms.

Ewert’s second argument is a lie: he says Dembski computed the probability of Natural Selection. Bullshit. Many IDers lie about the plain meaning of Dembski’s math, including Dembski himself.

Winston Ewert wrote:

…On page 72 of No Free Lunch, Dembski again presents the Generic Chance Elimination Argument. Step 7 instructs the subject to calculate the probability of the event under all relevant chance hypotheses. …In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection.

…The probability in the form of Shannon information existed in No Free Lunch. The change in the 2005 paper… was not a change in the way that probability was handled.

The design inference in all its forms has always involved calculating the probability of relevant chance hypotheses including natural selection. This is not a new feature of a revamped design inference, but a critical component of the design inference since its inception.

[Information, Past and Present. Winston Ewert. ENV. April 15, 2013.]

What a lying piece of shit! The text in boldface is manifestly false. Dembski only computes tornado probabilities, then later he lies about it and says he computed the probability of natural selection. Which I’m going to demonstrate with some quotes below.

Winston Ewert wrote:

…The argument of specified complexity was never intended to show that natural selection has a low probability of success.

Bullshit. Dembski has repeatedly claimed he computed that natural selection has a low probability of success.

Dembski himself lied about his own math at least two or three times, saying that his infantile tornado probabilities were the probability of evolution by Darwinian mechanisms – first at a forum at the AMNH in April 2002, transcript here.

William Dembski said:

Convinced that the Darwinian mechanism must be capable of doing such evolutionary design work, evolutionary biologists rarely ask whether such a sequence of successful baby steps even exists. Much less do they attempt to quantify the probabilities involved?

I attempt, in chapter 5 of my most recent book, “No Free Lunch”, to do that, to lay out the probabilities, there I lay out techniques for assessing the probabilistic hurdles that the Darwinian mechanism faces in trying to account for complex biological structures like the bacterial flagellum. The probabilities that I calculate, and I try to be conservative, are horrendous, and render the natural selection entirely implausible as a mechanism for generating the flagellum and structures like it.

[Dembski at the ID forum at the AMNH, April 23, 2002, transcript here]

He’s lying. He computed no such thing in No Free Lunch. Dembski did not misspeak; compare the above to his written notes for that forum; it’s deliberate misrepresentation.

But a month later, May 2002, Richard Wein forced Dembski to admit that his computation in No Free Lunch is based on random recombination of parts, which we call “tornado probability.” Dembski admitted this during his knock-down drag-out internet fight, only after Wein managed to squeeze the admission out of him.

William Dembski wrote:

Next, all the biological community has to mitigate the otherwise vast improbabilities for the formation of such systems is co-optation via natural selection gradually enfolding parts as functions co-evolve. Anything other than this is going to involve saltation and therefore calculating a probability of the emergence of a multipart system by random combination. But, as Wein rightly notes, “the probability of appearance by random combination is so minuscule that this is unsatisfying as a scientific explanation.” Wein therefore does not dispute my calculation of appearance by random combination, but the relevance of that calculation to systems like the flagellum. And why does he think it irrelevant? Because co-optation is supposed to be able to do it.

Now, why should we believe it? What I offer in chapter 5 of NFL are reasons not to believe it. I tighten Michael Behe’s notion of irreducible complexity… I submit that there is no live possibility here but only the illusion of possibility.

[William Dembski, “Obsessively Criticized But Scarcely Refuted: A Response To Richard Wein”, May 2002]

In the above, Wein gets Dembski to admit he computed “my calculation of appearance by random combination”, but Dembski says we should trust it anyway because, he claims, evolution can’t produce irreducibly complex structures.

Two years later, Dembski goes back to lying, reverses his admission to Wein about his “calculation of appearance by random recombination”, and goes back to calling it probability of natural selection.

William Dembski wrote:

Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems always end up being exceedingly small.[29]

[Footnote 29: See, for instance, Dembski, No Free Lunch, sec. 5.10.]

[William Dembski, “Irreducible Complexity Revisited”, January 2004, p.29-30]

Again, outright lying. His citation #29 is to HIMSELF, to the oft-cited bullshit “tornado probability” he computed in Section 5.10 of No Free Lunch.

Remember this, Joe: Dembski and his followers often misrepresent the calculation he did in NFL: it’s the tornado probability, but Dembski passes it off as the probability of natural selection.

Richard Wein thoroughly demolished Dembski’s computation of tornado probabilities (they don’t call it that; they call it probability of random combination) in a detailed article here.

Joe Felsenstein said:

Diogenes, which part of Dembski (2005) is where he reverts to the Tornado probability? The flagellum calculation?

In several places, but it’s subtle. Note that Dembski in multiple places (p. 18, 25) explicitly asserts that P(T|H) is the probability including Darwinian mechanisms, but his computations do the opposite.

Every time that it gets down to computing numbers, Dembski always switches to the tornado probability, often in sneaky ways. You can start by looking at footnote 33 on page 25. There he cites his 2004 paper, which as we saw above, lies about what Dembski calculated and calls it the probability of Darwinian mechanisms. His 2004 paper, as we’ve seen, in turn cites section 5.10 of No Free Lunch.

So if you follow his footnotes, it all points back to the tornado probability in No Free Lunch, about which they all bullshit.

William Dembski wrote:

As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately φS(T) = 10^20… It follows that –log2[10^120 φS(T)•P(T|H)] > 1 if and only if P(T|H) < 1/2 × 10^(-140), where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure… Is P(T|H) in fact less than 1/2 × 10^(-140), thus making T a specification? The precise calculation of P(T|H) has yet to be done. But some methods for decomposing this probability into a product of more manageable probabilities as well as some initial estimates for these probabilities are now in place.[33]

[Footnote 33: See my [Dembski’s 2004] article “Irreducible Complexity Revisited” at www.designinference.com (last accessed June 17, 2005). See also section 5.10 of my book No Free Lunch.]

These preliminary indicators point to T’s specified complexity being greater than 1 and to T in fact constituting a specification…

[“Specification: The Pattern That Signifies Intelligence.” William A. Dembski. 2005, version 1.22, p.24-25.]

You see what he did? He cited his 2004 paper, which in turn cited section 5.10 of No Free Lunch.
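
To spell out the arithmetic in that passage: with φS(T) = 10^20 and the 10^120 factor, -log2[10^120 φS(T)•P(T|H)] > 1 holds exactly when P(T|H) < 1/2 × 10^(-140), the bound quoted. A quick numerical check in Python:

```python
import math

log10_phi_S = 20     # phi_S(T) = 10^20, from the quoted passage
log10_factor = 120   # the 10^120 factor
# The boundary case is 10^120 * 10^20 * P(T|H) = 1/2, so:
log10_P_boundary = -(log10_factor + log10_phi_S) - math.log10(2)
print(log10_P_boundary)   # about -140.3, i.e. P(T|H) = (1/2) * 10^(-140)
```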

In other places likewise, Dembski in his 2005 paper openly computes tornado probability AND NOTHING ELSE.

On page 19 he computes the “specification” of several hands of poker, in each case computing the probability using uniform probability distributions, which we call “the tornado probability.” Below on page 22 he computes the probability P(T|H) “with respect to the uniform probability distribution denoted by H” of a sequence of Fibonacci numbers.

This [Fibonacci] sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^(-10), or 1 in 10 billion. This is P(T|H).

[“Specification: The Pattern That Signifies Intelligence.” William A. Dembski. 2005, version 1.22, p.24-25.]

On page 26, he computes the specification for a possibly “loaded” die using tornado probability. Then he adds:

that still leaves alternative hypotheses H’ for which the probability of the faces are not all equal.

Note he is distinguishing between alternative hypotheses H’, which are based on non-uniform probabilities, and his hypothesis H, which implies that H itself is based on a uniform probability distribution.

I will emphasize that a few of the sharper ID proponents have noted Dembski’s fast moves with switching between probability of evolution and tornado probability (though they don’t call it that).

VJ Torley is one of the few ID proponents who understand Dembski’s math. Torley has addressed Dembski’s 2005 paper by basically saying that Dembski should define his probability as explicitly being based on random recombination, that is, the tornado probability.

Torley did a detailed analysis of Dembski’s probabilities and made the case that the invocation of tornado probability should be an official part of CSI, in a long comment, Comment #207 at the second MathGrrl thread at Uncommon Descent. His analysis is canny and you should read it.

Secondly, the ID proponent Niwrad, a frequent poster at UD, made an online calculator of CSI, and his math is explicitly based on the tornado probability.

You should also read Richard Wein’s detailed refutation of No Free Lunch because Wein focuses relentlessly on the issue of how probability is calculated; Dembski responds and eventually Wein gets him to admit that his probability is the tornado probability. Read Dembski’s responses to Wein. Wein does a detailed debunking of Dembski’s self-contradictory statements including his 2004 paper here.

So any ID proponent who says they’re computing the probability of “Darwinian mechanisms” is lying.

Joe,

I wrote a long comment in answer to your question and it’s held up in moderation.

diogeneslamp0 said:

Joe,

I wrote a long comment in answer to your question and it’s held up in moderation.

I’ll see what I can do, though I don’t think accusations of lying are helpful here – you can just make contradictions clear and let people draw their own conclusions. Given the human capacity for self-delusion there are probably fewer cases of deliberate lying than we think.

I changed its status to Approved, but it has still not appeared. Perhaps my powers are not as great as I hoped.

If you still have the text, cut it into two or three comments and post them. (While you’re at it, tone down the accusations of deliberate lying).

If you don’t still have the text, I can recover it for you from the moderation queue.

It went through unchanged. If you want to delete it, go ahead and I’ll edit later.

Or maybe you delete it, I tone it down, and it could be an OP.

Oops. Yes, I see it up there.

If you want to make it an Original Post, that is for you to decide. I repeat that using the words “lying”, “liar”, or “lie” is not smart, as that lets the other side off the hook, making it easy for them to go into Taking Offense mode instead of dealing with your arguments point by point.

Joe Felsenstein said:

And that’s it. No calculation of any kind of “information” necessary. Also notice who gets to do all the heavy lifting – and it’s not the “Design theorist”.

It does even worse to his followers. All that mathematical and “philosophical” larding-up has his followers slogging through all the obfuscation believing that they are doing deep mathematics and “information” theory.

After they master all the unnecessary jargon and “sophisticated” math, they come out the end believing they have a depth of understanding of nature that their opponents can’t grasp or are too lazy and stupid to learn. This is particularly apparent over at UD.

But this is all an illusion because, when it comes right down to fundamental concepts in biology, chemistry, and physics at the high school level, not one ID/creationist advocate or follower can pass simple concept tests in any of these subjects. We see their consternation about what charge and mass have to do with anything over at UD.

One has to admit that Dembski has been pretty slick in knowing his followers. One cannot get into any discussions with him or his followers without being totally sidetracked into an infinite regress of wrangling over the meanings of the meanings of meanings.

Any attempt at discussing the real science gets lost in the inevitable heat that is generated by trying to come to some agreement about what anyone is saying. It serves the Culture War objective quite well by creating the illusion that there is a real, scientific discussion going on that is being kept out of the schools.

But there is no CSI; it is all bogus.

I would suggest that all further attempts to discuss CSI with the followers of ID/creationism be reduced to Np. It is not necessary to take logarithms and waste time on “information” and all of Dembski’s made-up definitions.

diogeneslamp0: Thanks particularly for the references to vjtorley’s comment and to Richard Wein’s writings, which are helpful. I am still busy with rereading this literature (and a few major distractions connected with my research and grant funding, none of which is about the CSI/Design issue).

Dembski has certainly managed to get a lot of people to write thousands of pages of response to his CSI.

However, “tornado probability” didn’t originate with Dembski or even with Fred Hoyle; the argument was in use even before that, by Henry Morris, back in the 1970s and 80s. Morris often referred to Isaac Asimov who, by the way, did not agree that evolution violated the second law of thermodynamics.

Morris and even Phillip E. Johnson and Dembski have sometimes given credit to A.E. Wilder-Smith for the thermodynamic and “information analysis” arguments against evolution.

Here is Morris referring to newspaper syndicated columnist Sidney Harris as though Harris was an expert on thermodynamics.

Couching the thermodynamics argument in terms of “information” theory does a pretty good job of obscuring ID/creationist misconceptions about the second law and how condensed matter forms. The confusions about the existence of things and the second law go back into the 19th century before the fields of condensed matter and the quantum mechanical nature of matter were developed.

ID/creationists are thinking in terms of an “ideal gas” when they think about the “primordial soup” out of which emerge the assemblies of complex, heterogeneous molecular systems. That appears to be the limit of their understanding of chemistry and physics.

It is not the force of their arguments that has generated so many pages of response to the ID/creationists; it has been the socio/political threat of pushing that junk into public education. Furthermore, ID/creationists have generally done a pretty good job of dragging the discussion onto their territory and enticing their opponents to argue on ID/creationist turf using ID/creationist concepts.

If someone familiar with the science, and with the core misconceptions ID/creationists are propagating, attempts to drag the discussion back to reality, ID/creationists throw up a huge barrage of jargon and obfuscation and accuse their opponent of not understanding the issues. I find that a totally sleazy tactic.

ID/creationists can’t get a hearing in the science community because ID/creationist “science” is easily recognized as pure hokum. Trying to get the public to see that is a far more difficult problem. It is a shame that scientists have to spend so much time on ID/creationist crap with so little value.

