Icons of ID: And the walls come crumbling down

In the last few weeks I have addressed various Icons of ID, and I have come to the conclusion that the Explanatory Filter (EF) is fundamentally unreliable. In fact, I believe the evidence shows conclusively that such concepts as Irreducible Complexity (IC), Complex Specified Information (CSI), and the Law of Conservation of Information are fatally flawed. This means that the ID hypotheses, at least those based on elimination, are flawed as well.

Let’s look at the history of the theoretical foundations of ID and how they crumbled.

The EF is claimed to be reliable, producing no false positives; by its author's own admission, if it allows for false positives, it is useless. As ID proponents and critics alike have shown, false positives ARE a problem for the explanatory filter. The conclusion should be obvious: the explanatory filter is useless.

In 2001, Del Ratzsch reviewed the Explanatory Filter and found it quite wanting. Pointing out the limitations of an approach in which design is inferred purely by elimination, Ratzsch identified many flaws and shortcomings in the EF. One of these shortcomings is the existence of false positives.

In addition, Del Ratzsch argues that the EF is unsuitable for inferring new design:

So typically, patterns that are likely candidates for design are first identified as such by some unspecified (“mysterious”) means, then with the pattern in hand S picks out side information identified (by unspecified means) as relevant to the particular pattern, then sees whether the pattern in question is among the various patterns that could have been constructed from that side information. What this means, of course, is that Dembski’s design inference will not be particularly useful either in initial recognition or identification of design.

Various critics had already pointed out similar problems. Sober showed that while modus ponens has a probabilistic analog, modus tollens does not.

Abstract: This paper defends two theses about probabilistic reasoning. First, although modus ponens has a probabilistic analog, modus tollens does not - the fact that a hypothesis says that an observation is very improbable does not entail that the hypothesis is improbable. Second, the evidence relation is essentially comparative; with respect to hypotheses that confer probabilities on observation statements but do not entail them, an observation O may favor one hypothesis H1 over another hypothesis H2, but O cannot be said to confirm or disconfirm H1 without such relativization. These points have serious consequences for the Intelligent Design movement. Even if evolutionary theory entailed that various complex adaptations are very improbable, that would neither disconfirm the theory nor support the hypothesis of intelligent design. For either of these conclusions to follow, an additional question must be answered: With respect to the adaptive features that evolutionary theory allegedly says are very improbable, what is their probability of arising if they were produced by intelligent design? This crucial question has not been addressed by the ID movement.

In Intelligent Design and Probability Reasoning by Elliott Sober
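Sober's first point is easy to see with a toy calculation. The sketch below is my own illustration, not Sober's; the lottery scenario and all the numbers in it are made up purely to show that a hypothesis can assign a tiny probability to an observation and still remain probable after the observation is made:

```python
# Toy illustration of Sober's point (my own hypothetical numbers, not Sober's).
# H = "the lottery was a fair draw among 1,000,000 tickets"
# O = "ticket #12345 won"

def posterior(prior_h, p_obs_given_h, p_obs_given_not_h):
    """P(H | O) by Bayes' theorem, over a two-hypothesis space {H, not-H}."""
    joint_h = prior_h * p_obs_given_h
    joint_not_h = (1.0 - prior_h) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

p_win_if_fair = 1e-6    # a fair draw makes this ticket's win very improbable
p_win_if_unfair = 1e-6  # assumption: the rival hypothesis makes it no likelier

print(posterior(prior_h=0.99,
                p_obs_given_h=p_win_if_fair,
                p_obs_given_not_h=p_win_if_unfair))
# -> 0.99: even though P(O | H) = 1e-6, P(H | O) is unchanged from the prior.
```

The hypothesis renders the observation wildly improbable, yet the observation does not disconfirm it, because the rival hypothesis does no better. Confirmation is comparative, and that is exactly the comparison the EF's purely eliminative logic never makes.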

After devastating reviews by Wein, Perakh, Chiprout, and many others, we come to the present day, where Wilkins and Elsberry have shown that the EF is unable to deal with rarefied design.

Thanks to the excellent work of Gedanken, it was more recently shown in straightforward terms that the explanatory filter is inherently unreliable:

The result of that analysis is that the EF is not reliable when the probability of designer action to cause observed ‘specification’ is lower than the probability of making an error in probability calculation of natural processes causing observed ‘specification’. These are conditions which, when present, cause the EF to be unreliable – they are not aspects of the EF itself.

Nelson responds to the claim that “The question is whether one can reliably use CSI/EF combination or unitary argument in detection of design. That is what I claim cannot be done reliably in many circumstances” by calling it indefensible. But is it indefensible? So far, Nelson has failed to show that it is. In fact, it is self-evident that if the probability of a false positive is larger than the probability of design, the filter is inherently unreliable.
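The arithmetic behind that self-evident point can be made explicit. In the sketch below, the numbers are hypothetical, chosen only for illustration (they are neither Gedanken's nor Dembski's): a “design” verdict from the filter can arise either from actual design or from a miscalculation of the relevant natural probabilities, and when the latter is more probable than the former, most positive verdicts are errors:

```python
# Hypothetical numbers, for illustration only.
p_design = 1e-9          # assumed prior probability that a designer acted
p_false_positive = 1e-4  # assumed probability that a "design" verdict stems
                         # from a miscalculated or overlooked natural pathway

# Given a positive verdict, the probability that the verdict is correct:
p_verdict_correct = p_design / (p_design + p_false_positive)
print(f"P(design | filter says design) = {p_verdict_correct:.1e}")
# -> ~1.0e-05: virtually every positive verdict is an error.
```

With these numbers, roughly one positive verdict in a hundred thousand is correct. Whenever the false-positive probability exceeds the probability of design, errors dominate the filter's output, which is precisely the condition Gedanken identified above.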

Nelson clings to the concept of CSI as if this concept, which is inherently part of the EF, can save the filter from its imminent demise. Little did he realize that Dembski himself had claimed that there is both apparent and actual CSI, while failing to provide any way to distinguish between the two. Wesley Elsberry challenged Dembski to address the Algorithm Room; so far, Dembski has yet to answer Elsberry on this issue, or the devastating comments raised by Del Ratzsch.

Then we had the inappropriate application of the No Free Lunch theorems (“written in Jello,” as one of the mathematicians behind the NFL theorems called it), adding another nail to the coffin of CSI, the Law of Conservation of Information, and other ‘ID-relevant’ concepts.

For all practical purposes, the explanatory filter has been shown to be useless, at least for its primary intended purpose: inferring a disembodied designer.

IC, the concept first proposed by Behe, did not fare much better. From the outset, Behe had to admit that indirect pathways may exist, and although he claimed, without much evidence, that such pathways are improbable, science showed otherwise. It started with Ussery and colleagues, who showed that various pathways to IC do exist. Not only do such pathways exist, but the evidence showed co-option to be quite prevalent in nature. Lenski et al. then showed how evolutionary processes can in fact lead to IC systems (Lenski, R. E., C. Ofria, R. T. Pennock, and C. Adami. 2003. The evolutionary origin of complex features. Nature 423:139-144). The concept of IC had been reduced to “yes, it can evolve, but show us that it actually did”: not much of an ID-relevant concept, unless an appeal to ignorance is argued to be an inevitable ID approach.

So where are we now? ID’s theoretical foundations have failed, a failure reflected in the obvious lack of ID-relevant research or contributions to our scientific understanding.

Del Ratzsch alluded to this in a recent interview on ISCID:

I think that some are certainly too far in the materialist direction, and they claim that science backs them up on that. ID can at least serve a ‘keeping ’em honest’ function, even if nothing else. I think that ID may very well have things to offer science, but I think that it is too early for ID to claim that it has done so. I don’t think that it is just obvious that ID will contribute substantively to science, but I think it has that potential, and that it should be pushed as far as it can be made to legitimately go.


Now we reach the present day, where we see ID proponents such as Beckwith and the IDEA Center speak out against their perception of ‘errors, misrepresentations, etc.’ in the work by Forrest and Gross (F&G), Creationism’s Trojan Horse: The Wedge of Intelligent Design. In that book, F&G have done an excellent job of documenting the religious and political foundations of the ID movement.

Putting all this together, it started to make sense to me.

The Wedge strategy depended strongly on the scientific success of the ID movement to justify its claim to a place in high school curricula. Francis Beckwith’s articles, and other articles by Discovery Institute people, show how much depends on ID being scientifically relevant (more in a future posting). Now that ID has been shown to be a scientific failure, F&G’s book becomes even more of a liability to the ID movement. Not only is ID scientifically meaningless, but its roots in religion and politics have been documented in full detail.

Is the Wedge closing up? Time shall tell.

A final thought: Mike Gene’s front-loading hypothesis, while far more defensible scientifically, seems to share the flaw inherent in ID hypotheses, namely that it cannot establish the relative probabilities of a possible false positive versus a design hypothesis, and thus any design inference becomes highly unreliable.

So what is the future for ID?