Icons of ID: Reliability revisited


The more I read about Dembski’s explanatory filter and his reliability claims, the more I conclude that Dembski has created a big problem for himself by claiming, on the one hand, that the filter is reliable:

Briefly, the claim that specified complexity is a reliable marker of design means that if an item genuinely instantiates specified complexity, then it was designed. As I argue and continue to maintain, no counterexamples to this claim are known.

Dembski, ISCID, “No False Positives and the Lust for Certainty”, November 2002

and

I argue that the explanatory filter is a reliable criterion for detecting design. Alternatively, I argue that the Explanatory Filter successfully avoids false positives. Thus whenever the Explanatory Filter attributes design, it does so correctly.

Dembski, “The Explanatory Filter: A three-part filter for understanding how to separate and identify cause from intelligent design”, Mere Creation Conference (originally titled “Redesigning Science”), 1996.

But on the other hand, Dembski also accepts the risk of false positives (although he incorrectly considers this an inevitable part of science).

Dembski adds further confusion by conflating two very different forms of reliability.

One holds that whenever the EF infers design, it does so without error; the other holds that the EF works whenever it is applied correctly. The first is a claim about the EF in action; the second says only that, if we have applied the filter correctly, we can go back and show that it was applied correctly. The latter is of course a tautology, while the former is a claim of actual performance.

But both arguments have their own problems.

Gedanken has presented some in-depth arguments and examples on ISCID as to the problems with the EF.

The “eliminative” Explanatory Filter is not reliable, and only has “no false positives” when one retroactively applies all information that would be needed to find out if it did have a false positive. That makes it useless as a test and is the very “cherry picking” that makes that an invalid induction.

But if one considers present information, it is not reliable. It is most unreliable when the probability of making an error in the analysis of natural causes is greater than the probability of the “designer did it” scenario. And since the Explanatory Filter makes no attempt at analyzing or controlling for coincidence with respect to “designer” aspects, the Explanatory Filter is of unknown reliability when taken without further evidence. And one can easily verify that the worst cases of reliability are the conditions in which there are likely to be errors in finding a probability of the natural-non-intelligent cause pathway, and there is no reason, access, or other consideration for a “designer” to achieve the result. One result of this is that the ID inference is purely dependent upon belief, and is thus philosophical and not scientific, a problem that Dr. Dembski himself warns against.

Gedanken

Gedanken’s comments seem to be mirrored in Wilkins and Elsberry’s Advantages of theft over toil, which distinguishes between rarefied design (where we have no independent data allowing us to calculate the probabilities for the design hypothesis) and “ordinary design” which is ‘based on a knowledge of the behavior of designers’.

In addition, let’s assume that we can show that all chance and regularity hypotheses have small probabilities. Does this mean that, having eliminated all of them, we can infer ‘design’ without the risk of false positives? Again, that depends on the probability of the ‘design hypothesis’. Assume that the design hypothesis has a probability smaller than that of the pure chance hypothesis, or even of a regularity hypothesis. Which hypothesis would then be more likely to be correct?
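The point can be made concrete with a toy calculation. The probabilities below are purely illustrative (they are not Dembski’s numbers): even when every chance and regularity hypothesis makes the event very improbable, eliminating them and inferring design by default can pick a hypothesis that explains the event even less well than the ones just rejected.

```python
# Illustrative, made-up probabilities of the observed event under each
# hypothesis -- not drawn from any real analysis.
hypotheses = {
    "chance": 1e-12,      # very unlikely under pure chance
    "regularity": 1e-10,  # unlikely under a known natural law
    "design": 1e-15,      # assumed even less likely under design
}

# Eliminative reasoning: reject chance and regularity for being
# improbable, then infer design by default, never asking how probable
# the event is under design itself.
eliminative_verdict = "design"

# Comparative reasoning: prefer whichever hypothesis makes the event
# least surprising, however unlikely they all are.
comparative_verdict = max(hypotheses, key=hypotheses.get)

print(eliminative_verdict)  # design
print(comparative_verdict)  # regularity
```

With these numbers the eliminative route infers design even though the just-eliminated regularity hypothesis explains the event 100,000 times better, which is exactly the false positive at issue.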

In other words, Dembski’s reliability claim is unsupported for two reasons.

First, even if we can show that all relevant hypotheses have low probability, a design inference can still be erroneous if the probability of design is smaller than that of the chance hypothesis. After all, even unlikely events are known to happen.

Second, the design inference is susceptible to false positives because of our ignorance of the relevant hypotheses.
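The ignorance problem can be simulated directly. In the sketch below (a toy model, not Dembski’s actual procedure), a filter knows only one chance hypothesis, a fair coin, while the data actually come from a perfectly natural but heavily biased process the filter has never considered. Because the filter eliminates only the hypotheses it knows about, it declares “design” for every undesigned sequence.

```python
import random

random.seed(0)

def known_chance_prob(sequence):
    # Probability of the sequence under the ONLY chance hypothesis the
    # filter knows: independent fair coin flips.
    return 0.5 ** len(sequence)

def eliminative_filter(sequence, alpha=1e-6):
    # Infer "design" whenever every *known* chance hypothesis is
    # sufficiently improbable -- the filter never asks whether some
    # unconsidered natural process could explain the data.
    return "design" if known_chance_prob(sequence) < alpha else "chance"

def biased_coin_sequence(n, p_heads=0.99):
    # The actual generating process: an undirected natural mechanism
    # (a heavily biased coin) unknown to the filter.
    return ["H" if random.random() < p_heads else "T" for _ in range(n)]

# Every sequence is produced by an undesigned natural process, yet the
# filter flags each one as designed: 1000 false positives out of 1000.
false_positives = sum(
    eliminative_filter(biased_coin_sequence(50)) == "design"
    for _ in range(1000)
)
print(false_positives)  # 1000
```

Any 50-flip sequence has probability 2^-50 under the fair-coin hypothesis, so the filter rejects chance every time; its “reliability” here is entirely an artifact of which hypotheses it happened to consider.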

In other words, we should take note of Dembski’s statement that

On the other hand, if things end up in the net that are not designed, the criterion will be useless.

Dembski, William, 1999. Intelligent Design: The Bridge Between Science & Theology, p. 141

3 Comments

Part of me says “Who cares”, because even if the UnExplanatory Filter is ground into the dust, they’ll just make up some new term, some more poorly-defined ideas (cf. Nelson’s Flaw) and start the whole thing over again. And it’ll just coincidentally happen that the next ‘science’ they come up with will also 1) destroy evolution & 2) prove the existence of The Creator.

The other part appreciates the work done to field-test and field-reject these bogus ‘science’ ideas.

Just a tiny quibble, the quote from Dembski (1999) is “On the other hand, if things end up in the net that are not designed, the criterion will be worthless.”

This really awesome new visualization of some cancer pathway from Nature reinforces what others have said here about cancer being IC according to their argument. I wonder if the IDiots admit that?

http://www.nature.com/nrc/journal/v[…]berg_poster/

About this Entry

This page contains a single entry by PvM published on July 9, 2004 8:44 PM.

Dembski Reviewed was the previous entry in this blog.

muse@nature.com: The Tyranny of Design is the next entry in this blog.
