Icons of ID: Explanatory filter and false positives


The first Icon I will explore is the Icon of the Explanatory Filter's reliability, or, in the words of Dembski, "no false positives".

Dembski's opinion on "false positives" seems to have evolved over time. From an initial claim of reliability and "no false positives", via an admission that if the filter erroneously attributes design it is useless, to an eventual acceptance of "false positives", the "Explanatory Filter" evolved from reliable to useless.

Despite the fact that "false positives" are inevitable, and thus that the filter is not only unreliable but in fact useless, Dembski and the ID movement seem to continue to insist that the Explanatory Filter is a reliable theoretical foundation for detecting design.

One may speculate as to the reasons why the ID movement is slow or reluctant to self-correct. Is it due to a lack of peer review? The lack of peer review in the ID movement is self-evident, as its arguments are presented in books rather than in the more common format of papers submitted to scientific journals.

Is it because of the often strong belief that intelligent design must be the correct answer, and thus any design inference has to be reliable? That seems to me to be a circular argument.

I am not sure I have all the answers but I am hoping to take the reader on an interesting journey through the history of the “Explanatory Filter” from its conception to its early retirement.

The original claim was that:

1996

I argue that the explanatory filter is a reliable criterion for detecting design. Alternatively, I argue that the Explanatory Filter successfully avoids false positives. Thus whenever the Explanatory Filter attributes design, it does so correctly.

Source

Dembski bases his claim of "no false positives" on two weak arguments. The first is an inductive argument which states that whenever the filter has been applied and has inferred design, it has done so correctly.

1998

Dembski Wrote:

“straightforward inductive argument: in every instance where the explanatory filter attributes design and where the underlying causal history is known, it turns out design is present; therefore design actually is present whenever the explanatory filter attributes design.”

W.A. Dembski, ed., Mere Creation, InterVarsity Press, 1998. p. 107

As Perakh argues:

While Dembski devotes several pages to the elaboration of this assertion, he does not substantiate it by providing any record which would indeed show his filter’s impeccable reliability.

The second argument is that the explanatory filter mimics how we recognize intelligent causation.

“The explanatory filter is a reliable criterion for detecting design because it coincides with how we recognize intelligent causation generally.”

Again Perakh shows what is wrong with this claim. In fact, I would argue the opposite is true. When recognizing intelligent causation in, for instance, archaeology, criminology, and even SETI, we rely on motives, means, and opportunities, or in other words pathways.

1999

Dembski strengthened the claim by stating that:

“On the other hand, if things end up in the net that are not designed, the criterion will be useless.”

Dembski, William, 1999. Intelligent Design: The Bridge Between Science & Theology, p. 141.

In other words, if it can be shown that “false positives” are inevitable, then it has been shown that the criterion to infer design is useless. So let’s explore this possibility.

2001

In Dembski’s own words:

Now it can happen that we may not know enough to determine all the relevant chance hypotheses. Alternatively, we might think we know the relevant chance hypotheses, but later discover that we missed a crucial one. In the one case a design inference could not even get going; in the other, it would be mistaken. But these are the risks of empirical inquiry, which of its nature is fallible. Worse by far is to impose as an a priori requirement that all gaps in our knowledge must ultimately be filled by non-intelligent causes.

Dembski, William, 2001. No Free Lunch, Rowman & Littlefield, p 123

2002

I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms. It means that some unknown mechanism might eventually pop up and overturn a given design inference. But it also means that we have prima facie evidence of design and that we are justified in holding to this claim in the absence of such mechanisms being found. I also note that there can be cases where all material mechanisms (known and unknown) can be precluded decisively.

Source

In other words, unknown mechanisms may eventually pop up.

I would say that this by itself is sufficient reason to take Dembski's own conclusion seriously, namely:

“On the other hand, if things end up in the net that are not designed, the criterion will be useless.”

But let’s explore further. When discussing “false negatives” Dembski argues:

One difficulty is that intelligent causes can mimic undirected natural causes, thereby rendering their actions indistinguishable from such unintelligent causes.

Source

But I argue that the same applies to undirected natural causes, which can mimic intelligent causes, or in other words to evolutionary processes such as natural selection. That processes such as natural selection and variation can increase the information in the genome has been shown by Schneider in "ev: Evolution of Biological Information" and by Adami et al. in "Evolution of biological complexity".
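To make that point concrete, here is a minimal toy selection loop of my own (it is not Schneider's ev program or Adami's Avida, and the target pattern, population size, and mutation rate are invented for illustration). Random variation plus selection toward an arbitrary "binding site" pattern steadily increases the Shannon information carried by a population of bit-string "genomes", with no intelligence choosing the mutations.

```python
import math
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # arbitrary "binding site"
GENOME_LEN = len(TARGET)
POP_SIZE = 64
MUTATION_RATE = 0.02     # per-bit flip probability
GENERATIONS = 200

def fitness(genome):
    """Number of sites that match the target pattern."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def information(population):
    """Information in bits, summed over sites: 1 - H(p) per site, where p is
    the frequency of 1s at that site (about 0 bits for a random population,
    1 bit per site for a fully converged one)."""
    total = 0.0
    for i in range(GENOME_LEN):
        p = sum(genome[i] for genome in population) / len(population)
        entropy = -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)
        total += 1.0 - entropy
    return total

# Start from a completely random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS + 1):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: information = {information(population):5.2f} bits")
    # Truncation selection: keep the fitter half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    mutants = [[1 - bit if random.random() < MUTATION_RATE else bit for bit in parent]
               for parent in survivors]
    population = survivors + mutants
```

Applied to the end product while ignoring this selective history, a filter-style calculation would see a "specified" pattern that is wildly improbable as a random assembly, which is exactly the kind of mimicry at issue here.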

But it is not only Intelligent Design critics who have reached this conclusion. Del Ratzsch, in "Nature, Design and Science", shows how Dembski's filter is not only limited in its usefulness but also vulnerable to "false positives".

But there are other problems, such as with detachability:

“Detachability must always be relativized to a subject’s background and knowledge”

Ric Muchaga concludes that detachability has become a liability.

And of course he is right: our clever code that produces a “false positive” is not detachable. But there is an obvious problem: the DNA code is not detachable either! Certainly Dembski cannot believe that a mathematician might have figured out that life on earth would be based on a code constructed of four protein bases arranged in the shape of a double helix. Wasn’t God free to create life based on six protein bases, or from some other arrangement altogether?

Dembski is caught in a dilemma. If he gives up the detachability criterion, there will be no end of “false positives.” If he doesn’t give up the detachability criterion, he has reduced biology to a purely mathematical, a priori discipline, and he has denied a fundamental doctrine of Christian theology–namely, that God was perfectly free to create in any way he saw fit.

Del Ratzsch also shows what is problematic with tractability:

“The problem with the tractability criterion is that there is no substantive discrimination among the patterns produced. The lack becomes acutely evident and the tractability requirement clearly loses all bite when omniscience expands those patterns to the set of all possible patterns.”

Del Ratzsch reaches the inevitable conclusion:

“I do not wish to play down or denigrate what Dembski has done. There is much of value in the Design Inference. But I think that some aspects of even the limited task Dembski set for himself still remains to be tamed.” “That Dembski is not employing the robust, standard, agency-derived conception of design that most of his supporters and many of his critics have assumed seems clear.”

So typically, patterns that are likely candidates for design are first identified as such by some unspecified (“mysterious”) means, then with the pattern in hand S picks out side information identified (by unspecified means) as relevant to the particular pattern, then sees whether the pattern in question is among the various patterns that could have been constructed from that side information. What this means, of course, is that Dembski’s design inference will not be particularly useful either in initial recognition or identification of design.

What I have reservations about, however, is the fact that designs produced by the deliberate setting of natural processes to produce them seem to escape the filter, and that means that all filter-relevant design theories become gap theories.

Source

“In the present case, however, it seems to me that design theories are going to have to produce some positive results which are not easily assimilable by reigning theories. And it seems to me that to date design has not achieved that. “

We may take notice of Dembski's insightful words:

“We have done amazingly well in creating a cultural movement, but we must not exaggerate ID’s successes on the scientific front.”

William Dembski

And yet we are faced with exaggerations of ID's scientific successes, such as the reliability of the explanatory filter. In addition, Wimsatt, in response to an advertisement for Dembski's "No Free Lunch", asks Dembski to clarify his position.

Wimsatt, on the evolutionary psychology Yahoo group, objects to the advertisement text for Dembski's book "No Free Lunch":

the Hype Wrote:

In No Free Lunch, William Dembski shows that blind natural processes, such as the Darwinian mechanism of natural selection and random variation, are incapable of generating biological structures like the bacterial flagellum.

The response by Wimsatt:

I could not in conscience fail to respond to the ad for Bill Dembski’s new book, “No Free Lunch”, and to the general tenor of the political push generated either within or by others using the so-called “intelligent design theory”. This is not a theory, but a denial of one, and a denial whose character is widely misrepresented, at least in the press.

Unfortunately “popular” presentations of “Intelligent Design” have tended to give the impression that it rested solely on mathematical demonstrations. Anyone who could have succeeded in showing that natural selection is incapable of generating biological structures according to standards from mathematics or logic would have constructed a mathematical proof that would have dwarfed Godel’s famous Undecidability theorem in importance. As one who read Dembski’s original manuscript for his first book, found much to like in it, and had appreciative remarks on the dust jacket of the first printing, I can say categorically that Dembski surely has shown no such thing, and I call upon him as a mathematician to deny and clarify the implications of this advertising copy.

My question: has Dembski clarified these issues?

Maybe Dembski should have taken notice of his own words:

Critics and enemies are useful. The point is to use them effectively. In our case, this is remarkably easy to do. The reason is that our critics are so assured of themselves and of the rightness of their cause. As a result, they rush into print their latest pronouncements against intelligent design when more careful thought, or perhaps even silence, is called for.

Source

It seems that Dembski should have stayed silent on the issue of the lack of usefulness of the explanatory filter and "false positives". In fact, the reliability claim is but one example in which Dembski seems to have rushed to judgement in his pronouncements; other examples include the "No Free Lunch" advertisement claims. In a future posting I intend to explore Dembski's use and abuse of the "No Free Lunch" theorems and his subsequent reversals when it was pointed out to him that his interpretation of these theorems "was written in jello".

One may wonder why Dembski has avoided responding to these issues, since they undermine the most basic foundations of his claims. Perhaps the following statement can help us understand:

I’m not going to give away all my secrets, but one thing I sometimes do is post on the web a chapter or section from a forthcoming book, let the critics descend, and then revise it so that what appears in book form preempts the critics’ objections. An additional advantage with this approach is that I can cite the website on which the objections appear, which typically gives me the last word in the exchange. And even if the critics choose to revise the objections on their website, books are far more permanent and influential than webpages.

I guess this may mean that we may have to wait for the next book from Bill before we may expect a coherent answer. But is it just me who finds Bill’s approach less than ‘scholarly’?

20 Comments

Darwin be praised.

Steve: Darwin be praised.

Well, your name is “Steve,” so you must believe in evolution, right? :)

Seriously, though…

Dembski: … Worse by far is to impose as an a priori requirement that all gaps in our knowledge must ultimately be filled by non-intelligent causes.

Excuse me, but “Intelligent Design” is a very young “science,” begun just last century, and its primary “scientific conclusion” is based on an a priori requirement that is approximately 5,000 years old! Doesn’t this register with folks like Dembski at all? (Yes, I know…rhetorical question. It just blows my mind.)

Also, I have a minor question for those who are informed enough about the “explanatory filter” and its supporters’ claims that it can tell designed artifacts from natural artifacts…my question is this: according to these supporters, wasn’t pretty much everything designed by the intelligent designer? Don’t they believe God made everything? If so, then how could the explanatory filter possibly detect the difference between designed and non-designed artifacts, when literally everything would be designed? What basis for comparison is there?

Creationist arguments make me carsick, going in circles like they do.


Richard Wein Wrote:

So it turns out that Dembski’s assertion above has no support whatsoever!

Nonsense! Richard, I believe, resides in some non-American part of the globe, so he can be forgiven for not getting it. Dembski’s assertion has all the support it needs: political and financial!

Richard Wein Wrote:

Whenever a specified event is improbable with respect to all possible non-design processes, it must have involved design.

Dembski uses the terms Explanatory Filter and Complexity-Specification criterion to refer to part 2 only, and his claim of “no false positives” applies only to this part. He is not claiming that his method of design inference as a whole produces no false positives. This is how he could maintain the claim of no false positives while admitting (NFL, p. 123) that if we overlooked a relevant chance hypothesis we might mistakenly infer design.

Actually, rule 2 would still produce false positives. Just because something is improbable does not mean that it can never occur.
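A trivial illustration of that narrower point (a sketch of my own, with arbitrary numbers, and making no claim about specification): any particular run of 500 coin flips falls below Dembski's universal probability bound of 1 in 10^150, yet a sequence of exactly that improbability is produced every single time.

```python
import random

# The probability of this exact sequence, computed after the fact, is
# 2**-500 (about 3e-151) -- below the 10^-150 bound -- yet it just happened.
flips = "".join(random.choice("HT") for _ in range(500))
print(flips)
print("probability of this exact sequence:", 2.0 ** -500)
```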

Joe P. Guy wrote:

“Don’t they believe God made everything? If so, then how could the explanatory filter possibly detect the difference between designed and non-designed artifacts, when literally everything would be designed? “

Like you, I remember being amazed that those who obviously believe that an almighty Creator can do things any way He wanted still want others to think that mere humans can outsmart that Creator (sorry, “designer”) and catch Him in an irreducibly complex mousetrap. I was also amazed that these same people found it necessary to supplement their basic design argument with irrelevant and misleading arguments against evolution - and in the case of ID provide no promising alternative - when most of the religions already agree that God uses evolution. Here’s a clue:

http://reason.com/9707/fe.bailey.shtml

Joe P Guy: Creationist arguments make me carsick, going in circles like they do.

Good news, Joe. With a little practice you can learn to relax and enjoy it. First, a brief outline of the ‘filter’ argument for those who may be wondering what in Designer’s name this is all about. You’re trying to figure out how something happened or came to be. You try to explain it scientifically. If you can’t, then check whether it is likely as a random assembly of particles. If it’s really unlikely that way, Bingo! The Designer did it! More conditions have been added to the filter argument, such as ‘specified complexity’. The ‘specified’ part is a mess in principle but in practice it is some simple verbal description. The complexity part is just a trick word for ‘improbable’, so that’s not really a new condition. The improbability is supposed to relate to the specification, but this requirement is relaxed in Dembski’s practice.

Example: Suppose that you come across a large open field covered with stones, and the stones are neatly arranged in circles, with no stones inside the circles. Somehow you know that people didn’t do it. You also know of no scientific explanation for the circular arrangement. [by the way ‘circular’ is a specification]. Now, had a giant tossed out the stones randomly, how probable would it be to get all these circles? Way improbable. Bingo! The Designer did it.

But what if, later on, a geological explanation is figured out? Oops! You had a ‘false positive’.
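For readers who like things spelled out, the filter's decision logic amounts to something like the following sketch. It is my own toy rendering, not Dembski's formalism; the stone-circle probability is invented, and the cutoff stands in for Dembski's universal probability bound of 1 in 10^150.

```python
def explanatory_filter(has_known_natural_explanation, chance_probability,
                       is_specified, bound=1e-150):
    """Toy version of the three-node filter: law, then chance, then design."""
    if has_known_natural_explanation:
        return "law/regularity"
    if not is_specified or chance_probability > bound:
        return "chance"
    return "design"

# The stone circles before any geological explanation is known:
print(explanatory_filter(has_known_natural_explanation=False,
                         chance_probability=1e-300,   # invented number
                         is_specified=True))           # -> "design"

# The same stones after a geological sorting mechanism is worked out:
print(explanatory_filter(has_known_natural_explanation=True,
                         chance_probability=1e-300,
                         is_specified=True))           # -> "law/regularity"
```

Notice that the verdict flips from "design" to "law" only because our knowledge changed, which is exactly what a false positive looks like in hindsight.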

Now if you go here it really gets to be fun.

As Richard Wein concludes:

Thus, all the mumbo-jumbo about “specified complexity” and “no false positives” is—yet again—just a smokescreen for the argument from ignorance.

From ignorance to the Designer, that is. It’s an elaborate attempt to rationalize substituting “Design” for “Don’t know” as the default, as you already know from reading Theft Over Toil. And the ignorance in question is not just any ignorance. It’s a contrived ignorance, based on assuming that evolution doesn’t work. [This may also explain, Joe, why it seems circular. It’s supposedly an anti-evolution argument, but you assume “no scientific explanation”, i.e. evolution doesn’t work, at the start.]

A word on the use of probability here. Let’s call it creationist probability, not because no one else ever makes this mental error but because only certain people do it on steroids. Here’s how it works: if you call something improbable because there are very many ways its particles could be arranged randomly, then anything with lots of particles is improbable, even, say, a thimble full of air. If there are N ways to arrange the particles, the probability of any one arrangement (assuming none is more likely than another) is 1/N. We are usually concerned with the likelihood of a general sort of thing – in other words, if the thimble is just sitting there without a finger in it, the probability that it is full of air is nearly 1. To get a smaller probability, just take more details into account until 1/N is as small as you wish. If you do this and also think of 1/N as the likelihood of that general kind of thing (and apply this reasoning selectively to biology) you may talk yourself into being a creationist.
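To see the trick in numbers, here is a toy calculation of my own with an absurdly small "thimble" of 100 molecules (the numbers are purely illustrative): one exact arrangement is vanishingly improbable, while the general state of affairs we actually care about stays close to certain.

```python
from math import comb

n = 100  # number of air molecules; absurdly small, purely for illustration

# Probability of one *exact* arrangement (which half of the thimble each
# molecule sits in), assuming all 2**n arrangements are equally likely:
p_exact = 1 / 2 ** n
print(f"one exact arrangement: {p_exact:.3e}")

# Probability of the *general* condition "roughly half the molecules on each
# side" (between 40 and 60 on the left), the sort of thing we observe:
p_general = sum(comb(n, k) for k in range(40, 61)) / 2 ** n
print(f"roughly even split:    {p_general:.3f}")
```

Taking more and more details into account drives 1/N as low as you like, while the general kind of thing remains nearly certain, which is the whole sleight of hand.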

Example: Dembski assumes that a bacterial flagellum couldn’t come from evolution, since it is supposed to be irreducibly complex (more ID terminology, and not really a problem for evolution but that’s another story). Then he makes up a formula which he says gives the probability of a flagellum as a random assembly of proteins. (By the way it is specified as a rotary outboard motor). The formula has several parameters which he mostly makes up. He gets a satisfyingly small number so the Designer did it.

Obviously, science wouldn’t get very far by making Design the default explanation and using creationist probability. But does this mean that Dembski is on the right track here?

Worse by far is to impose as an a priori requirement that all gaps in our knowledge must ultimately be filled by non-intelligent causes.

Of course not. Design simply requires positive evidence like anything else, instead of being privileged as the default explanation.

Feeling better, Joe?

Charlie’s attempt to distract from the OP has led me to dump the responses to the Bathroom Wall, where they belong.

If Charlie cannot restrain himself then at least he should find the appropriate place for his comments.

Frank J and Pete Dunkelberg,

Thanks very much! :) The information is helpful, useful, and easy to work with. That whole Sam Spade analogy was something they should’ve taught in my college biology class. I haven’t yet read any of the individual articles all the way through, but I’ve got them bookmarked and plan to do so in the near future. Thanks!

I think I’ll still get “sick” from Creationist arguments, but not from the going around in circles. ;)

Thanks again!

Final warning to Charlie: stick to the discussion or post on the Bathroom Wall, or your comments will be deleted.

Your choice.

Joe P. Guy Wrote:

wasn’t pretty much everything designed by the intelligent designer?  Don’t they believe God made everything?

I want to sort of turn Joe’s question around. ID is focused on individual biological artifacts - it’s the watchmaker argument on the nanometer scale. Much of the refutation of ID focuses on the fact that nature has self-organizing properties (which we are just learning the details of). Thus many apparently designed artifacts (such as the Arctic stone circles) have been shown to be the result of natural properties and/or activities. But what if you ‘zoom out’ a bit and look at a lower resolution, so to speak? Why does nature have self-organizing properties at all? And why are those organizing properties capable of making such fantastically complex organisms? (Before somebody answers “natural selection, dummy”, I know - but selection needs a substrate upon which to act, and the things it can produce are limited by the properties of the material it has to work with.) I’m basically just getting at the anthropic principle, which I realize has been discussed ad nauseam in various places. But I’m just wondering what the agnostics and atheists make of the properties of nature. Do you just take them as brute fact, or do you have some other explanation for them?

(As an addendum, I think I know why the IDers don’t take this line of argument very often. It’s because they want evidence of a Designer who intervenes in nature - I believe Johnson has said as much. The AP only helps favor deism, which they abhor as much as atheism.)

From the Reason article by Bailey:

Also, biologists agree that a general principle of evolutionary biology rules out the possibility that there are organisms that will sacrifice their own reproductive success in order to enhance the reproductive success of some other species.

Hmm, so what about that activist who filmed the Grizzlies in Alaska and then was killed by them? ; )

It should be noted that the specified complexity criterion has produced at least one false positive. Percival Lowell noticed that the canals on Mars met at just a few hubs. If they were positioned randomly, there would be many intersections, each with just a few canals meeting there. Lowell calculated the probability of the configuration of Martian canals being due to chance, found it extremely low (less than 1 in 10^260), and concluded the canals were designed. As we know now, the canals were not only undesigned, they were illusory.

Lowell pre-dates Dembski by about 90 years, but the method he used is effectively the same as what Dembski suggests. We can expect Dembski’s method to fail just as spectacularly.


Mike S.:

Much of the refutation of ID focuses on the fact that nature has self-organizing properties (which we are just learning the details of).

Where did you find this? Up until now the answer to ID has always been normal biology.

Re Paul King’s comments, the comparison is apt, but I think it’s worth pointing out the difference. Lowell was a scientist. If/When new data shows the canals don’t exist, he changes his mind. These ID nitwits will never, ever conclude that god didn’t do it, under any circumstances. Their beliefs are faith-based.

Another quote I missed:

John Wilkins and Wesley Elsberry attempt to offer a general argument for why the filter is not a reliable indicator of design. Central to their argument is that if we incorrectly characterize the natural necessities and chance processes that might have been operating to account for a phenomenon, we may omit an undirected natural cause that renders the phenomenon likely and that thereby adequately accounts for it in terms other than design. Granted, this is a danger for the Explanatory Filter. But it is a danger endemic to all of scientific inquiry.

W. A. Dembski. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. p. 14

What clearer admission that the EF can produce false positives can we expect?

Dembski Wrote:

Granted, this is a danger for the Explanatory Filter. But it is a danger endemic to all of scientific inquiry.

PvM Wrote:

What clearer admission of false positives and the EF can we expect?

Please see my interpretation of the EF (above), according to which Dembski’s statement is not an admission that the EF can produce a false positive.

If I understand correctly, Dembski’s position is that errors in applying the filter (including errors that are in reality unavoidable) are not relevant to the question of whether the EF itself produces errors. Thus, even if practical problems render the filter highly unreliable in actual use, he can still maintain his “no false positives” stance, although it is highly misleading.

If I remember correctly, in his exchange with Orr over Dembski’s fatally flawed attempt to apply the filter to a bacterial flagellum, Dembski asserted that the only hypotheses that needed to be tested by the filter are those where there was a sufficiently detailed account such that the probability could be calculated. If this rule were considered part of the filter, then Dembski’s position crumbles: the rule almost guarantees false positives. But if it is not part of the filter, why insist on it at all?

I have not been able to find the article in question recently, so I can’t verify that my memory is correct. As I remember it, it came in Dembski’s second response in the exchange.

Paul King: I have not been able to find the article in question recently, so I can’t verify that my memory is correct. As I remember it, it came in Dembski’s second response in the exchange.

Just google on Orr unfettered. And don’t sweat false positives or any of this filter business. Go back to Richard Wein’s post of the 13th. Also ignore probability statements from Dembski. All it amounts to is selectively drawing attention to the ‘improbability’ of particulars. If you take your finger out of the thimble, what is the probability that it will fill up with air? Nearly one. What is the probability of any one configuration of air molecules? Nearly zero. Depending on what ‘probability’ you want, just slice it finer: take more details into account.

Quiz: 500 million years ago, was a proboscis monkey probable? Was a diversity of animal species probable? 4.3 billion years ago, was the exact bacterial flagellum probable? Was it probable that microbes would evolve ways to move?

And now for the information you’ve all been waiting for: why are there really no false positives? Because if you think you have one, it means a mistake was made in application of the filter argument; you didn’t eliminate all natural explanations after all. The Filter works perfectly in hindsight.
