Intelligent Design explained: Part 1 Introduction

The following posting is based on a response I provided to Allen MacNeill on his excellent blogsite. In addition to much-needed checking of grammar and spelling, I have added content and revised the argument for clarity.

Avid readers of Pandasthumb may remember that Allen MacNeill is a Cornell professor who will be teaching an Intelligent Design course this summer. The course in question, BioEE 467/B&Soc 447/Hist 415/S&TS 447: Seminar in History of Biology, has its own blogsite. The first class will start June 27, 2006.

In the posting to which I responded, Allen shows the many problems in one of Salvador Cordova’s postings. Sal is an avid ID activist and defender of Dembski, and his postings can be ‘admired’ at Uncommon Descent. Sal stated that “There are many designed features in biology that make no sense in terms of natural selection but make complete sense in terms of design.”

As Allen shows, this is a deeply flawed statement. In my response I attempt to explain in straightforward terms why Intelligent Design’s approach is flawed and renders ID scientifically vacuous, or in other words, void of content.

PvM wrote:

Excellent points Allen. ID proponents are quick to claim that science uses ID’s approach to detect design, but on closer scrutiny these claims quickly fall apart.

ID is inherently a claim based on ignorance (elimination), and while it uses some ‘fancy sounding’ terms like complex specified information, it uses those terms in a manner that conflates ID’s terminology with how science uses such terminology.

ID starts with an unfounded assertion that design is that which remains once natural processes of regularity and chance have been eliminated. Or in other words, ‘design’ is the set-theoretic complement of chance and regularity (Del Ratzsch).

Let’s stop here and consider the following:

Why should we accept this when available empirical evidence and logic suggest that there is nothing necessarily supernatural about intelligence? In fact, intelligent behavior seems largely reducible to regularities and chance, as polling, profiling, advertising and many other arenas show. Intelligence, in other words, is predictable, and since intelligence has the ability to make choices among multiple options, a certain level of variation or uncertainty will be present.

Amazon, for instance, uses this predictability to recommend items of interest to its users based on their own past interests as well as the interests of those who have bought similar items.
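The point that such ‘intelligent’ behavior is predictable, and hence reducible to regularity plus chance, can be illustrated with a toy co-purchase recommender. This is a sketch of my own for illustration only; Amazon’s actual system is far more sophisticated and its details are not public:

```python
from collections import Counter

def recommend(user_history: set, all_baskets: list, top_n: int = 3) -> list:
    """Minimal 'customers who bought X also bought Y' sketch:
    score items by how often they co-occur in baskets that share
    at least one item with the user's own history."""
    scores = Counter()
    for basket in all_baskets:
        if basket & user_history:              # basket overlaps the user's history
            for item in basket - user_history:  # count items the user lacks
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]

baskets = [{"origin", "beak"}, {"origin", "voyage"}, {"origin", "voyage"}]
print(recommend({"origin"}, baskets))  # ['voyage', 'beak'] — 'voyage' co-bought twice
```

The regularity (shared purchasing patterns) does the predictive work; no appeal to anything beyond ordinary statistics is needed.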

And as Dembski himself admits, sciences such as criminology, archaeology, cryptography and SETI all rely on the design inference. Since Dembski also argues that science as it exists right now rejects the design inference a priori, it seems clear that Dembski’s design is different from the design detected by these sciences.

But let’s for the moment accept ID’s Explanatory Filter approach. How is the EF applied in biology? Well, through the concepts of specification and complex information. Specification is trivial in biology, as it refers to function, and information refers to the negative base-2 logarithm of the probability. Now we get into some interesting territory. Dembski argues that if something can be explained as a regularity, its probability becomes close to 1 and the information goes to 0. But the same then applies to intelligent design: if something can be explained as intelligently designed, the amount of information is zero.
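Dembski’s ‘information’ here is just Shannon surprisal, I = −log₂ P. A minimal sketch (the function name is mine) shows the behavior described above: probabilities near 1 carry almost no information, while tiny probabilities carry many bits:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content, in bits, of an outcome with probability p: -log2(p)."""
    return -math.log2(p)

# A regularity: probability near 1, information near 0 bits.
print(surprisal_bits(0.999))

# A highly improbable outcome: many bits of 'information'.
print(surprisal_bits(2 ** -100))  # exactly 100 bits
```

So whatever explanation drives an event’s probability toward 1, be it regularity or ‘design’, drives its information content toward 0, which is exactly the symmetry the paragraph above exploits.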

So that does not really work well. Perhaps, then, we can define the amount of information by the probability that the item arose by uniform chance? Under that scenario, something is ‘designed’ if it has a function and if its pure-chance probability is too low. But then we still do not know whether designed means ‘designed by regularity/chance’ or ‘designed by an intelligence’ (remember, I am for the moment accepting the distinction between the two, and I am showing how, even granting the distinction, the filter cannot tell the two explanations apart).

So how does the filter work? Well, it argues that if chance alone does not explain something, and if regularities cannot explain it (yet), then we have to accept ‘design’ as the default explanation. So ‘design’ includes anything from ‘intelligent designer’ to ‘an unknown regular process’. Once again ID fails to explain how to distinguish between actual and apparent design.

And now the best one: even if we accept ‘design’, Dembski has shown that this does not necessarily involve an intelligent designer. Confused?… I bet… Ryan Nichols points out that:

Nichols wrote:

“Before I proceed, however, I note that Dembski makes an important concession to his critics. He refuses to make the second assumption noted above. When the EF implies that certain systems are intelligently designed, Dembski does not think it follows that there is some intelligent designer or other. He says that, “even though in practice inferring design is the first step in identifying an intelligent agent, taken by itself design does not require that such an agent be posited. The notion of design that emerges from the design inference must not be confused with intelligent agency” (TDI, 227, my emphasis).

Source: Ryan Nichols, Scientific content, testability, and the vacuity of Intelligent Design theory, The American Catholic Philosophical Quarterly, 2003, vol. 77, no. 4, pp. 591–611
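To make the eliminative structure explicit, the filter described above can be caricatured in code. This is my own sketch, not Dembski’s formulation; the 10⁻¹⁵⁰ default is his ‘universal probability bound’. Note that ‘design’ appears only as the leftover category once regularity and chance are ruled out:

```python
def explanatory_filter(explained_by_regularity: bool,
                       chance_probability: float,
                       specified: bool,
                       bound: float = 1e-150) -> str:
    """Eliminative sketch of the Explanatory Filter: 'design' is never
    inferred from positive evidence, only from what remains."""
    if explained_by_regularity:
        return "regularity"
    if chance_probability >= bound:
        return "chance"
    if specified:
        # Nothing here distinguishes 'intelligent designer' from
        # 'an as-yet-unknown regular process'.
        return "design"
    return "chance"

# A specified event too improbable for chance, with no known regularity,
# drops through to the default 'design' verdict:
print(explanatory_filter(False, 1e-200, True))  # design
```

Any future discovery of a regularity flips the first branch and overturns the verdict, which is why the filter’s ‘design’ answer is always provisional.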

As Elsberry has shown, given Dembski’s logic, natural selection matches his definition of an intelligent designer. Once again we notice how ID fails to distinguish between apparent and actual design.

And since ID refuses to propose positive hypotheses, it is doomed to be unable to deal with the issue of apparent versus actual design in any scientifically relevant manner.

And that is why Intelligent Design is scientifically vacuous.

In future postings I will address various concepts related to the Intelligent Design thesis, discussing such topics as ‘Complex Specified Information’, the ‘Explanatory Filter’, the ‘argument from ignorance’, the concept and impact of ‘false positives’, the ‘law of conservation of information’, the ‘displacement theorem’ and various other topics, to show not only that the foundation of Intelligent Design is fundamentally flawed but also that ID’s claims are outright incorrect. Rather than rejecting ID a priori, I am willing to entertain the possibility of ID being scientifically relevant. But as I have shown and will show, ID, by virtue of its flawed foundation, is doomed to remain scientifically vacuous, and so far ID’s contributions to science, or rather the lack thereof, have validated that prediction. In the tradition of Laudan and pragmatic thinking, I intend to show that ID is ‘bad’ science.