Following the publications by Ryan Nichols and Patrick Frank, we now have a paper by Elliott Sober explaining, in very accessible language, why Intelligent Design is scientifically vacuous.
Elliott Sober, What is wrong with Intelligent Design?, The Quarterly Review of Biology, Volume 82, No. 1, March 2007.
Abstract: This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.
Sober starts by pointing out that ID is often formulated as a very modest claim, which he calls mini-ID: the thesis that “complex adaptations that organisms display (e.g. the vertebrate eye) were crafted by an intelligent designer.”
Sober quickly addresses the claims of those ID proponents who argue that the intelligent designer is ‘supernatural’ or who deny common ancestry. Since the mini-ID thesis leaves out so many details, why do ID proponents spend so much time formulating and defending it? Sober points out that the US courts have forced ID proponents to formulate a religiously neutral statement. In addition, the mini-ID statement avoids contentious issues such as the age of the earth, providing a unified front to which all creationists can subscribe.
Although mini-ID is modest in what it asserts, ID proponents have high hopes for what it will achieve. According to the Discovery Institute’s “Wedge Strategy” (available here), which was leaked on the internet in 2001, “[d]esign theory promises to reverse the stifling dominance of the materialist worldview, and to replace it with a science consonant with Christian and theistic convictions.” The Discovery Institute is the flagship ID think tank, and the “Wedge Strategy” is its political manifesto. So much for questions about religious motivation and political context (Forrest and Gross 2004). What about the evidence?
What about the evidence…
Sober first looks at some common objections to ID and shows why they often fail. Take, for instance, the argument from sub-optimal design. By its very nature this approach suggests that ID is testable, and in addition it makes presumptions about the capabilities and intentions of the designer(s). The answer from ID proponents is simple and effective: how do we know these intentions? While Sober grants that this is a good reply, it also points us toward a much better way to critique ID.
If imperfect adaptations do not demonstrate that the mini-ID claim is false, perhaps the right criticism is that this statement cannot be tested. But what does testability mean?
Sober first mentions Popper, whose work on the concept of falsifiability is often cited as a way to demarcate science from non-science. But Popper’s falsifiability concept is too weak to reject the claims by creationists:
Popper’s account entails that some versions of creationism are falsifiable, and hence scientific. Consider, for example, the hypothesis that an omnipotent supernatural being wanted everything to be purple, and had this as his top priority. Of course, no creationist has advocated purple-ID. However, it is inconsistent with what we observe, so purple-ID is falsifiable (the fact that it postulates a supernatural being notwithstanding). The same can be said of other, more modest, versions of ID that do not say whether the designer is supernatural. For example, if mini-ID says that an intelligent designer created the vertebrate eye, then it is falsifiable; after all, it entails that vertebrates have eyes. An even more minimalistic formulation of ID is also falsifiable; the statement that organisms were created by an intelligent designer entails that there are organisms, which is something we observe to be true.
So if Popper’s falsification concept is too weak to reject creationist claims, what to do? Sober points out that “[i]n addition to entailing that many formulations of ID are falsifiable, Popper’s criterion also has the consequence that probability statements are unfalsifiable.”
In an attempt to address this problem, Popper tried to extend his falsification criterion by suggesting that we can regard a hypothesis H as falsified when an observation occurs that H says is very improbable. But how do we determine what is ‘too improbable’? Popper thought that the cut-off was “a matter of convention”.
Popper’s argument mirrors Fisher’s test of significance but fails to capture what testability is.
According to Fisher, if H says that an observation O is very improbable, and O occurs, then a disjunction is true—either H is false or something very improbable has occurred. The disjunction does follow, but it does not follow that H is false, nor does it follow that we should reject H. As many statisticians and philosophers of science have recognized (Hacking 1965; Edwards 1972; Royall 1997), perfectly plausible hypotheses often say that the observations have low probability. This is especially common when a probabilistic hypothesis addresses a large body of data. If we make a large number of observations, it may turn out that H confers on each observation a high probability, although H confers on the conjunction of observations a tiny probability. If Fisher’s test of significance fails to provide a criterion for when hypotheses should be rejected, it also fails to describe when a hypothesis is falsifiable. Perhaps Popper’s f-word should be dropped.
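The point that a perfectly plausible probabilistic hypothesis can assign tiny probability to a large body of data is easy to verify with a little arithmetic. The sketch below is my own illustration, not from Sober's paper, and the numbers (0.9 per observation, 200 observations) are invented: it shows a hypothesis that gives every individual observation high probability, yet gives the conjunction of the observations a probability below one in a billion.

```python
# Illustration: a hypothesis H may confer high probability on each
# observation yet tiny probability on their conjunction, so "the data
# are improbable under H" is, by itself, no ground for rejecting H.

p_each = 0.9   # probability H assigns to each single observation (assumed)
n_obs = 200    # number of independent observations (assumed)

# Under independence, P(all observations | H) is the product of the
# individual probabilities.
p_all = p_each ** n_obs

print(f"P(one observation | H) = {p_each}")
print(f"P(all {n_obs} observations | H) = {p_all:.2e}")  # about 7e-10
```

If we rejected every hypothesis that makes its total data improbable, any theory tested often enough would be rejected, which is exactly Sober's complaint against the Fisher/Popper criterion.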
Sober argues that testability requires a concept of testing and that testing should be comparative. “If ID is to be tested, it must be tested against one or more competing hypotheses”.
For example, if mini-ID says that an intelligent designer made the vertebrate eye, and this claim is to be tested against the claim that chance produced the vertebrate eye, we must discover how these two hypotheses disagree about what we should observe. Since both entail that vertebrates have eyes, the observation that this is true does not help. We need to find other predictions that mini-ID makes.
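The comparative idea can be made concrete with a toy likelihood comparison. This is my own sketch, not code from Sober's paper, and the probability values are invented for illustration; the underlying principle is the likelihoodist one Sober works with: an observation O tests H1 against H2 only when the two hypotheses confer different probabilities on O.

```python
def likelihood_ratio(p_o_given_h1: float, p_o_given_h2: float) -> float:
    """Law of likelihood: observation O favors H1 over H2
    exactly when P(O | H1) > P(O | H2), i.e., the ratio exceeds 1."""
    return p_o_given_h1 / p_o_given_h2

# Both mini-ID and the chance hypothesis entail that vertebrates have
# eyes, so that observation gets probability 1 from each and carries
# no evidential force: the ratio is 1.
print(likelihood_ratio(1.0, 1.0))    # prints 1.0

# Only an observation on which the hypotheses disagree can test them.
# These numbers are invented purely for illustration:
print(likelihood_ratio(0.5, 0.25))   # prints 2.0 -- O favors H1
```

The difficulty for mini-ID, as the next paragraphs explain, is that without auxiliary assumptions about the designer it assigns no definite probabilities to any observation at all, so no such comparison can be run.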
Duhem’s thesis states that theories on their own do not make testable predictions; rather, “auxiliary propositions” are needed for the theories to be tested. Duhem’s argument applies to mini-ID as well. The statement that an intelligent designer made the vertebrate eye has no observational consequences beyond the fact that vertebrates have eyes. Only by adding assumptions, for instance about the designer’s motives, can we supplement mini-ID with a testable proposal. But does this not put mini-ID on a par with ordinary scientific theories, which also require auxiliary propositions? The answer is a straightforward ‘no’, because these auxiliary propositions cannot simply be invented so as to fit the data. In other words, there needs to be independent evidence for the auxiliary propositions that are used, and this is where mini-ID fails. As Sober concludes, since “we have no independent evidence concerning which auxiliary propositions about the putative designer’s goals and abilities are true”, mini-ID remains vacuous. In fact, as Sober notes, various ID proponents have admitted as much; Johnson, for instance, refers to the designer’s motives as “mysterious” and “inscrutable”. While Sober does not say much about Dembski, it is clear that Dembski himself has admitted to much the same:
As for your example, I’m not going to take the bait. You’re asking me to play a game: “Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.” ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.
After all, finding fundamental discontinuities in what natural selection can explain confers no support on any rival theory.
Sober then addresses some of the objections raised by ID proponents, slowly unraveling what is left of the argument and showing how mini-ID remains scientifically vacuous.
For instance, it is trivial for ID proponents to move the instance of intelligent design further back in time. Sober uses an interesting analogy: the printing press, which can hardly be considered intelligent and yet delivers intelligently designed print-outs. So if a newspaper contains complex information, rather than holding the printing press responsible, ID proponents will argue that if you look further back in time you will find, along the causal chain, an intelligent being. It thus becomes trivially simple for ID proponents to keep moving the goal posts while maintaining that an intelligent designer is involved. Indeed, were scientists to push the existence of complex information back to the beginning of the universe, ID proponents could always claim that the designer exists outside space and time. This conclusion was also reached by people such as Wesley Elsberry, and it is strengthened by comments from ID proponents such as William Dembski. But if one can keep moving the goal posts, how can ID be falsifiable?
In addition, the proponents of ID who make this argument have lost sight of the role of observation in Popper’s concept of falsifiability. For a proposition to be falsifiable, it is not enough that it be inconsistent with a possible state of affairs; it must also be inconsistent with a possible observation. Granted, the ID position is inconsistent with the existence of complex information that never had an intelligent designer in its causal history. It is equally true that “all lightning bolts issue from the hand of Zeus” is inconsistent with there existing even one Zeus-less lightning bolt (Pennock 1999). These points fail to address how observations could refute either claim.
Sober then addresses some alternative routes by which ID proponents attempt to test their position, such as the concept of Irreducible Complexity. As Sober is quick to point out, however, the claim that evolution cannot explain ‘X’ does nothing for the position of ID. For ID to be testable it needs to make predictions of its own; that a rival theory fails to explain something does nothing for the veracity of ID’s position.
Sober thus reduces irreducible complexity to the following:
The most that can be claimed about irreducibly complex adaptations (though this would have to be scrutinized carefully) is that evolutionary theory says that they have low probability. However, that does not justify rejecting evolutionary theory or accepting ID. As noted earlier, many probabilistic theories have the property of saying that a body of observations has low probability. If we reject theories because they say that observations have low probability, all probabilistic theories will be banished from science once they are repeatedly tested.
Finally, Sober provides an insightful example of an irreducibly complex system: the four legs of a horse. Take away any one leg and the horse will be unable to walk or run; take away two legs and, again, the horse will be unable to walk or run. And yet, despite such ‘precursors’ to the four-legged horse being unable to run, four-legged horses do exist. How could such a situation have evolved? Simple: the horse did not evolve its legs one at a time. Rather, the development of the horse’s legs is controlled by a single set of genes, not four sets of genes, one for each leg. This leads to the conclusion that:
A division of a system into parts that entails that the system is irreducibly complex may or may not correspond to the historical sequence of trait configurations through which the lineage passed. This point is obvious with respect to the horse’s four legs, but needs to be borne in mind when other less familiar organic features are considered.
As I stated in the introduction, Sober is not the first to reach this conclusion. Ryan Nichols, “Gedanken” at ISCID, Wesley Elsberry, Mark Perakh, and many others have pointed out why ID fails to be of scientific relevance.
Sober ends with the following:
It is easy enough to construct a version of ID that accommodates a set of observations already known, but it also is easy to construct a version of ID that conflicts with what we have already observed. Neither undertaking results in substantive science, nor is there any point in constructing a version of ID that is so minimalistic that it fails to say much of anything about what we observe. In all its forms, ID fails to constitute a serious alternative to evolutionary theory.
This is probably the most damaging observation: it is just as easy to construct a version of ID that accommodates a set of observations as it is to construct one that conflicts with them, and there is no way to determine which version is correct.
While Sober repeats conclusions reached by others before him, his paper presents the arguments in a very readable format and, in addition, provides some very useful analogies and examples.
Michael Egnor Wrote:
You say that ‘several claims of IC have been falsified’. Translated into non-Darwinian English, you mean ‘Darwinists have no answer to the vast majority (many millions) of IC predictions’.
So what are some of these ‘predictions’?
More ID predictions? ID predicts that ‘junk DNA’ isn’t junk; it’s there for a purpose.
But that ‘prediction’ does not follow logically from ID.
Perhaps another one?
Irreducible complexity is an ID prediction.
Nope, not much better. So much for ID.
In fact, as pointed out by Chris Ho-Stuart, it is actually a prediction of evolutionary theory, made by Hermann Muller in 1918.
Let’s try again:
The astonishing cellular complexity revealed by molecular biology is an ID prediction.
Nope, not really a prediction of ID; more like a postdiction. I might just as well argue that ID predicts simplicity, since a designer would surely prefer it over needless complexity. If any ID proponent disagrees with me, perhaps we can compare the two auxiliary hypotheses to see which one follows more logically from ID.
Duhem on analogies
Since ID proponents are so fond of analogies, it is worth reminding the reader that while there is nothing wrong with analogies guiding research when formulating a theory, “an [analogy and the] analogue form no part either of the theory or any adjunct to it by which its value is assessed.”
D. H. Mellor, Models and Analogies in Science: Duhem versus Campbell?, Isis, Vol. 59, No. 3, 1968, pp. 282-290.
In other words, once an analogy, such as ‘an outboard motor’, has been identified, one still has to “generate a system of hypotheses and deducible consequences”.
The comparative nature of theories
As Larry Laudan points out
Let us drop the pretense, dear to the hearts of Bayesians and error statisticians, that our evaluations of hypotheses are absolute. Instead, let us say explicitly what scientific practice already forces [us] to acknowledge implicitly, viz., that the evaluation of a theory or hypothesis is relative to its extant rivals.
The problem with ID, however, is that it does not present a theory of its own, but rather attempts to find observations that refute (Darwinian) evolutionary theory. Since ID provides neither explanations nor predictions relevant to the ID thesis, and since ID is eliminative and thus cannot even compete with the null hypothesis of ‘we don’t know’, it is clear that we have to reject ID as being without any scientific content.
Ryan Nichols, Scientific content, testability, and the vacuity of Intelligent Design theory, The American Catholic Philosophical Quarterly, 2003, Vol. 77, No. 4, pp. 591-611.
Proponents of Intelligent Design theory seek to ground a scientific research program that appeals to teleology within the context of biological explanation. As such, Intelligent Design theory must contain principles to guide researchers. I argue for a disjunction: either Dembski’s ID theory lacks content, or it succumbs to the methodological problems associated with creation science-problems that Dembski explicitly attempts to avoid. The only concept of a designer permitted by Dembski’s Explanatory Filter is too weak to give the sorts of explanations which we are entitled to expect from those sciences, such as archeology, that use effect-to-cause reasoning. The new spin put upon ID theory-that it is best construed as a ‘metascientific hypothesis’-fails for roughly the same reason.
Patrick Frank, On the Assumption of Design, Theology and Science, Volume 2, No. 1, April 2004, pp. 109-130.
Abstract: The assumption of design of the universe is examined from a scientific perspective. The claims of William Dembski and of Michael Behe are unscientific because they are a-theoretic. The arguments from order and from utility are shown to be indeterminate, circular, to rest on psychological as opposed to factual certainty, or to be insupportable as regards humans but possibly not bacteria, respectively. The argument from the special intelligibility of the universe specifically to human science does not survive comparison with the capacities of other organisms. Finally, the argument from the unlikelihood of physical constants is vitiated by modern cosmogonic theory and recrudesces the God-of-the-gaps.