Icons of ID: Reliability: Do we care?

No false positives? So what? All science is tentative, isn’t it?

Such seems to be the common response of ID proponents when ID critics show that the claims about the reliability of the Design Inference are unsupportable. Let’s first look again at the claims made by ID proponents as to the exact nature of the explanatory filter, the Design Inference, and its reliability. Once I have established the relevance of the reliability criterion, I will explore the recent objections and arguments from ID proponents. I will use claims and observations from many sources to show how the arguments from ID proponents fail to address these issues. In addition, I will show how ID critics have made a compelling argument that the theoretical foundation of ID is fundamentally flawed.

Let’s first explore some of the claims made about the reliability of the Design Inference to show what’s at stake.

1998: Rigorous criterion

Biologists worry about attributing something to design (here identified with creation) only to have it overturned later; this widespread and legitimate concern has prevented them from using intelligent design as a valid scientific explanation.

Though perhaps justified in the past, this worry is no longer tenable. There now exists a rigorous criterion–complexity-specification–for distinguishing intelligently caused objects from unintelligently caused ones.

Dembski in “Science and Design”

1999: Useless

On the other hand, if things end up in the net that are not designed, the criterion will be useless.

Dembski, William, 1999. Intelligent Design: The Bridge Between Science & Theology, p. 141


2002: ISCID

(William A. Dembski:) Briefly, the claim that specified complexity is a reliable marker of design means that if an item genuinely instantiates specified complexity, then it was designed. As I argue and continue to maintain, no counterexamples to this claim are known.

Compare this with

I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms. It means that some unknown mechanism might eventually pop up and overturn a given Design Inference.

William Dembski, “No Free Lunch”

No wonder that Del Ratzsch concluded that

So typically, patterns that are likely candidates for design are first identified as such by some unspecified (“mysterious”) means, then with the pattern in hand S picks out side information identified (by unspecified means) as relevant to the particular pattern, then sees whether the pattern in question is among the various patterns that could have been constructed from that side information. What this means, of course, is that Dembski’s design inference will not be particularly useful either in initial recognition or identification of design.

Del Ratzsch: “Nature, Design and Science”

So what’s the deal?

William Dembski continues to defend his original statement that the explanatory filter has no ‘false positives’, but at the same time he admits that reliability is at most a theoretical possibility and that in real life the explanatory filter faces the possibility of incorrectly inferring design.

On ISCID, Dembski started a thread, No False Positives and the Lust for Certainty, in November 2002. He argues that

Briefly, the claim that specified complexity is a reliable marker of design means that if an item genuinely instantiates specified complexity, then it was designed. As I argue and continue to maintain, no counterexamples to this claim are known.

Note the word genuinely. This tautological statement amounts to saying that the marker of design is reliable when it is reliable and unreliable when it is not; in other words, the design may be actual or merely apparent. So how do we differentiate between these two cases? Dembski certainly does not provide us with a way to do so. In fact, ever since Wesley Elsberry proposed his “algorithm room” to deal with CSI, Dembski has ignored Wesley’s objections.

Wesley on the Calvin reflector

But Dembski previously claimed that his “complexity-specification” was a completely reliable indicator of the action of an intelligent agent back in the “First Things” article. His “reiterations” post stance completely obviates that claim. If the determination of “actual CSI” or “apparent CSI” requires the evidence of what sort of process produced the object in question, then finding that an object itself has CSI is necessarily ambiguous and uninformative on the issue of whether it was produced by an intelligent agent or an unintelligent natural process.

In this posting under the heading “The Problem That Dembski Does Not Want To Address”, Wesley presents his argument:

Dembski says above that he is interested in the problem of the generation of specified complexity, not its reshuffling. But Dembski’s Design Inference treats this distinction as a “don’t care” condition. Dembski offers specified complexity and his Design Inference as a means of getting to a reliable indication of the action of an intelligent agent. The problem is that Dembski’s Design Inference does not distinguish between an event whose specified complexity is merely transformed prior existing specified complexity and an event whose specified complexity is (somehow that never gets discussed by Dembski) actual original specified complexity. Given two events which have the same complexity as measured by probabilistic analysis of each event having occurred by a chance process, Dembski’s Design Inference can find “design” as the relevant category. But if by the “cheating” process of transforming already existing information into new forms with the “appearance of specified complexity” natural processes and algorithms can construct counterfeit specified complexity, why does it matter that some other process can produce actual specified complexity? They cannot be distinguished after the fact even under Dembski’s uneven rules if one does not have in hand the actual causal story that goes with the event. But Dembski’s Design Inference was supposed to get us away from having to rely upon having those causal stories. We were supposed to be able to simply examine the properties of the produced event and declare it to be only explicable as an instance of “design”. Dembski’s response to the problem that evolutionary computation poses for his thesis shows that the Design Inference is incapable of doing this.

Now Dembski has to deal with the reality of false positives, but where he used to argue that false positives would render the approach useless, he now makes the following claim:

But how do we know that something is complex and specified? It’s here that we need to move from specified complexity as we can best assess it on the basis of available evidence to specified complexity as a fact about some item in nature. The issue, then, as I argue in the paper cited above, is not with reliability but with assertibility, namely, our epistemic justification for asserting that some item instantiates specified complexity. Such assertions can be wrong, so varying degrees of tentativeness attach to Design Inferences as to all scientific claims.

Dembski, however, ignores that scientific claims are not eliminative but rather positive claims, with hypotheses that can be rejected by new data. But that would require that intelligent design accept the “we don’t know” inference proposed by Wilkins and Elsberry in The advantages of theft over toil: the Design Inference and arguing from ignorance.

In other words, even accepting that Dembski’s original claim that “false positives would render the filter useless” was hasty, Dembski now has to explain why an unreliable eliminative filter should allow us to make a Design Inference rather than a “we don’t know” inference until a positive hypothesis can be presented. Dembski knows what is at stake: “I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms.” Dembski has moved from “all material mechanisms” to “all known material mechanisms”, making the Design Inference unreliable if not useless: a “God of the Gaps” theory, an argument from ignorance.
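To make the logic of this objection concrete, the following is a minimal, hypothetical sketch (in Python) of an eliminative filter that can only consult known mechanisms. The function names, the per-mechanism probability comparison, and the use of the universal probability bound as a single cutoff are simplifying assumptions made for illustration; this is not Dembski’s actual procedure. The point is only that once every mechanism on the list has been eliminated, the output cannot distinguish “design” from “a mechanism we have not yet thought of”.

    # A toy, hypothetical sketch (not Dembski's actual procedure): an
    # eliminative "explanatory filter" that can only consult *known* mechanisms.
    UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's oft-cited bound

    def eliminative_filter(event, known_mechanisms, is_specified):
        """Classify `event` by eliminating the explanations we happen to know.

        known_mechanisms: iterable of (name, probability_of_event) pairs,
            covering only mechanisms currently known to science.
        is_specified: predicate testing whether the event matches an
            independently given pattern (a "specification").
        """
        for name, probability in known_mechanisms:
            if probability > UNIVERSAL_PROBABILITY_BOUND:
                return f"explained by known mechanism: {name}"
        if not is_specified(event):
            return "chance"  # improbable under known mechanisms, but unspecified
        # Every *known* mechanism has been eliminated and the event is specified.
        # Nothing above rules out an unknown mechanism, so the verdict is ambiguous:
        return "design (or: we don't know)"

    # The same verdict comes back whether the true cause was a designer or an
    # as-yet-unknown natural mechanism, which is exactly the objection above.
    print(eliminative_filter("bacterial flagellum",
                             [("random assembly", 1e-300)],
                             is_specified=lambda e: True))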

So how can ID rectify these shortcomings?

I can envision various ways, but none seems too appealing.

Dembski proposes one possibility: “I also note that there can be cases where all material mechanisms (known and unknown) can be precluded decisively.” If all known and unknown mechanisms can be decisively eliminated, then a Design Inference may be warranted. But it is not self-evident that we can reliably argue that all known and unknown mechanisms have been eliminated.

Additionally, ID proponents could propose positive hypotheses of design that describe in detail how Design Inferences can be constrained. Or ID proponents could describe motive, means, and opportunity, similar to how criminology infers intelligent design.

Independent evidence and explanatory power need to work in tandem and for one to outpace the other typically leads to difficulties.

William Dembski, No Free Lunch, p. 91

So far we have shown that specified complexity is a reliable marker for “we don’t know” or “design”.

Del Ratzsch argues concisely:

“In the present case, however, it seems to me that design theories are going to have to produce some positive results which are not easily assimilable by reigning theories. And it seems to me that to date design has not achieved that.”

Objections

Dembski

Furthermore, the evolutionist has no burden of evidence. Instead, the burden of evidence is shifted entirely to the evolution skeptic

Dembski, “The logical underpinnings of design,” 2002, p. 23

Dembski is wrong. First of all, the evolutionist has the burden of presenting a positive hypothesis of how evolutionary pathways may explain a particular system. In addition, lacking such evidence, the evolutionist is also competing against the default hypothesis, which is “we don’t know”. Evolution skeptics and evolutionary scientists carry the same ‘burden’: to find alternative hypotheses that explain the evidence better, or to find data or observations that conclusively falsify the existing hypothesis.

Dembski may object that the burden of the design proponent is to eliminate all known and unknown mechanisms, but that is only because Dembski has proposed such an eliminative approach and has insisted that it has to be reliable, that is, produce “no false positives”, or it will be useless. In other words, it is not the evolutionist who insists on this, but rather the ID proponent’s own reliance on the Explanatory Filter that demands it. If this leads to the conclusion that “this is not how science is supposed to work”, then perhaps the ID proponent should revisit the Explanatory Filter rather than accuse evolutionists of setting unachievable goals. Evolutionists have merely pointed out the consequences of Dembski’s own arguments.

But the cost of admitting that the Explanatory Filter is useless, despite earlier statements, is quite high. The eliminative approach chosen by Dembski would have been a powerful method to infer design without the need for any positive evidence or positive hypotheses: just pure elimination. Such a Design Inference would have served its purpose of detecting a transcendent designer without the burden of evidence. And the temptation may have proven too strong, leading ID proponents to make claims that are clearly unsupportable.

Dembski complains that science limits itself by rejecting design explanations, excluding non-material mechanisms a priori. But let’s look at these claims in more detail. First, the claim that science rejects design explanations a priori: is this a sustainable claim? I argue that it is not; any design hypothesis can be compared to other hypotheses to determine which explanation best explains the known facts. That brings us to the next issue, namely that science only considers “material mechanisms”. This seems to be the crux of the matter: science cannot deal with “non-material mechanisms”, and thus it is unfair to a design hypothesis that invokes (or should I say “tries to avoid invoking”) non-material mechanisms.

So what are ‘non-material mechanisms’ if not a codeword for supernatural design? Is the argument thus not reduced to the simple observation that science cannot deal in supernatural explanations?

So either Dembski has to accept that science does not reject design a priori, or he has to claim that it does, in which case “design” merely refers to the “supernatural”.

Paul Nelson presents a thought experiment

Thought experiment: Let’s suppose that life was designed and brought into existence by a non-human intelligence. That’s what actually happened, and we want science to be able to discover (or infer) what actually happened. So, using biological evidence, we infer that life was intelligently designed.

Now, Wilkins and Elsberry would call this a “rarefied” Design Inference. According to their analysis, in science “no rarefied Design Inference is needed” (p. 722). Therefore science cannot infer certain possible states of affairs, despite the fact that (under our thought experiment), those states of affairs are not only possible, but actually the case (i.e., true).

I’d say there is something profoundly wrong with the Wilkins/Elsberry analysis. The thought experiment stipulates that an empirically possible conclusion is true (i.e., life was intelligently designed). The Wilkins/Elsberry analysis says, however, that this true conclusion cannot be known or inferred.

Paul, in other words, objects to the possibility that science may be unable to uncover evidence to support the Design Inference. Let’s consider the following two possibilities:

  1. Evidence supporting that life was intelligently designed remains hidden from science.

  2. Evidence supporting that life was intelligently designed does not remain hidden from science.

It’s the first instance to which Paul seems to be objecting. In other words, we know that life was designed but we cannot prove it. So why should we infer “we don’t know” if we know the “truth”? Here we get to the crux of the matter, namely that ID proponents are convinced that life has been ‘designed’ and that science should thus be able to discover it. But the reality is that the ID inference is eliminative and thus has to compete with the “we don’t know” explanation. Until ID proposes a hypothesis better than “we don’t know”, it cannot outperform that explanation: by virtue of its eliminative nature, and its inability to eliminate the “we don’t know” category, it cannot do better than the “we don’t know” hypothesis.

Wilkins responded to Nelson’s claim on ISCID

Anything that we know through science we know from empirical data. So a Design Inference has to be not only consonant with data, but licensed by the patterns that exist in the data. To be achievable, we need to understand (that is, have a model of) design and designers. Consider three kinds of designer: known ordinary (type A), unknown ordinary (type B) and unknown rarified (type C).

We already know how type A designers work - we are they. We can observe them, experiment with them, and they can express their intent. From this, we can develop (and in folk psychology we have in-built) a model of design. We can assess when a design goal has been successfully met or when it fails. We can compare two designs and tell which is better according to the stated, and sometimes the unstated, goals and criteria for success. Moreover, we know the laws of physics and materials that apply to designs, and so we know what may have been intended when the intent and criteria are hidden to us (in the case of ancient artifacts). Notice, though, that the less culturally like us human designers are, the less obvious the design intent is. With a lot of stone tools from the paleolithic, we can only guess what they were for. And that is with humans, whose ways are as our ways.

What of designers whose ways are not our ways? What of type B designers? I’m not talking about minimalist architects here, although they are about as alien as one can be with a human anatomy, I’m talking about non-human designers. What do we know about their designs? This is really a question of what we would be able to recognise as design. For terrestrial organisms we are more or less related. We can recognise design among mammals, and to a lesser extent birds, although the less like us the more the tendency to anthropomorphise (the biological equivalent of eisegetical interpretation). Still, if a crow in Japan puts a nut under the tyres of a car stopped at a stop light, and returns to gather the contents after it is cracked, then that is design; because we know that crows eat, that they have problem solving skills, that shells are hard, and that cars are heavy. In short, it is how we would do it, if we were in their place.

But now what about something that is very unlike us? What about type B designers that are discovered through SETI, to use one of Dembski’s favourite examples (although I shall not make any inferences based on the novel or film Contact, as it is fiction, and philosophers are altogether too fond of using Gedankenexperiments as if they proved anything)? Well, we still know the properties of the physical world, and we might expect that they would use the same maths as we do (although if the Wignerian explanation of why maths is useful is correct, there is no guarantee of this - they may evolve a maths quite unlike anything we can envisage, because they interact with the physical world very differently to us; still, I would expect it to be something we could describe in some ideal mathematics). We might therefore be able to say of a certain signal that it is designed because the physical world only delivers such signals when there are designers in play. Exactly how this might work I am not sure; and neither is anyone else. Certainly not Jodie Foster, for all her undoubted virtues. So much for SETI - it tells us nothing until we actually have cases in hand, really, other than how our expectations might work. But again we can say of some fictitious case that that is how we would do it, if we were in their place (because, with the fictitious example, we are - it is only in our imaginations).

Now, here’s the Big One: what about a type C designer? This kind of designer is not bound by our logics, nor by our physics, nor by our cognitive propensities. Any pattern of data generated by the agency of a type C designer is either going to be indistinguishable from the patterns generated by ordinary designers, and hence something we will interpret as an ordinary (lawlike) process, or it will be indistinguishable from a random process - in effect we will not have any reason to gather into a single class all the measurements that are due to this supernatural intervention (let us be frank about this - if we are talking about pansperming aliens, they fall under type B). How could we do this without begging the question?

and

But there is another possible designer - the type D designer; this is the unknowable rarified designer. This designer cannot be established in terms of empirical data for the reasons the type C designer cannot be - we have no inductive patterns that lead us to make that inference more likely over “we do not know”, as Wesley and I argued. But the type D designer doesn’t need this because this designer is not knowable a posteriori. This designer is the designer of revelation, and revelation is what makes some patterns of empirical data significant; it is what puts the outcomes of supernatural agency into a single class. If the designer you seek is one that reveals itself in non-empirical ways (i.e., outside the purview of any science as such) then I can have no possible objection; and if that revelation is enough for me, then I will accept that inference. But I shall not, now or then, call it science or think that the empirical world is what licenses that inference.

Paul Nelson quotes Kitcher

Even postulating an unobserved Creator need be no more unscientific than postulating unobserved particles. What matters is the character of the proposals and the ways in which they are articulated and defended. (P. Kitcher, Abusing Science, 1982, p. 126)

Wesley Elsberry enters the discussion

I’m not sure that John is disagreeing with Kitcher. Kitcher is talking about postulates, things that are assumed to be true for some line of inquiry. Rarefied design as an inference, though, is something that some people assert can be concluded from particular premises.

The problem with a postulate of the sort that Kitcher discusses, though, is that someone like Paul Nelson will come along and claim that what is being argued is theology and not science (as your 1997 NTSE talk set forth).

If “postulating an unobserved Creator” were as generally productive as “postulating unobserved particles” has been in physics, I don’t think that we would be having this sort of discussion now. Postulating unobserved particles has led to specific hypotheses and experiments aimed at producing empirical data which would bear on whether outcomes based on the existence of those heretofore unobserved particles are actually there. So far in ID, though, there is no similar push to test the postulate: once the unobserved Creator is postulated, no evidence concerning whether that Creator exists is sought after or solicited.

Wilkins describes the argument well when he states

I do not think an unobserved designer is an illicit inference. An unobserved and unobservable designer - that is, one about whom we have no independent information and are not likely to - is.

The Christian presumption of design

Paul Nelson’s comments about “rarefied design” led me to speculate why Christians may insist not only that intelligent design exists but also that it can be detected in nature.

The apostle Paul argued that the natural world is evidence of God’s existence:

Romans 1:20: “For since the creation of the world God’s invisible qualities - his eternal power and divine nature - have been clearly seen, being understood from what has been made, so that men are without excuse”

Quoted from Evolution and Intelligent Design: Implications for Integration of Faith and Learning by Jerry E. Deese

From a Christian perspective one may argue that there is an expectation that evidence of intelligent design can be found in nature. Thus any argument that such an expectation is metaphysical and can reasonably be doubted will have to be opposed. But this is not a question that science should be expected to resolve.

In 1999 Dembski argued that:

Does nature exhibit actual specified complexity? The jury is still out.

Dembski, as quoted by Wesley; see also Explaining Specified Complexity

Is the jury still out? Or is the verdict being appealed?