False positives and the design inference


C9: Misuse of an inductive argument by the assertion of no false positives. See also CI111.1: Specified complexity is a reliable criterion for detecting design. From the Index to Creationist Claims by Mark Isaak.

Hat tip to Wesley Elsberry

Dembski Wrote:

Biologists worry about attributing something to design (here identified with creation) only to have it overturned later; this widespread and legitimate concern has prevented them from using intelligent design as a valid scientific explanation.

Though perhaps justified in the past, this worry is no longer tenable. There now exists a rigorous criterion—complexity-specification—for distinguishing intelligently caused objects from unintelligently caused ones.

William Dembski, “Science and Design,” First Things 86 (October 1998): 21-27.

Compare this with

Dembski Wrote:

I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms. It means that some unknown mechanism might eventually pop up and overturn a given design inference.

No False Positives and the Lust for Certainty

These two claims seem to be contradictory, so which one should we take as relevant? The one which claims that there exists a rigorous mathematical criterion which reliably (no false positives) detects design? Or the one which admits the possibility of false positives, rendering the reliability of this ‘rigorous concept’ invalid and making the eliminative filter useless?

Dembski Wrote:

“On the other hand, if things end up in the net that are not designed, the criterion will be useless.”

Dembski, William, 1999. Intelligent Design: The Bridge Between Science & Theology, p. 141.

35 Comments

Paul Nelson lies, Dembski contradicts himself again…SSDD.

Note that Dembski’s contradictions have long been known. Since 1) ID activists seem to be largely unfamiliar with these contradictions and weaknesses of ID and 2) these contradictions have yet to be resolved by ID, I find it useful to remind our readers.

Dembski Wrote:

“On the other hand, if things end up in the net that are not designed, the criterion will be useless.”

One thing that Dembski has going for him is the ill-defined nature of the net. Since nobody can say for sure whether something is in or out of the net, it’s hard to pin down a false positive.

Is there anything that Dembski claims is definitely in the net? Bacterial flagellum? Nope. He says that the flagellum work is only preliminary. Caputo case? Nope. Not enough bits of CSI. Has Dembski ever applied the full EF to anything? Nope, and neither has anyone else, as far as I know. The contents of the net are a big unknown.

Uhhh… This is too mind-numbing to read any further. I need to keep what little logical organization my mind still possesses…

The Alligator->Caution track on the Grateful Dead’s ‘Anthem of the Sun’ LP is so fantastic and wonderful that God must have designed it.

By the way, have you looked at your hand?

I mean, really looked at it?

Wow.

The Isaac Newton of information theory.

Isaac Newton? I thought he was the Fig Newton on account of the general fruitiness of this stuff.

I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms. It means that some unknown mechanism might eventually pop up and overturn a given design inference.

For example, bacteria with their “fantastic” flagella might not have been designed by a mysterious alien being.

They may have been sneezed or even pooped out by a mysterious alien being.

Hey, this is fun. I can see why Dembski likens the promotion of “intelligent design” to “street theatre.” You don’t need a big brain or any training. You just need to be shameless and modestly creative.

It’s too bad the fundies suck at the latter. Otherwise we might actually have to work a little bit to demonstrate how vacuous and vapid their ID garbage really is.

I argue that we are justified asserting specified complexity (and therefore design) once we have eliminated all known material mechanisms. It means that some unknown mechanism might eventually pop up and overturn a given design inference.

I.e., “god of the gaps”.

As I’ve always said, ID simply has nothing new to offer. (shrug)

Dembski changed his tune

Contrary to Wilkins and Elsberry, the risk of further knowledge upsetting a design inference has nothing to do with the filter’s reliability. The filter’s reliability refers to its accuracy in detecting design provided we have accurately assessed the probabilities in question (see chapter 12). Wilkins and Elsberry purport to criticize the filter’s reliability but are in fact criticizing its applicability (see chapter 14). They’re like someone who dismisses a calculator as unreliable after a friend, seeking to know what “9 times 9” is, gets the wrong answer by accidentally punching “6 times 6.” If that person were dead-set on dismissing the calculator’s usefulness but were pressed to admit that the friend had made the error, then the calculator hater might insist that nobody can be trusted to use the calculator accurately. That’s essentially what Wilkins and Elsberry have done with the Explanatory Filter.

Link

Still not the level of rigor that Dembski initially suggested. So now the argument is that IFF the probabilities have been correctly applied and IFF all known and unknown probabilities have been taken into consideration, the conclusion of ‘design’ is free from false positives. Otherwise, the conclusion of ‘design’ is of unknown accuracy and cannot even compete with ‘we don’t know’, since it lacks any independent hypothesis of how the system arose. In other words, while the design inference was initially portrayed as reliable, it is only reliable if it is used reliably, and we have no way to determine whether the filter has been used reliably. The criterion, it seems, is quite useless.

From Dembski, William, 1999. Intelligent Design: The Bridge Between Science & Theology, p. 141.

Only things that are designed had better end up in the net. If this is the case, we can have confidence that whatever the complexity-specification criterion attributes to design is indeed designed.

Since Dembski has no way of determining the reliability of the criterion with respect to false positives, he cannot rule them out, and by his own standard the criterion becomes quite useless.

Thus whenever this criterion attributes design, it does so correctly

Dembski claims that this is because of the inductive argument that whenever the criterion is applied it is successful (although no rigorous examples have been shown). In other words, if you don’t look too closely for false positives (and many candidate examples exist and have been proposed), and if you overlook the lack of rigorous examples supporting Dembski’s claims, then yes, one may indeed get the impression that the criterion is ‘rigorous’ and ‘reliable’. However, once the veil of reliability is lifted, nothing much remains.

And there ends the reliability of the explanatory filter argument. Is it not time for ID activists to accept the simple fact that the claim has failed to live up to its reputation, whether or not the reputation was overhyped?

No wonder ID has remained scientifically infertile and seems to have little hope of ever escaping its fate.

So perhaps it is time for ID activists to clearly admit that

1. The design inference is tautologically reliable: when it is applied correctly, it works.
2. The design inference is practically unreliable, as we have no way to determine whether the inference has been correctly applied.
3. As such, the design inference does not live up to its intended purpose, and certainly not to its hype of being a reliable and robust criterion. While in ‘theory’ it may be a perfect criterion, it lacks applicability.
4. Lack of applicability explains why ID has remained scientifically vacuous.

All this can be remedied by explaining how a particular system which is claimed to have been designed actually arose. Only by providing the pathway and probabilities can we determine whether the ‘design’ explanation can compete with the ‘we don’t know’ explanation or other hypotheses.

In other words, design always remains a possibility, but it lacks the supporting evidence needed to determine how likely it is.

It’s back to the drawing board, it seems.

When mathematicians arrive at a valid proof, history suggests that the other mathematicians will eventually go along even if the result is very distasteful to them. Gödel’s initial work, for example, was hardly welcome to the Vienna positivists with whom he hung at the time; but they ended up accepting it because they couldn’t find any holes in the logic.

If Dembski’s vaporings were valid, the math world would have long since lionized him. They haven’t. Dembski apparently understands that he hasn’t succeeded as a mathematician. Presumably that’s why he’s now set up shop as a theologian.

The irony I find in all of this is that coming up with a framework for a “design inference” is, in principle, fairly interesting. It appears that the “design inference” of the kind we are familiar with is simply pattern matching against things that we expect a human to generate. But, in principle, a human need not generate such patterns; an alien could do the same, and we would still make the inference.

These issues could be more formally worked out by Dembski himself. I am waiting for someone else to take these ideas up, do it correctly, and get something out of it.

Sadly, one can barely even discuss these things on his blog.

On another note, in:

— C12 [Erik]: I have not checked all the relevant publications, but to the best of my knowledge at most one person has been able to apply Dembski’s concepts and methods to a real example, namely Dembski himself. —

too much credit is given to Dembski here. He doesn’t even apply the EF correctly for the flagellum because he doesn’t calculate the probability of the flagellum conditioning on things that we know. He does this weird P(orig), P(local), P(config) calculation that, in no way whatsoever, comes close to the proper probability calculation.

So, C12 should be corrected. No one has ever been able to apply the concepts and methods.
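As a toy illustration of why the conditioning matters (the target phrase, alphabet, and per-position retry scheme below are made up for illustration; this is just Dawkins’ familiar “weasel” point, not anything from Dembski’s calculation), compare the probability of hitting a fixed 20-character target by one-shot random typing with the behavior of a cumulative process that keeps matching characters: the number you get depends entirely on which process you condition on.

```python
import random

random.seed(1)
TARGET = "METHINKSITISAWEASEL!"            # arbitrary 20-character target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!"   # 27 symbols

# One-shot "pure chance" probability of typing the target in a single attempt.
p_one_shot = (1 / len(ALPHABET)) ** len(TARGET)
print(f"one-shot probability: {p_one_shot:.2e}")   # roughly 1e-29

# Cumulative process: each generation, keep positions that already match and
# re-randomize only the ones that don't. Count generations until the target
# is reached (this terminates quickly, typically within ~100 generations).
current = [random.choice(ALPHABET) for _ in TARGET]
generations = 0
while "".join(current) != TARGET:
    current = [c if c == t else random.choice(ALPHABET)
               for c, t in zip(current, TARGET)]
    generations += 1
print(f"cumulative process hit the target after {generations} generations")
```

The comment’s point is that an EF-style calculation which only ever computes the one-shot number, and never the probability conditioned on known processes, is not computing the relevant quantity.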

If Dembski’s vaporings were valid, the math world would have long since lionized him. They haven’t.

The Information Theory world would have, too. They haven’t. Go to the IEEE Information Theory Society website. Search for Dembski. You get nothing.

Isaac Newton of information theory? - more like the Isaac Newton of the bleeding obvious.

Steve S said:

The Information Theory world would have, too [lionized the Kevin Federline of disinformation anti-science theory]. They haven’t. Go to the IEEE Information Theory Society website. Search for Dembski. You get nothing.

That’s easy: WAD doesn’t do ‘Information’, just Noise. What with his infinity wave (lambda/2 => 15 billion light years with equal probability of being 0 or 1 for the existence of the nonexistent) and flubber clown boots, he’s still grappling with the pickle matrix. Goyvinglaving.

Perhaps his audience Cletus and Brandine Spuckler and their 26 children can help him out.

PvM Wrote:

So now the argument is that IFF the probabilities have been correctly applied and IFF all known and unknown probabilities have been taken into consideration, the conclusion of ‘design’ is free from false positives.

The only person capable of getting everything right in such a calculation would be an omnipotent Intelligent Designer. IOW, Dembski has provided a tool whose only use is for God to prove that (s)he exists.

Is the Babel Fish irreducibly complex?

Bob

Bob O'H Wrote:

The only person capable of getting everything right in such a calculation would be an omnipotent Intelligent Designer. IOW, Dembski has provided a tool whose only use is for God to prove that (s)he exists.

Ironically, in addition to failing if we don’t know enough, the EF also fails if we know too much. (“Essentially, the reason the Generic Chance Elimination Algorithm works is because we do not know enough.” - No Free Lunch, p. 75) So it appears that the EF doesn’t work for anyone – not even God.

“Thus whenever this criterion attributes design, it does so correctly” Arrrgggh! As a geneticist it simply baffles me how someone can think that they’ve cleverly and properly designed a screening procedure when they’ve never tested it to SHOW that it is reliable and/or robust. If I walked into Monsanto or something and told them, look, I’ve got this great screen that is going to catch you many varieties of crops that will be worth a lot of $$$ for you, so you should hire me, they would ask how I know that it works. References? Tests? Show how you came up with the procedure? What is the probability of false negatives, false positives? Without any or all of these, they would chuck a tomato at me and tell me to get lost, no matter how many differential equations I piled up.

Look! As long as I think I’ve put it together correctly and as long as none of my untested assumptions are incorrect, I DON’T NEED TO DO SCIENCE! GIVE ME MONEY!

Show that it is reliable, Bill, don’t just wave your hand and say that it is.

I am waiting for someone else to take these ideas up, do it correctly, and get something out of it.

It’s been done. It is called the “universal distribution”. Jeff Shallit and I applied it to the idea of detecting a simple computational process as a causal factor in an appendix to this essay. It is my contention that everything useful that is claimed as a property of Dembski’s “design inference” is actually delivered by “specified anti-information” (SAI). SAI is easy to apply to problems, it is objective, and it is not based on any probability estimates.

I spoke with an information theorist/mathematician friend of mine who made the following counterargument to Dembski:

All known information describing physical phenomena is encoded in physical matter. As a corollary, no known code exists independently of a substratum to encode the information. Even our notion of “intelligent” beings depends on a mind that operates on a physical substrate that holds such information. Suppose we grant Dembski’s assertion that any amount of information can only be generated by a source that has more information (i.e. his conservation theorem). Then it follows that there is necessarily a physical substrate greater than the known universe that contains the information content of this universe. This is Dembski’s argument turned inductively to argue for naturalism contra theism.

[The Isaac Newton of information theory.]

The real Isaac Newton must be spinning in his grave, thinking about what mistakes he must have made to be compared to someone as dumb as D_mbski.

It seems one can add the vacuous nature of the EF to the list of vacuous ideas, alongside ID and CSI. As for the other two ideas, it looks like they become vacuous through the usual intentional failure to provide mechanisms.

The idea of ‘ID done right’ in the essay is nice. It seems the vacuous nature of CSI is replaced with real specifications in SAI. And real results, but they aren’t the ones Dembski is looking for.

Apart from Elsberry’s and Shallit’s SAI one can also, at least naively, ponder the methods of criminology and archeology in the world of intentional actions.

I think criminology looks for “motive, means, opportunity” when analysing intentional actions. Applied to ID it seems they have some problems.

The ID designer lacks motive - ID doesn’t supply one. The ID designer lacks means - ID doesn’t supply mechanisms. The ID designer lacks opportunity - evolution has no alibi and was around right after abiogenesis, so it seems it is the closest suspect. But how can this be, when ID claims its methods are supported by criminological and archeological methods?! :-)

“Isaac Newton? I thought he was the Fig Newton on account of the general fruitiness of this stuff.”

He is the Moriarty of disinformation theory.

Sherlock Holmes noted “… when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” The anti-Sherlock says “… when you have eliminated the improbable, whatever remains, however impossible, must be the truth.” But it is really impossible, so the ‘improbable’ is the truth.

Sherlock Holmes also said “They say that genius is an infinite capacity for taking pains. It’s a very bad definition, but it does apply to detective work.” It seems it also applies to the detective works of PvM et al when uncovering the path of ID on its Trail of Tears.

You’re missing the point. This is a wonderfully rigorous scientific proof of the well known fact (unless you’re an imbecile) that if something looks designed then it IS designed … if something LOOKS designed, then you apply the criterion and it always turns out to BE designed. always. so no false negatives. And only a MORON (or troublemaker) would apply the criterion to something which DOESN’T look designed, so when used PROPERLY there are no false positives either. sheesh.

Wesley Elsberry Wrote:

It’s been done. It is called the “universal distribution”. Jeff Shallit and I applied it to the idea of detecting a simple computational process as a causal factor in an appendix to this essay. It is my contention that everything useful that is claimed as a property of Dembski’s “design inference” is actually delivered by “specified anti-information” (SAI). SAI is easy to apply to problems, it is objective, and it is not based on any probability estimates.

I agree with Wesley 100%. If you look at Dembski’s various definitions and examples of specified information, the only thing that they all have in common is regularity, or in other words, algorithmic compressibility. Since SAI is a measure of compressibility, it objectively quantifies Dembski’s notion of specificity.

Algorithmic information theory has been around much longer than Dembski’s ideas have, so he’s reinventing the wheel, and doing a poor job of it. Even worse, he’s calling his wheel a wing, and claiming that it will make your car fly. And he continues to make that claim even though his own flubbermobile has never left the ground.
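To make the compressibility point concrete, here is a minimal sketch (not the Elsberry/Shallit SAI formalism itself, just an illustration of the underlying idea) that uses off-the-shelf zlib compression as a crude stand-in for algorithmic compressibility: a highly regular, “specified”-looking string compresses almost completely, while random noise does not compress at all.

```python
import secrets
import zlib

def compressibility(data: bytes) -> float:
    """Crude proxy for algorithmic compressibility: the fraction by which
    zlib (level 9) shortens the data. True Kolmogorov complexity is
    uncomputable, so a real compressor only gives an upper bound."""
    return 1.0 - len(zlib.compress(data, 9)) / len(data)

regular = b"HEADSTAILS" * 1000           # obvious repeating pattern, 10,000 bytes
random_ = secrets.token_bytes(10_000)    # essentially incompressible noise

print(f"regular: {compressibility(regular):.3f}")   # close to 1.0
print(f"random : {compressibility(random_):.3f}")   # near 0.0 (slightly negative is
                                                     # possible due to format overhead)
```

This mirrors the comment’s point: regularity, not raw improbability, is the common thread in Dembski’s examples of “specification,” and regularity is exactly what a compressor measures.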

If I walked into Monsanto or something and told them, look, I’ve got this great screen that is going to catch you many varieties of crops that will be worth a lot of $$$ for you, so you should hire me, they would ask how I know that it works. References? Tests? Show how you came up with the procedure? What is the probability of false negatives, false positives? Without any or all of these, they would chuck a tomato at me and tell me to get lost, no matter how many differential equations I piled up.

I once saw a presentation by an inventor who had come up with this marvelous new circuit, and his proof that it worked was based solely on results from an industry-standard computer simulation. Since a computer simulation is by definition an approximation, and can only predict failure rather than success, it appeared that what he had actually done was to push the simulation beyond its applicable limits. (For any EEs out there, it was a metastable-immune flip-flop, proved by SPICE simulation.) Without working silicon, I doubt anyone bought it.

In Dembski’s case, it would be as if he were trying to sell both the circuit and the simulator that proves it works.

Dembski Wrote:

There now exists a rigorous criterion—complexity-specification—for distinguishing intelligently caused objects from unintelligently caused ones.

The above just struck me for what apparently is Dembski’s egregious mistaking of “mathematical exactness” for “rigorous”. He evidently does not know that “rigorous” refers to proper empirical testing, and not to ad hoc limitations imposed upon the data. IOW, it remains for his own criterion to be tested for it to have any traction in science, for it to have any sort of rigor at all.

The other quotes indicate that he has some notion of this fact, but of course he wants to use metaphysical categories–chance, necessity, and design (actually, he has to add in that one, since ancient metaphysical thought did accept that what could be designed could also happen by chance)–to eliminate the problem that his criterion has not tested positively. In fact, it has tested negatively, in that supposedly CSI objects show evidence of having evolved.

Let’s note that Dembski even recognizes that problem. It is for that reason that he denied that flagellum homologies are much in evidence:

http://www.pandasthumb.org/archives[…]in_id_d.html

I don’t even know what Dembski could mean by saying that “only 10” proteins in the flagellum are homologous, since that is a suggestive number by itself, and there is no guarantee that homologies will always be recognizable (that he’s wrong on that number is only icing on the cake). More importantly, he was trying, erroneously, to allay a threat to his “rigorous criterion”, for he tacitly recognizes that one may, under the right circumstances, test to see if his “criterion” is justified. Of course, when it is shown that it is not justified, he simply claims that it is anyhow.

The trouble is that he knows nothing about science, confusing “rigorous” with “precisely but arbitrarily mathematically defined”. Thus mere speculation is to be redefined as “rigor”, so that the spectacular failure of any organisms, organs, or organic systems to reveal design (at least not when closely analyzed sans preconceptions) can be ignored, at least by his ilk.

One does not know if Dembski is being dishonest or obtuse at any one time. I think this shows the usual pattern in creationists/IDists–ignorance and dishonesty mingle irreducibly within their minds.

Glen D http://tinyurl.com/b8ykm

The Isaac Newton of information theory.

That’s right; as an information theorist, he’s a pretty good alchemist.

Does all this mean that Dembski is Dumbski? He can’t be - he’s Dembski. Did the bumski steal your whiskey? Wish I had that surname. Whoever he is, bless him, he must be onto something. Someone might tell us sometime what all this is about?

Re “metaphysical categories — chance, necessity, and design”

I can’t figure why he seems to think that deliberately engineered is mutually exclusive with chance and/or necessity.

Not to mention the same point that somebody else made recently (iirc) - that chance and necessity aren’t exclusive of each other; they describe regions along a sliding scale (necessity would be at or near the 100% end of that scale, the rest of it would be “chance”, but the boundary between them is fuzzy).

Henry

Please allow two honest questions from a novice that no one has answered to my satisfaction. 1)What criterion do SETI researchers propose to use to determine if there is other intelligent life in space? 2)How is this fundamentally different from what ID proponents propose?

Please allow two honest questions from a novice that no one has answered to my satisfaction. 1)What criterion do SETI researchers propose to use to determine if there is other intelligent life in space? 2)How is this fundamentally different from what ID proponents propose?

I can answer both questions with a single link.

In fact, the signals actually sought by today’s SETI searches are not complex, as the ID advocates assume. We’re not looking for intricately coded messages, mathematical series, or even the aliens’ version of “I Love Lucy.” Our instruments are largely insensitive to the modulation—or message—that might be conveyed by an extraterrestrial broadcast. A SETI radio signal of the type we could actually find would be a persistent, narrow-band whistle. Such a simple phenomenon appears to lack just about any degree of structure, although if it originates on a planet, we should see periodic Doppler effects as the world bearing the transmitter rotates and orbits.

And yet we still advertise that, were we to find such a signal, we could reasonably conclude that there was intelligence behind it. It sounds as if this strengthens the argument made by the ID proponents. Our sought-after signal is hardly complex, and yet we’re still going to say that we’ve found extraterrestrials. If we can get away with that, why can’t they?

Well, it’s because the credibility of the evidence is not predicated on its complexity. If SETI were to announce that we’re not alone because it had detected a signal, it would be on the basis of artificiality. An endless, sinusoidal signal — a dead simple tone — is not complex; it’s artificial. Such a tone just doesn’t seem to be generated by natural astrophysical processes. In addition, and unlike other radio emissions produced by the cosmos, such a signal is devoid of the appendages and inefficiencies nature always seems to add — for example, DNA’s junk and redundancy.

The gist of it is that SETI does NOT use the same criteria as IDists, despite IDists’ claims to the contrary. This is similar to their false appeals to the sciences of archeology and forensic criminal investigations. IDists claim that complexity is the key to detecting design, while SETI relies on simplicity, the sort of simplicity that in certain contexts can most reasonably be attributed to intentional, artificial means. This is not what IDists claim to see in cells and whatnot.
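For illustration only (this is not any particular SETI pipeline, and the sample rate, tone frequency, and amplitude below are arbitrary made-up values), here is a short sketch of why a “dead simple” persistent tone is so easy to flag: a narrow-band carrier concentrates its energy in a single frequency bin of a power spectrum, so even a faint tone stands far above the wideband noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000                 # sample rate in Hz (arbitrary for the illustration)
n = 1 << 20                    # about one second of samples
t = np.arange(n) / fs

# Wideband noise plus a faint, persistent narrow-band tone at 123.4 kHz.
noise = rng.normal(size=n)
tone = 0.05 * np.sin(2 * np.pi * 123_400.0 * t)
x = noise + tone

# Power spectrum: the tone's energy piles up in one bin, the noise spreads out.
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = int(np.argmax(power))

print(f"peak at {freqs[peak] / 1e3:.1f} kHz, "
      f"about {power[peak] / np.median(power):.0f}x the median bin power")
```

The detection criterion here is simplicity plus artificiality (a spectrally pure, persistent whistle, at most slowly drifting with Doppler), not the complex specified information that ID proponents claim SETI relies on.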

Please allow two honest questions from a novice that no one has answered to my satisfaction. 1)What criterion do SETI researchers propose to use to determine if there is other intelligent life in space? 2)How is this fundamentally different from what ID proponents propose?

Soon I will address this in more detail, but the simple answer is that SETI looks for a relatively simple narrow-band signal.

In fact, the signals actually sought by today’s SETI searches are not complex, as the ID advocates assume. We’re not looking for intricately coded messages, mathematical series, or even the aliens’ version of “I Love Lucy.” Our instruments are largely insensitive to the modulation—or message—that might be conveyed by an extraterrestrial broadcast. A SETI radio signal of the type we could actually find would be a persistent, narrow-band whistle. Such a simple phenomenon appears to lack just about any degree of structure, although if it originates on a planet, we should see periodic Doppler effects as the world bearing the transmitter rotates and orbits.

Link

In other words, SETI presumes that intelligent life would have found communication solutions similar to ours, and uses this to ‘contact’ other intelligent life. Motives, in other words… SETI assumes that such intelligent life would use signals analogous to the ones we use for our own communication.

Pennock on SETI analogy

As I said, Skeptic had a good article on SETI and ID, and I will be posting on it soon.

Dembski is quick to assert that the explanatory filter (ID’s approach) is how science deals with design inferences, but when one looks at the specifics (and Dembski gives few himself), it quickly becomes clear that science does not use the explanatory filter.

How is this fundamentally different from what ID proponents propose?

Of course, the ultimate irony of the “ID is like SETI” argument is that SETI only makes sense in a universe powered by evolution, because only in such a universe would you expect to have a decent chance of eventually finding someone else out there to talk to.

About this Entry

This page contains a single entry by PvM published on May 29, 2006 3:22 PM.
