Where’s the beef, Paul?


Fifteen months ago Paul Nelson made available a “discussion paper” on Ontogenetic Depth to serve as background for an ISCID chat. In that paper he claimed that

The ontogenetic depth of a handful of extant animals (from the model systems of developmental biology) is known with precision.

That was 15 months ago. In the chat itself, Nelson cut and pasted the same claim from the background paper, and said further that

But, as I said at the beginning, the start of any scientific answer begins with correctly understanding the problem. Ontogenetic depth helps to do that. This is what any candidate theory of animal origins has to explain. It is not itself an explanation, but a description.

So somehow or other, ontogenetic depth poses a problem for biology to explain, but we are not told what that problem is: we are not given the metric Nelson uses to describe it. That is of no particular help in “correctly understanding the problem.”

In a thread discussing that chat, which ran for more than seven months, Nelson never described how to measure OD, nor did he say what the values asserted to be “known with precision” actually are. To be fair, that thread wandered off into ‘how could common descent be refuted’ questions, and ontogenetic depth pretty much disappeared from view.

More recently, seven weeks ago, in the comments following PZ Myers’ entry here on the Thumb, Nelson wrote

Quick note – I’m drafting an omnibus reply (to points raised here and in Shalizi’s commentary), with title and epigraph from a Rolling Stones song. I’ll post it tomorrow.

and then a week later,

I’m lecturing at the University of Maine (Orono) today, but will try to post the reply when I return to Chicago tomorrow. It’s pretty long: I think I’ll put it up at ISCID and link from here.

I think that after this history of promises we are entitled to conclude that Ontogenetic Depth is a fantasy. In spite of repeated requests, we have never seen the two critical pieces of information necessary to evaluate Ontogenetic Depth as a purported new metric:

  • how to measure/calculate/estimate it; and
  • systematic validation and calibration data.

I think it’s past time to call Nelson on this. Ontogenetic Depth most closely resembles the Explanatory Filter in this respect. Dembski and his colleagues repeatedly claim that there is a “scientific” methodology for detecting design in biological systems, and that claim is used as support in their quest to inject ID into the teaching of biology in public schools. For example, in DeWolf, Meyer, and DeForrest’s Intelligent Design in Public School Science Curricula: A Legal Guidebook (here), we read

Mathematician William Dembski has, for instance, published an important work on the theoretical underpinnings for detecting design. In The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge University Press, 1998) he shows how design is empirically detectable and therefore properly a part of science.

They repeat that claim in their Utah Law Review article, available at the same URL:

The [Explanatory] filter outlines a formal method by which scientists (as well as ordinary people) decide among three different types of explanations: chance, necessity, and design.(76) His [Dembski’s] “explanatory filter” constitutes, in effect, a scientific method for detecting the effects of intelligence.(77) (p. 61)

Yet like Ontogenetic Depth, that “scientific method for detecting the effects of intelligence” has never been validated or calibrated. More damning, as far as one can tell from their publications (peer reviewed or otherwise), ID “theorists” have never actually even used it. It is an empty promise, vaporware, as is the promise of Ontogenetic Depth.
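As the quoted passages describe it, the filter is a three-node decision procedure: high-probability events are assigned to necessity (regularity), intermediate-probability events to chance, and only small-probability, independently specified events to design. Here is a minimal sketch of that decision structure; the threshold values are illustrative stand-ins (Dembski’s “universal probability bound” is on the order of 10^-150), and none of this is his actual formalism:

```python
# Illustrative sketch of the Explanatory Filter's three-node routing.
# Thresholds are hypothetical stand-ins, not Dembski's definitions.

def explanatory_filter(probability, is_specified,
                       high=0.5, small=1e-150):
    """Route an event to one of the filter's three 'explanations'."""
    if probability >= high:      # node 1: high probability -> regularity
        return "necessity"
    if probability > small:      # node 2: intermediate probability
        return "chance"
    if is_specified:             # node 3: small probability + specification
        return "design"
    return "chance"              # small probability but unspecified
```

Even this toy version makes the critics’ point visible: the routing depends entirely on a probability handed to it as input, and the filter itself provides no validated procedure for computing that probability for any real biological system.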

So, with apologies to Clara Peller, Where’s the beef, Paul?

RBH

6 Comments

I doubt that you’ll find satisfaction,
nor even a lucid reaction.
Behind the smoke screen
there’s an ID machine
building weapons of media distraction.

Yes, this is really where ID’s pants fall down. They want to claim that they are doing science, they publish claims of doing science, they promise that they really do have a scientific procedure, but when it comes down to actually showing their work, what happens? Squat.

One more, since I’m feeling frivolous:

A meal of intelligent design
when served with the fruit of divine
is lacking in beef,
which supports my belief
that it’s tripe marinating in whine.

(choke gasp wheeze)

To my knowledge, Dembski has published descriptions of his so-called explanatory filter at least seven times. This triad-like scheme is claimed to be a novel and very efficient tool for establishing whether or not an event is a result of intelligent design. Dembski’s colleagues often repeat his claim and refer to the filter as a great achievement. Strangely, none of them, including Dembski, has so far shared with the world how they have practically applied the famous filter to a single real-world problem. Given their prolific output, in which accolades for the filter are common, it seems hardly probable that they use it practically in secret. It is easy to see why the filter remains only a subject of acclaim without being actually utilized. It cannot have any use, because it is an artificial construct which contradicts elementary logic and has no relation to reality. Its foundation, the classification of antecedents into three clearly distinct categories, is unrealistic, since in the real world more than one cause is usually intertwined. Its first and second nodes are meaningless because they imply an impossible procedure wherein the probability of events is supposed to be estimated prior to having any knowledge of the causal history of the event. Dembski seems not to notice that this contradicts his own definition of probability, according to which it is estimated “given background information H.” In the first and second nodes, probability is to be estimated without accounting for such background information, but is instead supposed to be “read off the event,” which is impossible.

More than one critic has pointed to the fallaciousness of the Explanatory Filter, but Dembski and Co., without offering any substantial refutation of the critique or any justification for their claims, continue praising this alleged great breakthrough day in and day out, while so far (since at least 1998, when it was widely publicized in Dembski’s The Design Inference) avoiding, for obvious reasons, any attempt to use it in practice. What a pitiful picture.

My advice to Paul would be to cut down on the speaking and to concentrate on writing up his work.

I can say this from experience - in the last few years I’ve given a lot of seminars around the world, but in the last year or so, I’ve cut down drastically on the speaking and am concentrating on getting the work from my lab published. In my case, it’s publish or perish. In your case, it’s publish or we laugh at you when you come by for your next barrel of Baramin Brown Ale (“Baramin Brown Ale - truly one of a Kind”).

About this Entry

This page contains a single entry by Richard B. Hoppe published on May 23, 2004 4:25 PM.
