Falsehoods on the Air


The “Powerpoint” radio show from Atlanta, Georgia this evening was about evolution and “intelligent design”. The guests included Barbara Forrest, Casey Luskin, David Schwimmer, and John Calvert. It was an interesting discussion, to say the least.

I called in to make a comment, in response to an assertion by John Calvert that “intelligent design theory” was being used in science, referencing “design detection” methods in archeology and life sciences.

My response:

I’m a theist who is a critic of the claims of “intelligent design” advocates. I’d like to focus on the claim made by Dr. Calvert that ID has theoretical content, and that the design detection methods of ID are being applied in science. Briefly, design detection in ID refers to the concept of “complex specified information” expounded by William Dembski. However, Dembski has never shown the full and successful application of his concept to any phenomenon whatsoever. No one else has, either. Dembski’s design detection method is both incoherent and unworkable. It is of no value to science. This is detailed in the recent book, Why Intelligent Design Fails.

I started off the way I did because Calvert was doing his best to cast this as a “theists vs. atheists” sort of issue.

What was instructive was the response from the ID advocates, Calvert and Luskin.

Calvert asserted that biochemists assume design in trying to “reverse engineer” biological systems, and thus are using “design detection” without giving ID the proper credit for what they are doing. This is, of course, so much flapdoodle. What biochemists assume has nothing to do with intervening disembodied designers and everything to do with evolutionary processes constrained by the environment. It also overlooks identifying exactly what process of “design detection” proposed by ID advocates has been unfairly denied credit… which is explicable on the view that there is no such process to be credited.

Luskin simply asserted that I was wrong, and that Dembski had applied his concept of CSI, notably to the E. coli flagellum in the pages of Dembski’s book, No Free Lunch. I’m not sure what to make of this, because I’m pretty sure that Casey and I discussed Dembski’s failed methodology before. In any case, Dembski failed to fully apply his “generic chance elimination argument” to the E. coli flagellum. First, Dembski failed to give a specification for the flagellum. Second, Dembski failed to eliminate any evolutionary hypothesis of origin for the E. coli flagellum. The single hypothesis Dembski considered was one of random assembly, a thoroughly non-evolutionary proposition.

It does show the advantage of being a guest on a show: any sort of nonsense may be spouted with little threat of exposure.

[I’ve removed a possibly inflammatory bit of rhetoric…] I do appreciate that Barbara Forrest did note that Dembski’s work, including No Free Lunch, has been extensively debunked.

For those who still think Dembski’s CSI has something going for it, check out this article by Jeff Shallit and me.

27 Comments

David Schwimmer? Well, he was an actor pretending to be a scientist. I suppose that balances out Casey Luskin, a reverend pretending to be a science advocate.

Seems like only yesterday Casey was asking me how in the world I got the mistaken impression that their ID club was some sort of religious group.

IDK, maybe it’s things like where the website says “Casey also believes that the Intelligent Designer happens to care about people, and that there is ample evidence He is the God of the Bible, who has shown the world His love and offered forgiveness through His Son, the predicted Jewish Messiah, Jesus Christ. “

David Schwimmer is a paleontologist at Columbus State University. I believe he is considered the top paleontologist in the state (mainly because he works on local fossils).

I missed the show and am trying to get a copy.

I think I know who David Schwimmer is, Reed. I’ve seen like 3 whole seasons of Friends. Thank you very much.

Typical to see how ID proponents are still making these false claims about CSI, and Dembski having applied this concept to the flagellum. In my article where I show that the design inference is nothing but an argument from ignorance, I already wondered why ID proponents seem to be making such unsupported and fallacious claims. Of course ID really has nothing scientifically relevant to offer, which may help explain why ID proponents have to resort to ‘making things up’. Not only is ID a risky business when it comes to religious faith, but it also seems to undermine the ability of ID proponents to argue in a logically coherent manner. The sad thing is, many people believe these claims despite them being totally misleading and erroneous. Ignorance still has quite an appeal in this society.

Just to expand upon your refutation of the claim about what biochemists do: we attempt to obtain mechanistic explanations of the behavior of complex systems. As do geologists, climatologists, astronomers, and other scientists who study complicated systems. If this variety of “reverse engineering” by scientists implies that biochemical systems are “designed”, then it must also imply that geological systems and climates and galaxies are in the same sense “designed”. If so, the claim becomes so broad as to be meaningless. And of course, that’s exactly what it is.

Luskin wrote one of the most absurd articles ever written about ID:

http://www.pastors.com/article.asp?[…]p;ArtID=6665

An ominous force, lurking in the courts of kings, the halls of learning, and even the towers of wizards, threatens to dominate a society. Those who opposed the force have been systematically excluded from power. Some make peace with the force, advancing their personal interests, but are foolishly deceived into believing the force will not consume them and their descendants. Others refuse to acknowledge the force and stay secluded in their shires, pretending there is no impending threat to their way of life.

Yet one small alliance, composed of brave souls with differing backgrounds, cultures, and belief systems realize the weakness of the force and have organized a fellowship to stop it. No, I’m not talking about The Lord of the Rings by J. R. R. Tolkien - I’m talking about the intelligent design (ID) movement, and the force of materialistic philosophy.

Damn, I was guessing Dawn of the Dead.

As I recall, Luskin has another year remaining of law school. I note that his name is not listed among those who passed the California State Bar Exam given in July, presumably because he didn’t take it. I hope Luskin sprinkles his essay answers with plenty of references to his occultistic philosophy. I hear the graders really go for that.

It is not surprising to see ID proponents phrase their ‘battle’ in terms of Good versus Evil. What they may fail to understand is that they may be under the influence and spell of the evil ring. What else would explain them using fallacious logic and poor science to place religious faith at significant risk? By insisting that their design(ers) can be empirically detected they also have handed those who are against religion a powerful weapon to disprove these design(er)s once and for all. If history is in any way a predictor of the future, it will not take long for science to show in more and more detail how the flagella arose or how IC systems arise fully naturally. We already see how ID has to retreat into the gaps of our ignorance by insisting that such increased knowledge will never be sufficient in detail to their liking. In other words, moving goalposts are replacing a fallacious argument in an attempt to save it from total failure. Yet the ominous signs are there: ID science is poorly founded in theory and practical application, leading ID proponents to recklessly make claims that only serve to add to its demise.

I’m not a fan of “Friends”, but is this some kind of strange-but-true trivia about the show: that David Schwimmer (the actor) plays a paleontologist in the series, and David Schwimmer (the scientist) is a real paleontologist?

Quite the contrary, I believe there are TWO David Schwimmers we are talking about here. The actor who stars in Friends is sadly not also a real paleontologist.

To forestall the argument that it is “arguable” that Dembski’s NFL discussion of the E. coli flagellum in No Free Lunch represents a “full and successful” application of Dembski’s “design detection” apparatus, here is an unarguable point:

On page 72 of No Free Lunch, Dembski summarizes the steps in his “argument schema”. Step 3 is of special concern to us, since it tells us what a “specification” looks like: it is a “rejection function f” and a “rejection region R”. R is itself specified in terms of real numbers. Now, compare Dembski’s proffered specification for the E. coli flagellum: methinks it is like an outboard motor. No rejection function and accompanying rejection region is given. Dembski comes nowhere close to fulfilling his own criteria on this. It gets worse as one considers that no steps are taken toward fulfilling the criteria of #4 on page 72, either.
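To make concrete what those criteria demand, here is a toy sketch - nothing in it is Dembski’s own construction; the rejection function, threshold, and chance hypothesis are made-up stand-ins - of the bare minimum that steps 3 and 4 call for: a rejection function f, a rejection region R, and an estimate of the probability of R under a chance hypothesis.

import random

def rejection_function(outcome):
    """f: maps an outcome to a real number (here, a made-up match score)."""
    target = "METHINKS"
    return sum(a == b for a, b in zip(outcome, target)) / len(target)

THRESHOLD = 0.9  # rejection region R = {outcomes with f(outcome) >= THRESHOLD}

def chance_hypothesis():
    """A stand-in chance process: uniform random 8-letter strings."""
    return "".join(random.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ") for _ in range(8))

def probability_of_rejection_region(trials=100_000):
    """Estimate P(R | chance hypothesis) - the quantity step 4 requires."""
    hits = sum(rejection_function(chance_hypothesis()) >= THRESHOLD
               for _ in range(trials))
    return hits / trials

print(probability_of_rejection_region())
# Only with f, R, and P(R | chance) in hand can the comparison to a probability
# bound even begin. "Methinks it is like an outboard motor" supplies none of these.

That is what a specification has to deliver before the elimination machinery can be run; an evocative phrase delivers none of it.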

For more reasons that Dembski fails to fulfill his own criteria, refer to the paper linked above.

Luskin’s article about ID is just hilarious - a real keeper. Re the comparison of Phil Johnson with Gandalf, I would suggest that Saruman, with his silver tongue, is a much closer match!

I would add the following comments on Dembski’s NFL discussion of the E. coli flagellum.

1) Dembski rules out evolutionary explanations by an appeal to Behe. However Behe did not say that it was impossible for evolution to produce an IC structure - only that in Behe’s opinion it was very unlikely. Thus Dembski failed to calculate the most important probability, and did so based only on sloppy scholarship. (Don’t the DI Fellows even talk to each other?)

2) The probability Dembski does calculate is irrelevant. We know that an individual flagellum forms by regularity, not simply chance assembly of proteins. Moreover it is not related to the specification - which is where defining the rejection region comes in.

So all in all Dembski’s attempt to apply his method is a disaster - he misrepresents his colleague Behe and as a result of that error leaves out a vital calculation. The calculation he does do isn’t relevant.

Neither of these points is arguable either. If Dembski wishes to rely on Behe’s authority for a key point of his argument he needs to accurately present Behe’s claims. There is no excuse for making such a serious error on such an important point. Nor is there any excuse for ignoring the actual mechanisms by which a flagellum forms.

Paul King:

Just to make sure you haven’t taken your eye off the ball here: Your arguments presume that there is some scientific validity behind Dembski’s filter, and that by misapplying it he loses that validity and should correct his error. But of course, Dembski’s filter isn’t intended to discover anything at all, it is instead intended to wrap a creationism core in a cloak of pseudoscientific mumbo jumbo, complete with as many equations and Greek symbols as can be hung on this flimsy framework, to achieve political leverage. You are probably aware that Dembski has been pressured to demonstrate the efficacy of his filter by (1) Creating the specification before he already knows the answer he wants; and (2) then subjecting the filter to some blind tests to see how well the specifications are met. Dembski steadfastly refuses to do anything of the sort (as do all of his disciples).

And so we are left with one single “practical” example of how the filter works, and how it works is to assume design, calculate that the flagellum couldn’t have occurred by “tornado in a junkyard” chance, and conclude design. Nobody is fooled except those who wish to be, but that latter group is large and they open their wallets fairly readily.
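To see how empty that kind of calculation is, here is a toy version with made-up numbers - nothing below corresponds to Dembski’s actual figures. A one-step “tornado in a junkyard” assembly probability comes out astronomically small for any structure of modest size, regardless of how the structure actually arose.

from math import log10

n_parts = 50       # hypothetical number of distinct parts required
n_choices = 1000   # hypothetical pool each part is drawn from at random

p_one_step = (1.0 / n_choices) ** n_parts
print(f"one-step random assembly probability: about 10^{log10(p_one_step):.0f}")
# about 10^-150: astronomically small for ANY structure of this size, yet it
# says nothing about hypotheses in which the structure is built cumulatively
# (selection, co-option) rather than all at once.

The smallness of the number is guaranteed by the setup; it tells us nothing about whether any evolutionary hypothesis has been eliminated.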

There are some serious problems with Dembski’s clarity (and some examples of apparent equivocation) and some problems with the details (the derivation of the Universal Probability Bound omits one important factor, for instance). However if we generously interpret Dembski’s method, avoiding obvious errors, it would seem that it is basically valid - in an ideal situation.

And that is the real problem. It is a very impractical method and Dembski has not developed the mathematical tools to deal with real, non-trivial, cases. In particular, the method requires some way of estimating the probability that a potentially valid non-design explanation has not been taken into account. That in itself leaves the method completely open to false positives in any situation where the true explanation is not known.

In applying the method to evolution the situation is far worse because there is no indication of how we could calculate a valid probability of evolution accounting for a feature (as I pointed out, Dembski incorrectly ducked this issue in NFL). Simply ignoring evolution if we have no explanation detailed enough to allow the probability to be calculated - as Dembski has suggested - is clearly invalid.

And that is why we have no non-trivial examples - even if the method worked in principle (and IMHO it could be made to do so) it is simply not practical - and there will always be more practical and reliable methods.

However, I think it is significant in itself that Dembski badly failed to apply his own method correctly. That there are problems with the method that still need to be addressed is another topic.

However if we generously interpret Dembski’s method, avoiding obvious errors, it would seem that it is basically valid - in an ideal situation.

I guess I just don’t get it. Imagine you traveled to some distant galaxy, where you found some very geologically active (uninhabited) planet, generating wild formations of all kinds with great abandon. Imagine that one of those formations was an exact replica of Mount Rushmore. Would you conclude design anyway? If you were an octopoid totally unfamiliar with humanity, would you be just as likely to conclude design?

Dembski’s filter simply cannot be applied outside of some assumed context. At the very least, Dembski and the ID crowd are extrapolating from known human design styles and known human intent in creating our designs. And indeed they must do so, because “regularity plus chance” is infinitely variable, and can in principle account for everything without resort to design. In other words, false positives can’t possibly be eliminated.

Dembski surely understands this. Again, he’s not trying to identify design - the filter won’t work unless this identification is already done in advance. His purposes are theological.

As a creationist, I toe the ID party line 90% of the time, but the flagellum is not, in my opinion, the best candidate for making CSI arguments.

I have stated that the best avenues for calibrating the EF and CSI concepts are bio-warfare, quantum cryptography, and chemical and nano-molecular patent law enforcement.

Within evolutionary biology, I believe “convergent evolution” and hierarchical patterns in molecular taxonomies are areas ripe for exploration. Here we have very good estimates of probability distributions; further, we can falsify the distributions in principle. However, these areas are neglected in the popular eye because they are not as glamorous as the flagellum.

As far as “falsehoods in the air”, I would not say that of my ID brethren, but rather “areas for improvement”, and the ID community is slowly responding to its critics and self-correcting.

As far as Calvert saying the EF is used in the sciences, a charitable interpretation of what Calvert said is that Dembski has only formalized what is already used in practice. I worked on Automatic Target Recognition systems in 1998, and Dembski’s ideas embody what is already customary in the practice of detecting design. We never used “Dembski’s Brand of EF”, but we did, in our own way, use some form of the EF in these recognition applications.

Salvador

Flint, let’s start with the basics.

I agree with you on every issue except for the basic question of whether Dembski’s filter might be valid IF the practical problems were dealt with. That Dembski seems to have shown no interest in dealing with the very real practical problems (as well as his hopelessly shoddy use of Behe’s IC) I take as adequate evidence that he is interested in producing something which looks good and gives the “right” conclusions rather than a genuinely useful method of identifying design.

I’ll explain why I consider Dembski’s method to be valid in principle - as well as why I reject it as utterly useless.

Eliminative arguments are logically valid. But they are extremely hard to use correctly except in cases where the domain they are applied to is very well-defined (which is often the case in mathematics and almost never the case in non-trivial real-world situations).

If a hypothesis has a sufficiently low probability of accounting for the evidence we may validly dismiss it.

My charitable interpretation of applying Dembski’s method to an event is as follows:

1) Identify a valid specification for the event (if none exists the result is negative)

2) Enumerate all possible hypotheses that can account for the event

3) Identify a probability bound sufficiently low to justify rejection of hypotheses as described in step 4

4) For each hypothesis calculate the probability that it would produce an event meeting the specification. Reject any hypotheses for which the probability falls below the bound determined in 3) above.

5) If all hypotheses have been rejected conclude design.

None of these is wrong in principle. Practical applications are another matter.
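For concreteness, here is a minimal sketch of that five-step schema as reconstructed above. The specification check, the candidate hypotheses, and their probabilities are all hypothetical placeholders; only the universal bound of 10^-150 is Dembski’s published figure.

# Dembski's published "universal probability bound": the reciprocal of
# 10^80 particles x 10^45 state changes per second x 10^25 seconds = 10^150.
UNIVERSAL_BOUND = 1e-150

def has_valid_specification(event):
    # Step 1: in a real application this must be an independently given
    # pattern, not a description read off the event after the fact.
    return event.get("specified", False)

def infer_design(event, hypotheses, bound=UNIVERSAL_BOUND):
    """hypotheses maps each candidate non-design explanation to an (assumed)
    probability of producing an event that matches the specification."""
    if not has_valid_specification(event):  # step 1
        return "no design inference (no specification)"
    # Step 2: 'hypotheses' is supposed to enumerate ALL non-design
    # explanations - the step that can never be guaranteed in practice.
    # Steps 3-4: reject every hypothesis whose probability falls below the bound.
    survivors = {name: p for name, p in hypotheses.items() if p >= bound}
    if survivors:  # step 5
        return "no design inference (surviving: %s)" % sorted(survivors)
    return "design (by elimination)"

# With made-up numbers, and only one-step random assembly considered, the
# conclusion is foreordained - the objection raised throughout this thread.
print(infer_design({"specified": True}, {"random assembly": 1e-234}))

The structure is a straightforward elimination loop; the difficulties lie entirely in filling in the pieces, which is what the assessment below addresses.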

To offer my assessment on these points:

1) While I am not entirely happy with Dembski’s criteria for a valid specification, they are acceptable provided the other steps are carried out correctly.

2) This is the fundamental problem with eliminative methods. It is very difficult to actually do this in a scientific context (indeed it can never be guaranteed, and so we must quantify the probability that we have missed a possible explanation). Moreover we also need to deal with the question of how to divide up the possible explanations into distinct hypotheses - without a rigorous way of doing so it is possible to achieve a false positive by “splitting” a hypothesis down into a number of finer and finer detailed hypotheses. As far as I know, Dembski has dealt with neither problem, and until he does his method cannot be considered scientific.

3) This is a real problem. Dembski’s so-called Universal Probability Bound was not calculated correctly and to the best of my knowledge Dembski has not identified how to correctly calculate a local probability bound.

4) I have no idea how this would be done for an evolutionary explanation. Nor am I aware of any concrete suggestions from Dembski.

So steps 2), 3) and 4) all present major problems and I see no sign that they have been addressed - even though Dembski’s The Design Inference was published 6 years ago and IDers have been claiming ever since that Dembski’s method was not just theoretically valid but actually usable.

Paul King:

In the real world, I argue that your step 2) is impossible *in principle*. It can’t be done. It can never be done. I can see that regularity+chance can produce a given result in an infinity of ways.

I see no reason to analyze the filter any further. Design can’t be inferred. Over and out.

As I said it would be enough to quantify the chance of having missed out a valid explanation. IF Dembski could do that (and it is a difficult problem and may not have a solution) then that particular issue would be dealt with.

But I don’t see any sign that Dembski has even acknowledged the problem.

As I said it would be enough to quantify the chance of having missed out a valid explanation.

To be sure. The probability is unity. To date, that probability is simply waved away by saying “I can’t believe any other explanation could possibly exist.”

But I don’t see any sign that Dembski has even acknowledged the problem.

I personally consider Dembski’s (and his disciples’) refusal to even TRY to apply his filter, much less in a blind test, as a big honking acknowledgement of this problem.

In the discussion of Dembski’s so-called Explanatory Filter the disputants routinely repeat Dembski’s term for it - “Explanatory Filter.” In fact, however, all the discussion is always only about the “third node” of the “filter.” This is quite logical, because the first and the second “nodes” are meaningless in whichever context one may wish to talk about them. For these two “nodes” Dembski suggested an unrealistic procedure wherein the event’s probability is “read off the event” without knowledge of the event’s causal history. This is nonsense. The event’s probability can only be estimated based on its causal history. We have to first attribute the event, say, to regularity, and THEREFORE assign to it a high probability, rather than somehow “read off the event” its probability (which is impossible) and therefore attribute it to regularity, as Dembski suggested.

Hence there is no Explanatory Filter a la Dembski, but only its third “node”, where the choice between chance and design is to be made. Therefore it is time to cease talking about the non-existent triad-like filter and talk instead only about Dembski’s third node’s “design vs. chance” criterion - the so-called specified complexity. This criterion is highly vulnerable to both false negatives and false positives, which render it useless. This vulnerability stems from the unfounded separation of specification from probability. In fact specification is not a concept separate from probability but just a component of the latter: specified events naturally have a smaller probability than non-specified ones. Therefore the procedure at the third node boils down to estimation of the event’s probability, of which specification is only one of the contributing components.

Since probability reflects the level of our ignorance about the event, the design inference a la Dembski boils down to the argument from ignorance, a.k.a. the God-of-the-Gaps argument, which even Plantinga rejects as a valid argument. This is trivial, so all the fuss about the Filter is much ado about nothing. It has all been said before, but amazingly the discussion is going on and on and on…

That a simple, natural algorithm like evolution can produce lots of things like flagella, is just astonishing.

There are things better than science, but damn few.

There are indeed two David Schwimmers - one an actor, the other a Mesozoic paleontologist, and author of a book on Deinosuchus.

What are the odds that a spontaneous process, unguided by an intelligent actor, could give rise to two David Schwimmers?

A couple of nights ago, Fox TV aired the movie “Ice Age”, a fairly straightforward children’s animated movie, in which a mammoth, a saber-tooth tiger, and a sloth rescue a human child and return it to its tribe, as all involved migrate south to avoid the ice. The movie clearly states that the events are depicted as taking place 20,000 years ago.

So naturally, at the Christian movie review site, we read:

There were only two objectionable things I found is that this movie. The whole plot is based heavily on evolutionary theory with the obvious funny looking animals–almost like “unevolved” versions of our present-day animals…

Of course, there was no “evolutionary theory” involved in the movie, except perhaps for the implication that there have been some extinctions of North American megafauna in the meantime. I would guess that 20,000 years means “a very long time ago” and anything unlike today must mean evolution, or something like that. No concept whatsoever of the time scales involved, as would be expected for those who consider 6 days ample time for all of reality to get created.

Odds notwithstanding, there are at least two David Schwimmers with some association with paleontology. I believe my technical publication list exceeds the actor’s by several orders of magnitude. Conversely, I have never received a single royalty payment from “Friends.” By the way, I’m currently working on soft-bodied fossils in the Cambrian of Georgia, and haven’t seen a sign of an explosion. Cheers.


About this Entry

This page contains a single entry by Wesley R. Elsberry published on November 21, 2004 9:31 PM.
