CURSES! Foiled again!


It looks like my latest “Darwinist scheme” fell straight into Jonathan Witt’s clever trap.

See, in his recent response to my post about the parallels between Percival Lowell’s arguments and those of modern ID advocates, Witt says that he purposely omitted any mention of Lowell’s failed design inference from his original piece, because

… knowing how irrational some ultra-Darwinists can be, I knew some of them would raise the objection anyway, and in the process, perform invaluable rhetorical work for the cause of intelligent design.

I tell you, these guys are just too smart for us!

Alas, when it comes to actually showing how “invaluable” the rhetorical work I supposedly did for him was, Witt has to resort to putting words in my mouth, doing for me the work he claimed I had already done for him on my own. (I know: Never mess with a Sicilian…)

Witt repeatedly tells his readers that my post intended to draw a parallel between Lowell and the modern ID advocates in order to imply that if Lowell’s design inference was wrong, ID’s design inference must be too. But that would be a silly argument, and I never made it, either as a sweeping conclusion or as a statement about any of the specific ID arguments I mentioned. What I did, instead, was simply say that the arguments used by Lowell to support his inference closely parallel in structure, logic and tone those of ID advocates today, and that it would be a good lesson for ID advocates and their supporters to be aware of that (hence the lesson Witt missed the chance to draw). At the very least, they should ask themselves whether their version of the arguments is indeed substantially improved over Lowell’s, and why.

This is a lesson that Witt chose to ignore in his original piece, preferring instead to construct a bizarre argument about science going backwards, and about materialistic scientists accepting claims, later proven wrong, based on philosophical preferences (when, in fact, the majority of astronomers of the time rejected Lowell’s claims on empirical grounds, regardless of metaphysical implications). (In truth, there’s a lesson in Lowell’s story for every one of us, but it’s particularly meaningful for ID advocates, who routinely use the same kind of arguments.)

To get a sense of Witt’s approach, take a look at the following (italics mark Witt’s quotes from my post):

“You will find confident claims about the manifestly non-natural basis of the observed structures.” In other words: Lowell expressed confidence in his design inference. Design theorists have expressed confidence in their design inferences. Lowell’s confidence was misplaced. Ergo, the design theorists’ confidence is misplaced. Bottaro’s fallacy: hasty generalization.

You will not find those “other words”, or their equivalent, in my post - they are entirely a product of Witt’s imagination. First of all, Witt is wrong: saying that Lowell was confident of the non-natural origin of the Mars channels is not the same as saying that he expressed confidence in his design inference. In fact, in the passages from which I quoted, Lowell was claiming that he was confident about his design inference because he was confident he had ruled out all possible natural mechanisms. That’s a big difference - one could in principle make a design inference first, and therefore state with confidence that natural mechanisms did not play a role in the origin of the designed item in question. Indeed, that’s the kind of design inference we all make most often: we don’t go about wondering what natural process may have caused this or that; we use independent evidence about designers, design processes, and so on. (That’s actually why we infer design when we find a watch on the ground during a walk - because we have independent knowledge of watches, watch-makers, watch-making processes, human technology and artifacts in general, watches’ function, the human need to tell what time of day it is, and so on.)

My actual point was that, just like Lowell, modern ID advocates often claim that their design inferences rest, in significant part, on supposedly ironclad conclusions that natural mechanisms can be ruled out (absolutely or probabilistically). This doesn’t mean that ID advocates today are necessarily wrong simply because Lowell was, but clearly there may be a valuable heuristic lesson for ID advocates in Lowell’s story, if only because natural processes can sneak up on you from unexpected places (in Lowell’s case, he thought he had ruled out geology, but the natural processes that doomed his hypothesis lay in the perceptual properties of the human visual system).

Witt’s next objection is cut from the same cloth:

“You will find references [in Lowell’s argument] to diagnostic features of basic human design, and analogies with known designed structures.” In other words: Since Lowell’s set of diagnostic features proved misleading, all sets of diagnostic features will prove misleading. And since Lowell’s analogies with known designed structures proved misleading, all analogies with designed structures will prove misleading. Bottaro’s fallacy: hasty generalization.

I challenge you to find that argument in my post – it’s not there. I did, however, comment on how Lowell’s use of the claim that “It was the mathematical shape of the Ohio mounds that suggested mound-builders” to bolster his argument that the Mars canals were also designed is (quite unarguably, in my opinion) very similar to Behe’s argument that the fact that the very shape of Mt. Rushmore points to a sculptor suggests that an intuitive design inference about the flagellum is justified. Note here that both Behe’s and Lowell’s arguments about the Ohio mounds and Mt. Rushmore are obviously correct - that’s not the question. It is the usefulness of using such arguments to prop up an unrelated “design inference” that is questionable, and should give the ID advocate some pause. Ironically, Witt himself makes a very similar mistake later on:

Even as children we accurately make countless such inferences [from function to purpose] concerning the things around us, usually unconsciously (e.g., “That machine functions to evenly cut the grass; its purpose is probably to evenly cut grass.”)

Of course, since most of us live in design-rich environments, that’s hardly surprising, if only from a statistical perspective. Historically, though, humanity’s attempts to assign purpose to natural objects and phenomena based on perceived “functions” have a much poorer track record.

Next, Witt says:

“Specious mathematical/probabilistic arguments and analogies are there, too.” In other words, because we know that Lowell’s mathematical/probabilistic arguments and analogies to design were specious, all mathematical/probabilistic arguments and analogies to design are specious. Bottaro’s fallacy: hasty generalization. The way for Bottaro to rescue his fallacious argument would be to show that a design theorist made the same specific sort of mistake that Lowell had made. Lowell inferred design from the appearance of three lines crossing on what was (in terms of Lowell’s situation as an observer) a two-dimensional surface observed at low resolution. Dembski, who holds a Ph.D. in mathematics from the University of Chicago, rules out chance explanations when the probability for something dips below 1 chance in 10 to the 150th power (1 followed by 150 zeroes), and then insists on ruling out law-like explanations as well before inferring design (Cambridge University Press thought his methodology was sound enough that they published his monograph on the subject). Clearly, the two probabilistic arguments are highly dissimilar. If one wants to rebut Dembski’s argument, one will have to address the details of Dembski’s argument, not those of a radically different one posed by someone else.

Here, Witt seems to imply that, since the two items in the analogy are not identical, the analogy is invalid. But in fact, Lowell’s probabilistic argument that, for instance, three lines are enormously unlikely to cross at the same point, and Dembski’s probabilistic argument that a flagellum is enormously unlikely to have come together by chance alone, are exactly analogous, regardless of pseudo-precision and arbitrary cutoffs. (As far as I know, Lowell didn’t explicitly calculate the probability, based on chance alone, of all the multiple canal intersections he thought he observed on Mars - although he could easily have done so, since he knew the number of “canals”, their approximate length and width, as well as Mars’s size and the angle under which he was observing it – but he was probably correct in stating that the result would have been staggering in its improbability; a toy sketch of that kind of chance-alone calculation appears below.) It is also precisely the case that Lowell, like Dembski, confidently assumed that he had ruled out law-like explanations (see above). The parallels between the two arguments are hard to ignore. And once again, the point is not that Dembski’s argument is wrong because Lowell’s was, but that Lowell’s is a good lesson to consider when excessive confidence is put in a priori probabilistic arguments that do not take into account the actual variables affecting a natural system, but work from unrealistic abstractions.

Witt is right about one thing: I did call Wells’s argument that centrioles are like teensy-weensy turbines “fanciful”, when I should probably have called it “preposterous”. Here too, though, Witt states that I imply Wells’s idea is wrong because it is “fanciful”, but that’s not the case. In fact, it is entirely irrelevant whether Wells’s hypothesis is right or wrong. What I claimed is that, just as Lowell came up with the vision of a desert-like Mars in need of massive irrigation projects in order to support his design inference that the Martian lines he saw through his telescope were bona fide channels, Wells came up with an extremely convoluted and implausible model of how two turbines might work during cell division, in order to support his intuitive design inference that centrioles are turbines because they look like turbines. By the way, Witt is also right that Wells’s model is at least testable, but I never said otherwise (so was Lowell’s, by the way, as is the proposition that the Earth is 6,000 years old – testable claims are not hard to make and are not some sort of noteworthy achievement).
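To make the parenthetical above about Lowell’s canal intersections concrete, here is a minimal Monte Carlo sketch (in Python) of the kind of back-of-the-envelope, chance-alone estimate Lowell could have made. Every number in it (the unit disc standing in for the visible face of Mars, the test point, the tolerance, the count of “canals”) is a made-up illustrative parameter, not a figure from Lowell’s data; the point is only the shape of the calculation, not any particular result.

```python
import math
import random

def chord_near_point(point, tol, radius=1.0):
    """Draw one random chord (two uniform points on a circle of the given
    radius) and report whether it passes within `tol` of `point`."""
    a1, a2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    x1, y1 = radius * math.cos(a1), radius * math.sin(a1)
    x2, y2 = radius * math.cos(a2), radius * math.sin(a2)
    px, py = point
    dx, dy = x2 - x1, y2 - y1
    length_sq = dx * dx + dy * dy
    if length_sq == 0:                      # degenerate chord (same endpoints)
        return math.hypot(px - x1, py - y1) < tol
    # distance from the test point to the chord segment
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / length_sq))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)) < tol

# Made-up illustrative parameters (NOT Lowell's data): a test point on the
# disc and a "hit" tolerance standing in for the width of a canal junction.
trials = 100_000
p = sum(chord_near_point((0.3, 0.2), tol=0.02) for _ in range(trials)) / trials
print(f"chance a single random 'canal' passes through the junction: {p:.4f}")
print(f"chance that 6 given 'canals' all pass through it: {p ** 6:.2e}")
```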

That’s pretty much how it goes throughout Witt’s response. Like Dick Cheney, Witt clearly likes his hunts “canned”, so instead of shooting at my real arguments, he lets loose some himself that are a bit easier to aim at, and pretends they’re the real thing. The reader can judge as to what extent there are real similarities between the arguments used by Lowell and those used by modern ID advocates to bolster their respective design inferences, and whether an analysis of Lowell’s use of such arguments might have been a good lesson to ponder for ID advocates (as opposed to the bogus lesson Witt provided them by oddly linking Lowell, habitability, the Big Bang, spontaneous generation and evolutionary theory). Indeed, in the discussion thread to my original post several commenters have raised interesting and pertinent points regarding precisely the similarities and differences between Lowell’s and modern ID’s arguments – some more or less agreeing with me, and others disagreeing. Witt could have done the same if he had taken my arguments at face value, instead of concocting some for me.

There is however another issue I would like to touch upon. Witt accuses me of making an argument from authority, first at the beginning of his piece, claiming that I “attributed this [Witt’s inability to see the lesson in Lowell’s story] to [Witt’s] Ph.D. training in literature, logic, and the philosophy of aesthetics, rather than in science”, and later again by saying:

Bottaro concludes by noting that I confessed to having learned Darwinist jargon at one point. He encourages the reader to conclude from this that my arguments against neo-Darwinism should be rejected out of hand. Let’s reconstruct the logical sequence, with the implicit premises drawn out into the light: People who learned Darwinist jargon as adults can never make good arguments concerning Darwinism. Jonathan Witt learned Darwinist jargon as an adult. Ergo, he’s a poopy head.

And once again, I did neither. (I note however that Witt himself seems to be quite sympathetic to arguments from authority directed the other way - see the quote above expounding on Dembski’s mathematics graduate training and his book’s publication by CUP.)

What I said is quite different, and it’s right there at the beginning of my post. I referred to Witt’s “almost comical lack of self-awareness”, which I reiterated at the end by quoting his own tone-deaf claim that he realized “Darwinism” was fallacious not by analyzing the actual evidence supporting it (although he was well aware of, in his words, the “wealth of arcane scientific data”), but supposedly, once he mastered the “jargon”, by spotting errors in the logical structure of “Darwinist” arguments, of which he then provides a patently preposterous list. His response to my post follows the same pattern: ignore the facts, put words in other people’s mouths, and gloatingly point out how wrong they are.

21 Comments

Witt:

Design theorists have expressed confidence in their design inferences. Lowell’s confidence was misplaced.

Might I suggest that if it’s possible for one’s confidence to be “misplaced” in X, then X is not a form of inference, but a form of hypothesis generation, or less generously, wishful thinking.

Since Lowell’s set of diagnostic features proved misleading, all sets of diagnostic features will prove misleading.

Witt seems to be confused about existential and universal quantification. For anything that was designed, there may exist a set of “diagnostic features” that would make a compelling case for design, but for “design inference” to amount to anything resembling inference in the logical sense, you’d need a rigorous method of picking these features such that for all features selected, their presence makes a compelling case for design.
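Put schematically (a rough gloss in quantifier notation, mine rather than Witt’s or anyone else’s): the familiar everyday examples establish only the first of the two statements below, while a reliable design detector built on a feature set F would need the second.

```latex
\exists x \;\bigl(\mathrm{HasFeatures}(x, F) \wedge \mathrm{Designed}(x)\bigr)
\qquad\text{vs.}\qquad
\forall x \;\bigl(\mathrm{HasFeatures}(x, F) \rightarrow \mathrm{Designed}(x)\bigr)
```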

Or, less abstractly, the reason “design” is not “inference” is not because designed objects never present features indicating design. The reason is because non-designed objects often present features falsely suggesting design. Making this so-called inference requires picking the true features and ignoring the false ones, but there is no rigorous methodology for doing so. This is not inference but confabulation.

Might I suggest that if it’s possible for one’s confidence to be “misplaced” in X, then X is not a form of inference

Why? Confidence can easily be misplaced in inference. Uncertain things, they are.

Dembski, who holds a Ph.D. in mathematics from the University of Chicago, rules out chance explanations when the probability for something dips below 1 chance in 10 to the 150th power (1 followed by 150 zeroes), and then insists on ruling out law-like explanations as well before inferring design

Well, no. Dembski, who holds a Ph.D. in mathematics from the University of Chicago, has never calculated the probability of biological systems arising from any remotely realistic evolutionary process, and instead infers design more or less when he wants to, as far as I can tell.

(This is an interesting appeal to authority. I do not have a Ph.D. in math, but I, too, can fail to calculate the probability of a biological system arising from any remotely realistic evolutionary process. I guess if you set the bar low enough, we are all authorities.)

What has this got to do with Wallace Shawn? Or did I miss something?

If I see a bird that walks like a duck, quacks like a duck and shits like a duck - and if that bird is not unlike a duck in any way at all - then it is not hasty generalization to call that bird a duck.

That machine functions to evenly cut the grass; its purpose is probably to evenly cut grass

That machine also functions to make a lot of noise; therefore its purpose is probably to make a lot of noise?!?

I might learn the purpose of the machine by asking the Designer, but I have no idea who that might be.

Here is Dr. Bottaro’s quotation from Witt’s “reply”:

If one wants to rebut Dembski’s argument, one will have to address the details of Dembski’s argument, not those of a radically different one posed by someone else.

Witt seems to be unaware of the critical analyses of Dembski’s literary production which have addressed “the details of Dembski’s argument,” and convincingly demonstrated that Dembski’s “mathematical” arguments are not worth the paper Cambridge Univ. Press wasted on his piffle. Everybody can make a mistake, but only fools persist in adhering to their mistakes. Cambridge Univ. Press apparently rejected the idea of being foolish and refused to publish Dembski’s next “mathematical” exercise, in which he egregiously misused the No Free Lunch theorem. Likewise, Dembski was invited to give a talk at the mathematics department of a Danish technical university, but having listened to his talk, the faculty of that department concluded that they would never invite him again, his PhD from a Chicago university notwithstanding.

The emptiness of Dembski’s “mathematical” arguments, which may look impressive at a superficial glance or to an unqualified layman, becomes obvious as soon as a reader with a minimal background in mathematics takes a slightly deeper look at his mathematical-looking crap. But how could Witt have known this? He seems to be a stranger even to simple logic, not to mention his laughable assertion about the alleged irrationality of “ultra-Darwinists” (i.e., all those who see through the smoke screens of creationist pseudo-science). Judging a mathematical discourse is apparently something he did not master in his studies of literature and philosophy.

Dembski, who holds a Ph.D. in mathematics from the University of Chicago, rules out chance explanations when the probability for something dips below 1 chance in 10 to the 150th power (1 followed by 150 zeroes), and then insists on ruling out law-like explanations as well before inferring design

Time once again for my standard response to Dembski’s “Explanatory Filter” BS:

Perhaps the most celebrated of the Intelligent Design “theorists” is William Dembski, a mathematician and theologian. A prolific author, Dembski has written a number of books defending Intelligent Design.

The best-known of his arguments is the “Explanatory Filter”, which is, he claims, a mathematical method of detecting whether or not a particular thing is the product of design. As Dembski himself describes it:

“The key step in formulating Intelligent Design as a scientific theory is to delineate a method for detecting design. Such a method exists, and in fact, we use it implicitly all the time. The method takes the form of a three-stage Explanatory Filter. Given something we think might be designed, we refer it to the filter. If it successfully passes all three stages of the filter, then we are warranted asserting it is designed. Roughly speaking the filter asks three questions and in the following order: (1) Does a law explain it? (2) Does chance explain it? (3) Does design explain it? … I argue that the explanatory filter is a reliable criterion for detecting design. Alternatively, I argue that the Explanatory Filter successfully avoids false positives. Thus whenever the Explanatory Filter attributes design, it does so correctly.”

The most detailed presentation of the Explanatory Filter is in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. In the course of 380 pages, heavily loaded with complex-looking mathematics, Dembski spells out his “explanatory filter”, along with such concepts as “complex specified information” and “the law of conservation of information”. ID enthusiasts lauded Dembski for his “groundbreaking” work; one reviewer hailed Dembski as “The Isaac Newton of Information Theory”, another declared Dembski to be “God’s Mathematician”.

Stripped of all its mathematical gloss, though, Dembski’s “filter” boils down to: “If not law, if not chance, then design.” Unfortunately for IDers, every one of these three steps presents insurmountable problems for the “explanatory filter” and “design theory”.

According to Dembski, the first step of applying his “filter” is:

“At the first stage, the filter determines whether a law can explain the thing in question. Law thrives on replicability, yielding the same result whenever the same antecedent conditions are fulfilled. Clearly, if something can be explained by a law, it better not be attributed to design. Things explainable by a law are therefore eliminated at the first stage of the Explanatory Filter.”

Right away, the filter runs into problems. When Dembski refers to laws that explain the thing in question, does he mean all current explanations that refer to natural laws, or does he mean all possible explanations using natural law? If he means all current explanations, and if ruling out all current explanations therefore means that Intelligent Design is a possibility, then Dembski is simply invoking the centuries-old “god of the gaps” argument — “if we can’t currently explain it, then the designer diddit”.

On the other hand, if Dembski’s filter requires that we rule out all possible explanations that refer to natural laws, then it is difficult to see how anyone could ever get beyond the first step of the filter. How exactly does Dembski propose we be able to rule out, not only all current scientific explanations, but all of the possible ones that might be found in the future? How does Dembski propose to rule out scientific explanations that no one has even thought of yet – ones that can’t be made until more data and evidence is discovered at some time in the future?

Science, of course, is perfectly content to say “we don’t know, we don’t currently have an explanation for this”. Science then moves on to find possible ways to answer the question and uncover an explanation for it. ID, on the other hand, simply declares “Aha!! you don’t know, therefore my hypothesis must be correct! Praise God! – uh, I mean The Unknown Intelligent Designer!” ID then does nothing – nothing at all whatsoever in any way shape or form – to go on and find a way to answer the question and find an explanation for it.

Let’s assume that there is something, call it X, that science can’t currently explain using natural law. Suppose, ten years later, we do find an explanation. Does this mean: (1) The Intelligent Designer was producing X up until the time we discovered a natural mechanism for it, then stopped doing it at that point? Or (2) The Intelligent Designer was doing it all along using the very mechanism we later discovered, or (3) the newly discovered natural mechanism was doing X all along and The Intelligent Designer was never actually doing anything at all?

Dembski’s filter, however, completely sidesteps the whole matter of possible explanations that we don’t yet know about, and simply asserts that if we can’t give an explanation now, then we must go on to the second step of the filter:

“Suppose, however, that something we think might be designed cannot be explained by any law. We then proceed to the second stage of the filter. At this stage the filter determines whether the thing in question might not reasonably be expected to occur by chance. What we do is posit a probability distribution, and then find that our observations can reasonably be expected on the basis of that probability distribution. Accordingly, we are warranted attributing the thing in question to chance. And clearly, if something can be explained by reference to chance, it better not be attributed to design. Things explainable by chance are therefore eliminated at the second stage of the Explanatory Filter.”

This is, of course, nothing more than the standard creationist “X is too improbable to have evolved” argument, and it falls victim to the same weaknesses. But, Dembski concludes, if we rule out law and then rule out chance, then we must go to the third step of the “filter”:

“Suppose finally that no law is able to account for the thing in question, and that any plausible probability distribution that might account for it does not render it very likely. Indeed, suppose that any plausible probability distribution that might account for it renders it exceedingly unlikely. In this case we bypass the first two stages of the Explanatory Filter and arrive at the third and final stage. It needs to be stressed that this third and final stage does not automatically yield design – there is still some work to do. Vast improbability only purchases design if, in addition, the thing we are trying to explain is specified. The third stage of the Explanatory Filter therefore presents us with a binary choice: attribute the thing we are trying to explain to design if it is specified; otherwise, attribute it to chance. In the first case, the thing we are trying to explain not only has small probability, but is also specified. In the other, it has small probability, but is unspecified. It is this category of specified things having small probability that reliably signals design. Unspecified things having small probability, on the other hand, are properly attributed to chance.”
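For readers who want the bare logic in one place, here is a minimal, purely schematic sketch (in Python) of the filter’s decision structure as quoted above. The three predicate functions are placeholders, not anything Dembski provides; his presentation gives no operational way to evaluate any of them, which is precisely the problem discussed below.

```python
from typing import Any, Callable

def explanatory_filter(event: Any,
                       explained_by_law: Callable[[Any], bool],
                       likely_by_chance: Callable[[Any], bool],
                       is_specified: Callable[[Any], bool]) -> str:
    """Decision structure of the three-stage filter as quoted above.
    The three predicates are placeholders: nothing in the quoted text
    tells you how to implement any of them."""
    if explained_by_law(event):        # stage 1: regularity / natural law
        return "law"
    if likely_by_chance(event):        # stage 2: reasonably expected by chance
        return "chance"
    # stage 3: small probability; "specified" events get attributed to design
    return "design" if is_specified(event) else "chance"
```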

In No Free Lunch, Dembski describes what a designer does:

(1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (Dembski, No Free Lunch, p xi)

But Dembski and the rest of the IDers are completely unable (or unwilling) to give us any objective way to measure “complex specified information”, or how to differentiate “specified” things from nonspecified. He is also unable to tell us who specifies it, when it is specified, where this specified information is stored before it is embodied in a thing, or how the specified design information is turned into an actual thing.

Dembski’s inability to give any sort of objective method of measuring Complex Specified Information does not prevent him, however, from declaring a grand “Law of Conservation of Information”, which states that no natural or chance process can increase the amount of Complex Specified Information in a system. It can only be produced, Dembski says, by an intelligence. Once again, this is just a rehashed version of the decades-old creationist “genetic information can’t increase” argument.

With the Explanatory Filter, Dembski and other IDers are using a tactic that I like to call “The Texas Marksman”. The Texas Marksman walks over to the side of the barn, blasts away randomly, then draws bullseyes around each bullet hole and declares how wonderful it is that he was able to hit every single bullseye. Of course, if his shots had fallen in different places, he would then be declaring how wonderful it is that he hit those marks, instead.

Dembski’s filter does the same thing. It draws a bullseye around the bullet hole after it has already appeared, and then declares how remarkable it is that “the designer” hit the target. If the bullseye had been somewhere else, though, Dembski would be declaring with equal intensity how remarkably improbable it was that that bullseye was hit. If ID “theory” really wanted to impress me, it would predict where the bullet hole will be before it is fired. But, ID does not make testable predictions of any sort.

Dembski, it seems, simply wants to assume his conclusion. His “filter”, it seems, is nothing more than “god of the gaps” (if we can’t explain it, then the Designer must have done it), written with nice fancy impressive-looking mathematical formulas. That suspicion is strengthened when we consider the carefully specified order of the three steps in Dembski’s filter. Why is the sequence of Dembski’s Filter, “rule out law, rule out chance, therefore design”? Why isn’t it “rule out design, rule out law, therefore chance”? Or “rule out law, rule out design, therefore chance”? If Dembski has an objective way to detect or rule out “design”, then why doesn’t he just apply it from the outset?

The answer is simple – Dembski has no more way to calculate the “probability” of design than he does the “probability” of law, and therefore simply has no way, none at all whatsoever, to tell what is “designed” and what isn’t. So he wants to dump the burden onto others. Since he can’t demonstrate that any thing was designed, he wants to relieve himself of that responsibility, by simply declaring, with suitably impressive mathematics, that the rest of us should just assume that something is designed unless someone can show otherwise. Dembski has conveniently adopted the one sequence of steps in his “filter”, out of all the possible ones, that relieves “design theory” of any need to either propose anything, test anything, or demonstrate anything.

I suspect that isn’t a coincidence.

The reference to the Sicilian from “The Princess Bride”, one of my favorite movies ever, was pure gold.

Thanks for the belly laugh.

Michael Evans: You didn’t see the Princess Bride? Inconceivable!

Seriously? I just like gratuitous Princess Bride references.

Caledonian:

Why? Confidence can easily be misplaced in inference. Uncertain things, they are.

When I use “inference” it is typically in the sense of logical inference, which encompasses a set of sound methods for deriving true propositions from other propositions. I’ll concede that this might not be the informal sense of the term, and maybe I should have just said that design inference is clearly not a sound method even by Witt’s admission, so what good is it supposed to be? Actually scientific induction is not sound in the logical sense either, but it can be formalized in terms of Solomonoff induction in which one can attach quantitative values to the confidence one ought to place in a conclusion. Confidence may not be perfect, but neither is it misplaced.
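For reference, the quantitative weighting alluded to above is the Solomonoff universal prior; in rough outline (and glossing over the choice of reference machine), a string of observations x gets the prior weight

```latex
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-\ell(p)}
```

where U is a fixed universal machine, p ranges over its programs, and \ell(p) is the program length in bits; the confidence to attach to a predicted continuation y is then the ratio M(xy)/M(x).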

I think that as an inference method, design inference is more analogous to the “causality inference”: i.e. observing a correlation between A and B and concluding that A causes B. Note that even in an informal sense, this is generally called a fallacy rather than an inference methodology.

Let’s take Witt’s comment and adapt it to a defense of the “causality inference”. The argument would run something like the following (I’m reusing some phrasing, but this is an analogy, not a quote or paraphrase.)

You claim that since [somebody’s] set of causal relationships proved misleading when he tried to use the causality inference, then all sets of causal relationships will prove misleading. This is the fallacy of hasty generalization.

I would never claim that. Indeed, often when one observes a correlation and concludes causality, this is in fact the case. If I know it rained yesterday, and find that the sleeping bag I left outside is soaked, I’ll probably jump to the conclusion that it was soaked by the rain. I won’t even mess around wondering if this is the most parsimonious explanation. I might be wrong, but in a lot of cases, I’ll probably be right, and maybe in the future I’ll be more careful about leaving things outside. This makes “causality inference” a useful heuristic, but not a sound method of deriving conclusions.

The so-called “design inference” is at least as susceptible to false positives as, and less generally applicable than, the “causality inference.” The best defense Witt can muster for concluding design is that sometimes it leads to correct conclusions, just not in Lowell’s case. But this defense could be used for nearly any common logical fallacy. The fallacies wouldn’t be so appealing to the human mind if they did not sometimes lead to correct conclusions.

Perhaps Witt missed the ultimate in hasty generalizations: human designs require human minds, therefore any design requires some sort of mind. Sound familiar?

Witt, like P.E. Johnson before him, is mighty proud of his ability to read, but apparently not of his ability to think. This is a common flaw I came to expect among freshman college students who would complain after a science or math exam that they had read the assigned chapters in the textbook and “got the definitions”, and yet could not master the concepts well enough to answer basic questions. Some of the time, they end up being English majors.

Ref: Lenny’s Comment #87611, above

I started just to write a couple lines, but it grew.

Surely an essay on this topic should mention “The advantages of theft over toil.” “Theft over toil” has fun with the so-called filter (which is actually just some rules Dembski made up so he could ‘win’). Among other things it explores the effect of ‘side information’, which is not supposed to change the outcome according to Demb - he thinks he can ‘read design off the event’.

More on false positives and Demb’s various claims: false positives

Example to show how EF works: stone circles

explanation (oops).

Demb would object to your statement that he draws the bull’s eye around the arrow. Specification is supposed to take care of that. If you come up with a specification to fit the event (for example, his specification of flagella), that is not a specification. It is a fabrication. He justifies his flagellum specification by claiming that part of it - ‘outboard motor’ - was in use by humans shortly _before_ they knew much about flagella.

His “probability” is a set-up. Since law (regularity) has been ruled out, he must use some ‘uniform distribution’ - assume that each of a large number N of particular things has the same probability 1/N. In reality, pretending to calculate the probability of an outcome apart from the process leading up to it is unjustified.

What about evolution, which constantly intercalates chance and selection? Demb somewhere claims there is a theorem to the effect that the final result of such a process is also the result of a simple two part process: one large deterministic step and one large random step. So, he goes on, he need only use the probability of the random part to apply his filter. This is a creo denial mechanism. NS has precisely the characteristic of producing results that are not to be expected from the mathematical decomposition into two parts that he invokes. Creos cannot bear to contemplate this, so he clears it out of his mental picture.
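A toy illustration of that last point is the familiar “weasel”-style cumulative-selection program sketched below (a caricature with a fixed target, not anyone’s model of real evolution): a string that is astronomically improbable as a single random draw is reached in a modest number of generations once chance and selection are intercalated at every step, which is exactly what the one-deterministic-step-plus-one-random-step decomposition hides.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.04) -> str:
    """Each character independently has a small chance of being replaced."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    generations += 1
    # chance (mutation) and selection (keep the best) intercalated every step
    brood = [current] + [mutate(current) for _ in range(100)]
    current = max(brood, key=score)

p_one_shot = (1.0 / len(ALPHABET)) ** len(TARGET)
print(f"target reached after {generations} generations of mutation + selection")
print(f"probability of drawing the target in a single random shot: {p_one_shot:.1e}")
```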

Of course you are right that it is incredible hubris, indeed impudence, for him to declare that he can “sweep the field clear of chance hypotheses”. [Recall that ‘chance hypothesis’ in creo-speak covers both chance and anything other than the Designer’s purposeful action, even if deterministic.]

What about science? Science starts with “Don’t know”. This is the default, and it is displaced only by evidence. By using Design as he does in his made-up rules, aka filter, Demb makes ‘the Designer did it’ the default. Using something other than “Don’t know” as the default == making the argument from ignorance for your preferred explanation. It also makes ID not science, by design.

Pete Dunkelberg said (as neat a statement about neo-Creationism/ID as I have seen): “It also makes ID not science, by design” … by culture engineers, aka spin doctors, and, ultra ironically, social Darwinians.

I like the way Witt uses the term ultra-Darwinists for debunkers of neo-Creationist pseudoscience, making his argument (and Creationism in general) even more irrational … by design.

The real beauty of that is that every time he uses the term ultra-Darwinists he erodes ultra-Creationist support, since in place of the word *g*o*d* he uses a term synonymous with atheist, allowing the definition of NOT_DivineCreator to be the meme in his message … it’s almost as if he is ashamed to mention the “g” word. What kind of religion is that … atheism? Someone should let the Inquisition know. (Who was that White House press flack who tried to get NASA to renounce the “Big Bang”?)

I can see the line of questioning now.

Press: Mr Witt, you keep saying ultra-Darwinists instead of Atheist; do you deny God?
Witt: But you know that’s what I mean when I say ultra-Darwinists.
Press: Well, no … since “Darwinists” are winning by miles, and anyone who is an ultra-Darwinist is guaranteed to win over the idea of a neo-Creationist, you are making Atheists heroes, and instead of calling them god-hating scum atheists you are calling them ultra-Darwinists, making Darwin an even bigger (ultra) hero.
Witt: But you know I can’t say god did it; you just have to believe me.
Press: So … not saying god, but Darwin, all the time is good, is it?
Witt: Yeah, sure, good versus evil, truth vs lies, fact vs fiction, Darwin vs nothing.
Press: Mr Witt, who do you pray to … Darwin?
Witt: How dare you! God created everything, you know that! And no matter what the scientific evidence is, ID says that!
Press: Can I print that?
Witt: Of course not … don’t be stupid.

Boy, the next thing you know, Witt will be advocating involvement in a land war in Asia. (Sorry, couldn’t resist.)

Of course the really huge difference between Lowell and others who posited aliens on Mars, and IDists, is that Lowell only supposed that organisms like ourselves were the designers. One might say that he was inferring evolved intelligence because he knew some of the purposes and capabilities of evolved intelligences with which we are familiar.

Was he looking at organisms and supposing that they had been designed? No, nothing stupid like that. We have no difficulty in the vast majority of cases in distinguishing between organisms and designed objects. No, he had seen canals made by humans, and he knew that intelligent beings on Mars might be able to make similar structures for similar purposes (there is the problem that the canals appear to be ridiculously straight, but that seems to be a simple mistake on Lowell’s part).

Little analogy exists between IDists and Lowell at all. Lowell tried to rule out “natural causes” of the apparent canals in the usual manner. IDists look at organisms having all of the marks of evolution, and say, “no, they were designed”. Lowell made mistakes, but nothing so egregious and wrong-headed as the IDists do.

Lowell’s “designers” were reasonably “knowable”, humanoid beings who needed water, food, and knew how to design and build canals. These are reasonable causal agents understood from the known capabilities of humans. How does one check out if these creatures really exist? Look for structures that wouldn’t appear without reasonably intelligent entities, but would be thought likely enough with these entities.

We looked, and didn’t find. End of story, unlike the ongoing manipulations of evidence and science by the various types of creationists. Make a reasonable prediction, check it out, and if it fails, abandon the hypothesis. Despite Lowell’s mistakes, he is a vastly superior scientist to Witt and others who cannot (or will not) make entailed design predictions and who refuse even to say what sort of intelligence the “designer” might resemble.

Apparently, all we know about the IDists’ designer is that he designs in a way that is indistinguishable from the results expected from evolution. By contrast with the predictions about humanoid aliens, then, the IDists’ designer is completely superfluous, as his products are unlike human designs, and like what evolution would produce (never mind the math done without data to “show that the flagellum couldn’t evolve”). There is thus no reason for the “design inference” whatsoever, other than religion.

This is what will forever prevent ID from changing over from religion into science. For since the IDists cannot distinguish the designer’s work from “natural processes”, the only reason why religious people can have for bringing the designer up is that they don’t like non-religious explanations for life.

Lowell didn’t predict that designers would produce apparently “natural forms” in either the biological or the geological realms. Only IDists can so twist logic and evidence that they conflate design and designer, supposing that entities which design must somehow be designed (glaring gap at God, of course, but consistency is not the IDist’s forte). The causal chains are thus closed, and there is little or nothing possible to open up such closed minds.

Glen D http://tinyurl.com/b8ykm

Duel to the pain.

But watch out for the R.O.U.S.’s.

I hear that Seattle is infested with them.

The reason is because non-designed objects often present features falsely suggesting design.

As an aside, it’s also quite plausible that designed objects do not, upon rudimentary investigation, yield a positive.

As another aside (to the side of the first, naturally), I agree that Witt is not even in the same league as Lowell.

Lowell had a hypothesis. He made some testable predictions. They failed to pan out, and his theory was discarded. This is the essence of science, and to a point, I would say it probably was valuable because we now know more of what is not the case than we did before.

Witt, so far as I have witnessed, has no hypothesis or testable prediction yet, and thus, has no science. ID has not yet even advanced to the point where it can reach as high as failing in the manner Lowell did.

I challenge you to find that argument in my post — it’s not there. I did, however, comment on how Lowell’s use of the claim that “It was the mathematical shape of the Ohio mounds that suggested mound-builders” to bolster his argument that the Mars canals were also designed is (quite unarguably, in my opinion) very similar to Behe’s argument that the fact that the very shape of Mt. Rushmore points to a sculptor suggests that an intuitive design inference about the flagellum is justified. Note here that both Behe’s and Lowell’s arguments about the Ohio mounds and Mt. Rushmore are obviously correct - that’s not the question. It is the usefulness of using such arguments to prop up an unrelated “design inference” that is questionable, and should give the ID advocate some pause.

This really is a hasty generalization over analogies. If it’s “obviously correct” that the mathematical shape of X suggests X-builders, then the mathematical shape of Y should also suggest Y-builders, unless there’s some relevant difference between X and Y that makes the inference invalid for Y. The design inference for Y is not “unrelated”, it’s the exact same inference – mathematical shape suggests a builder. But no such commonality between Mt. Rushmore and the flagellum can be found. Lowell’s analogy is valid, and Behe’s is not – quite unarguably.

As far as I know Lowell didn’t explicitly calculate the probability based on chance alone of all the multiple canal intersections he thought he observed on Mars - although he could easily have done so, since he knew the number of “canals”, their approximate length and width, as well as Mars’s size and the angle under which he was observing it — but he was probably correct in stating that the result would have been staggering in its improbability

This is quite wrong, and misses an excellent opportunity to illustrate Dembski’s vacuity. Any estimate of the probability of these intersections depends on the process by which they occurred – there is no independent “probability” of lines occurring in some specific configuration. Of course, Lowell could not imagine some process by which such lines could naturally come to be placed such that there would be such intersections – and neither can we. That doesn’t mean that there isn’t such a process, but it does strongly suggest intentional arrangement – if there really are such intersecting lines. Of course, we now know that there aren’t, and that the “placement” was via Lowell’s imagination, with some help from artefacts of his tools (the most extreme case was his observation of his own blood vessels as a network on the surface of Venus).

OTOH, we can imagine a process by which biological systems come into their configurations, and when we calculate the probabilities in terms of that process, we get much higher numbers than Dembski gets when he ignores that process and measures probabilities on the assumption of a normal random distribution. This argument from probability is pure question begging.


About this Entry

This page contains a single entry by Andrea Bottaro published on March 17, 2006 9:07 PM.
