Obsessively barking up the wrong tree


I apologize to James Hall for borrowing his phrase to describe what Robert Crowther and other Intelligent Design proponents seem to be doing when they object to the simple observation that Intelligent Design is a straightforward argument from ignorance.

The problem is that ID proponents have used equivocating language, which has led to much confusion among their followers. I cannot blame Crowther for taking seriously the claims of his DI fellows, but simply asserting that ID is not an argument from ignorance is begging the question.

While it is relatively straightforward to reach the conclusion that ID is an argument from ignorance, doing so requires some careful analysis of how ID proponents use their terminology.

So what is the design inference? Simple: it is the set-theoretic complement of regularity and chance, or in other words, that which remains once we have eliminated known processes. Note that ID provides no positive argument but merely labels our ignorance 'designed'.

So what about the complexity argument? We see complexity in the world around us, and 'invariably this complexity can be traced to a designer'? What's wrong with this argument? Well, for starters, complexity in ID-speak is nothing more than the negative base-2 logarithm of our ignorance. In other words, if and only if we can explain something, the complexity disappears. So why would ID use such equivocating language?
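The arithmetic behind that observation is trivial but worth making explicit. Here is a hedged sketch (the numbers are my own illustration, not Dembski's): treat 'complexity' as minus the base-2 log of the probability an event gets under the chance hypotheses we happen to know, and watch it evaporate the moment an explanation is found.

```python
import math

def dembski_complexity(p: float) -> float:
    """'Complexity' in the ID sense: minus the base-2 log of the
    probability we can assign to an event under known processes."""
    return -math.log2(p)

# A 100-bit pattern with no known explanation: p = 2**-100
print(dembski_complexity(2**-100))  # 100.0 bits of "complexity"

# The same pattern once a lawful process makes it near-certain
print(dembski_complexity(0.99))     # ~0.0145 bits: the "complexity" is gone
```

Nothing about the pattern changed between the two calls; only our state of knowledge did.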

In other words, ID's argument is that we see a lot of things that we cannot yet explain. To conclude 'thus designed' is no different from our forefathers attributing earthquakes, solar eclipses and other unexplained events to deities.

So next time you hear an ID proponent argue that ID is not just about ignorance, ask them how they explain the bacterial flagellum.

So perhaps Crowther can enlighten us: How does ID explain the bacterial flagellum? Oh, I forgot, ID is not in the business of answering such pathetic requests…

Must be hard to be communications director with so many ID proponents making such silly comments.

29 Comments

It has occurred to me that calling ID an “argument from ignorance” (which it certainly is) and “creationism” is somewhat of a contradiction. Granted, the classic creationist positions themselves are mostly “arguments from ignorance,” but while they steadfastly avoid any indication of (or desire to test) how their alternative processes occur, they at least give clear proposals for the whats (e.g. humans and chimps are products of separate abiogenesis events) and whens (e.g. “mainstream science is correct” or “in a 6-day period 6000 years ago”). That provides testable hypotheses, about which YECs and OECs at least occasionally publicly debate, if not actively test, their disagreements.

For years I have been attempting to avoid this semantic trap by urging everyone not to say that ID “is” creationism, but rather that it indirectly promotes the various versions, which it does by exploiting common misconceptions about evolution, and a common tendency for people to infer their favorite fairy tales from the slightest doubt about evolution.

I’ve personally got no problems with ID not being a mechanistic theory. Dembski (who I’m going to assume is the person most in the know) simply claims that the patterns in nature indicate design. If scientists found a numerical property of DNA that absolutely indicated evolution, who would argue with this?

The problem, of course, is that Dembski’s claim has not the slightest backing of any evidence whatsoever, and if I were feeling cynical I might claim that that’s because it’s a dishonest scam to remove a threat to fundamentalist Christianity from schools and have kids indoctrinated with funky brands of creationism instead, but hey, I’m not a cynical person, and I’m sure he has his reasons for making these claims and then refusing to answer simple questions or do research.

I sometimes wonder how much the fans on his blog at UD actually know about what they’re supporting.

Perhaps they are just barking - not necessarily up the wrong tree!

Michael

“once we have eliminated know processes” should probably be “once we have eliminated known processes”

ID as creationism can make predictions, such as the ones about junk DNA, but such predictions are not based on ID's premises but rather on auxiliary hypotheses; the ID claims are unnecessary.

So what is the design inference? Simple: it is the set-theoretic complement of regularity and chance, or in other words, that which remains once we have eliminated known processes. Note that ID provides no positive argument but merely labels our ignorance 'designed'.

The position is even weaker than that. The EF (Dembski's explanatory filter) doesn't eliminate known processes. Take ID's terms at face value: fine, life is designed, by natural selection. What coherent response does ID have to this? Vague accusations of equivocation or inconsistency on the terms "intelligent" and "design," proving again that ID is at root nothing more than a word game, easily won by invoking "cdesign proponentsists."

PvM Wrote:

So why would ID use such equivocating language?

Why, indeed. Why would Dembski use the word “complex” to describe things as simple as a rectangular monolith?

Because equivocation is a handy rhetorical device, that's why. For instance, you can use it to falsely imply endorsement of your ideas by others. Both Leslie Orgel and Paul Davies used the term "specified complexity" before Dembski did, as Dembski has reminded us several times. But neither Orgel nor Davies was referring to Dembskian complexity, a fact that Dembski fails to mention.

And equivocation is a great tool for pseudologic. For example, Dembski reasons:

Dembski Wrote:

In every instance where the complexity-specification criterion attributes design and where the underlying causal story is known (i.e., where we are not just dealing with circumstantial evidence, but where, as it were, the video camera is running and any putative designer would be caught red-handed), it turns out design actually is present; therefore, design actually is present whenever the complexity-specification criterion attributes design.

But a video camera would catch someone “designing” only according to the normal usage of the word – that is, we would see somebody drawing a schematic or whatever. It would not catch “design” in the Dembskian sense, i.e. the set-theoretic complement of regularity and chance.

And don’t even get me started on his equivocations on the word “chance”.

Why, indeed. Why would Dembski use the word “complex” to describe things as simple as a rectangular monolith?

Also, it’s to avoid, either consciously or unconsciously, the fact that complexity is not the mark of design, rational solutions are. Nothing rationally designed exists in organisms (some candidates could be brought up, but the historical marks and constraints ultimately belie that conclusion). So Dembski changes the criteria, and even the meaning of words, in order to make his pseudoscience.

To be fair, specified unlikely configuration (not complexity per se, and of course “specification” is problematic) could be one possible criterion for detecting design. But it has to be accompanied by other marks, such as rational design (straight lines in many contexts are initial tip-offs), copying of a successful function for an apparent purpose, or novelty. This is why their stupid SETI analogy is so poor, since of course we’d never identify life such as that found on earth as being the product of design, unless it appeared to have been copied for some reason.

So yes, equivocation is handy for pseudoscientists. Even more handy is redefining words for the single reason of obscuring the differences between actually designed devices and biological organisms and their parts. IOW, this is not equivocation in its most usual sense, it is outright lying in order to claim the equivalence of simple designs and the complexity of evolved systems.

Dembski could never make an honest case for the design of life, so he settled for the next best thing, a dishonest case and a lot of obscuring mathematics and false statements that would be bought by the relatively naive targets of his scheming.

Glen D http://geocities.com/interelectromagnetic

I’m reminded of the debate over Thomas Kuhn’s The Structure Of Scientific Revolutions, the 1960s book which said that at any particular time, science was primarily defined by the nature of the society in which it was being done (a point of view which lends itself to interpretations congenial to creationism, albeit mistakenly). Some of the most vitriolic arguments were characterised by what the author described as misunderstandings of the book: it certainly proved easy to come to radically different interpretations of what Kuhn actually meant.

It was also the book that launched the dread word ‘paradigm’ into widespread circulation, in a way that directly contributed to the misunderstandings. A linguistic analysis of the book showed that the word had been used with at least 21 subtly different meanings, which unintentionally added a great deal of ambiguity and confusion. In later editions, Kuhn preferred to use “disciplinary matrix” to describe the collection of intellectual tools and ideas connected with a particular mindset: I don’t think he ever set out to create the problems that devilled the book.

But if you do want to create a fuss that looks scientific but resists cardinal disproof, make sure you keep varying the exact way in which you use words. You won’t get anywhere scientifically, but you can always muddy the waters if you don’t like the way an argument’s going.

I don’t want to muddy the waters too much, but I agree with CJO; that ID is an argument from ignorance shows up in that it can’t produce a means to identify design. It consists of pointing to some feature and saying ‘goddidit’ whenever the pointer feels like it.

For example, this:

secondclass Wrote:

Dembskian complexity [Emphasis removed]

I don’t know about Orgel’s and Davies’ attempts to define complexity measures or their uses, but at the end of the day there is no Dembskian complexity. He doesn’t stick to any one definition in his papers and has never, AFAIK, been able to present any example or result. It is pseudomathematics.

Mark Chu-Carroll has a revealing exercise in which he freezes the most likely definition from Dembski’s ‘efforts’ and extracts its real meaning. He concludes, rightly IMO, that it is contradictory.

[As an information-theoretic measure, “specified” will mean low information content (compressibility, or “simplicity”) and “complexity” will mean high information content (non-compressibility). Measure information, Dembski’s catchphrase, in this way and you are frakked.

And the fact that it collapses ties into PvM’s observation that the design inference is trying to pick out something that isn’t there. This is why Dembski has to waffle in his definitions; otherwise it becomes too obvious that he is failing.]
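The compressibility half of that definition is easy to see empirically. A hedged illustration (zlib stands in here for an ideal compressor, which is only an approximation): a patterned, "specified" string compresses to almost nothing, while random bytes, maximally "complex" in the information-theoretic sense, barely compress at all.

```python
import os
import zlib

patterned = b"ABAB" * 1000     # highly "specified": low information content
random_ish = os.urandom(4000)  # incompressible: high information content

print(len(zlib.compress(patterned)))   # a few dozen bytes
print(len(zlib.compress(random_ish)))  # roughly 4000 bytes, no real savings
```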

Finally, I find it humorous that Crowther implies that scientists and the literate public have OCD, when it is the IDiots who can’t change their tune. Or in Crowther’s case, his crowing.

Heh. Always update before posting. Glen made the same argument as I did, and in no way is my reference to “muddy the waters” connected to Goodwin’s. I hope. :-)

“once we have eliminated know processes” should probably be “once we have eliminated known processes”

But it is an interesting mistake. We could construe the phrase “know processes” to mean “ways of knowing.”

Pigwidgeon wrote:

“If scientists found a numerical property of DNA that absolutely indicated evolution, who would argue with this?”

Well, I don’t know what a “numerical property” is, but I do know of a feature of DNA that “absolutely indicated evolution”.

Retrotransposons (SINEs) are mistakes in genomes that persist through evolutionary time and are shared by the descendants of an affected lineage. They can cause many problems for the species in which they occur, including extinction. They are most reasonably interpreted as plagiarized errors inherited by common descent and cannot be reconciled with any sort of intelligent design hypothesis. There are also many other features of DNA that are only explicable in the light of evolution, but to my mind this is the most definitive example.

As to who would argue with this, so far I would say no one. I have asked the question of many creationists, including Dembski, and all have either agreed that this is strong evidence of common descent or they have ignored the question. (The exception being our friend Mark who says the Bible wins regardless of the evidence just because he says so).

I’ve personally got no problems with ID not being a mechanistic theory.

Sorry to hear it.

Dembski (who I’m going to assume is the person most in the know) simply claims that the patterns in nature indicate design.

Who cares what anyone “simply claims”?

If scientists found a numerical property of DNA that absolutely indicated evolution, who would argue with this?

I have no idea what “a numerical property of DNA that absolutely indicated evolution” might refer to, but if we found one there’s no doubt that creationists of all stripes would argue with it, just as they argue with all the other evidence, much of which nears “absolute indication”, for evolution.

In every instance where the complexity-specification criterion attributes design and where the underlying causal story is known (i.e., where we are not just dealing with circumstantial evidence, but where, as it were, the video camera is running and any putative designer would be caught red-handed), it turns out design actually is present; therefore, design actually is present whenever the complexity-specification criterion attributes design.

This “argument” is wrong on so many levels. Here’s a similar argument:

A: I’ve finally met a Republican, Mary, who isn’t a jerk.
B: That’s impossible.
A: Huh? How could it be impossible?
B: Every Republican you’ve ever met is a jerk.
A: Every one except Mary.
B: So whenever someone’s a Republican, they’re a jerk.
A: That doesn’t follow logically from your previous statement, and it’s contradicted by the existence of Mary.
B: And since Mary’s a Republican, she’s a jerk.
A: Uh, no, she’s not.

This “reasoning” is at the heart of the argument from design, from Paley to Dembski. Whenever we see apparent design, there’s an intelligent designer – well, no, not in the case of biological organisms. Therefore biological organisms, which appear to be designed, are the result of an intelligent designer – well, no, that’s a really really really stupid case of petitio principii plus hasty generalization.

As to who would argue with this, so far I would say no one.

How incredibly naive. You’ve already mentioned Mark Hausam. Other creationists will argue that evolution is supposed to be survival of the fittest, and according to evolution bad traits are weeded out, so the existence of SINEs disproves evolution. This is so blatantly obviously the sort of argument that creationists would make that I really have to wonder how you could overlook it. Just for yucks, after typing that I went and googled creationists argue against SINEs, and got http://www.talkorigins.org/faqs/molgen/ which lists eleven arguments by creationists; right there at number 3 is “If all these sequences were really nonfunctional, they would have been eliminated over evolutionary time”.

Forget “know thine enemy” – at least have a bloody clue about thine enemy.

This is illustrative:

A creationist interpretation of pseudogenes offered by Woodmorappe is that some pseudogenes may be “the result of degenerative changes in living organisms since the Fall.” This interpretation seems plausible, and–if we ignore the “Fall” part–not very different from the evolutionary idea that pseudogenes arise by random genetic accidents. However, this interpretation completely ignores the fact that many pseudogenes are shared between apes and humans, located in the same positions and sharing the same genetic defects, apparently the result of the same genetic accident or “degenerative change” in a common ancestor. (If these shared pseudogenes arose after the “Fall” as suggested by Woodmorappe, did the “Fall” perhaps occur before man diverged from the apes?)

Who would argue against the claim that SINEs etc. are only explicable in the light of evolution? Any random creationist. How would they argue it? By ignoring facts and reason, shifting goalposts, prevaricating, confabulating, misrepresenting… Duh.

I don’t think he ever set out to create the problems that devilled the book.

Yes, well, that’s some indication of the quality of his thought. Kuhn’s biggest contribution was, I think, to spur Lakatos to produce a far better analysis.

straight lines in many contexts are initial tip-offs

This page offers a hint as to why, but I’m too lazy to explain it, other than to say that artifacts tell us a lot about the tools and mechanisms available to the creator, and it’s even harder to make straight lines out of fractals than it is from circular motion.

straight lines in many contexts are initial tip-offs

Yeah, but there’s nothing special about straight lines, ya know.

The primary reason we human designers use them is that they’re easy to calculate and straightforward to manufacture. That is, they’re cheap and fast.

But they seldom represent an optimal solution, and any device where it’s really critical to maximize performance or minimize material weight (think things like aircraft and artificial body parts) typically has structural elements that form graceful curves following the load paths.

It’s only recently, with better computer modeling and CNC production techniques, that it’s been cost-effective for human designers to turn out the kind of structural optimization that mother nature has been providing for eons.

Popper's Ghost Wrote:

it’s even harder to make straight lines out of fractals than it is from circular motion.

PG, that was a good find. (Though the actual constraint is that the circular motion is constant or at least not reversing, otherwise it is easy.) It reminded me of the work with asymmetric cog wheels, also leading to some beautiful math.

The connection to NP-completeness was especially illuminating - that paper was terrific, albeit not in a published state. Not that I am a QM expert, but I have never seen anyone use the time-varying Schrödinger equation to such good effect before.

For example, who knew that the uncertainty principle can explain why QM systems can form bound states in potentials where classical systems would not be trapped? Not me in any case. So, another bookmarked site.

Btw, that paper seems to show why it is preposterous when creationists claim that adaptations are optimal rather than merely adequate. The ‘mechanical’ downhill principle would be an equivalent formulation AFAIU - and it must fail (or NP = P), as the author shows in multiple proofs.

PG, that was a good find.

It wasn’t exactly a “find” – I’ve been familiar with the linkage problem for many moons, long before one could just drop search terms into google. :-)

(Though the actual constraint is that the circular motion is constant or at least not reversing, otherwise it is easy.)

I’m not understanding you. It’s a mechanical linkage; you can’t straighten out the path by changing the rate or going backwards. If it’s easy, what’s your solution?

The connection to NP-completeness was especially illuminating - that paper was terrific albeit not in a published state.

Ah yes, the “Plane mechanisms and the ‘downhill principle’” link – that was a find. :-) I haven’t (and won’t) read the whole thing, but it reminds me of a paper I read a while back that showed that Steiner trees (NP-complete) can’t be solved by soap bubbles because they can only achieve local, not global minima.

Yeah, but there’s nothing special about straight lines, ya know.

Other than that they give the shortest path between two points.

That is, they’re cheap and fast.

So they optimize cost and speed of production.

But they seldom represent an optimal solution

Only if you’re trying to optimize cost or speed of production or the distance between two points. They’re also good when you want rigidity, but I guess that’s “seldom”.

Looking at my bookshelf, I’m also reminded that they’re good for stacking and efficient packing. And then there are the drawers in my desk … I suppose they could be curved; they wouldn’t take that much more room, although the contents would tend to move around a bit, not staying level and all …

Only if you’re trying to optimize cost or speed of production or the distance between two points.

Hopefully, the FSM (my designer of choice) had none of these goals in mind when she drafted me with Her Noodly Appendage.

Popper’s Ghost:

Sorry, the usual weekend hiatus happened due to more important weekend social obligations.

Popper's Ghost Wrote:

It wasn’t exactly a “find”

Agreed. It was a find for me. ;-)

Popper's Ghost Wrote:

you can’t straighten out the path by changing the rate or going backwards.

True, if a linkage is implied. With slippage you can extract linear forcing from a straight moving axis (analogous to the famous wheel).

Popper's Ghost Wrote:

but it reminds me of a paper I read a while back that showed that Steiner trees (NP-complete) can’t be solved by soap bubbles because they can only achieve local, not global minima.

Exactly, Scott Aaronson’s paper I believe. IIRC, more exactly, they can achieve global minima but only haphazardly - you cannot be sure of reaching the global minimum, nor predict a convergence time, since physical systems get ‘confused’ or stuck by local minima.

But that is enough to prevent a solution in the P sense. (And, incidentally, it looks a lot like the usual situation where heuristics can solve NP-complete problems. Apparently you can get good approximations but not exact solutions quickly, plus a few cases where convergence is exponential.)

True, if a linkage is implied. With slippage you can extract linear forcing from a straight moving axis (analogous to the famous wheel).

Um, the trick is to get straight lines without using any straight lines. :-) Note that the given linkage solution doesn’t require any straight lines, just rigidly joined pivot points.

Exactly, Scott Aaronson’s paper I believe.

Yes, that’s the one.

IIRC more exactly they can achieve global minima but haphazardly - you will not be sure of the global minima nor predict a convergence time since physical systems gets ‘confused’ or stuck by local minima.

Well yes, of course, they might happen to fall into a global minimum, although it rapidly becomes increasingly unlikely as the number of points increases.

And, incidentally, looks a lot like the usual situation when heuristics can solve NP-complete problems. Apparently you can get good approximations but not exact solutions quickly, and a few cases where convergence is exponential.

It doesn’t look like it to me. There are heuristics that can reach exact optimal solutions in the majority of “practical” problems, whereas soap bubbles virtually never produce optimal Steiner trees when there are more than a few points. Heuristics are flexible, variable, ad hoc … whereas soap bubbles employ exactly one unchanging algorithm to every problem. Those who expect soap bubbles to solve a problem have an erroneous intuition that that one physical algorithm is optimal – they don’t see it as a heuristic or approximation, and in that they are right.
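The local-minimum trap that stops a soap film from finding an optimal Steiner tree can be caricatured in a few lines. This is a toy sketch of my own, not anything from Aaronson's paper; the double-well "energy" function and step size are invented for illustration. A strictly downhill process parks in whatever basin it starts in:

```python
def greedy_descent(f, x, step=0.1, iters=1000):
    """Strictly downhill relaxation, like a physical system minimizing
    energy: it can never climb out of the basin it starts in."""
    for _ in range(iters):
        best = min((x - step, x + step), key=f)
        if f(best) >= f(x):
            break  # no downhill neighbor: a (possibly only local) minimum
        x = best
    return x

# Double-well energy: local minimum near x = +1, global minimum near x = -1
f = lambda x: (x**2 - 1)**2 + 0.3 * x

x = greedy_descent(f, 1.5)
print(round(x, 2))     # ~1.0: trapped in the nearer, shallower basin
print(f(x) > f(-1.0))  # True: the global minimum near -1 was never found
```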

I agree with all of the above except the exact solutions. But that isn’t because I have any personal experience or theoretical knowledge in this area; it is a vague memory of either Mark Chu-Carroll’s exposition on Good Math, Bad Math or possibly Scott Aaronson’s.

It is a minor (unsupported) detail though, because we can’t be sure of the convergence rate in any case.

In case you ever return here, Torbjörn … you shouldn’t put any stock in vague recollections. Consider the set of all practical instances of some NP-hard problem that one particular human being will be interested in solving in her lifetime. That set is finite, which means that the entire set can be solved in (large) constant time. Heuristics can be employed to reduce the solving time of such a set, or a subset of such a set, at the expense of the infinity of problems outside the set.

A lot of computational progress has been made as a result of grasping that we finite beings never have to – or do – deal with actual infinities, and so we don’t need to find algorithms that work equally well, or at all, on every member of an infinite set.
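That point can be made concrete with a lookup table. In this sketch (the subset-sum instances are invented for illustration), an exponential-time exact solver is run once over the finite set of instances anyone cares about, after which every query is constant-time:

```python
from itertools import chain, combinations

def subset_sum(nums, target):
    """Brute-force exact decision: does any subset of nums sum to target?"""
    subsets = chain.from_iterable(
        combinations(nums, r) for r in range(len(nums) + 1))
    return any(sum(s) == target for s in subsets)

# The finite set of instances one person will ever care about
practical_instances = [((3, 34, 4, 12, 5, 2), 9), ((1, 2, 3), 7)]

# Precompute once, however long it takes...
table = {inst: subset_sum(*inst) for inst in practical_instances}

# ...then every later query is an O(1) lookup
print(table[((3, 34, 4, 12, 5, 2), 9)])  # True  (4 + 5 = 9)
print(table[((1, 2, 3), 7)])             # False (maximum sum is 6)
```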

Here’s one relevant comment in sci.math.research that came up from a google search:

In many practical applications one *needs* the solutions to NP-hard problems. For example, checking for tautology is a basic operation in logic synthesis; algorithms for doing this appear in “Logic minimization algorithms for VLSI synthesis”, Brayton et al, Kluwer Academic Publishers, 1984.

Although these tautology algorithms are based on some simple-minded heuristics, they yield exact solutions. The aim of the heuristics is to get the procedures to execute in a reasonable amount of time on many reasonable examples.
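The quoted point, heuristics that speed up the search without sacrificing exactness, is also how modern SAT solvers work. A minimal sketch (not the Brayton et al. algorithms; a toy DPLL-style procedure of my own): unit propagation prunes the search tree, yet the yes/no answer remains exact.

```python
def satisfiable(clauses):
    """Exact SAT decision over CNF clauses given as sets of ints
    (positive int = variable, negative int = its negation)."""
    clauses = [set(c) for c in clauses]
    while True:  # unit propagation: a heuristic that never changes the answer
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit, new = units[0], []
        for c in clauses:
            if lit in c:
                continue          # clause already satisfied
            if -lit in c:
                c = c - {-lit}    # literal falsified; drop it
                if not c:
                    return False  # empty clause: contradiction
            new.append(c)
        clauses = new
    if not clauses:
        return True
    lit = next(iter(clauses[0]))  # branch on an arbitrary literal
    return satisfiable(clauses + [{lit}]) or satisfiable(clauses + [{-lit}])

def is_tautology(cnf_of_negation):
    """phi is a tautology iff its negation is unsatisfiable."""
    return not satisfiable(cnf_of_negation)

# not(p or not p) in CNF is {not p}, {p}: unsatisfiable, so p or not p holds
print(is_tautology([{-1}, {1}]))  # True
```

The propagation loop only ever removes work the branching search would have done anyway, which is why the heuristic speeds things up on reasonable inputs without ever making the answer approximate.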

About this Entry

This page contains a single entry by PvM published on July 11, 2007 11:40 PM.

