Vacuity of Intelligent Design


Dembski, apparently unable (or unwilling?) to address the claims and observations by Intelligent Design critics that Intelligent Design is scientifically vacuous, seems to have changed his approach to: well, if ID is scientifically vacuous, then evolutionary science is evidence-free.

All of this is particularly ironic, because Intelligent Design is both evidence-free and scientifically vacuous for the simple reason that it cannot make any useful predictions: the design inference is based on a gap argument, also known as an argument from ignorance.

Dembski, via one of his ‘colleagues’, asks the following question:

What are the other vexing questions facing biologists that we are led to believe have already been solved? How about the origin of the information in the first cell? How about the origin of molecular machines? What about Haldane’s dilemma?

Let’s look into these questions in some more detail. Take the origin of information in the first cell: why would Dembski be interested in that, one may wonder? Simple: because science has shown that information can in fact increase in a cell under purely natural processes of regularity and chance. Unable to eliminate chance and regularity, the design inference remains quite powerless. But all hope should not be abandoned; one can always move the origin of ‘information’ to an earlier time in history, such as the ‘first cell’, or if that does not work, to the origin of the universe. This concept is known as ‘front loading’ and merely acknowledges that, given a particular initial condition, natural processes are sufficient to explain its evolution. In other words, Dembski’s move to front loading has made Intelligent Design even more vacuous.

So what about the concept of information? Much has been written on how Intelligent Design defines, and in the eyes of some redefines and muddles, the concept of information. So what is information in ID speak? It’s the negative base-2 logarithm of the probability p: if an event has a probability p, then Dembski defines the information of such an event to be -log2(p). Nothing wrong with that, other than that ID activists confuse the concept of information a la Dembski with how the term is used in science.

So what’s the problem with Dembski’s definition? First of all, how is the probability p calculated? Is it the probability of the event happening under the assumption of a uniform distribution function? Or is it the probability of the event happening under the assumption of a particular chance-and-regularity pathway? Irregardless of which of these definitions is used, there are some major problems. Take the first definition: under it, there is no reason to presume that chance and regularity processes cannot be responsible for the event, for the same reason that ‘design’ is also a possibility. The second definition is more interesting because it shows the vacuity of the design inference: once a particular natural pathway has been shown, the probability of the event becomes close to 1, and thus the amount of ‘information’ in the event drops to zero. In other words, by defining information in this manner, Dembski has all but guaranteed that natural processes cannot generate information. In addition, one may argue that if a designer was involved, then the probability of the event would also be close to 1, and thus the information would also be close to zero. In other words, information as proposed by Dembski is a meaningless concept.

So how does real science deal with the concept of information? A good example is found in the work of Tom Schneider, who uses ‘Shannon information’ to show how natural processes can increase the amount of information in the genome. In other words, there is at least in principle no reason to reject natural processes as being responsible for information in the first cell. Of course, there is also in principle no reason to reject that the first cell was designed. It all comes down to the evidence and to a comparison of the hypotheses generated by science versus the hypotheses generated by Intelligent Design. Fair enough; after all, lacking any such hypotheses from either side, one may at most conclude that ‘we don’t know’. And there is nothing wrong with such a position, at least from a science perspective.
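To make the point about Dembski’s measure concrete, here is a minimal sketch in Python (the probability values are purely illustrative): once a natural pathway raises an event’s probability toward 1, the ‘information’ assigned to it by -log2(p) collapses to zero.

```python
import math

def dembski_information(p):
    """Dembski-style 'information' of an event with probability p: -log2(p)."""
    return -math.log2(p)

# Under a uniform-chance assumption the event looks wildly "informative"...
print(dembski_information(1e-30))        # ~99.7 bits

# ...but once a natural pathway is identified, p approaches 1 and the
# "information" evaporates: the measure punishes successful explanation.
for p in (0.5, 0.9, 0.99, 0.999999):
    print(p, dembski_information(p))     # 1.0, 0.152, 0.0145, ~1.4e-6 bits
```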
From an Intelligent Design perspective, our ignorance should be counted as evidence in favor of something called ‘design’. The question is: why? Because of ‘specification’… So what is specification? Well, according to Dembski, specification in biology is merely ‘function’. But wait a minute: function is exactly that which one would expect to arise under the processes of chance and regularity (Darwin’s theory of evolution), so again, Intelligent Design cannot claim that specification somehow resolves our ignorance in favor of one side or the other.

Which means that we return to hypotheses and the very important question: what hypotheses does Intelligent Design propose to explain a particular system or event that it claims to have been designed? Remember, we have already determined that the design inference by itself is not sufficient, since natural processes such as variation and selection can lead to complex specified information.

In the past, people have in fact asked Dembski exactly this question, and his response is quite helpful in establishing the scientific vacuity of Intelligent Design. Rafe Gutman described a plausible scenario as to how science explains the complement system and asked Dembski to provide an explanation based on Intelligent Design. Dembski responded as follows:

Dembski Wrote:

As for your example, I’m not going to take the bait. You’re asking me to play a game: “Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.” ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.

Link

In other words, not only will ID remain scientifically vacuous but also content-free, as ID cannot match the ‘pathetic level of detail in telling mechanistic stories’ (aka hypotheses).

Let’s see what else we can say about the original question: “How about the origin of molecular machines?”

Again, we can first establish that Intelligent Design provides neither detailed hypotheses nor a scientifically relevant foundation. So how well does science do? Before I answer this question, let me point out that science is in the comfortable position that, from the start, it is competing with ‘we don’t know’. The same of course applies to ID, but as I have shown, ID cannot even really compete with the null hypothesis. So how well does science do in explaining the origin of molecular machines? An often-quoted example of ‘design’ is the bacterial flagellum. Nick Matzke has presented quite a detailed overview of the origin and evolution of the bacterial flagellum. But Matzke not only presented testable hypotheses, he also made some predictions which have recently been shown to be supported by additional data. So a valid question seems to be: what has Intelligent Design contributed to our understanding of the bacterial flagellum? The answer is a ‘shocking’ nothing, nada… Don’t take my word for it; check out the content-free and science-free website Uncommon Descent.

41 Comments

By all means, do check out Uncommonly Dense. But know in advance, you aren’t going to find knowledgeable posts of the kind you find here, nothing resembling “No genes were lost in the making of this whale” or “Jellyfish lack true Hox genes!”. What you will find, however, is lots of comedy, and lots of Jesus.

Actually, -log2(P) is the standard definition for self-information, so Dembski isn’t cheating by using this definition. The problems occur when, as Pim notes, he glosses over his premises for calculating P, or when he defines bizarre spin-offs, like CSI or added information. Dembski is either utterly incompetent or deliberately obfuscatory with regard to information theory.

Dude, you used irregardless. That’s not a word.

Yeah, you know it’s supposed to be “irregardlessly”.

Glen D http://tinyurl.com/b8ykm

Not being all too knowledgeable in information theory I was more interested in how these ideas apply to biology. So I read up to the part where the calculation to find the probability of the evolution of the flagellum included a calculation of the probability of all the proteins forming from random combinations of amino acids. Then I put the book down and backed slowly towards the door.

Ironically, by doing what he’s doing, Dembski demonstrates that ID itself represents the sum of what he thinks mutations must ultimately add to: a complete loss of information. Talk about self-fulfilling prophecy.

ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories.

There may be some merit in commenting further on this matter.

Of course ID is sold to the public and to the educational establishment as a “mechanistic theory”, since designers are typically understood to be making their designs via mechanisms. In fact, Behe and Dembski like to point to biological machines as being like (though they are not) what humans produce in, of course, mechanistic ways.

All of their “design analogies” are only to objects and systems that are made “mechanistically” by “designers”. They have no analogies at all if their “designer” does not produce designs via the mechanistic linkages that humans use. They waffle between trying to suggest that the “designer” is like us, and this other type of claim, which in fact suggests that there is no analogy between our work of design and God’s.

They have to equivocate because there is little that is analogous between our designs and what we see in nature, at least on the detailed level. However, even though it is obvious from various statements, including this one by Dembski, that their “designer” is the God of unknown designs and purposes, they absolutely have to sell ID as something within the realm of investigation, and thus, of mechanism.

If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots.

It would make complete sense to try to ape the scientific method, both in order to attain scientific status, and also because intelligence is something that is considered to be investigable and “mechanistic” in the sense that Dembski is using.

If Dembski were a proper philosopher, he’d know that science only can investigate mechanisms and processes which are theoretically open to “connecting the dots”. If you don’t “connect the dots”, you’re not doing science. While not all dots are able to be connected at the present time (not where human designing is concerned), neuroscience is considered to be a science precisely because it aims to only connect cause and effect where practically possible, and never to simply suppose that intelligence has some “supernatural” component. And if neuroscience runs into phenomena which cannot be investigated because cause and effect cannot be matched up no matter how long and hard it has tried (not in a QM way, either), it will have to humbly admit that science has failed in that area. One would not know if the failure was due to a break in the physical realm, however, or due to practical human limits (actually, it is possible that we would know, since we can sometimes know why we cannot know something, but we can’t count on this happening every time we fail to learn what is desired).

Now of course Dembski is ignorant and obtuse in many ways, so it is true that he thinks that intelligence is “supernatural” or some such thing. So, just as his “debunking of evolution” depends upon beliefs that presuppose that evolution isn’t responsible for the order of life (thus the search space is narrow for Dembski), his use of “intelligence” as a scientific explanation actually involves his faulty and ungrounded belief that intelligence is beyond scientific explanation.

Where he is profoundly ignorant and/or dishonest is in suggesting that such supernatural intelligence would be any kind of a scientific explanation for the human processes of designing. For, if our intelligence was not explainable (at least in theory and intention) by mechanistic connections between cause and effect, human design itself would be beyond scientific comprehension. Thus human designs would not be analogous with any scientifically investigable designer, even if our designs appeared to be similar to the forms of organisms. More explicitly, if the human mind is “supernatural”, design isn’t scientific (or it is only scientific in the barest, stamp-collecting, manner) even with respect to human artifacts (sure, the muscles might still be explained scientifically, but muscle activity is hardly what is meant by the term “design”).

To be fair, we might be able to do some science even if our minds were supernatural, yet finding the line between science and the beyond of human intelligence would presumably be very difficult. And yes, we might be able to “scientifically” assign design to supernatural human intelligence through sheer empiricism, however the connections that work so well throughout physics would be in question due to the intervention of supernatural human intelligence. So there wouldn’t be much, if any, of a science at all about human activities if our intelligence were supernatural. About all we could do is to catalog human creations vs. animal creations vs. divine creations. We’d be back to stamp-collecting, the old “natural history”.

Dembski already assumes that human activities are about as scientific as stamp collecting, and uses this false assumption to analogize to divine activities (without showing that anything is analogous–he only shows that organisms are complex, hardly a new idea).

But we know that human design is indeed a scientific explanation, for we have only indications that humans operate according to mechanisms (in Dembski’s sense) that are indeed susceptible to the sorts of cause-effect connections that are required in classical science. In fact, we know that human designs are subject to evolutionary constraint, one reason why human design is susceptible of hypothetical identification and subsequent affirmation or negation.

Dembski wouldn’t write his profoundly ignorant and clueless nonsense if he even began to understand science (assuming that he’s relatively honest about his premises, which he may not be). In some of his writings, he quite explicitly revives Aristotle’s “causes” (though “intelligence” doesn’t actually fit in there comfortably, since Aristotle seemed unable to decide whether or not “mind” is natural), which in fact are useless in today’s science. The closest any of Aristotle’s “causes” come to modern notions of causation is in his “efficient cause”, and even it is hardly on the level of a “physical cause” in Aristotle’s writings.

The fact of the matter is that we go around and around, showing how each bit of ID nonsense fails, while at least several (probably most) IDists are as clueless about science as Dembski is. Intelligence is not a cause, it is only a name that we give for a collection of causes and effects that yields more or less predictable results in at least some situations. Dembski’s absurd presuppositions regarding the mind are part of his “basis” for denying the science of evolution. He’s “doing science” using ancient metaphysics, not at all using proven methods, and he thinks that his prejudices are superior to those learned in dealing with actual evidence.

The most amazing thing about Dembski is his near-total ignorance with respect to science, especially the core concepts of science. We have never been able to do science using his presumptions about it, and in fact Galileo was instrumental in moving beyond the Aristotle/Aquinas notion of “causality”. The ID guys are not only treading in the footprints of Galileo’s persecutors (including the “scientists” who gunned for Galileo), they actually have largely the same faulty conception of science that pre-Galilean Europe had.

Or to put it another way: Dembski would strenuously object to the use of his own “methods” to convict him in a court of law. He would demand that the “dots be connected” before he was forced to pay out a huge sum of money, or chucked into prison. In religion, he is content with quite different standards, and he wishes to force his religious apologetics into science and into science teaching.

Glen D http://tinyurl.com/b8ykm

I do love it when IDists shoot themselves in the faces by saying “not a mechanistic theory.” Scientific Theories are specifically mechanistic. Describing MECHANISMS and PATHWAYS is one of the key bits that distinguishes a Theory from a simple statement of generalized relationships. Theories describe them (not in the Dan Brown use of “description” either), explain them, and show you how to make testable predictions. Making predictions is one of the key things that ID lacks.

Ironically, what does ID use to appeal to the validity of the Design concept? Archeology. Supposedly archeologists use design inferences all the time. Why is it, then, that archeologists use the idea of a designer to make predictions, explain relationships, and propose mechanistic pathways? By looking carefully at a certain occurrence of a stone tool, it is often possible to predict what sort of techniques were used to make it. ID can offer no such claim in regards to biological systems.

Archeology can also use patterns of design techniques to build up a framework, a history of cultural exchange and migration. Finding stone points of certain designs in the pre-Columbian New World helps us identify Old World groups that may have made the migration, approximately when they arrived, and when they were displaced by or made trade with a competing society with different techniques of stonecrafting. You can selectively eliminate some possibilities and favor others as you look at the evidence for design in human tools. Archeologists and anthropologists use “design inferences” to make very testable predictions in a step-wise, mechanistic fashion. Not only that, they are able to draw very useful conclusions about the designers, and how they operated. If IDists are really using the same sort of technique, I predict that they should be able to do the same.

Chris Hyland Wrote:

Not being all too knowledgeable in information theory I was more interested in how these ideas apply to biology.

I do recommend Dawkins for that. Here’s a nice article in which he sneaks in a lesson about information and biology while relating the story of dishonest creationists interviewing him for a video.

So I read up to the part where the calculation to find the probability of the evolution of the flagellum included a calculation of the probability of all the proteins forming from random combinations of amino acids.

All you need to know about that is: 1) You can’t predict the probability of that happening, because it depends on information we don’t have. This is stated outright by a guy named Borel, whom IDists and other Creationists frequently misquote in an attempt to show that you CAN predict them and that they show the improbability of Evolution. Huzzah! 2) Any “random chance” calculation about proteins forming is bogus “irregardless” of (1), because chemistry isn’t just random chance. If it were, nothing would work. So really “random chance” modelling has nothing to do with showing how proteins could form spontaneously. At best, what you will do is establish an absolute maximum improbability that real life will never mimic, because real life has rules and laws, which are selective forces that act upon random chance to reduce the number of likely outcomes.

I think Random Chance Chemists would have a heck of a time trying to identify 92 natural elements that all behaved the same way in every situation: randomly. It’s exactly because chemistry ISN’T just random that we know anything about it.

Glen Wrote:

He’s “doing science” using ancient metaphysics, not at all using proven methods, and he thinks that his prejudices are superior to those learned in dealing with actual evidence.

Indeed. I picture the Discovery Institute as a bunch of pipe-smoking men in armchairs. Don’t bother them with your post-Enlightenment empiricism – they’re too busy debating which Platonic solid is linked with which classical element.

Dembski wrote:

“True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.”

PvM continues:

“In other words, not only will ID remain scientifically vacuous but also content-free, as ID cannot match the ‘pathetic level of detail in telling mechanistic stories’ (aka hypotheses).”

ID, like other “creation sciences”, is generating a hypothesis: they are searching for the sudden appearance event, the poof event, or the lack of a precursor in biological organisms. Various phenomena such as the Cambrian explosion, the origin of life, and the origin of the universe all fit the bill. Unfortunately, the search for fundamental discontinuities is nothing new. ID, like ships of old, seems to be sailing close to the edge.

Delta Pi Gamma (Scientia et Fermentum)

But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.

dis·con·ti·nu·i·ty (dĭs-kŏn′tə-nū′ĭ-tē, -nyū′-) n., pl. -ties.

1. Lack of continuity, logical sequence, or cohesion.
2. A break or gap.
3. Geology. A surface at which seismic wave velocities change.
4. Mathematics. a. A point at which a function is defined but is not continuous. b. A point at which a function is undefined.

steve s notes that one definition of discontinuity is:

Geology. A surface at which seismic wave velocities change.

This suggests a ripe new field of study: Intelligent Design geology (IDG). IDG uses seismic waves to locate small localized faults and interfaces between buried dissimilar rock types. As all hard-rock miners know, rich mineral veins are often located along fault lines, especially between dissimilar rock types. This could lead to the reactivation of long-abandoned mines as well as the discovery of new sources of precious metals, providing needed research funds. Clearly the locations of rich mineral veins are patterned in a complex specified manner, perhaps containing information.

Perhaps I’m starting to resemble definition 1.

Delta Pi Gamma (Scientia et Fermentum)

ID, like other “creation sciences”, is generating a hypothesis: they are searching for the sudden appearance event, the poof event, or the lack of a precursor in biological organisms.

No they’re not. They are doing ZERO RELEVANT RESEARCH. The success of their movement lies with biology’s inability to fill tens of millions of years of the fossil record. Where there is a question mark in our models, there you will find a creationist screaming “AH-HA! TOLD YOU SO, YOU PEDDLER OF ATHEISM, YOU!”

Formulating a hypothesis with which to do research is step two in the scientific method, proceeding from “make an observation”. They cannot make observations which imply supernatural or divine intervention, and so, as PvM posted, they mangle words, misrepresent the data, and jump to irrational conclusions to MAKE it seem like they’re making rational observations about nature. The fact that, after decades as a movement, they’ve still not submitted a scientific hypothesis should be more than enough evidence of just how useful these guys are to science and just what their goals are.


As expected, I was asked to leave Uncommonly Dense by D_mbski himself.

(snicker)

Thanks Bill, that just shows what a great champion for ‘academic freedom’ you are.

(double snicker)

Re “I think Random Chance Chemists would have a heck of a time trying to identify 92 natural elements “

Not to mention 24+ unnatural elements.

Henry

So I read up to the part where the calculation to find the probability of the evolution of the flagellum included a calculation of the probability of all the proteins forming from random combinations of amino acids.

How about all the proteins arriving at the right place at the right time randomly? If indeed it is random and vastly improbable, the Designer must be very busy doing it for every bacterium every time.

Dude, you used irregardless. That’s not a word.

Sure it is, it’s separated by whitespace and consists of a sequence of letters forming syllables.

See Dictionary entry

Usage Note: Irregardless is a word that many mistakenly believe to be correct usage in formal style, when in fact it is used chiefly in nonstandard speech or casual writing. Coined in the United States in the early 20th century, it has met with a blizzard of condemnation for being an improper yoking of irrespective and regardless and for the logical absurdity of combining the negative ir- prefix and -less suffix in a single term. Although one might reasonably argue that it is no different from words with redundant affixes like debone and unravel, it has been considered a blunder for decades and will probably continue to be so.

-log2(P) is the standard definition for self-information, so Dembski isn’t cheating by using this definition.

I know, but his usage of information is highly non-standard, leading to conflation of terms.

All of their “design analogies” are only to objects and systems that are made “mechanistically” by “designers”. They have no analogies at all if their “designer” does not produce designs via the mechanistic linkages that humans use. They waffle between trying to suggest that the “designer” is like us, and this other type of claim, which in fact suggests that there is no analogy between our work of design and God’s.

I’d argue that chance and regularity very accurately capture ‘design’ by humans, for instance. While the details are not always tractable, intelligent design follows the same pathway of regularity and chance. After all, how else would we be able to predict the behavior of groups of people?

ID, like other “creation sciences” is generating a hypothesis, they are searching for the sudden appearance event, the poof event, or the lack of a precursor in biological organisms. Various phenomena such as the Cambrian explosion, the origin of life, and the origin of the universe all fit the bill.

Origin of the universe: yes, likely. The Cambrian explosion took place over tens of millions of years, with known precursors. Origin of life… hard to tell, too few data points.

How about all the proteins arriving at the right place at the right time randomly? If indeed it is random and vastly improbable, the Designer must be very busy doing it for every bacterium every time.

:-) Yeah, ID can hardly be considered to be self-consistent, which is yet another scientific reason to reject it.

The whole argument presented by Dembski about “where did the information come from” is actually much worse than the characterization here. The point is not merely that “natural processes can increase the amount of information in the genome”. Rather, any stochastic process necessarily generates information. No other qualifiers necessary. For anybody actually familiar with information theory (and one presumes Dembski’s target audience must be people who aren’t) his argument is simply baffling; it just makes no sense whatsoever.

Shannon explains this rather well in his original 1948 paper “A Mathematical Theory of Communication” (Google for it and read it - it’s one of the most important papers of the 20th century). I can’t say enough about how insightful it is. For those not familiar, this paper introduced the concept of information theory, particularly in the context of communication systems, and its formulations and arguments are still very regularly referenced today. You can’t go to a conference on, for example, wireless communication systems (my field) without hearing people talk about the “Shannon bound” at length. In brief, Shannon was able to set bounds on the performance of a forward error correction system (such as relied upon by CDs, DVDs, cellphones, communication satellites and so on) decades before the technology existed to implement such systems. All of this relies fundamentally on Shannon’s formulations for information. It’s not often a paper comes along that far ahead of its time.

Dembski prefers to use Kolmogorov-Chaitin complexity measures, rather than Shannon’s information measure. However, the two are essentially different ways of measuring the same thing; Shannon’s formulation is typically used by communication-heads like me, whereas Kolmogorov-Chaitin is typically used by computer science-heads, because they happen to be more useful analytical tools in each case. Nonetheless, they’re essentially isomorphic.
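As a rough illustration of that equivalence (a sketch only: true Kolmogorov-Chaitin complexity is uncomputable, so zlib’s compressed length stands in as a crude proxy), both measures agree that random data carries more information per symbol than regular data:

```python
import math
import os
import zlib
from collections import Counter

def shannon_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy, in bits per byte, from symbol frequencies."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def kc_proxy_bits_per_byte(data: bytes) -> float:
    """Crude Kolmogorov-Chaitin proxy: compressed size in bits per byte."""
    return 8 * len(zlib.compress(data, 9)) / len(data)

random_data = os.urandom(10_000)    # near-maximal entropy, incompressible
regular_data = b"ABAB" * 2_500      # highly regular, highly compressible

for name, d in (("random", random_data), ("regular", regular_data)):
    print(name, round(shannon_bits_per_byte(d), 2),
          round(kc_proxy_bits_per_byte(d), 3))
```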

One important other thing: Shannon defines “information entropy” in his paper, and creationists often try to link this into the second law of thermodynamics and use it to imply that information only ever decreases. A typical rebuttal is to suggest that information entropy and thermodynamic entropy are unconnected, citing the apocryphal story that Shannon picked the term entropy because “nobody understands what it means”. This assertion is clearly false; not only are the two forms of entropy connected, but Shannon was well aware of it at the time (his paper points this out, and references a contemporaneous book on statistical mechanics).

The thing is, the creationist argument is actually backwards; if you apply the second law of thermodynamics to Shannon’s entropy measure (and you can - Shannon’s entropy is linearly related to statistical thermodynamic entropy), you discover the following:

In a closed system, the total amount of information can never decrease

This is counterintuitive, but makes complete sense once you acquire a feel for what information really is. Sadly, IDiots like Dembski appear impervious to understanding the most basic tenets of the theory in which they claim to be “experts”. What a farce.
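One way to get a feel for that counterintuitive claim is a toy simulation (a sketch, not a proof): start a discrete system fully ordered, apply a doubly stochastic transition (the discrete analogue of closed-system dynamics), and watch the Shannon entropy climb and never fall.

```python
import math

def entropy_bits(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def diffuse(p, leak=0.1):
    """One doubly stochastic step: each state leaks 10% of its mass to each
    neighbour (periodic boundary). Such maps never decrease Shannon entropy."""
    n = len(p)
    return [p[i] * (1 - 2 * leak) + leak * (p[i - 1] + p[(i + 1) % n])
            for i in range(n)]

p = [1.0] + [0.0] * 7               # fully ordered start: zero entropy
for t in range(6):
    print(t, round(entropy_bits(p), 3))
    p = diffuse(p)                  # entropy rises toward log2(8) = 3 bits
```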

In a closed system, the total amount of information can never decrease

This is counterintuitive, but makes complete sense once you acquire a feel for what information really is. Sadly, IDiots like Dembski appear impervious to understanding the most basic tenets of the theory in which they claim to be “experts”. What a farce.

Fascinating: I had reached such a conclusion recently, based more on intuition than on information-theoretic foundations, but it makes sense. However, I am still not convinced.

First of all, Dembski’s conservation law of information is clearly not a conservation law; secondly, the concept of information seems a bit more tricky when it comes to the details.

PvM Wrote:
Glen Davidson Wrote:

All of their “design analogies” are only to objects and systems that are made “mechanistically” by “designers”. They have no analogies at all if their “designer” does not produce designs via the mechanistic linkages that humans use. They waffle between trying to suggest that the “designer” is like us, and this other type of claim, which in fact suggests that there is no analogy between our work of design and God’s.

I’d argue that chance and regularity very accurately capture ‘design’ by humans for instance.

Yes.

“Chance” is only a designation that we give to events for which we are not privy to their causal nature (classical causes, of course), but which are considered to be highly mechanistic in the sense meant by Dembski. In this sense, both evolution and human “design” are mechanistic, and indeed, intertwined in “human intelligence”.

My concern here is that “chance” could suggest to some creationists a kind of metaphysical “chance” which is supposed to be the antithesis of “necessity”, while in the scientific sense there is no conflict with the broader concept of “mechanism” in the sort of “chance” I mentioned above.

I did not want to let “chance” be mentioned in relation to my post without pointing back to the importance of detail and mechanism in “chance”.

PvM Wrote:

While the details are not always tractable, intelligent design follows the same pathway of regularity and chance.

In classical physics there is no real irreducible “chance”, of course (well, some small bleed-through occurs, true, like the decay of a particular atom killing a cat, either via experiment or that one bit of radiation that causes cancer). Everything is regularity, it’s just that in complex environments the details are messy and often not known.

Mechanism, in the broad sense, remains all-important regardless, with which I assume you would concur.

PvM Wrote:

After all, how else would we be able to predict the behavior of groups of people?

Quite, and we often find evolutionary constraints to be probable causal mechanisms for such predictability. After all, why else would social animals end up mirroring the actions of their fellows?

Glen D http://tinyurl.com/b8ykm

As secondclass notes, -log2(p) is Shannon’s self-information. Usually you talk about 8 (say) states needing 3 (log2 8) bits of Shannon information to specify one of them. If you talk about a chance of 1/8 vs a 7/8 chance of being in an alternative state, log2(1/8) = -3, so -log2(1/8) = 3 = the information needed to specify the state with probability 1/8.
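Spelled out (a trivial sketch of the arithmetic above):

```python
import math

# Specifying one symbol out of 8 equiprobable states takes log2(8) = 3 bits...
print(math.log2(8))          # 3.0

# ...which equals the self-information of an outcome with probability 1/8.
print(-math.log2(1 / 8))     # 3.0
```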

Uniform distributions and probabilities given certain pathways have nothing to do with it - generally, IF there’s a probability of p then that outcome represents -log2(p) information, just like one symbol from an equiprobable set of eight represents 3 bits, so Dembski’s unconventional use of ‘information’ is not his use of -log2(p).

Dembski’s major errors are (1) he prattles on about the probability of things as if their probability is known or well defined, when neither is true and (2) designists try to conflate information with intended information or meaning, which is an easy intuition pump because it is a common use of the word information; in Dembski’s case this conflation is ‘innocently’ achieved by using the word specification to mean function, which implies design to most people. If he were less adept at covering his ass for a scientific audience, and more pitched at just an audience of believers in the manner of Hovind, he probably would have used the term ‘designed information’. If your audience accepts your sleight that information means intended information, then they have accepted your premise of intelligent design.

information: Information is measured as the decrease in uncertainty of a receiver or molecular machine in going from the before state to the after state.

“In spite of this dependence on the coordinate system the entropy concept is as important in the continuous case as the discrete case. This is due to the fact that the derived concepts of information rate and channel capacity depend on the difference of two entropies and this difference does not depend on the coordinate frame, each of the two terms being changed by the same amount.”

— Claude Shannon, A Mathematical Theory of Communication, Part III, section 20, number 3

Excellent post from Bored Huge Krill. Just to be explicit on something he is saying: a random string contains more information than the sort of string we perceive as non-random (which is more obvious if you are looking at KC than Shannon information). Where this is counterintuitive is where we slip into our intuition (which ID is always trying to pump) that information is intended information. For example, most intelligent communications are quite information-redundant and “look designed”. However, this is just the most familiar everyday situation - if you look at a compressed media file where intended information is conveyed near maximum efficiency, then it looks random, which is a prerequisite of transmitting the most information in a given bandwidth.

As far as conservation of information goes, this is another case of ID slipping into “intended conveyed meaning” without too many people noticing. If I generate a meaningful fact, like x=7, I can derive a whole pile of additional facts like 2x=14 which at face value involve more information (more bits to be transmitted) but with proper compression those derived facts really don’t require more information (i.e. they could be derived after transmission). Another helpful example might be that if I need two coordinates to specify something, I still only need the equivalent of two coordinates even when I radically change the coordinate system. Anyhoo, what is really not increased is meaning, and the sleight is that it’s very easy to draw attention to a lot of situations where this can be reformulated as a conserved information, but in general conserved information is a distraction not a law.
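A quick operational check of snaxalotl’s ‘compressed media looks random’ point (a sketch using zlib; ‘looks random’ here just means ‘has no regularity left for a compressor to exploit’):

```python
import zlib

# Redundant, "designed-looking" English text compresses enormously...
text = b"the quick brown fox jumps over the lazy dog " * 200
packed = zlib.compress(text, 9)

# ...but the compressed stream is effectively random to a compressor:
# squeezing it again buys nothing (it typically grows a few header bytes).
print(len(text), len(packed), len(zlib.compress(packed, 9)))
```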

Another issue just occurred to me: Dembski defines three possible explanations - chance, regularity and design - but why should we accept this? Why should design not be reducible to chance and regularity, and why should ID activists be allowed to make the assertion that it is not? In fact I argue that intelligence and chance-and-regularity are equivalent. Taner Edis showed how Gödel’s theorem, when extended to include randomness, basically shows that intelligence and algorithms which include randomness are ‘equivalent’. By defining three distinct possibilities, Dembski has reached a conclusion he has yet to support, namely that intelligence is irreducible to chance and regularity.

Dembski has reached a conclusion he has yet to support, namely that intelligence is irreducible to chance and regularity

I think this is spot on.

snaxalotl Wrote:

if you look at a compressed media file where intended information is conveyed near maximum efficiency, then it looks random, which is a prerequisite of transmitting the most information in a given bandwidth

There I go making the same conflation - random-looking information is a prerequisite of conveying the most MEANING in a given bandwidth.

PvM: I’ve been saying that for some time. It amounts to begging the question against a materialist understanding of intelligence, which is sort of at issue.

Obfuscation is ID’s lease on life, and Dembski is the king of confusion. By overloading existing terms, offering new terms where none are needed, and adding unnecessary clutter to his arguments, Dembski creates the appearance of scientific controversy. If he were to state his arguments clearly and concisely, he would have no followers.

For example, I submit that Dembski’s explanatory filter can be reduced to a single sentence:

Large, unexplained regularities are of supernatural origin.

This sentence is obviously indefensible, and ID proponents would undoubtedly accuse me of mischaracterizing Dembski’s position. So I’ll parse the sentence to show that this is, in fact, an accurate rendering of Dembski’s EF:

Specificity is regularity – no more, no less. Dembski’s examples of specified events most often exhibit regularity in the form of redundant patterns, e.g. a pattern of rocks on the ground that matches a constellation, or a conceptual pattern that matches a physical pattern. Dembski also holds that compressible strings are specified, making the equivalence of regularity and specificity even more obvious.

The “large” condition rules out chance. Small regularities, like flipping 5 heads in a row, can occur by chance, but large regularities, like flipping 500 heads in a row, cannot. A deterministic element is always present in the production of large regularities.

The “unexplained” condition says that this deterministic element is not explained by known natural laws. According to Dembski, this condition rules out necessity, but of course it doesn’t take into account unknown or little-understood natural laws. Dembski tells us that we should disregard such possibilities.

So there you have the explanatory filter laid bare – easily understood, and clearly invalid.
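For scale, the “large” condition in the parse above is doing real work (a trivial sketch):

```python
# Five heads in a row happens all the time; 500 in a row never will by chance.
print(0.5 ** 5)      # 0.03125 -- about 1 in 32
print(0.5 ** 500)    # ~3.05e-151 -- effectively impossible by chance alone
```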

For example, I submit that Dembski’s explanatory filter can be reduced to a single sentence:

Large, unexplained regularities are of supernatural origin.

I’d put it as: “If you can’t explain every detail of it, then goddidit.”

At the risk of pissing off Steve S (grin), here is my standard response to all the Dembski “filter” crapola:

Perhaps the most celebrated of the Intelligent Design “theorists” is William Dembski, a mathematician and theologian. A prolific author, Dembski has written a number of books defending Intelligent Design.

The best-known of his arguments is the “Explanatory Filter”, which is, he claims, a mathematical method of detecting whether or not a particular thing is the product of design. As Dembski himself describes it:

“The key step in formulating Intelligent Design as a scientific theory is to delineate a method for detecting design. Such a method exists, and in fact, we use it implicitly all the time. The method takes the form of a three-stage Explanatory Filter. Given something we think might be designed, we refer it to the filter. If it successfully passes all three stages of the filter, then we are warranted asserting it is designed. Roughly speaking the filter asks three questions and in the following order: (1) Does a law explain it? (2) Does chance explain it? (3) Does design explain it? … I argue that the explanatory filter is a reliable criterion for detecting design. Alternatively, I argue that the Explanatory Filter successfully avoids false positives. Thus whenever the Explanatory Filter attributes design, it does so correctly.”

The most detailed presentation of the Explanatory Filter is in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. In the course of 380 pages, heavily loaded with complex-looking mathematics, Dembski spells out his “explanatory filter”, along with such concepts as “complex specified information” and “the law of conservation of information”. ID enthusiasts lauded Dembski for his “groundbreaking” work; one reviewer hailed Dembski as “The Isaac Newton of Information Theory”, another declared Dembski to be “God’s Mathematician”.

Stripped of all its mathematical gloss, though, Dembski’s “filter” boils down to: “If not law, if not chance, then design.” Unfortunately for IDers, every one of these three steps presents insurmountable problems for the “explanatory filter” and “design theory”.

According to Dembski, the first step of applying his “filter” is:

“At the first stage, the filter determines whether a law can explain the thing in question. Law thrives on replicability, yielding the same result whenever the same antecedent conditions are fulfilled. Clearly, if something can be explained by a law, it better not be attributed to design. Things explainable by a law are therefore eliminated at the first stage of the Explanatory Filter.”

Right away, the filter runs into problems. When Dembski refers to laws that explain the thing in question, does he mean all current explanations that refer to natural laws, or does he mean all possible explanations using natural law? If he means all current explanations, and if ruling out all current explanations therefore means that Intelligent Design is a possibility, then Dembski is simply invoking the centuries-old “god of the gaps” argument — “if we can’t currently explain it, then the designer diddit”.

On the other hand, if Dembski’s filter requires that we rule out all possible explanations that refer to natural laws, then it is difficult to see how anyone could ever get beyond the first step of the filter. How exactly does Dembski propose we be able to rule out, not only all current scientific explanations, but all of the possible ones that might be found in the future? How does Dembski propose to rule out scientific explanations that no one has even thought of yet – ones that can’t be made until more data and evidence is discovered at some time in the future?

Science, of course, is perfectly content to say “we don’t know, we don’t currently have an explanation for this”. Science then moves on to find possible ways to answer the question and uncover an explanation for it. Dembski’s filter, however, completely sidesteps the whole matter of possible explanations that we don’t yet know about, and simply asserts that if we can’t give an explanation now, then we must go on to the second step of the filter:

“Suppose, however, that something we think might be designed cannot be explained by any law. We then proceed to the second stage of the filter. At this stage the filter determines whether the thing in question might not reasonably be expected to occur by chance. What we do is posit a probability distribution, and then find that our observations can reasonably be expected on the basis of that probability distribution. Accordingly, we are warranted attributing the thing in question to chance. And clearly, if something can be explained by reference to chance, it better not be attributed to design. Things explainable by chance are therefore eliminated at the second stage of the Explanatory Filter.”

This is, of course, nothing more than the standard creationist “X is too improbable to have evolved” argument, and it falls victim to the same weaknesses. But, Dembski concludes, if we rule out law and then rule out chance, then we must go to the third step of the “filter”:

“Suppose finally that no law is able to account for the thing in question, and that any plausible probability distribution that might account for it does not render it very likely. Indeed, suppose that any plausible probability distribution that might account for it renders it exceedingly unlikely. In this case we bypass the first two stages of the Explanatory Filter and arrive at the third and final stage. It needs to be stressed that this third and final stage does not automatically yield design – there is still some work to do. Vast improbability only purchases design if, in addition, the thing we are trying to explain is specified. The third stage of the Explanatory Filter therefore presents us with a binary choice: attribute the thing we are trying to explain to design if it is specified; otherwise, attribute it to chance. In the first case, the thing we are trying to explain not only has small probability, but is also specified. In the other, it has small probability, but is unspecified. It is this category of specified things having small probability that reliably signals design. Unspecified things having small probability, on the other hand, are properly attributed to chance.”

But Dembski and the rest of the IDers are completely unable (or unwilling) to give us any objective way to measure “complex specified information”, or how to differentiate “specified” things from nonspecified. He is also unable to tell us who specifies it, when it is specified, where this specified information is stored before it is embodied in a thing, or how the specified design information is turned into an actual thing.

Dembski’s inability to give any sort of objective method of measuring Complex Specified Information does not prevent him, however, from declaring a grand “Law of Conservation of Information”, which states that no natural or chance process can increase the amount of Complex Specified Information in a system. It can only be produced, Dembski says, by an intelligence. Once again, this is just a rehashed version of the decades-old creationist “genetic information can’t increase” argument.

With the Explanatory Filter, Dembski and other IDers are using a tactic that some like to call “The Texas Marksman”. The Texas Marksman walks over to the side of the barn, blasts away randomly, then draws bullseyes around each bullet hole and declares how wonderful it is that he was able to hit every single bullseye. Of course, if his shots had fallen in different places, he would then be declaring how wonderful it is that he hit those marks, instead.

Dembski, it seems, simply wants to assume his conclusion. His “filter” is nothing more than “god of the gaps” (if we can’t explain it, then the Designer must have done it), written with nice fancy impressive-looking mathematical formulas. That suspicion is strengthened when we consider the carefully specified order of the three steps in Dembski’s filter. Why is the sequence of Dembski’s Filter, “rule out law, rule out chance, therefore design”? Why isn’t it “rule out design, rule out law, therefore chance”? Or “rule out law, rule out design, therefore chance”? If Dembski has an objective way to detect or rule out “design”, then why doesn’t he just apply it from the outset? The answer is simple – Dembski has no more way to calculate the “probability” of design than he does the “probability” of law, and therefore simply has no way, none at all whatsoever, to tell what is “designed” and what isn’t. So he wants to dump the burden onto others. Since he can’t demonstrate that anything was designed, he wants to relieve himself of that responsibility, by simply declaring, with suitably impressive mathematics, that the rest of us should just assume that something is designed unless someone can show otherwise. Dembski has conveniently adopted the one sequence of steps in his “filter”, out of all the possible ones, that relieves “design theory” of any need to either propose anything, test anything, or demonstrate anything.

I suspect that isn’t a coincidence.

For example, I submit that Dembski’s explanatory filter can be reduced to a single sentence:

Large, unexplained regularities are of supernatural origin.

I think a better summary is:

“Very improbable arrangements which mean something, are the result of intelligence.”

The failure of ID can be summarised as:

They don’t know how many arrangements ‘mean something’, so they can’t say meaningful ones are unlikely.

I think PvM’s series of Emperor’s Clothes exposes on ID would be even funnier if they were numbered:

Vacuity of ID MCLXIV.

That kind of thing…

I think PvM’s series of Emperor’s Clothes exposes on ID would be even funnier if they were numbered:

Vacuity of ID MCLXIV.

That kind of thing…

Good idea…

Vacuity of ID MCLXIV.

There is, of course, the irony that roman numerals are a very inefficient way of conveying this particular specified information.

Re “roman numerals are a very inefficient way”

Would 1164 be better? :)

Henry

Would 010010001100 be any better?

Would 010010001100 be any better?

Actually, this does illustrate something important about the whole “specified complex information” thing and how counter-intuitive it can be.

Given a binary system, the most efficient way to transmit the value 1146 unambiguously is 10001111010.

11 bits.

The string MCLXIV, on the other hand, was probably actually sent to your computer in ASCII encoding, padded to 8 bit bytes.

0100 1101 (M) 0100 0011 (C ) 0100 1100 (L) 0101 1000 (X) 0100 1001 (I) 0101 0110 (V)

48 bits.

And it was probably displayed on your screen in a matrix of maybe 18 x 60 pixels.

1080 bits.

Now, there are all kinds of reasons to use an inefficient coding system, and that’s fine. But it’s noteworthy that the first system, the one that looks the most like random data, actually encodes the most information in the least space. The last system, though it looks far more “specified” is actually only 1% as efficient at getting this particular chunk of information across - it’s 99% waste.
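Stevaroni’s tallies are easy to reproduce (a sketch; the 18 x 60 pixel figure is his estimate, simply echoed here):

```python
value = 1146

# Minimal unambiguous binary representation: 11 bits.
print(bin(value), value.bit_length())    # 0b10001111010  11

# The six-character Roman numeral string as 8-bit ASCII: 48 bits.
print(len("MCLXIV") * 8)                 # 48

# Rendered as a matrix of roughly 18 x 60 pixels: 1080 bits.
print(18 * 60)                           # 1080
```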

MCLXIV = 1164 MCXLVI = 1146

What’s the word for dyslexia with numbers? ;)

MCLXIV = 1164 MCXLVI = 1146

Maybe this is why I regularly failed Latin.

Ah well, like that old saying - there’s 10 kinds of people in the world. Those who understand binary, and those who don’t.

Henry

About this Entry

This page contains a single entry by PvM published on May 23, 2006 11:31 AM.
