Dembski’s Explanatory Filter Delivers a False Positive

Andrea Bottaro has shown by example the irrelevance of William Dembski’s explanatory filter to archaeology. In this contribution, I want to show how in the real world the explanatory filter can lead to a false positive.

Dembski’s vaunted explanatory filter is no more than a flow chart for singling out events of low probability. If the probability of an event is low enough and if Dembski can discern a pattern, then he concludes that the event must have been the product of design. Dembski admits that the explanatory filter may produce a false negative (fail to infer design where design exists) but claims it will never produce a false positive (infer design where none exists). In this article, I will give a real example wherein the explanatory filter could have yielded a false positive.

According to an article in Science [Quinn Eastman, “Crib Death Exoneration Could Usher in New Gene Tests,” Science, 20 June 2003, p. 1858], a British woman lost three babies to sudden infant death syndrome (SIDS or crib death) within four years. The Crown Prosecution Service applied the explanatory filter as follows: One death is tragic; two deaths are suspicious; three deaths are murder. The woman was prosecuted.

According to the BBC, the rate of SIDS in England and Wales is less than 0.5 death per 1000 live births. In effect, prosecutors reasoned that the probability of three SIDS deaths was 0.0005^3, or approximately 10^-10. They concluded that this probability was so small that a design inference was warranted, and the woman was charged.
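
The arithmetic behind that figure is worth making explicit. A minimal sketch of the prosecutors’ reasoning (the independence assumption, not the multiplication, is the flaw):

```python
# Sketch of the prosecutors' reasoning: treat the three deaths as
# independent events, each occurring at the population-wide SIDS rate.
sids_rate = 0.0005  # fewer than 0.5 deaths per 1000 live births

p_three_independent = sids_rate ** 3
print(f"{p_three_independent:.2e}")  # 1.25e-10, roughly 10^-10
```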

What the prosecutors did not know or ignored was that SIDS may be a genetic disease that runs in families. Indeed, the woman’s grandmother testified that three of her own children died of unexplained causes before the age of 6 weeks (in the 1940s, before SIDS was recognized). A geneticist further testified that SIDS could run in families and suggested two possible mechanisms. The jury used this information - what Dembski calls side information - and acquitted the woman.

In short, the prosecutors applied the explanatory filter and arguably got a false positive, whereas the jury applied side information and drew the opposite conclusion. In the case that Bottaro describes, the archaeologists likewise applied side information and entirely ignored the explanatory filter. The forensic archaeologist, Gary Hurd, argues further that only the side information is relevant [“The Explanatory Filter, Archaeology, and Forensics,” Chapter 8 of Matt Young and Taner Edis, eds., Why Intelligent Design Fails: A Scientific Critique of The New Creationism, Rutgers University Press, New Brunswick, 2004].

The archaeologists and the jury used a probabilistic argument, in a way. Each bit of side information - the wear pattern on the shells, the presence of pigment - strengthened the argument that the shells were beads. The archaeologists would not have been able to prove design without concentrating on the side information. The explanatory filter, a diagnosis of exclusion, was wholly irrelevant or, in the SIDS case, possibly misleading.

Now, I can’t prove that the prosecutors were wrong - only that the filter may well have produced a false positive. In this regard, my example is no different from a favorite example of Dembski’s: the county clerk who “randomly” gave 40 Democrats in 41 elections the top ballot position [No Free Lunch, pp. 55-58]. Dembski has adduced no hard evidence to prove that the clerk cheated; all he can say is that the clerk most probably cheated, that is, that the 40 Democrats were most probably the result of design.

In the same way, I can’t say that the explanatory filter gave a false positive; all I can say is that it may well have given a false positive. But Dembski claims it is immune to false positives. It is not, precisely because you cannot know when you have all the side information, and often the side information is crucial. (Yes, I know, you can take the side information and reevaluate the probabilities to draw the negative inference. But that’s not the point, is it? The point is that Dembski claims the filter to be immune to false positives, and it can’t be, because you never know whether you have all the relevant side information. [See also John S. Wilkins and Wesley R. Elsberry, “The Advantages of Theft over Toil: The Design Inference and Arguing from Ignorance,” Biology and Philosophy, 16, 711-724, 2001.])

Bottaro’s archaeologists inferred design because they knew something about the designers and what they do with beads. The explanatory filter was irrelevant. If it cannot be applied to a simple case such as identifying beads or ferreting out a murderer, how can it be useful for identifying the artifacts of a designer whose habits and intentions are wholly unknown to us?

43 Comments

I don’t have Dembski’s book Intelligent Design here with me at work, and it’s been a few years since I read it, but I thought that the probability limit he proposed was 10^-150. If so, then how does that correlate with 10^-10?

Perfectly. The probability without side information is independent of the probability *with* side information. If, for example, the genetic disease makes it 100% likely that any child born of that mother will die, whereas the overall incidence of the disease is not 0.5 per thousand but 1 per million, and the mother had 25 babies, all of whom died, then that meets the probability level of Dembski’s filter.
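
That hypothetical can be checked directly. A sketch using the numbers above (a 1-in-a-million condition, 25 deaths); the carrier probability stands in for the side information:

```python
# Hypothetical numbers from the comment above: a genetic condition with a
# population incidence of 1 in 10^6 that is lethal to every child of a
# carrier mother, who has 25 babies, all of whom die.
naive = (1e-6) ** 25   # treating the deaths as independent population draws
print(f"{naive:.0e}")  # 1e-150: exactly Dembski's universal bound

# With the side information that the mother is a carrier, each death has
# probability ~1; the relevant probability is then ~1e-6 (that she is a
# carrier at all), not 1e-150.
p_given_carrier = 1.0 ** 25
```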

A better response from a Dembski supporter might be that the prosecutors’ inference was correct, but that they misattributed the identity of the designer. Obviously, the real murderer is the Intelligent Designer, who designed the gene that causes SIDS. I don’t know exactly what to make of that suggestion.

The 1e-150 number is Dembski’s “universal probability bound”. Probabilities smaller than this do not require justification of a “local small probability” bound. But Dembski’s framework for justifying “local small probabilities” is in TDI and has not been retracted, so probabilities larger than 1e-150 need not prove a show-stopper for making a design inference via Dembski’s EF/DI, though Dembski now seems intent upon insisting upon use of his 1e-150 number. Given the arcane nature of actually trying to apply Dembski’s full GCEA framework to any real-world problem, a user given a choice between “justifying a local small probability” and doing something else entirely will likely do something else entirely.

There’s no evidence that I know of that Dembski’s GCEA is used by anyone for any non-trivial application, and that includes Dembski. The best that can be said is that Dembski’s “Sloppy Chance Elimination Argument” might be in use, as it has been since Paley. (See the paper by Jeff Shallit and me for the description of the SCEA.)

Actually, I suspect that Dembski’s Caputo argument is also flawed. What if Caputo used a fair coin, but then changed the game if Republicans won? If Dems won the first toss, stop the game; if Republicans won the first toss, go to 2 out of 3, or later, 3 out of 5, etc. (He may have gotten tired the one time that the Repubs won.) Haven’t figured out the expectations, but it’s an interesting thought on a Thursday afternoon. So what is concluded to be a case of design is more likely to be a selected result from a random process.
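
A quick Monte Carlo makes the point. This is one reading of the scheme (escalate to a fresh best-of-3, then best-of-5, whenever the Republicans win a stage); the exact stage structure is my assumption, not Frank’s:

```python
import random

random.seed(1)

def election(max_stages=3):
    """One ballot draw under the hypothetical scheme: a fair coin, but each
    time the Republicans win a stage, escalate to a fresh best-of-3, then
    best-of-5, before finally conceding the top spot."""
    for stage in range(max_stages):
        flips = 2 * stage + 1                       # best of 1, 3, 5
        dem_wins = sum(random.random() < 0.5 for _ in range(flips))
        if dem_wins > flips // 2:
            return True                             # Democrats get top spot
    return False                                    # Republicans survived

trials = 20000
avg = sum(sum(election() for _ in range(41)) for _ in range(trials)) / trials
# Analytic expectation: 41 * (1 - 0.5**3) = 35.875 Democrat top spots,
# from a perfectly fair coin plus a biased stopping rule.
print(avg)  # close to 35.875
```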

Frank Wrote:

Actually, I suspect that Dembski’s Caputo argument is also flawed. What if Caputo used a fair coin, but then changed the game if Republicans won?

Caputo claimed that he drew the letters out of an urn independently for each election. So Dembski’s calculations are correct relative to that particular hypothesis. But had the method of selecting names been different, then Dembski’s calculation would have been wrong. (It would have also meant that Caputo lied, which is a whole ‘nuther reason to suspect foul play.)

That’s kind of the whole problem in a nutshell; you can use probabilistic arguments to weigh certain hypotheses, but when new information leads to new hypotheses, then all bets are off. The only way Dembski can claim to accurately calculate the probability for the flagellum (or whatever) to evolve is if he knows for sure that no new information will lead to new hypotheses. And of course that’s a silly thing to believe, given that we don’t even know everything about how the flagellum functions, or how much variation there is between different species of bacteria. And what’s worse, the only hypothesis Dembski has considered is one which no biologist would accept anyway: a completely random assembly of amino acids. Even changing this hypothesis slightly to allow for some cooption of existing proteins would alter the calculation dramatically.

I am all but certain that Dembski’s “no false positives” claim means no more than that false positives are attributed to something other than the filter. In this case the side information called the probability calculation into question - and it is that, rather than the filter itself, that would be blamed. This leaves only one possibility to show a false positive - a specified event must be shown to be due to chance, and the probability must be shown to be below an appropriate probability bound. This may be possible but the point of attack would IMHO have to be Dembski’s idea of specification (i.e. Dembski’s idea of specification would have to be shown to be too permissive, allowing too many potential specifications).

Matt,

I think that intelligent design can be supported and defended by scientific principles and sound logic. A living organism is a complex machine that has organization. When an entity shows organization, it is made up of elements with varied functions that contribute to the whole and to the collective functions of the system. Such entities are properly called “complex machines” because they possess structures and processes that are related to, and support, their functions. It has been observed from experience that all such complex machines - those that, like living organisms, adapt means (structures and processes) to ends (functions) - are the result of intelligent design.

Human intelligence has created many complex functional machines - cars, airplanes, computers, TV sets, etc. - which are the direct result of human design and engineering. But never has any such machine arisen from the raw materials in its environment without intelligent guidance. No car has ever assembled itself, no book has ever written itself, no painting has ever painted itself; of the songs of Bob Dylan, of which there are over 500, not one single one ever composed itself without guidance and input from its author. No complex machines of the nature that I’ve described have ever been the product of random, accidental occurrences.

My conclusion, therefore, is that since living organisms are analogous to complex machines, and since complex machines don’t bootstrap themselves into existence without outside input, then living organisms are likewise the product of intelligent design. The evidence for intelligent design is clear from an examination of the systems. The genome contains the instructions that direct the reproduction, growth and development of the organism. Where did these instructions come from? The human eye is a complex machine in which all of the structures and processes and their connections to the rest of the organism are carefully organized in such a way as to mediate the function of seeing. Such adaptation requires intelligent input.

Where do you think I’ve gone wrong?

Charlie Wrote:

No complex machines of the nature that I’ve described have ever been the product of random, accidental occurrences.

Evolutionary biologists would agree with you. The process of selection, which produces the complexities in nature, is not random. Phenotypes do not have random fitnesses, but rather fitnesses determined by the environment they are in. IOW, same phenotype, same environment, same fitness.

The difference between organisms and most human inventions is that organisms self-replicate imperfectly, i.e. Darwin’s famous “descent with modification.” This allows the process of selection to act on populations of organisms to produce the complexities we see. Human inventions, not being imperfect self-replicators, rely on humans to physically do their “descent with modification” and selection in order for them to develop complexities.

However, some of that is starting to change as engineers and researchers begin to use evolutionary methods to design complex systems instead of directly engineering them.

Reed wrote:

“Evolutionary biologists would agree with you. The process of selection, which produces the complexities in nature, is not random. Phenotypes do not have random fitnesses, but rather fitnesses determined by the environment they are in. IOW, same phenotype, same environment, same fitness.”

Natural selection can act only on what is already there. By itself it has no power to create, to design, to assemble, to produce any adaptations or variations. It has no power to initiate new processes or to design new structures. It has no power to assemble these processes and structures into functional systems. This variation is totally the result of random processes such as mutation and genetic drift. All such variation, all such new processes and new structures are the result of totally random, accidental and fortuitous occurrences according to evolutionary theory. Natural selection cannot and does not take the place of intelligent guidance.

Reed wrote:

“The difference between organisms and most human inventions is that organisms self-replicate imperfectly, i.e. Darwin’s famous “descent with modification.” This allows the process of selection to act on populations of organisms to produce the complexities we see. Human inventions, not being imperfect self-replicators, rely on humans to physically do their “descent with modification” and selection in order for them to develop complexities.”

So, according to evolutionary theory, the origin of new processes and structures, and the integration of these processes and structures into a functional system is the result of…imperfect replication? Mistakes and errors? Fortuitous accidents? Nelson’s Law forbids this.

(See http://tinyurl.com/2m7u8)

Charlie Wrote:

By itself it has no power to create, to design, to assemble, to produce any adaptations or variations. It has no power to initiate new processes or to design new structures. It has no power to assemble these processes and structures into functional systems.

This is simply not true. Selection has amazing power to create, design, and assemble. Selection keeps what works and promotes what works better. Two originally independent features can be brought together via selection by the simple fact that both are promoted.

This variation is totally the result of random processes such as mutation and genetic drift. All such variation, all such new processes and new structures are the result of totally random, accidental and fortuitous occurrences according to evolutionary theory.

Which has absolutely no bearing on whether selection is a random, accidental process or not. Selection is deterministic. The evolution of complex features is accomplished by the inclusion of the deterministic force of selection. There is a volume of theory, experiments, and data to support this conclusion dating back to Darwin’s original theory of evolution via natural selection.

Natural selection cannot and does not take the place of intelligent guidance.

Tell that to all the engineers and scientists who are currently using selection in place of their intelligent guidance.

So, according to evolutionary theory, the origin of new processes and structures, and the integration of these processes and structures into a functional system is the result of … imperfect replication? Mistakes and errors? Fortuitous accidents? Nelson’s Law forbids this.

Sorry, Charlie, but inventing your own “law” doesn’t disprove the power of evolutionary mechanisms. However, if you don’t understand that, then I propose to solve that problem by simply stating a new law. This law, I’ll call Rufus’s Law (my pen name). This law states, very simply, COMPLEX SYSTEMS THAT INTEGRATE STRUCTURE AND FUNCTION DO NOT EVER UNDER ANY CIRCUMSTANCES ASSEMBLE THEMSELVES FROM NOTHING, UNLESS THERE IS A DETERMINISTIC FORCE LIKE SELECTION INVOLVED.

Simply put, Rufus’s Law forbids Nelson’s Law.
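
The “deterministic force like selection” in that exchange can be illustrated with a toy cumulative-selection program in the spirit of Dawkins’s well-known “weasel” demonstration. It is a sketch, not a model of biology: the target string, the single-character mutation scheme, and the brood size of 100 are arbitrary choices. The variation is random; keeping the fittest string each generation is not.

```python
import random

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s):
    """Random variation: copy the string with one position randomized."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def fitness(s):
    """Deterministic scoring: number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

# Start from a random string; each generation, keep the best of the
# parent and 100 mutated offspring. Selection keeps what works.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(5000):
    if parent == TARGET:
        break
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)

# The target is reached long before the 5000-generation cap, even though
# pure chance would essentially never produce it.
print(generation, parent)
```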

I agree completely with what Reed wrote but would add 2 things:

Charlie is carrying the analogy too far. The mere fact that living things look like designed machines does not mean they are. (Sand dollars look like wheels; are they therefore wheels?) In fact we have definitive knowledge of artifacts designed by humans and no one else. We do not have definitive knowledge of artifacts designed by gods or space aliens. It is begging the question to say that all complex machines (including biological “machines”) known to us are designed.

Additionally, your statement that random variations cannot take the place of intelligent design is wholly unsupported by either cogent argument or empirical evidence. Taner Edis’s chapter in our edited volume shows, in fact, how randomness can be creative, can get you moving after you are stalled on a weak, local fitness maximum. Evolution is not random, but randomness may play a part.

If you think that organisms are designed, you will need compelling evidence to prove the point. Descent with modification is a fact, and evolution is so far the best and most complete inference we have to explain the fact. If you think evolution is not the best explanation, come up with something demonstrably better.

By the way, my posting was about the explanatory filter, not about intelligent design as such.

Charlie, I have a question about the basic premise of your comment. It looks to me like you are saying in effect “if we don’t count organic life, no complex machines appear in nature.” Isn’t organic life, by definition, essentially natural complex machines? What would a natural complex machine look like if it was not organic life (that is, how would you distinguish between a non-designed natural machine and a designed one if they were made of the same materials, like water, protein, and amino acids)? On what grounds do you exclude organic life from consideration, when saying there are no complex machines in nature? Did you leave a step out of your argument, or did I just miss it?

John

There was a pair of posts on Evolutionblog on 14 April titled “Dembski on Disembodied Designers” and “The Limits of Intelligence” that also addressed this issue.

Reed wrote:

“This is simply not true. Selection has amazing power to create, design, and assemble. Selection keeps what works and promotes what works better. Two originally independent features can be brought together via selection by the simple fact that both are promoted.”

This is the point in the discussion where you have to produce actual evidence that proves what you say is true. Can you cite any empirical evidence, either observational or experimental, that supports your belief in the power of selection to do what you think it can do? Or is it just an unsupported assertion?

Reed wrote:

“Tell that to all the engineers and scientists who are currently using selection in place of their intelligent guidance.”

I assume that you’re referring to genetic algorithms. They are often promoted as proof that darwinian evolution works, but in fact they do not. It looks that way to the uninformed because the process is broken up into two steps, with evolutionists considering only the second step. In the first step, programmers design the algorithm, set the parameters and starting conditions, and determine the validity of the output. To say that they use selection in place of intelligent guidance is simply not true.

Reed wrote:

“Simply put, Rufus’s Law forbids Nelson’s Law”

Not so. Nelson’s Law is totally scientific because it is falsifiable and it relies on observational and experimental data. All you need to do to falsify Nelson’s Law is to demonstrate one complex machine that assembled itself without the help of intelligent guidance. You cannot use living organisms to demonstrate this because their origin is unknown and you cannot demonstrate that they assembled themselves without intelligent input. Your “Law” on the other hand contains a condition that you must prove can occur. In other words, you must demonstrate that selection has the power vested in it by you *before* you can incorporate it into your law as a condition. Are you able to do this?

Matt wrote:

“The mere fact that living things look like designed machines does not mean they are.”

There’s much more involved than appearance. They are complex, highly organized machines because they have numerous processes with different functions and numerous structures with different functions, and all of these processes, structures, and functions support each other and are integrated in such a way as to support the overall function of the organism, which is to maintain the living state.

Matt wrote:

“Additionally, your statement that random variations cannot take the place of intelligent design is wholly unsupported by either cogent argument or empirical evidence.”

Luckily, the burden of proof does not fall to me. You are the one making the claim that random variation and selection has the power to create, so it is your burden to prove it. In a spirit of accommodation, however, I would suggest that the lack of any evidence that it can do what is claimed is sufficient. No complex, highly organized process or structure has ever emerged without intelligent input.

Matt wrote:

“If you think that organisms are designed, you will need compelling evidence to prove the point. Descent with modification is a fact, and evolution is so far the best and most complete inference we have to explain the fact. If you think evolution is not the best explanation, come up with something demonstrably better.”

I don’t need compelling evidence to hypothesize the possibility, which is all I’m doing. I’m responding to those folks who reject this possibility. I don’t know how living systems came to be and neither do you. As scientists, we must consider all the possibilities and intelligent design is one of those possibilities. My hypothesis, yet to be demonstrated, is that intelligent input from outside was necessary and that random, accidental occurrences are insufficient. All I ask is that you not dismiss that possibility until all of the evidence is in.

John wrote:

“Did you leave a step out of your argument, or did I just miss it?”

Living organisms are complex, highly organized systems or biochemical machines by the definition I gave. The problem is, they are the subject of our inquiry and we do not know their origin, so we can’t use them as examples, either for or against the premise. The argument is that since all other such machines are the result of intelligent input, then these machines are probably also the result of intelligent input. It is an argument by analogy, and is only as good as its ability to persuade. But then again, *all* scientific “truths” are the result of analogies, so it’s a pretty good method.

No, Charlie, living organisms are our example, and we do know their origin, about as well as we know anything else in science. The hypothesis that they arose by natural selection has been productive of new, useful hypotheses, and nothing that has been observed after 150 years of investigation has contradicted it. The Intelligent Design hypothesis, on the other hand, has never created a productive research program, nor is it supported by the evidence that’s been observed (e.g., the ever-popular panda’s thumb). Scientifically, ID Is Dead, which is why its proponents are reduced to political lobbying rather than scientific research to try to get it taught …

Response to Charlie’s statement, “There’s much more involved than appearance. They are complex, highly organized machines because they have numerous processes with different functions, they have numerous structures with different functions and all of the processes and structures and functions support each other and are integrated in such a way as to support the overall function of the organism, which is to maintain the living state”:

I’m sorry, but it is still an analogy, and analogies, though they can be instructive, will always fail at some level. Additionally, we have very good ideas how the parts of many complex systems, such as the flagellum, have come together to form an apparently unified whole. Again, see my edited volume with Edis for discussions of flagella and also avian wings.

Also, regarding Charlie’s statement, “My hypothesis, yet to be demonstrated, is that intelligent input from outside was necessary and that random, accidental occurrences are insufficient. All I ask is that you not dismiss that possibility until all of the evidence is in”:

That is a very fair statement and a far cry from your earlier, more strident insistence that natural selection could not have produced the complexity we observe. However, it is also fair to say, as Scott has stated, that the evidence in favor of natural selection is overwhelming and that natural selection has been extremely productive. I would add that the vast majority of scientists, even those who are theists, think that the theory has been adequately proved. Asking us to wait for all the evidence is a ruse; enough of the evidence is in, and it is as conclusive as anything in science.

As I noted in my original posting, the explanatory filter is a diagnosis of exclusion. So, I think, is your argument: If natural selection is not the cause, then it must be intelligent design. A false dichotomy at best. Maybe it is neither.

In addition, when my doctor says, “Well, you don’t have A, you don’t have B, you don’t have C, so we conclude that you have Z,” what she really means is, “I haven’t the foggiest idea what you have. When you have all these symptoms and we get negatives on the objective tests, then we say you have Z.” So saying I have Z is equivalent to saying they don’t know what I have.

In the same way, when you say it “must” be an intelligent designer, you are really saying that you haven’t the foggiest idea.

Charlie, for some insight into the creative power of mutation + natural selection, i suggest you read:

this article about the evolution of an IC system to process lactose, as observed in laboratory conditions.

And this paper about the evolution of complex functions in a set of digital “organisms”.

A key idea to keep in mind as you read them is the “scaffolding” process by which simpler, useful (and therefore promoted by selection) functions are co-opted to perform a different, more complex function.

Another good idea to keep handy is gene duplication. By creating multiple copies of a given gene, these duplications allow mutation+NS to tweak existing genes without disrupting their current functions. Thus new functions can be created from related ones, while the originals are preserved intact.

There is evidence that this happens quite a lot indeed, such as in the development of the immune system cascade.

Also, I’ve not done the calculations, but my guess is that if you were to apply the sort of misguided probability argument that IDers are fond of to the lactose system, you could probably get it under Dembski’s “universal probability bound”.

Just take the lengths of all the proteins involved, calculate the chance of each randomly assembling (ignoring the fact that a protein’s function is determined by its shape rather than its exact chemical composition, so that many chemical compositions can accomplish the same job), and multiply all those probabilities together, as each protein is necessary for the process to function. Presto, we have an incredibly improbable event on our hands.
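
Written out, the recipe looks like this. The protein lengths below are invented placeholders, not real flagellar components, and the uniform 1-in-20 draw per residue is precisely the misguided assumption:

```python
import math

# Hypothetical protein lengths, for illustration only.
protein_lengths = [450, 320, 280, 600, 150]

# Naive recipe: every residue is an independent 1-in-20 amino-acid draw,
# and every protein must appear, so multiply everything together (working
# in log10 to avoid floating-point underflow).
log10_p = sum(n * math.log10(1 / 20) for n in protein_lengths)
print(round(log10_p))  # -2342: absurdly far below the 10^-150 bound
```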

For making the sorts of calculations Brian mentions, please have a look at my Finite Improbability Calculator.

Wesley

I think we ought to stop running from the word “designed”. When IDers and creationists use the word, they have in mind the implicit prepositional phrase “by an intelligent entity”. However, they don’t really mean just a blueprint or a CAD drawing. What they mean is that the object is a manufactured artifact. That is, it is artificially made and placed in the environment where we find it.

Now, I propose that instead we acknowledge that organisms are “designed”, but that we add the phrase “by Darwinian selection”. Because that is what Darwinian selection is, an algorithm to get design. Organisms are designed, but they are not manufactured.

Sorry, my previous e-mail address is one I no longer use. The one attached now is the correct one.

Matt wrote:

“I’m sorry, but it is still an analogy, and analogies, though they can be instructive, will always fail at some level.”

All of science is based on inductive logic or reasoning, and analogy is the simplest form of inductive logic. There is no certainty in science; we can only say what is most likely. And analogies are an important tool for completing that task. I don’t see why you say that analogies always fail at some level. Perhaps an example would help?

Matt wrote:

“Additionally, we have very good ideas how the parts of many complex systems, such as the flagellum, have come together to form an apparently unified whole.”

The only thing you have wrt the bacterial flagellum is that some parts are related to parts of the type III protein secretion system. None of the explanations I have read say how these parts may have been organized into a functional system with many structures and processes. The word “arose” is used a lot!

Matt wrote:

“That is a very fair statement and a far cry from your earlier, more strident insistence that natural selection could not have produced the complexity we observe.”

I tend to differentiate between what I believe and what I can prove. I still hold strongly to the view that natural selection is incapable of doing what you say it can do and I firmly believe that some form of intelligent input was needed although clearly, I cannot provide compelling empirical evidence to support this view.

Matt wrote:

“…the evidence in favor of natural selection is overwhelming and that natural selection has been extremely productive…”

Again, this is the part in the discussion where you must produce actual evidence to support this contention. I see that natural selection can affect the frequencies of genes in populations, but I see no evidence that it has the ability to create and assemble new processes and structures.

Matt wrote:

“In the same way, when you say it “must” be an intelligent designer, you are really saying that you haven’t the foggiest idea.”

Exactly correct. I haven’t the foggiest idea…and neither do you.

“The archaeologists and the jury used a probabilistic argument, in a way. Each bit of side information - the wear pattern on the shells, the presence of pigment - strengthened the argument that the shells were beads. The archaeologists would not have been able to prove design without concentrating on the side information. The explanatory filter, a diagnosis of exclusion, was wholly irrelevant or, in the SIDS case, possibly misleading.”

I submit that the reason there was a false positive is that Dembski is using a fallacious means to determine if an object is an artifact. Dembski looks ONLY at the object. Instead, when we decide that an object is a manufactured artifact, we look at both the object AND the environment. Only when there is no process in the environment that will produce the object do we decide that intelligence and manufacture is involved. The environment is what Dembski calls “side information”. But it is not just off to the “side”; it is essential in determining if the object is manufactured.

In the case of the shell beads, it is the presence of pigment and the wear pattern that leads us to conclude manufacture, precisely because there is no process in the environment that could produce those.

The problem for Dembski, of course, is that there is a process in the environment that can produce the designs in organisms: natural or Darwinian selection. So first Dembski has to eliminate Darwinian selection as a cause. Since Darwinian selection can access any design – http://www.cbs.dtu.dk/staff/dave/articles/jtb.pdf – Dembski’s explanatory filter for biological organisms fails.

Matt: “Again, this is the part in the discussion where you must produce actual evidence to support this contention. I see that natural selection can affect the frequencies of genes in populations, but I see no evidence that it has the ability to create and assemble new processes and structures.”

http://www.cbs.dtu.dk/staff/dave/articles/jtb.pdf Make sure you look at the references in that paper. That is the evidence you are looking for.

Paul wrote:

“That is the evidence you are looking for.”

I’m glad you pointed out this paper because it clearly demonstrates the fatal weaknesses of darwinian evolution. First of all, the authors base their conclusions on irreducible complexity. I’ve discussed my concerns about this previously and I do not necessarily support the idea. See:

http://www.pandasthumb.org/pt-archi[…]/000001.html

But to cut to the real problem, allow me to quote from the paper itself. The author postulates a possible evolutionary sequence from reptilian to mammalian jaws. He states:

“A tympanum evolved on a ventrally located process of the lower jaw.”

In the next step the author states:

“the ability to masticate evolved…”

In step 3, the author states:

“A second joint evolved from secondary bones…”

In step 4, the author states:

“the quadrate and articular became less massive and more loosely connected…”

In step 5, the author states:

“the modification of the quadrate and articular enabled transmission of higher frequency sound, leading ultimately to their conversion into the incus and malleus…”

Now don’t forget, it’s not natural selection that’s causing this to happen. These structures and other changes must occur *before* natural selection can act on them. No clue is given by the author on how the tympanum “evolved”. Apparently, it was the result of random mutations. I don’t really see how this is different from “evolution pixies” or for that matter, magic. No clue is given by the authors as to how the genes that control these processes emerged. They just “evolved” I guess, whatever that means. This paper should not have been published in a scientific journal because it’s nothing more than a story the authors made up to support their belief in darwinian evolution. It offers no observational or experimental evidence as to how these processes and structures “evolved” or were converted into functional systems. They would have us believe that the emergence of these structures and processes and their integration into a highly organized hearing system was somehow the result of purely random processes and fortuitous accidents. I don’t believe it.

Charlie Wrote:

This is the point in the discussion where you have to produce actual evidence that proves what you say is true. Can you cite any empirical evidence, either observational or experimental, that supports your belief in the power of selection to do what you think it can do? Or is it just an unsupported assertion?

Hartl and Clark (1997) Principles of Population Genetics. See the chapter on selection. I will eventually cover selection in my EvoMath series, but I have to do other things first.
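The machinery in that chapter is compact. Here is a minimal sketch of one-locus, two-allele viability selection, the standard textbook recursion; the function name and the fitness values below are illustrative choices of mine, not figures from the book:

```python
# Standard one-locus, two-allele viability selection recursion:
# allele A has frequency p; genotypes AA, Aa, aa have fitnesses
# w11, w12, w22.  The numbers used below are illustrative only.

def next_freq(p, w11, w12, w22):
    """Frequency of allele A after one generation of selection."""
    q = 1.0 - p
    w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22  # mean fitness
    return (p * p * w11 + p * q * w12) / w_bar

p = 0.1  # A starts rare
for gen in range(50):
    p = next_freq(p, w11=1.0, w12=0.9, w22=0.8)  # A is favored

print(p > 0.5)  # True: A has spread well past its starting frequency
```

Iterating the recursion shows exactly the phenomenon everyone in this thread agrees on: under constant viability differences, selection changes allele frequencies generation by generation.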

I assume that you’re referring to genetic algorithms. They are often promoted as proof that darwinian evolution works, but in fact they do not. It looks that way to the uninformed because it is broken up into two steps, with evolutionists only considering the second step. In the first step, programmers design the algorithm, set the parameters and starting conditions, and determine the validity of the output. To say that they use selection in place of intelligent guidance is simply not true.

As someone who has worked with genetic algorithms, genetic programming, and the like, I am confident in saying that programmers do not “design the algorithm.” They design an evolvable system and then set it loose. Natural selection does take the place of intelligent guidance simply because intelligent guidance is not involved in the final result. The second step, as you call it, is the step that involves no guidance, but produces amazing results via descent with modification. This step is the one that disproves your statement. Referring to an earlier step that involves “intelligent guidance” does not change this fact.

Not so. Nelson’s Law is totally scientific because it is falsifiable and it relies on observational and experimental data.

“Nelson’s Law” relies on tautology and false analogy. “Everything that we’ve observed made by mankind did not arise without intelligent input.” Well, no shit, I could have told you that in elementary school. However, you’ve yet to demonstrate that “Nelson’s Law” is applicable beyond human creations and specifically to life itself.

All you need to do to falsify Nelson’s Law is to demonstrate one complex machine that assembled itself without the help of intelligent guidance. You cannot use living organisms to demonstrate this because their origin is unknown and you cannot demonstrate that they assembled themselves without intelligent input.

Umm, biological machines self assemble all the time without intelligent guidance. We also know how this happens. It is a consequence of the physical properties of matter.

Your “Law” on the other hand contains a condition that you must prove can occur. In other words, you must demonstrate that selection has the power vested in it by you *before* you can incorporate it into your law as a condition.

Not so. Rufus’s Law is totally scientific because it is falsifiable and it relies on observational and experimental data. All you need to do to falsify Rufus’s Law is to demonstrate that deterministic forces like selection cannot produce complex features.

Charlie: Apparently, it was the result of random mutations. I don’t really see how this is different from “evolution pixies” or for that matter, magic. No clue is given by the authors as to how the genes that control these processes emerged. They just “evolved” I guess, whatever that means.

Charlie is still confused by what evolutionary theory argues. Random (wrt immediate fitness) mutations and natural selection.

A great transitional series from reptile to mammal can be found here

Synapsid Skulls (gif)

From the paper

The following postulated evolutionary sequence from reptilian to mammalian jaws, for which there is considerable fossil evidence, involves selective advantage at each step (Kermack & Kermack, 1984):

Selective advantage, fossil evidence. Sounds like a good and solid hypothesis. See also This article

Hadrocodium sheds light on evolution of the mammalian middle ear. It is the earliest known taxon that lacks the primitive attachment of the middle ear bones to the mandible but has an enlarged brain vault (suggestive of a large brain) (Fig. 5A). This extends the first appearance of these modern mammalian features back to the Early Jurassic, some 45 million years earlier than the next oldest mammals that have preserved such derived features, such as Triconodon from the Late Jurassic (31–34).

Charlie Wrote:

I assume that you’re referring to genetic algorithms. They are often promoted as proof that darwinian evolution works, but in fact they do not. It looks that way to the uninformed because it is broken up into two steps, with evolutionists only considering the second step. In the first step, programmers design the algorithm, set the parameters and starting conditions, and determine the validity of the output. To say that they use selection in place of intelligent guidance is simply not true.

Some time ago, I wrote this response to the claim that intelligence is somehow infused into the results of genetic algorithms. Show me where I’m wrong, if you can.

Objection: Natural selection simulated on computer produces solutions which are informed by the intelligence that went into the operating system, system software, and evolutionary computation software.

If we take a limited form of evolutionary computation for simplicity’s sake and analyze it, I think that we will come out ahead. Genetic algorithms, as presented by John Holland in 1975, work on a population of fixed-length bit strings. The bit-string representation is generic. The operations which the genetic algorithm performs involve the manipulation of these bit strings, with feedback from an evaluation function.

What are the manipulations on the bit-strings? The GA can copy bit-strings with mutation (change in state of a bit), crossover (production of a new bit-string using parts of two existing bit strings in the population), and a variety of other “genetic” operators. The GA selects bit-strings for reproduction based upon results returned from an evaluation function which is applied against each bit string in the population.

The purpose of the evaluation function is to provide a metric by which the bit-strings can be ranked. The critical point to be grasped is that neither the operations of the GA nor those of the evaluation function need information about the pattern of the end solution. The GA’s operations are completely generic; there are a variety of GA shell tools available for use, including plug-ins for MS Excel spreadsheets. Since the same GA tool may be used for job-shop scheduling in one instance, and oilfield pipeline layout in another, the objection that the intelligence of the GA programmer informed the specific designs that result from its application quite soon appears ludicrous. That a programmer might code a generic GA shell and also happen to somehow infuse it with just the right information to optimize PCB drilling movements might be possible, but to insist that the same programmer managed to infuse specific domain knowledge for each and every application to which his tool is put stretches credulity.

[The system would work just as well with an evaluation function that passed back to the GA the list of bit-strings in rank order and *no* information whatsoever that could be said to give an absolute measure of performance of each bit string. - WRE]

Now, let’s eliminate the evaluation function as a source of domain-specific information. Obviously the evaluation function does give information to the GA, but that information does not give a direction for adaptive change for each bit-string evaluated, but rather just how well each bit-string performed when evaluated. The result passed back to the GA does not give the GA insights like “Toggle bit 9 and swap 20-23 with 49-52”. It merely passes back a scalar number, which when compared to other scalar numbers, forms a ranking of the bit strings. The evaluation function can require very little in the way of domain-specific knowledge. For the PCB drilling application mentioned above, the evaluation function can very simply be instantiated as “return closed path length of the route represented by the input bit-string”, which says nothing at all about what the path looks like, and works for any set of hole coordinates. Because the evaluation function can be generic over cases, again we have the argument that domain-specific information is unavailable here on the same grounds as for the GA operations. While we might be able to conceive of an evaluation function that somehow encapsulated information about a particular solution, for problems like the PCB routing one mentioned it is highly unreasonable to credit that information about all possible PCB route configurations has somehow been instilled into the code.

What’s left? Merely the information content of the initial bit strings in the GA population. Since this is often, if not always, done by filling the bit-strings based upon random numbers, any non-trivial bit representation is highly unlikely to correspond to a final solution state.
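To make the setup described above concrete, here is a minimal generic bit-string GA sketch. This is illustrative only: the function name, parameter values, and the “count ones” evaluation function are my choices for the example, not anything from a particular GA shell. Note that the GA loop sees nothing but the scalar values the evaluation function returns.

```python
import random

random.seed(0)  # make the illustrative run repeatable

def run_ga(evaluate, length=32, pop_size=40, generations=100,
           mutation_rate=0.02):
    """Generic bit-string GA: random initialization, rank-based
    selection, one-point crossover, per-bit mutation.  It knows
    nothing about the problem beyond the scalar `evaluate` returns,
    and no member of the population is protected from mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=evaluate, reverse=True)
        parents = ranked[:pop_size // 2]          # rank-based selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]            # per-bit mutation
            children.append(child)
        pop = children
    return max(pop, key=evaluate)

# Illustrative evaluation function: number of 1 bits ("count ones").
best = run_ga(evaluate=sum)
print(sum(best))  # far above the ~16 ones expected of a random string
```

Swapping in a closed-path-length evaluation function for the PCB drilling case would leave every line of `run_ga` untouched, which is the point: the GA machinery itself carries no domain-specific information.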

The information or designs said to be produced by GA are the contents of the bit-strings at the end of the GA run. It can be confirmed that the end bit-string content differs from the initial bit-string content. It can be demonstrated that the evaluation of the initial bit-string indicates poorer function than the final bit-string. The question which those who object to evolutionary computation via the assertion that intelligence has somehow been infused into the result must answer is this: if intelligence intervenes to shape or produce the final bit-string, *how* does it accomplish that, and *where* did the domain-specific knowledge come from? I’ve already sealed off infusion via the GA, the evaluation function, and the initial bit-strings for “how”. The “where” question poses an extremely serious difficulty for proponents of this assertion, since if the information needed to solve all the problems which a GA can solve is present on every machine which a GA can be run upon, the information capacity of each machine is demonstrably smaller than the information content of all those possible solutions. It is problematic where the information is stored, and even if that information were capable of being stored somehow, there remains the problem of *why* computer designers and programmers, who would be shown by this to be very nearly omniscient, would choose to put all that information into their systems when the vast majority of it is very likely never to be used.

I’ll note that it is entirely possible to point to or construct evolutionary computation examples whose evaluation functions incorporate a known final solution state. I only know of such simulations done for pedagogy. Dawkins’ “weasel” program from “The Blind Watchmaker” is a fine example of this. However, the mere existence of that simulation is not sufficient to show that all evolutionary computation does so.

Wesley,

I hope you’re feeling better. :-)

Wesley wrote:

“Show me where I’m wrong, if you can.”

The problems wrt genetic algorithms have been discussed in detail elsewhere, and I don’t see any point in repeating them. But I will! First of all, the name “genetic algorithm” is misleading because it makes one think that it has something to do with genetics and biology, which it doesn’t. Secondly, there is no evidence that what genetic algorithms model is in fact what happens in nature. Rather than modeling what actually is happening in nature, it models a watered down version of what you *think* might be happening in nature. It doesn’t validate biological evolution by mutation and natural selection as much as it validates your preconceived notion of mutation and selection, which differs from the actual natural process in numerous ways. The link between the model and what actually happens in nature is weak at best. Some of the major disparities include:

1. GA’s only work on one trait. Selection has to act on the many different traits that affect viability at the same time. It’s one thing for the brain to get larger, but at the same time, musculature, nerve supply, blood supply and a host of other traits must follow along.

2. In GA’s the selection coefficient is inordinately high, often 1.0 whereas in nature, selection coefficients far smaller are common.

3. GA’s use inordinately high rates of reproduction, often hundreds of times what is common in nature. Bacteria can double their numbers every 20 minutes or so, but what about whales, or rhinos, or humans?

4. Generation times are inordinately fast. Computers can process many thousands of generations in a microsecond. In nature, the generation times for primates can be 20 or more years.

5. GA’s only model quantitative traits, not qualitative traits. There’s a big difference between increasing complexity and increasing organization. I can generate an infinitely complex structure such as a Mandelbrot set with a simple equation (X(n+1) = X(n)^2 + C) but I can’t write an equation that generates a mousetrap. For that I need an algorithm, which requires insight to produce.

6. GA’s artificially protect those entities selected from further mutations in case nothing better comes down the pipe. The effect of this is to ensure that the “right” outcome is generated since any move in that direction is protected.

7. GA’s use mutation rates that are much higher than those found in nature.

8. GA’s operate on entities that are very small and only do one thing. Real genomes are very large, control hundreds of processes and structures at the same time, and integrate them into a functional system.

9. GA’s would not exist and their output would not exist in the absence of human minds to create them, set their parameters and validate their output. Take human intelligence out of the picture and you have absolutely nothing. These GA’s do not bootstrap themselves into existence and do their work, as evolutionists suppose happened with genomes.

10. GA’s ignore polygeny and pleiotropy, 2 factors that are a basic part of living systems.

11. In order for a GA to work, either the outcome must be known (Dawkins’ Weasel) in advance or it must have selective value at every step of the process. Biological evolution, as described by darwinists, is purposeless and goalless, and functions cannot be evaluated in advance of their appearance.

12. When a GA is written, the author (an intelligent agent) specifies the program’s routines and subroutines and the way in which they will interact. Then the GA is free to investigate which modules to use and how they will interact. You cannot just ignore the fact that human intelligence created these algorithms and they only can do what they’re told to do. In addition, no algorithm can decide on its own validity, nor can another algorithm. This requires insight and insight comes from intelligence.

13. And on, and on, and on…
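For concreteness, the Mandelbrot iteration I mentioned in item 5 is conventionally written X(n+1) = X(n)^2 + C, starting from X(0) = 0. A minimal escape-time sketch (the function name and iteration cap are illustrative choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test for the iteration z -> z**2 + c, starting at
    z = 0.  Points whose orbit stays bounded (|z| <= 2, the standard
    escape criterion) are treated as members of the set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0j))      # True: the orbit of 0 stays at 0
print(in_mandelbrot(1 + 0j))  # False: the orbit 0, 1, 2, 5, ... escapes
```

A two-line recurrence, applied over the complex plane, is all it takes to generate the set’s endless intricacy.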

Thanks to everyone who has weighed in. Just a few replies to comments that were not addressed.

1. Examples of analogies that fail at some level: Atoms in a gas are like billiard balls. Electrons in atoms are like planets. The flagellum is like a machine.

2. I do not think it is generally useful to argue on whom the burden of proof falls. The vast majority of biologists, however, think that the case for evolution is proved. When you try to overthrow an accepted theory, then the burden of proof is on you.

Charlie, however, offers little beyond the argument from personal incredulity, which leads directly to a God-of-the-gaps argument.

3. When I wrote that using the explanatory filter is equivalent to saying, “I haven’t the foggiest idea,” I was referring to an eliminative argument like my doctor’s saying I have Z, even though she doesn’t know exactly what Z is. I do not agree that we haven’t the foggiest idea how organisms evolved. There is a large body of theory and evidence that demonstrate they evolved by natural selection. That is not an eliminative argument with no evidence to back it up.

4. A small quibble regarding Paul Lucas’s suggestion that we admit that organisms are designed, but by natural selection: Young-earth creationists may claim that organisms are manufactured, but intelligent-design neocreationists do not. They say, rather, that God (or some clever space aliens) intervened. I don’t think we should necessarily run away from the term “designed,” but maybe we should not embrace it either. It is, after all, only an analogy to say that natural selection designed an organism. Why not just say that organisms have evolved?

Seems to me that Charlie has ignored Wesley’s challenge. All objections raised seem to be quite irrelevant to the issue. Nice to see Charlie resort to strawmen rather than address the issues raised by Wesley.

Nothing much has changed. Sigh

Charlie,

Thanks for your concern. I’m doing my best to get better.

Let me point out that I was addressing a very specific objection to evolutionary computation.

Objection: Natural selection simulated on computer produces solutions which are informed by the intelligence that went into the operating system, system software, and evolutionary computation software.

I argued that for GAs without “fixed ideal targets” that this objection is baseless. The essential question for those who continue to raise the objection is repeated here:

The question which those who object to evolutionary computation via the assertion that intelligence has somehow been infused into the result must answer is this: if intelligence intervenes to shape or produce the final bit-string, *how* does it accomplish that, and *where* did the domain-specific knowledge come from?

Given that GAs are a well-known technology, and you claim to be familiar with their operation, it seems to me that you should easily be able to answer the question above (note the two parts of it) by describing the mechanism of information transfer and the source of information provided by intelligence that appears in the final bit-string selected as a solution. This is notable by its absence in your response to me.

My response to the objection assumes that the notion of simulation is not a stumbling block to the argument. Of course, intelligence goes into the coding of a simulation. This is true whether the simulation is a pseudo-random number generator, a meteorological prediction system, or a genetic algorithm. The question is not whether it takes intelligence to write a simulation, but rather whether the output of the simulation is infused somehow from that intelligence. It seems clear for the PRNG that the answer is “no”, that for the meteorological prediction system the answer is that the outcome is dependent on the data value of the inputs, and in the case of the GA you have my argument that there is no route for a flow of information from GA programmer to final bit-string.

I’ll comment here on the irrelevant bits of your reply as well.

First, I have a better list of points of disanalogy between a form of evolutionary computation and biological evolution. It has the advantage over yours that its points are all valid. (To be fair, there’s at least one point we both make in our lists.)

For example, the first item in your list claims that GAs evaluate a single trait. I know this is untrue from personal experience. I specified a GA for use in tournament scheduling. The evaluation function as I specified it considered five separate constraints, which should be considered “traits”. The programmers, Bob Long and John Rutkowski, added several more constraints to the original set of five. The application works well for them.

I have another bit of text that argues that the basis of evolutionary computation captures the essential parts of natural selection:

Objection: EC cannot meaningfully be said to be derived from natural selection.

I have elsewhere laid out the formal statements of natural selection and GAs, pointing out the correspondences.

Natural selection is sometimes stated in the form of three premises and a conclusion:

  • Individuals in a population express variation.
  • Certain individual variations are heritable.
  • Certain heritable variations lead to differences in reproductive success.
  • Differences in reproductive success due to heritable variations cause changes in the representation of those variations in the population.

Now, for correspondences to evolutionary computation:

  • All EC methods involve a population of individuals whose data varies from individual to individual.
  • EC methods of “reproduction” mean that information is heritable by default.
  • EC methods apply a differential probability of reproduction based upon ranking of results from an evaluation function.
  • In EC methods, individual representations with better rankings tend to have their patterns better represented in future generations.

These hold despite the long list of points of disanalogy.

Charlie Wagner has some misconceptions about genetic algorithms. Let me describe a few of them off the top of my head. First, a general comment: Charlie seems unable to discriminate between GAs as search and optimization techniques for applied problems, and evolutionary algorithms in general as platforms for studying hypotheses about evolution. The two are quite different, and many of Charlie’s comments derive from treating the former as though they were the latter. In addition, Charlie seems unaware of the relation between models and ‘reality,’ and many of his remarks would rule out the use of any kind of model in science.

Now to some specific points. Charlie’s comments are quoted.

1. GA’s only work on one trait. Selection has to act on the many different traits that affect viability at the same time. It’s one thing for the brain to get larger, but at the same time, musculature, nerve supply, blood supply and a host of other traits must follow along.

False. GA’s routinely “work on” multiple traits simultaneously, and coevolve those traits over generations. The GAs my company uses code for several dozen traits of the artificial agents that we evolve, and the various portions of the digital genomes underlying those traits all co-evolve at the same time. I have no idea where Charlie’s “one trait” notion could come from.

2. In GA’s the selection coefficient is inordinately high, often 1.0 whereas in nature, selection coefficients far smaller are common.

That’s a settable parameter, and setting it too high is actually bad for GAs used in optimization work because it can result in a population getting driven to a local maximum and hung up there. In any case, in an applied context one is using the GA for search and optimization, and an interest there is computational efficiency and speed of getting a satisficing solution. In using a GA as a research platform to study questions relevant to biology one sets parameter values that more closely correspond to biological values.

3. GA’s use inordinately high rates of reproduction, often hundreds of times what is common in nature. Bacteria can double their numbers every 20 minutes or so, but what about whales, or rhinos, or humans?

One can study k-selection versus r-selection in an appropriately built GA; once again, distinguish between the use of GAs to solve applied problems and the uses of evolutionary algorithms to model biological processes.

4. Generation times are inordinately fast. Computers can process many thousands of generations in a microsecond. In nature, the generation times for primates can be 20 or more years.

Um, that’s precisely why computer models of any lengthy natural process are more useful than tracking thousands of generations of elephants: One can run a whole lot of generations within a human researcher’s lifetime! More generally, absolute time is irrelevant here; it’s generational time that’s of interest.

5. GA’s only model quantitative traits, not qualitative traits. There’s a big difference between increasing complexity and increasing organization. I can generate an infinitely complex structure such as a mandelbrot set with a simple equation (Xn = X^2 + C) but I can’t write an equation that generates a mousetrap. For that I need an algorithm, which requires insight to produce.

“Algorithms” can themselves evolve. Download and run avida and watch assembly language programs evolve into irreducibly complex sequences of code that perform complex logic functions. (And you’ll never see a mousetrap reproduce with heritable variation: It’s a lousy analogy.)

6. GA’s artificially protect those entities selected from further mutations in case nothing better comes down the pipe. The effect of this is to ensure that the “right” outcome is generated since any move in that direction is protected.

Factually false. In our GAs, as in most, no member of the population is protected from mutation.

7. GA’s use mutation rates that are much higher than those found in nature.

Not necessarily. In applied optimization tasks that’s sometimes true, but again, that’s an experimenter-controlled parameter. And setting mutation rates too high can lead to catastrophic failure, both in biology and in GAs. To do research on biological hypotheses one can set mutation rates where one wants them for the purposes of study: it’s an independent variable of interest and is under experimental control.

9. GA’s would not exist and their output would not exist in the absence of human minds to create them, set their parameters and validate their output. Take human intelligence out of the picture and you have absolutely nothing. These GA’s do not bootstrap themselves into existence and do their work, as evolutionists suppose happened with genomes.

Irrelevant: Evolutionary algorithms are not used to study abiogenesis.

11. In order for a GA to work, either the outcome must be known (Dawkins’ Weasel) in advance or it must have selective value at every step of the process. Biological evolution, as described by darwinists, is purposeless and goalless, and functions cannot be evaluated in advance of their appearance.

This is mostly incoherent, and where it’s coherent it’s wrong. GAs (and EAs in general) make use of exactly the same kind of information that is available to biological evolution, namely the local topography of the chunk of a fitness landscape that is currently occupied by an evolving population. Neither has global information about distant topography, fitness evaluation is relative to the existing population, and reproductive fitness depends on locally determined behavior as compared with other members of the population. Further, it is routine when analyzing fitness changes through time in an EA evolutionary run to see lineages in a GA or EA that ‘retreat’ in fitness, only to increase their fitness later along some hitherto unexplored ridge of a fitness landscape. See here for an example in an evolutionary algorithm. It is a myth that GAs (and biological evolution) are solely monotonic hillclimbers. Particularly on dynamic fitness landscapes, subpopulations and individual lineages deke and juke, advance and retreat, and in general display pretty herky-jerky behavior.

I sometimes regret Dawkins having published his WEASEL example. Creationists and IDists have locked onto it as though it represents the highest and best of the general class of evolutionary algorithms, when Dawkins himself emphatically disclaimed that even as he was describing the simulation. It seems to represent the peak of knowledge about GAs that IDCs have mastered; witness Dembski bragging about MESA, a child’s toy of a monotonic hillclimber similar to WEASEL, as a research tool.

And on, and on, and on…

RBH

Wesley wrote:

“Let me point out that I was addressing a very specific objection to evolutionary computation.”

I was considering the more general question of the validity of genetic algorithms in modelling biological evolution in nature. Considering the way you parse the question of intelligent input, I would agree with you that in the component that you specify, there is no information transfer from the operating system, system software, and evolutionary computation software to the final solution. But that fact is trivial when considering the other objections that I raised. You cannot just parse out a small piece of the overall system and prove that no intelligent input is required and ignore the rest of the system where intelligent input *is* a factor.

Wesley wrote:

“Of course, intelligence goes into the coding of a simulation. This is true whether the simulation is a pseudo-random number generator, a meteorological prediction system, or a genetic algorithm. The question is not whether it takes intelligence to write a simulation, but rather whether the output of the simulation is infused somehow from that intelligence.”

The fact that one small component of the GA is not intelligently guided does not mean that the entire process is not intelligently guided. These GAs do not bootstrap themselves into existence from nowhere, as you would have us believe about the genome. They are the product of human intelligence. And the final solution generated in the output is only *better* because human intelligence decided it was and made a ruling on its validity. GAs cannot create themselves from nothing, nor can they pass judgement on the “worth” of the final product. Only human intelligence can accomplish this.

Wesley wrote:

“I have another bit of text that argues that the basis of evolutionary computation captures the essential parts of natural selection:”

Evolutionary computing captures the essential parts of your conceptualization of natural selection. I see no evidence that it is a valid model of what happens in nature. In addition, no one I know, certainly not myself, argues that natural selection is not a real phenomenon. I saw it myself in experiments I did in college with Drosophila. I am convinced beyond doubt that natural selection can change the frequency of genes in populations under selective pressure. The Grants demonstrated this quite nicely, as documented in “The Beak of the Finch”. The real issue, the one that has not been established to my satisfaction, is whether these changes in allelic frequency can lead to the emergence of highly organized structures and processes: ones in which multiple structures and processes each support multiple functions, and all are integrated in such a way that they support each other and the overall function of the living organism. Trivial simulations such as WEASEL simply do not address this daunting challenge.
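On the one point everyone in this thread agrees about, that selection changes allele frequencies, the standard textbook recurrence is worth writing down. A minimal sketch of one-locus haploid selection (the fitness values and starting frequency are arbitrary illustration, not data from any real population): each generation, the frequency of allele A is reweighted by its fitness relative to the population mean, p' = p * w_A / mean_w.

```python
# One-locus haploid selection: allele A has a 10% fitness advantage over a.
w_A, w_a = 1.0, 0.9
p = 0.01  # initial frequency of A
history = [p]
for _ in range(200):
    mean_w = p * w_A + (1 - p) * w_a   # population mean fitness
    p = p * w_A / mean_w               # recurrence: p' = p * w_A / mean_w
    history.append(p)
# A rises from 1% to near fixation within a couple hundred generations
```

This deterministic recurrence is the uncontroversial core that both the Drosophila experiments and the Grants’ finch data exemplify; the dispute above is about what such frequency changes can and cannot build, not about whether they happen.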

RBH, I freely admit that some of the points I made were weak and trivial. That’s what happens when it’s late at night and you don’t think about things carefully. In fact, I like Wesley’s list better! That having been said, I fully agree that natural selection is a real phenomenon. The frequencies of genes can change in populations under selective pressure. I also agree with Wesley’s definition of natural selection and its correspondence to evolutionary models and GAs. My problem is that evolution, as it occurs in nature, has produced highly organized structures and processes and integrated them in such a way that they support each other and the overall function of the living system. My point is that mutations and natural selection cannot create these kinds of systems, in which means are adapted to ends, and the GAs that model this cannot produce them either.

“My point is that mutations and natural selection cannot create these kinds of systems”

Charlie, how would you describe evidence that, if it existed, would convince you that mutations and natural selection could create the kinds of complex systems you refer to?

What a coinkidink! TalkOrigins.org just published a new article on genetic algorithms:

http://www.talkorigins.org/faqs/gen[…]/genalg.html

One of the examples used a GA to configure an FPGA to discriminate between two tones. The evolved solution contains an amazingly small number of gates, including five that are not hooked up to the rest of the circuit but are required for it to function!

This certainly sounds like it “created new information.”

Here is the relevant quote from the article:

This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).
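The software side of Thompson’s setup is an ordinary GA; what was unusual is that fitness came from measuring a physical FPGA rather than a simulation, which is how unsimulated physics (the disconnected-but-necessary gates) crept into the solution. Here is a minimal sketch of that outer loop, with a stand-in scoring function in place of the hardware measurement (the genome length, population size, and mutation rate are arbitrary illustration, not Thompson’s actual parameters):

```python
import random

random.seed(2)
GENOME_BITS = 64  # stands in for an FPGA configuration bitstring
HIDDEN = [random.randint(0, 1) for _ in range(GENOME_BITS)]

def score(genome):
    # Stand-in for hardware-in-the-loop evaluation. In the real experiment
    # each bitstring was loaded onto the FPGA and its analogue behaviour
    # measured; here we just reward matching a hidden bit pattern.
    return sum(g == h for g, h in zip(genome, HIDDEN))

pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=score, reverse=True)
    parents = pop[:10]  # elitist truncation selection
    pop = parents + [
        [bit ^ (random.random() < 0.02) for bit in random.choice(parents)]
        for _ in range(30)
    ]
best = max(pop, key=score)
```

Nothing in the loop knows what the evolved configuration “means”; the ranking returned by the measurement is the only channel through which information about the task enters, which is exactly the point at issue in the thread above.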

Adam Marczyk has just put up a new FAQ on genetic algorithms on TalkOrigins. Not incidentally, it identifies (and answers) the actual source of many of Charlie Wagner’s points above, a brief essay on AnswersinGenesis.

RBH

Oops. I see KeithB anticipated me. Ah, well. Better late than never. Or something.

RBH

About this Entry

This page contains a single entry by Matt Young published on April 22, 2004 10:46 AM.

John Maynard Smith was the previous entry in this blog.

Still No Free Lunch from Bill Dembski! is the next entry in this blog.
