Recently in EvoMath Category


As part of the year-end Kitzmas festivities, The Discovery Institute’s PR organ Evolution News and Views re-posted an earlier article titled Following Kitzmiller v. Dover, an Excellent Decade for Intelligent Design.

This uncredited article from September 2015 included the following, which caught my eye:

In fact, the decade since Dover has been an excellent one for ID. Casey Luskin noted some highlights not long ago:… Theoretical peer-reviewed papers taking down alleged computer simulations of evolution, showing that intelligent design is needed to produce new information.

The paper which was linked, hereafter Ewert 2014, is titled “Digital Irreducible Complexity: A Survey of Irreducible Complexity in Computer Simulations”, and was written by Winston Ewert of the Biologic Institute for a 2014 edition of the institute’s open-access journal BIO-Complexity.


Ewert claims that Michael Behe’s concept of “Irreducible Complexity” is a stumbling block for evolutionary algorithms, and that several computer models of the evolution of irreducibly complex structures all fail to falsify Behe’s concept. Ewert examines five models: Lenski’s Avida, Schneider’s Ev, my own Steiner Trees, Sadedin’s Geometric Model, and Thompson’s Digital Ears program.

I won’t speak for the other models, but I can say this about Ewert’s discussion of Steiner solutions to network problems: it’s a massive strawman fallacy, a desperate “bait and switch” in which the problem my algorithm was solving, Steiner networks, was “replaced” with a much simpler problem, Minimum Spanning Trees. This ruse enabled Ewert to launch a (straw) attack on my genetic algorithm for solving Steiner’s problem.
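The distinction matters, and is easy to see in code. Here is a minimal Python sketch (my own illustration, not code from the genetic algorithm or from Ewert's paper): for three terminals at the corners of a unit equilateral triangle, a minimum spanning tree may connect only the given terminals and has length 2, while a Steiner tree is allowed to add extra points, and routing through the triangle's Fermat point (here, its centroid) gives length √3 ≈ 1.732.

```python
from itertools import combinations
from math import dist, sqrt

# Three terminals: an equilateral triangle with unit sides.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]

# Minimum spanning tree on 3 points: keep the two shortest of the three edges.
edges = sorted(dist(a, b) for a, b in combinations(pts, 2))
mst_len = sum(edges[:2])          # 2.0 for a unit equilateral triangle

# Steiner tree: add one extra "Steiner point" (the centroid, which is the
# Fermat point of an equilateral triangle) and connect all terminals to it.
centroid = (sum(x for x, _ in pts) / 3, sum(y for _, y in pts) / 3)
steiner_len = sum(dist(p, centroid) for p in pts)   # sqrt(3) ≈ 1.732

print(f"MST length:     {mst_len:.4f}")
print(f"Steiner length: {steiner_len:.4f}")
```

Genuine Steiner problems require finding both the extra points and the topology connecting them; that is what makes them hard, and what makes minimum spanning trees a much simpler substitute.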

The Steiner Genetic Algorithm was the subject of a heated blog war, the “War of the Weasels,” occurring between Panda’s Thumb and Uncommon Descent during the summer of 2006. It all began with my post of July 5th, 2006, Target? TARGET? We don’t need no stinkin’ Target! It seemed the War of the Weasels ended in the fall of 2006, after Uncommon Descent’s top programmers were unable to out-design the Steiner genetic algorithm during a public design challenge. But with Ewert’s 2014 article, and an earlier 2012 piece in BIO-Complexity by Ewert, Dembski and Marks, it’s clear that no ceasefire exists.

The War of the Weasels is back! More below the fold.

Dembski: “Moving On” from ID



From William Dembski’s new blog, for November 9th:

In the last few years, my focus has switched from ID to education, specifically to advancing freedom through education via technology. All my old stuff on ID is on the present site (it was previously at, which is no more), and can be accessed by clicking on “Design” in the main menu.

I still have a few ID projects in the works, notably second editions of some of my books (e.g., NO FREE LUNCH and THE DESIGN INFERENCE). I regard BEING AS COMMUNION: A METAPHYSICS OF INFORMATION (published 2014) as the best summation of my 23 years focused on ID (the start of that work being my article “Randomness by Design” in NOUS back in 1991).

I’m happy for the years I was able to spend working on ID, but it’s time to move on. I’ll be describing my new endeavors on this new blog.


Last March Tom English and I posted an analysis here at Panda’s Thumb of an argument by William Dembski, Winston Ewert, and Robert Marks. They had argued that evolutionary “search” would not do better than blind search; we showed that their argument proved no such thing.

In response to our analysis here of the Dembski-Ewert-Marks paper, Winston Ewert has replied at Evolution News and Views. As that site does not allow comments, I have finally gotten around to posting a response here (six months late). Tom has now put up a related thread at The Skeptical Zone; I will try to comment in both discussions.

Ewert rather dramatically reveals that Tom and I do not actually disagree with any of the theorems in their paper. And he’s right about that. How did he discover this remarkable fact? Perhaps it was by reading our post, where we said

We’re not going to argue with the details of their mathematics, but instead concentrate on what in evolutionary biology corresponds to such a choice of a search.

or by reading a comment in that thread where I also said:

As theorems they may be mathematically true, but the average poor performance of searches is true only because so many irrelevant and downright crazy searches are included among the set of possible searches.

Ewert is right that we did not question their theorems. Instead we concentrated on what would follow from their theorems. We showed in a simple model that once there are organisms that reproduce, with genotypes that have phenotypes and fitnesses, evolution will find higher fitnesses much more effectively than random guessing. So is it true that having what they call Active Information, embodied in a fitness surface and in a reproducing organism whose genotypes have those fitnesses, requires that there be Design Intervention to set up that system?

The issue is not the correctness of their theorems but, given that they are correct, what flows from them. Dembski, Ewert, and Marks (DEM) may object that they did not say anything about that in their paper.

We don’t think that it is a stretch to say that DEM want their audience to conclude that Design is needed.

Let’s look at what conclusions Dembski, Ewert, and Marks draw from their theorems. There is little or no discussion of this in their paper. Are they trying to persuade us that a Designer has “frontloaded” the Universe with instructions to make our present forms of life? Let’s look at what Dembski and Marks have said about that (below the fold) …

This post is by Joe Felsenstein and Tom English

Back in October, one of us (JF) commented at Panda’s Thumb on William Dembski’s seminar presentation at the University of Chicago, Conservation of Information in Evolutionary Search. In his reply at the Discovery Institute’s Evolution News and Views blog, Dembski pointed out that he had referred to three of his own papers, and that Joe had mentioned only two. He generously characterized Joe’s post as an “argument by misdirection”, the sort of thing magicians do when they are deliberately trying to fool you. (Thanks, how kind).

Dembski is right that Joe did not cite his most recent paper, and that he should have. The paper, “A General Theory of Information Cost Incurred by Successful Search”, by Dembski, Winston Ewert, and Robert J. Marks II (henceforth DEM), defines search differently than do the other papers. However, it does not jibe with the “Seven Components of Search” slide of the presentation (details here). One of us (TE) asked Dembski for technical clarification. He responded only that he simplified for the talk, and stands by the approach of DEM.

Whatever our skills at prestidigitation, we will not try to untangle the differences between the talk and the DEM paper. Rather than guess how Dembski simplified, we will regard the DEM paper as his authoritative source. Studying that paper, we found that:

  1. They address “search” in a space of points. To make this less abstract, and to have an example for discussing evolution, we assume a space of possible genotypes. For example, we may have a stretch of 1000 bases of DNA in a haploid organism, so that the points in the space are all 4^1000 possible sequences.

  2. A “search” generates a sequence of genotypes, and then chooses one of them as the final result. The process is random to some degree, so each genotype has a probability of being the outcome. DEM ultimately describe the search in terms of its results, as a probability distribution on the space of genotypes.

  3. A set of genotypes is designated the “target”. A “search” is said to succeed when its outcome is in the target. Because the outcome is random, the search has some probability of success.

  4. DEM assume that there is a baseline “search” that does not favor any particular “target”. For our space of genotypes, the baseline search generates all outcomes with equal probability. DEM in fact note that on average over all possible searches, the probability of success is the same as if we simply drew randomly (uniformly) from the space of genotypes.

  5. They calculate the “active information” of a “search” by taking the ratio of its probability of success to that of the baseline search, and then taking the logarithm of the ratio. The logarithm is not essential to their argument.

  6. Contrary to what Joe said in his previous post, DEM do not explicitly consider all possible fitness surfaces. He was certainly wrong about that. But as we will show, the situation is even worse than he thought. There are “searches” that go downhill on the fitness surface, ones that go sideways, and ones that pay no attention at all to fitnesses.

  7. If we make a simplified model of a “greedy” uphill-climbing algorithm that looks at the neighboring genotypes in the space, and which prefers to move to a nearby genotype if that genotype has higher fitness than the current one, its search will do a lot better than the baseline search, and thus a lot better than the average over all possible searches. Such processes will be in an extremely small fraction of all of DEM’s possible searches, the small fraction that does a lot better than picking a genotype at random.

  8. So just by having genotypes that have different fitnesses, evolutionary processes will do considerably better than random choice, and will be considered by DEM to use substantial values of Active Information. That is simply a result of having fitnesses, and does not require that a Designer choose the fitness surface. This shows that even a search which is evolution on a white-noise fitness surface is very special by DEM’s standards.

  9. Searches that are like real evolutionary processes do have fitness surfaces. Furthermore, these fitness surfaces are smoother than white-noise surfaces “because physics”. That too increases the probability of success, and by a large amount.

  10. Arguing whether a Designer has acted by setting up the laws of physics themselves is an argument one should have with cosmologists, not with biologists. Evolutionary biologists are concerned with how an evolving system will behave in our present universe, with the laws of physics that we have now. These predispose to fitness surfaces substantially smoother than white-noise surfaces.

  11. Although moving uphill on a fitness surface is helpful to the organism, evolution is not actually a search for a particular small set of target genotypes; it is not only successful when it finds the absolutely most-fit genotypes in the space. We almost certainly do not reach optimal genotypes or phenotypes, and that’s OK. Evolution may not have made us optimal, but it has at least made us fit enough to survive and flourish, and smart enough to be capable of evaluating DEM’s arguments, and seeing that they do not make a case that evolution is a search actively chosen by a Designer.

This is the essence of our argument. It is a lot to consider, so let’s explain this in more detail below:
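To make points 4 through 8 concrete, here is a toy sketch (our framing of the idea, not DEM's code; the string length, fitness function, and target cutoff are all illustrative choices): bitstring genotypes with a "count the 1s" fitness, a target of near-maximal genotypes, a uniform baseline search, and a greedy one-bit-flip climber standing in for an evolving population.

```python
import math, random

random.seed(0)
L = 10                              # bitstring genotypes; 2**L = 1024 of them

def fitness(g):                     # toy fitness: the number of 1s
    return sum(g)

TARGET_MIN = L - 1                  # "target": genotypes with fitness >= 9
p_baseline = (L + 1) / 2 ** L       # uniform draw: C(10,10) + C(10,9) = 11 hits

def hill_climb(steps=100):
    """Greedy single-bit-flip climber from a random starting genotype."""
    g = [random.randint(0, 1) for _ in range(L)]
    for _ in range(steps):
        i = random.randrange(L)
        trial = g[:]
        trial[i] ^= 1
        if fitness(trial) > fitness(g):   # accept only uphill moves
            g = trial
    return g

trials = 200
p_search = sum(fitness(hill_climb()) >= TARGET_MIN for _ in range(trials)) / trials
active_info = math.log2(p_search / p_baseline)

print(f"baseline P(success) = {p_baseline:.4f}")   # ~0.0107
print(f"climber  P(success) = {p_search:.2f}")     # ~1.0
print(f"active information  = {active_info:.1f} bits")
```

The climber succeeds essentially every time, while the uniform baseline succeeds about 1% of the time, so by DEM's bookkeeping the mere presence of a fitness surface supplies several bits of "active information" — no Designer was needed to pick the surface.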

As usual I will pa-troll the comments, and send off-topic comments by our usual trolls, and replies to them, to the Bathroom Wall.

At Jerry Coyne’s bl*g Why Evolution Is True he has a new post calling attention to a web site on The Third Way of Evolution. It was apparently put up last year by James Shapiro, Denis Noble, and Raju Pookottil. It presents statements by 43 people expressing their view that a new Way of Evolution is needed. It has apparently been up for over 8 months, but only recently was mentioned by Denyse O’Leary at Uncommon Descent.

None of these people are, as far as I can tell, creationists. Many are working or retired scientists or engineers. Jerry gives telling analyses of the views of some of the more prominent critics among them, citing his own past demolitions of their views. An interesting point is that all of these people are said to have agreed to being listed on the TWOE website.

A unified statement by 43 people, mostly scientists of some reputation, laying out a new evolutionary synthesis, should attract a lot of attention. However, the Third Way site does not do that. The difficulty is that each of these people seems to march to a different drummer, and in a different direction. They go off over the horizon in different directions, each convinced that theirs is the promising new direction. The common theme is that “The Modern Synthesis is dead, and I have a replacement for it!” But there is no agreement on what the replacement should be.

It is fun reading. Let’s have a thread here. Calling these folks creationists is not helpful; overwhelmingly they simply aren’t creationists. (The Second Way is, Shapiro et al. point out, creationism. To me it is a bit strange to hear creationism cited as a Way of Evolution, when what it actually says is “no way”.)

A very useful activity would be to characterize the views of some of the 43. Are they:

  • Lamarckians?
  • Mutational teleologists?
  • Saltationists?
  • (etc.)

Let’s discuss. I will, as usual, try to vigorously pa-troll the comments and send off-topic comments to the Bathroom Wall. Interventions by our usual creationist trolls and replies to those will go to the BW.

On August 14, William Dembski spoke at the Computations in Science Seminar at the University of Chicago. Was this a sign that Dembski’s arguments for intelligent design were being taken seriously by computational scientists? Did he present new evidence? There was no new evidence, and the invitation seems to have come from Dembski’s Ph.D. advisor Leo Kadanoff. I wasn’t present, and you probably weren’t either, but fortunately we can all view the seminar, as a video of it has been posted here on YouTube.

It turns out that Dembski’s current argument is based on two of his previous papers with Robert Marks (available here and here) so the arguments are not new. They involve considering a simple model of evolution in which we have all possible genotypes, each of which has a fitness. It’s a simple model of evolution moving uphill on a fitness surface. Dembski and Marks argue that substantial evolutionary progress can only be made if the fitness surface is smooth enough, and that setting up a smooth enough fitness surface requires a Designer.

Briefly, here’s why I find their argument unconvincing:

  1. They consider all possible ways that the set of fitnesses can be assigned to the set of genotypes. Almost all of these look like random assignments of fitnesses to genotypes.
  2. Given that there is a random association of genotypes and fitnesses, Dembski is right to assert that it is very hard to make much progress in evolution. The fitness surface is a “white noise” surface that has a vast number of very sharp peaks. Evolution will make progress only until it climbs the nearest peak, and then it will stall. But …
  3. That is a very bad model for real biology, because in that case one mutation is as bad for you as changing all sites in your genome at the same time!
  4. Also, in such a model all parts of the genome interact extremely strongly, much more than they do in real organisms.
  5. Dembski and Marks acknowledge that if the fitness surface is smoother than that, progress can be made.
  6. They then argue that choosing a smooth enough fitness surface out of all possible ways of associating the fitnesses with the genotypes requires a Designer.
  7. But I argue that the ordinary laws of physics actually imply a surface a lot smoother than a random map of sequences to fitnesses. In particular if gene expression is separated in time and space, the genes are much less likely to interact strongly, and the fitness surface will be much smoother than the “white noise” surface.
  8. Dembski and Marks implicitly acknowledge, though perhaps just for the sake of argument, that natural selection can create adaptation. Their argument does not require design to occur once the fitness surface is chosen. It is thus a Theistic Evolution argument rather than one that argues for Design Intervention.

That’s a lot of argument to bite off in one chew. Let’s go into more detail below the fold …
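Points 2 and 7 can be illustrated with a toy comparison (my own sketch with illustrative parameter choices, not code from Dembski and Marks): hill-climbing on a "white-noise" surface, where every genotype gets an independent random fitness, versus on a smooth additive surface, where each site contributes its own amount to fitness.

```python
import random

random.seed(1)
L = 12
N = 2 ** L                         # 4096 genotypes, encoded as integers

# White-noise surface: an independent random fitness for every genotype.
noise = [random.random() for _ in range(N)]

# Smooth (additive) surface: each '1' bit adds its own contribution, so
# neighboring genotypes have similar fitnesses -- no isolated sharp peaks.
weights = [random.random() for _ in range(L)]
def smooth(g):
    return sum(w for i, w in enumerate(weights) if g >> i & 1)

def climb(fit, steps=500):
    """Single-bit-flip hill climbing from a random start; returns final fitness."""
    g = random.randrange(N)
    for _ in range(steps):
        trial = g ^ (1 << random.randrange(L))   # flip one random bit
        if fit(trial) > fit(g):
            g = trial
    return fit(g)

best_noise, best_smooth = max(noise), sum(weights)
runs = 100
succ_noise = sum(climb(noise.__getitem__) == best_noise for _ in range(runs)) / runs
succ_smooth = sum(climb(smooth) == best_smooth for _ in range(runs)) / runs

print(f"P(reach global peak), white-noise surface: {succ_noise:.2f}")
print(f"P(reach global peak), smooth surface:      {succ_smooth:.2f}")
```

On the white-noise surface the climber almost always stalls on one of the many sharp local peaks; on the additive surface every uphill step helps, and the climber reaches the global peak essentially every run.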

New “Reports of the NCSE” is out


Here. I was especially interested in a couple of articles. One, by Lorence G and Barbara J Collins, is More Geological Reasons Noah’s Flood Did Not Happen (pdf). It contains a good discussion of what “uniformitarianism” means in contemporary geology, as distinguished from its 19th century usage.

Another is from James A Shapiro, University of Chicago geneticist, whose ideas about an alleged paradigm shift in evolutionary theory have been severely criticized by (among others) Jerry Coyne, also at the U of Chicago. See

Shapiro’s article in the new RNCSE, however, is an attempted rebuttal of Larry Moran’s scathing RNCSE review of Shapiro’s new book, Evolution: A View from the 21st Century. Moran’s review concluded

Shapiro, like [Richard von] Sternberg, is widely admired in the “intelligent design” community and there’s a good reason for this. This book is highly critical of old-fashioned evolutionary theory (neo-Darwinism) using many of the same silly arguments promoted by the Fellows of the Discovery Institute’s Center for Science and Culture. Those fellows are dead wrong and so is Shapiro.

Fun times.

It has been announced that Robert Sokal died on April 9. I wrote a brief obituary here last autumn for his co-worker Peter Sneath. Together they pioneered the use of clustering algorithms in taxonomy, and argued for the adoption of phenetic methods based on clustering there. While they were ultimately unsuccessful in this, they became founding fathers of work on mathematical clustering, and their book Principles of Numerical Taxonomy was widely noticed and greatly stimulated the development of phylogeny algorithms. A paper by Michener and Sokal (1957) is, as far as I can tell, the first one publishing a numerical phylogeny. His publication of the 1965 paper by Camin and Sokal in Evolution, and a visit he made to the University of Chicago that year, inspired me to start working on phylogeny algorithms.

[Photos: Robert Sokal in 1964 at the International Entomological Congress in London; Bob Sokal, more recently]

Bob’s Stony Brook colleague Michael Bell has written a fine obituary, which I reprint below with his permission.

James F. Crow 1916 - 2012


James F. Crow died peacefully in his sleep in Madison, Wisconsin on January 4th at the age of 95, having nearly reached his 96th birthday. Jim, as everyone who knew him called him, was one of the most important population geneticists of the 20th century, a major figure in the generation that followed Fisher, Wright, and Haldane.

[Photos: Jim Crow in Mishima, Japan, 1972; Crow and Kimura in discussion, Mishima railroad station, 1972. Photos by J.F.]

His father was a cytologist who did graduate work soon after the rediscovery of Mendel’s work. Jim did his graduate work in the 1930s at the University of Texas, where he had gone in hopes of working with H. J. Muller (who had, however, already left). He later had opportunities to work with Muller, and always considered himself primarily influenced by Muller. After working at Dartmouth College during and after World War II, he moved to the University of Wisconsin - Madison, where he spent the rest of his career. His many honors included election to the National Academy of Sciences and as a Foreign Member of the Royal Society.

He was famous as a teacher and mentor of numerous population geneticists, of whom I am one. In the 1950s he started traveling to Japan; he had many Japanese collaborators and students. Motoo Kimura was his Ph.D. student, and began a longtime collaboration with him. In 1970 they published An Introduction to Population Genetics Theory, which became the standard textbook of that field. Jim’s plain and folksy speaking style was the same as his writing style – he was enormously prolific and famous for his clear exposition. Among its many effects on the field, the book popularized Gustave Malécot’s way of defining inbreeding coefficients and using them to compute covariances among relatives for quantitative characters.

Jim’s many papers included major work on mutational load and other forms of genetic load, the concepts of inbreeding and variance effective population number, and expanding on R. A. Fisher’s and H. J. Muller’s theory of the evolutionary advantage of recombination. In the 1950s and early 1960s he was a major participant in the debate over genetic variation in natural populations, arguing against Theodosius Dobzhansky’s view that attributed it largely to balancing selection. With Motoo Kimura in 1964 he derived the expected heterozygosity brought about by neutral mutation, and he played a major role in assisting Kimura in effectively presenting his case for neutral mutation. He helped bring Sewall Wright to Madison in 1955, and Jim and Ann Crow were important as friends during Wright’s later years.

In addition to these he contributed numerous insights in his many papers. He was interested in all of genetics, and read its literature widely. As an invariably polite, surprisingly modest, and easily approachable mentor who was always interested in clarifying and simplifying models, he had a great effect. Through his lab passed much of a generation of theoretically-inclined population geneticists. If your name was Morton, Kimura, Maruyama, Hiraizumi, Kerr, Sandler, Hartl, Langley, Gillespie, Ewens, Li, Nagylaki, Aoki, Lande, Bull, Gimelfarb, Kondrashov, Phillips, or Wu, you were among the many who were in Jim’s debt, and remember him warmly as friend and role model.

I talked to Bill Dembski in person about my work on using Genetic Algorithms to solve Steiner’s problem way back in 2001. He didn’t “get” it then, and he still doesn’t!

Reacting to this news story, “Supercolony trails follow mathematical Steiner tree”, Dembski writes today that

Some years back, ID critic Dave Thomas used to tout the power of genetic algorithms for their ability of solve the Steiner Problem, which basically tries to minimize distance of paths that connect nodes on a two-dimensional surface (last I looked, he’s still making this line of criticism - see here). In fact, none of his criticisms hit the mark – the information problem that he claims to resolve in evolutionary terms merely pushes the design problem deeper … In ID terms, there’s no problem – ants were designed with various capacities, and this either happens to be one of them or is one acquired through other programmed/designed capacities. On Darwinian evolutionary grounds, however, one would have to say something like the following: ants are the result of a Darwinian evolutionary process that programmed the ants with, presumably, a genetic algorithm that enables them, when put in separate colonies, to trace out paths that resolve the Steiner Problem. In other words, evolution, by some weird self-similarity, embedded an evolutionary program into the neurophysiology of the ants that enables them to solve the Steiner problem (which, presumably, gives these ants a selective advantage).

Kudos to Dr. Dembski for this classic Goal-Post movement! The purpose of my original article was simply to move the discussion of Genetic algorithms beyond the ID “Dawkins Defense,” namely that all genetic algorithms suffer the “Weasel” flaw of needing the solutions to be incorporated directly into the fitness function.

Dembski’s response is remarkable in that it totally avoids the issues I raised. The fact that ant colonies can find efficient paths has no bearing on whether genetic algorithms can be applied to problems without having solutions in hand already.

My original article on Steiner (Target? TARGET? We don’t need no stinkin’ Target!) showed that there are also physical methods for solving Steiner’s problem, including minimal-surface soap films.

If soap films can solve Steiner problems, why not ants? And this bolsters the Weasel defense, how?

My Skeptical Inquirer article from last year, “War of the Weasels: An Evolutionary Algorithm Beats Intelligent Design” has a nice summary of these Weasel Wars, including the marvelous story of UD’s software engineer, Sal Cordova, getting whupped by a Genetic Algorithm on an open-book design problem. The article posting is courtesy of Southern Methodist University’s Critical Thinking/Physics Class!

More: Panda’s Thumb’s “EvoMath” category.

Many readers will be familiar with longtime TalkOrigins regular Doug Theobald – he is the author of “29+ Evidences for Macroevolution: The Scientific Case for Common Descent,” pretty much the most impressive FAQ of all time. Oh, and he’s a professor too, and has published some other stuff.

Today he has published a pretty impressive paper in Nature. It is entitled “A formal test of the theory of universal common ancestry.” Basically, it applies the likelihood-based and Bayesian phylogenetic techniques that have been developed over the last decade or two, adds in some standard model-selection theory, and uses these to assess “universal common ancestry” (UCA). A lot of arguments “for common ancestry”, e.g. biogeography, are really arguments for the common ancestry of groups of modern-day organisms – like mammals – rather than arguments that every living thing we know about shares common ancestry. There have been some powerful arguments for UCA over the years – e.g. the extremely conserved (if not quite identical) genetic code (and as everyone except Paul Nelson knows, “almost identical” and “identical” are virtually the same thing statistically, so his decade of yammering about the non-universality of the genetic code has had no impact on this evidence). However, although the arguments remain powerful and convincing, they weren’t usually quantitative and statistical, and it takes some serious work to construct a statistical assessment of something as deep and universal as common ancestry. This is what Doug has done.

He’s getting a lot of press. Just in Nature there is a News & Views from Mike Steel and David Penny, and a Nature podcast.


PT veterans may remember several posts from 2006, in a summer-long series of articles about Genetic Algorithms, Dawkins’ Weasel, and Fixed Targets.

It’s taken me a few years to get off my duff and write up a proper version for the Skeptical Inquirer. I’m pleased to report that my article has been published in the May/June 2010 issue.

The Good News: Several of my computer-generated diagrams have been professionally redrawn, and look splendid!

The Bad News: Besides the “Web-Extra” sidebar about Solving Steiner Problems using soap films, the article itself, “The War of the Weasels: How an Intelligent Design Theorist was Bested in a Public Math Competition by a Genetic Algorithm!”, appears only in the print copy. You will have to go to your local newsstand to get a print copy, or order one from the Committee for Skeptical Inquiry (CSI) directly.

So, after almost four years, how has the ID community responded? Are they still fixated on Dawkins’ “Weasel” demonstration? Do they still maintain that all genetic algorithms require detailed knowledge of their solutions, just as the phrase “METHINKS IT IS LIKE A WEASEL” was the “fixed target” in Dawkins’ 1986 exposition?

More below the fold.

A couple of months ago, I finished a first reading of Stephen Meyer’s new book, Signature in the Cell. It was very slow going because there is so much wrong with it, and I tried to take notes on everything that struck me.

Two things struck me as I read it: first, its essential dishonesty, and second, Meyer’s significant misunderstandings of information theory. I’ll devote a post to the book’s many misrepresentations another day, and concentrate on information theory today. I’m not a biologist, so I’ll leave a detailed discussion of what’s wrong with his biology to others.

In Signature in the Cell, Meyer talks about three different kinds of information: Shannon information, Kolmogorov information, and a third kind that has been invented by ID creationists and has no coherent definition. I’ll call the third kind “creationist information”.

Intelligent design creationists love to talk about information theory, but unfortunately they rarely understand it. Jonathan Wells is the latest ID creationist to demonstrate this.

In a recent post at “Evolution News & Views” describing an event at the University of Oklahoma, Wells said, “I replied that duplicating a gene doesn’t increase information content any more than photocopying a paper increases its information content.”

Wells is wrong. I frequently give this as an exercise in my classes at the University of Waterloo: Prove that if x is a string of symbols, then the Kolmogorov information in xx is greater than that in x for infinitely many strings x. Most of my students can do this one, but it looks like information expert Jonathan Wells can’t.
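Kolmogorov information is uncomputable in general, and the exercise asks for a proof, not a program. Still, compressed size gives a crude upper-bound proxy for it, and a hedged sketch with zlib (my own illustration, not a solution to the exercise) shows the flavor of the result:

```python
import random, zlib

random.seed(42)
# A random string is (almost certainly) incompressible, so its compressed
# size is a reasonable stand-in for its Kolmogorov information.
x = bytes(random.randrange(256) for _ in range(1000))

c_x  = len(zlib.compress(x, 9))       # proxy for C(x)
c_xx = len(zlib.compress(x + x, 9))   # proxy for C(xx)

print(f"|compress(x)|  = {c_x}")
print(f"|compress(xx)| = {c_xx}")
# Doubling the string increases the (approximate) information content,
# though by far less than a factor of two: the second copy is cheap to
# describe given the first.
```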

Like many incompetent people, Wells is blissfully unaware of his incompetence. He closes by saying, “Despite all their taxpayer-funded professors and museum exhibits, despite all their threats to dismantle us and expose us as retards, the Darwinists lost.”

We don’t have to “expose” the intelligent design creationists as buffoons; they do it themselves whenever they open their mouths.

Durston’s devious distortions


A few people (actually, a lot of people) have written to me asking me to address Kirk Durston’s probability argument that supposedly makes evolution impossible. I’d love to. I actually prepared extensively to deal with it, since it’s the argument he almost always trots out to debate for intelligent design, but — and this is a key point — Durston didn’t discuss this stuff at all! He brought out a few of the slides very late in the debate when there was no time for me to refute them, but otherwise, he was relying entirely on vague arguments about a first cause, accusations of corruption against atheists, and very silly biblical nonsense about Jesus. So this really isn’t about revisiting the debate at all — this is the stuff Durston sensibly avoided bringing up in a confrontation with somebody who’d be able to see through his smokescreen.

If you want to see Durston’s argument, it’s on YouTube. I notice the clowns on Uncommon Descent are crowing that this is a triumphant victory, but note again — Durston did not give this argument at our debate. In a chance to confront a biologist with his claims, Durston tucked his tail between his legs and ran away.

Creationists think information theory poses a serious challenge to modern evolutionary biology – but that only goes to show that creationists are as ignorant of information theory as they are of biology.

Whenever a creationist brings up this argument, insist that they answer the following five questions. All five questions are based on the Kolmogorov interpretation of information theory. I like this version of information theory because (a) it does not depend on any hypothesized probability distribution (a frequent refuge of scoundrels), (b) the answers about how information can change when a string is changed are unambiguous and agreed upon by all mathematicians, allowing less wiggle room to weasel out of the inevitable conclusions, and (c) it applies to discrete strings of symbols and hence corresponds well with DNA.

All five questions are completely elementary, and I ask these questions in an introduction to the theory of Kolmogorov information for undergraduates at Waterloo. My undergraduates can nearly always answer these questions correctly, but creationists usually cannot…


According to Rotten Tomatoes, the movie WΔZ starts showing today in the UK. The movie is a psychological thriller/horror movie and has been compared to Se7en. What makes this movie interesting is the fact that the screenplay was inspired by Price’s Equation:

Price’s Equation is a broader version of Fisher’s Fundamental Theorem of Natural Selection. It describes how the change in the mean value of a trait, z̄, is related to the individual phenotypes z_i and their fitnesses w_i:

Δz̄ = Cov(w_i, z_i)/w̄ + E(w_i Δz_i)/w̄

Note that the genetics of the trait (mutation, ploidy, etc.) is contained in the second term. See Wikipedia for more details.

Now according to the Rotten Tomatoes exclusive on WΔZ:

The script comes from City of Vice scribe Clive Bradley, who claims to have come up with the movie’s premise after flicking through a book on Darwinism. “It featured a mathematical equation—W Delta Z—formulated by American population geneticist George R. Price,” he explains. “It supposedly shows that there’s no real altruism in nature; no such thing as selflessness. Price was so upset by his findings that he ended up giving away all his possessions to the poor and, eventually homeless himself, committed suicide with a pair of nail scissors in a filthy London squat.”

The study of the evolution of altruism goes beyond the description above, and I hope moviegoers won’t be seduced by this fictional account of evolutionary theory. (I’m waiting to see what demagoguery AiG, the DI, and the Expelled frauds come up with about this movie.) Now, it is true that according to Price’s Equation, altruistic behavior that benefits a species at the cost of individual fitness is selected against. (Note that a deleterious phenotype can still exist in a population through mutation-selection balance or genetic drift.) However, if the altruism only benefits certain members of the species (e.g. relatives), then altruism can be selected for.

This is represented by Hamilton’s rule: rb > c. This describes under what conditions an altruistic allele will invade a population: c is the cost of the allele to the “actor”, r is the relatedness of the receiver to the actor, and b is the benefit that the receiver receives by the actor being altruistic. The consequence of Hamilton’s rule is that selfish genes can still be altruistic. There is a lot of interesting literature about the evolution of altruism, including how punishment can reinforce altruism. I recommend Sean Rice’s Evolutionary Theory, Chapter 10, as a good starting point.
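To make the selection-against-altruism point concrete, here is a toy numerical check of Price's Equation, Δz̄ = Cov(w_i, z_i)/w̄ + E(w_i Δz_i)/w̄, with made-up phenotype and fitness values of my own choosing. Assuming no transmission bias (offspring inherit the parental phenotype exactly, so Δz_i = 0), the second term vanishes, and the covariance term alone predicts the change in mean phenotype:

```python
# Toy check of Price's equation with no transmission bias:
# change in mean phenotype = Cov(w_i, z_i) / w_bar.
z = [0.0, 0.0, 1.0, 1.0]   # parental phenotypes (altruism level)
w = [1.0, 1.0, 0.5, 0.5]   # fitnesses: the altruists pay a cost

n = len(z)
w_bar = sum(w) / n
z_bar = sum(z) / n

# Mean phenotype among offspring: each parent contributes
# in proportion to its fitness.
z_bar_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n
delta_z_price = cov_wz / w_bar

print(z_bar_next - z_bar)  # direct computation of the change
print(delta_z_price)       # Price's equation: the same (negative) value
```

Both computations give the same negative number: the costly altruistic phenotype declines in frequency, exactly as the text above says.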

So if anyone in the UK goes to see this movie this weekend, please send us an overview/review.

As promised, hot off the presses, here is a little tutorial I’ve decided to call Genetic Algorithms for Uncommonly Dense Software Engineers. Given some of the bizarre commentary issuing from the ID community over at Uncommon Descent regarding my past posts on Genetic Algorithms, I’ve developed this guide to help the folks over there figure out whether the Genetic Algorithms (GAs) they are working on employ a “fixed-target” approach (like Dawkins’s Weasel), or are instead un-targeted, like most GAs used in research and industry.
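For readers who want the distinction in concrete terms, here is an illustrative sketch (my own, not the tutorial itself, and the function names are hypothetical): the entire difference between the two approaches lives in the fitness function. A fixed-target GA scores candidates by closeness to an answer the programmer supplied in advance; an un-targeted GA scores a measurable property of the candidate, with no answer built in anywhere.

```python
TARGET = "METHINKS IT IS LIKE A WEASEL"

def weasel_fitness(candidate: str) -> int:
    # Fixed-target: fitness counts matches against a known answer
    # that the programmer wrote into the program beforehand.
    return sum(a == b for a, b in zip(candidate, TARGET))

def steiner_style_fitness(total_length: float) -> float:
    # Un-targeted: fitness rewards a property of the candidate
    # (shorter connecting networks score higher); no solution is
    # encoded anywhere in the program.
    return -total_length

print(weasel_fitness("METHINKS IT IS LIKE A MEASEL"))  # 27 of 28 match
```

The Weasel-style function cannot run without `TARGET`; the Steiner-style function needs only the candidate itself.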


They came for a contest that might someday be viewed as a pivotal moment in the eternal conflict between Darwin and Design.

On one side were the Intelligent Designers. They came from California and Alabama, New Mexico and England, Finland and the Netherlands, and from all around the world. They came from academia, and from industry, and from the armed services. They came armed with computer spreadsheets, home-made programs, graph paper and calculators. They applied trigonometry and calculus, intuition and insight, knowledge of minimal soap films and surface tension, database optimizing algorithms and random searches, and other techniques available only to Intelligent Designers. And they strove to answer the tricky question: what is the Steiner Tree (the smallest possible network of straight line segments connecting six given points) for the array shown in “Take the Design Challenge!”?


On the other side were Evolutionary (or Genetic) Algorithms, in which herds of digital organisms were bred over many generations. Each organism was a string of numbers and letters, which were “transcribed” by fixed rules as representing some of the billions upon billions of possible candidate networks for the given problem. Those organisms whose lengths were smaller gained a slightly better chance at being a parent of one of the organisms of the next generation, and mutations of the strings were allowed to happen occasionally. In this process, no trigonometry or calculus was required. No information about characteristics of Steiner Trees was necessary. But, as the strings competed with each other, marvelous and unexpected designs began to appear.
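The breeding scheme described above can be sketched generically. The following Python toy is my own illustration (not the code behind the actual contest): it substitutes an abstract "length" function for the real decoding of a network, but keeps the essentials intact: fitness-proportional selection of parents, occasional mutation, and no target solution anywhere.

```python
import random

random.seed(1)

# Minimal un-targeted GA sketch (an illustration of the scheme
# described above, not the original Steiner program).

def length(genome):
    # Stand-in for a decoded network's total segment length.
    return sum(abs(g) for g in genome)

def mutate(genome, rate=0.2, step=0.5):
    # Occasionally perturb a gene; most genes copy over unchanged.
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in genome]

pop = [[random.uniform(-5, 5) for _ in range(6)] for _ in range(60)]
initial_mean = sum(length(g) for g in pop) / len(pop)

for generation in range(200):
    # Shorter "networks" get proportionally better odds of parenthood;
    # no target appears anywhere in this loop.
    weights = [1.0 / (1.0 + length(g)) for g in pop]
    pop = [mutate(random.choices(pop, weights)[0]) for _ in pop]

final_mean = sum(length(g) for g in pop) / len(pop)
print(initial_mean, final_mean)  # mean length falls under selection
```

Nothing in the loop knows what a good answer looks like; selection plus mutation alone drives the population toward shorter "networks".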


Although most of the Intelligent Designers were not members of the “Intelligent Design” movement, which had been officially invited to respond, the ID community did indeed weigh in, via the efforts of Salvador Cordova, one of the IDers running the show at William Dembski’s blog Uncommon Descent.

So, what is the Answer? Did Salvador do better than Darwin? Did our team of Intelligent Designers find the True Steiner, or did they, like the evolutionary algorithm, find “MacGyver” (not-quite-perfect-but-extremely-functional) solutions also?

Readers, let’s enter the Design Room and meet our Winners!

In July, I described a Genetic Algorithm that, unlike Dawkins’ “Weasel” experiment, specifies no fixed “Target” for the simulation, but instead rewards those members of the current population which use fewer or shorter segments to connect a fixed set of points. As the algorithm progresses, it finds a multitude of answers for the math problem called “Minimization of Steiner Trees,” i.e. the shortest possible straight-line networks connecting the fixed points.

Last Monday, I posted Take the Design Challenge, wherein I called for solutions to a tricky little 6-point network. Next Monday, I will announce the winners (there are 20 entries already, several with true Steiner Solutions, and others with almost-as-good “MacGyver” solutions).

Imagine my surprise, then, when I found Salvador Cordova at Uncommon Descent spewing blatant falsehoods about this work. I was shocked - shocked, I say - to catch the UD Software Engineers in a lie. And quite a lie it is - with the help of mathematicians like Carl Gauss, I’m going to lift the veil from the obfuscations of IDers, and prove it’s a Lie, much as you would prove a mathematical theorem.
