Biology Journal Gets Conned

In 1977, the Journal of Theoretical Biology (JTB) published a paper by Hubert Yockey called “A calculation of the probability of spontaneous biogenesis by information theory.” The paper became something of an embarrassment for the journal, since Yockey’s calculations, which purported to show that a naturalistic origin of life was effectively impossible, were both biologically and mathematically ridiculous. Of course, this didn’t stop the creationists from crowing about the paper.

JTB is normally a well-respected journal, but history has now repeated itself. The journal has just published a paper by Steinar Thorvaldsen and Ola Hössjer called “Using statistical methods to model the fine-tuning of molecular machines and systems.” Once again, the creationists are crowing about having slipped some of their dreck into a respectable venue. But as with Yockey’s paper, the authors’ mathematics is naïve to the point of being silly.

The authors characterize fine-tuning in physics like this: “The finely-tuned universe is like a panel that controls the parameters of the universe with about 100 knobs that can be set to certain values. … If you turn any knob just a little to the right or to the left, the result is either a universe that is inhospitable to life or no universe at all.” However, whether this constitutes fine-tuning, by some reasonable definition of that term, is going to depend on a lot more than just the number of knobs.

Tell me more about those “certain values.” Can the knobs be rotated all the way around, or can they only be rotated a few degrees? Are all the hash marks on the knobs equally likely to occur, or are there physical reasons that some values are far more likely than others? Can all the knobs be turned independently (which is the usual assumption made in thought experiments along these lines), or does turning one knob automatically force another knob to be turned?

We need good answers to these questions if we are going to turn casual claims of fine-tuning (gosh, isn’t it interesting that small changes in the values of fundamental constants can lead to big changes in the stability of the universe!) into scientifically rigorous arguments useful for drawing grand, metaphysical conclusions (namely, that only intelligent design can explain it all). But the simple fact is that we have essentially no idea how to answer these questions.
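To make the point concrete, here is a toy illustration, with every number invented for the purpose (none of it comes from the paper or from physics): the “probability” that a single knob lands in a life-permitting window depends entirely on what range and distribution you assume for that knob.

    # Toy illustration (all numbers hypothetical): how likely is it that a single
    # constant lands in a "life-permitting window"? The answer depends entirely on
    # the assumed range of the knob and the assumed distribution over that range.
    import numpy as np

    rng = np.random.default_rng(0)
    window = (0.9, 1.1)   # hypothetical life-permitting window for the constant
    n = 1_000_000

    def fraction_in_window(samples):
        return np.mean((samples >= window[0]) & (samples <= window[1]))

    scenarios = {
        "uniform over (0, 1000)": rng.uniform(0.0, 1000.0, n),
        "uniform over (0.5, 1.5)": rng.uniform(0.5, 1.5, n),
        "log-uniform over 12 orders of magnitude": 10 ** rng.uniform(-6, 6, n),
    }

    for name, samples in scenarios.items():
        print(f"{name}: P(window) = {fraction_in_window(samples):.4f}")

    # Same window, wildly different answers; nothing in the fine-tuning
    # literature tells us which assumed distribution, if any, is the right one.

The same life-permitting window comes out as commonplace or as wildly improbable, depending entirely on modeling choices that nobody knows how to justify.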

The authors acknowledge this, writing, “A probabilistic argument presumes adequate knowledge of (the limits on) the space of possibility. … [T]here is no way of assigning a probability distribution as reference associated with the universe in that early stage.” That’s the wisest thing they say in the whole paper! Nonetheless, they quickly ignore this point, preferring instead to claim, groundlessly, that design is the best explanation for cosmic fine-tuning.

The situation only gets worse when they turn to biology. Here their primary example of fine-tuning comes from so-called “irreducibly complex” systems. They even coin the term “Behe-system” after biochemist Michael Behe, who developed the idea in its modern form. The authors write, “[William] Dembski applies the term ‘Discrete Combinatorial Object’ to any of the biomolecular systems which have been defined by Behe as having ‘irreducible complexity’.” Mimicking the approach taken by Dembski in his book No Free Lunch, they go on to write, “Then the probability of a protein complex is the multiplicative product of the probabilities of the origination of its constituent parts, the localization of those parts in one place, and the configuration of those parts into the resulting system (contact topology).” There then follows a bona fide equation, complete with Greek letters, subscripts, and even a big pi to indicate a product.
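In spirit, the equation amounts to something like this (my notation, not theirs, reconstructed from their own description):

\[
P(\text{complex}) \;=\; \Big(\prod_{i=1}^{N} p_{\text{orig},\,i}\Big)\cdot p_{\text{local}}\cdot p_{\text{config}},
\]

where the p_orig,i are the origination probabilities of the N constituent parts, p_local is the probability that all of them end up in one place, and p_config is the probability that they snap together in the right arrangement.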

This is the point where a legitimate peer-reviewer would have thrown the paper aside, refusing to read any further, because every word is nonsense. Complex systems do not evolve through three distinct phases of origination, localization, and configuration, for heaven’s sake. But let’s leave that aside and play along for a moment.

For the sake of argument, let’s assume the constituent parts are proteins. When you speak of the probability of origination, do you mean the probability that the protein originates at random from an ocean of randomly colliding amino acids, or do you mean the probability that the protein originates as a slight variation on an already-existing protein? When you refer to the probability of localization, are you imagining that the organism’s body is partitioned into discrete locations from which we choose uniformly and at random, or are you thinking in the context of slightly modifying a genetic program that already sends everything to specific locations? And when you speak of the probability of configuration, are you imagining this probability simply as a permutation so that the number of configurations is just the factorial of the number of parts, or are you imagining the correct configuration arising gradually through a sequence of less effective systems until the modern form appears? And why, exactly, are you treating origination, localization, and configuration as independent events, which they would have to be, to justify multiplying these probabilities together?

In No Free Lunch, William Dembski effectively chose the former option in each of these questions, which is why everyone laughed at him. But we obviously need to consider the latter, and we equally obviously have no hope of assigning numbers to any of the relevant variables. Incredibly, the authors even acknowledge this point, writing, “Modeling the formation of structures like protein complexes via this three-part process … is of course problematic because the parameters in the model are very difficult to estimate.” But having just admitted to wasting the reader’s time developing a useless equation, they claim nonetheless that it has heuristic value. It does not, because this three-fold process has precisely zero connection to any real biological process.
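Just to make vivid what the “former option” amounts to, here is a back-of-the-envelope version of that style of calculation, with every number invented for illustration (none of them comes from the paper or from any real biological measurement):

    # Toy version of the "origination x localization x configuration" product,
    # under the naive assumptions criticized above. Every number is hypothetical.
    from math import factorial

    num_parts = 10                    # hypothetical number of proteins in the complex
    p_orig_per_part = 1e-20           # hypothetical chance each protein forms "at random"
    p_localization = 1e-6             # hypothetical chance all parts end up in one place
    p_configuration = 1 / factorial(num_parts)   # one "correct" arrangement out of 10!

    p_complex = (p_orig_per_part ** num_parts) * p_localization * p_configuration
    print(f"'Probability' of the complex: {p_complex:.3e}")   # astronomically small, by construction

    # The tiny answer is baked into the assumptions: random assembly from scratch,
    # uniform choice of location, a single acceptable arrangement, and full
    # independence of the three stages. Swap in the evolutionary picture (gradual
    # modification of existing proteins and an existing targeting system) and the
    # calculation cannot even get started, because nobody knows how to assign the numbers.

The absurdly small number is not a discovery about biology; it is a direct consequence of assumptions that have nothing to do with how evolution actually works.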

Here’s a rule of thumb for you: if an author says he will address the question of whether a complex system could have arisen through evolution by natural selection, and then announces that he will do so using probability theory, you can stop reading right there, because everything that comes next will be nonsense. Probability theory is just flatly the wrong tool for this job, because there is no hope of defining a proper probability space within which to carry out a calculation.

Asked to explain their confidence that complex adaptations have evolved gradually, scientists generally note first that such systems always show what Stephen Jay Gould referred to as the “senseless signs of history,” meaning odd arrangements and strange kludges that make no sense when viewed as the product of human engineering, but make perfect sense when the history of the system is taken into consideration. They then point to comparative anatomy, to show that variant, and often simpler, versions of these systems exist in nature today, thereby establishing the existence of plausible, functional stepping-stones. And for many specific systems they go on to point to strong evidence from genetics, embryology, and paleontology that indicates precisely how the system evolved.

In particular, creationist arguments based on “irreducible complexity” just completely backfire. We could as easily have used the term “easily broken” in place of “irreducibly complex,” and human engineers, at least, don’t usually brag about designing easily broken systems. Engineers don’t build complex systems that fail catastrophically when a single part breaks, but natural selection, as has been pointed out many times, produces exactly such systems all but automatically. In other words, the prevalence of “Behe-systems,” to use the authors’ term, is strong evidence of evolution. When you find resilient systems with fail-safes, backups, and redundancies, then we can talk about intelligent design.

Desirous of keeping this post to a reasonable length, I have focused on what I take to be the absolutely fatal flaw of this paper. The authors claim to have used probability theory to establish a scientifically rigorous and useful notion of “fine-tuning,” but they have failed because we have nothing like the information we would need to carry out meaningful probability calculations. Done.

But I don’t think I’ve adequately communicated just how bad this paper is. The authors are constantly tossing out bits of mathematical jargon and notation, but then they do nothing with them. There is a frustrating lack of precision, as when they variously describe fine-tuning as an object, an entity, a method, and an attribute of a system, all on the first page of the paper. They constantly cite creationist references, with only the most glancing mention that any of this work has been strongly and cogently criticized. They say we should give fair consideration to a “design model” for the origination of complex structures, but they give not the beginning of a clue as to what such a model entails.

In short, it’s hard to believe this paper could have gotten through an honest peer-review process (as opposed to one in which ideology played a big role). Whatever happened behind the scenes, it’s a huge black eye for the journal.