Von Neumann, Berlinski, and evolution: Who's the hooter?
by Douglas L. Theobald, Assistant Professor of Biochemistry, Brandeis University
Jeffrey Shallit pointed me to a YouTube video in which David Berlinski makes the following remarkable claim: “… von Neumann, one of the great mathematicians of the 20th century, just laughed at Darwinian theory. He hooted at it.”
For those even tangentially familiar with the Hungarian mathematician John von Neumann, this will come as a shock. One may ask, with some justification: who cares what a non-biologist mathematician thinks about evolutionary theory? After all, anyone speculating outside their field of expertise is doing just that, and their opinion should carry no more weight than that of anyone else holding forth on something they know little about. John von Neumann, however, is not just any mathematician, and his seminal work on self-replicating automata and game theory has had important, fundamental implications for evolutionary biology (as have, more indirectly, his contributions to ergodic theory, numerical analysis, and statistics).
Von Neumann is something of a legend, one of those people whose name keeps showing up again and again in the citations of technical papers in very disparate fields (somewhat reminiscent of Sir Ronald Fisher, but even more intellectually promiscuous).[1]
So, Berlinski’s pompous bit spurred me to do some digging and jogging of the memory. I found that Berlinski’s unsubstantiated claim is—yawn—preposterous. Von Neumann was demonstrably pro-evo, especially regarding the usual mut/sel/drift mechanisms, though he may have been critical of abiogenesis hypotheses, given his theoretical work with self-replicating automata. Regardless, the creationists have apparently wrung certain statements out of context and/or conflated evolution with abiogenesis (no surprise there). Here are three bits of fact on the matter:
There is one misleading, yet eyebrow-raising, quote from von Neumann that I’ve seen repeated on creationist/ID sites:
I shudder at the thought that highly purposive organizational elements, like the protein, should originate in a random process.
This may be the ultimate source of many of the claims that von Neumann was anti-evo. However, this is clearly a partially mangled, out-of-context quote. Here is the original source, from a personal letter written by von Neumann to George Gamow in 1955:
I still somewhat shudder at the thought that highly efficient, purposive, organizational elements, like the proteins, should originate in a random process. Yet many efficient (?) and purposive (??) media, e.g., language, or the national economy, also look statistically controlled, when viewed from a suitably limited aspect. On balance, I would therefore say that your argument is quite strong.

von Neumann to Gamow, 25 July 1955. Gamow fld., von Neumann papers, LC. Quoted in Lily E. Kay, Who Wrote the Book of Life?: A History of the Genetic Code, Stanford University Press, 2000, p. 158.
The context was a discussion of the nature of the genetic code, which at the time had not yet been solved. Gamow had come up with a random model for the distribution of amino acids in proteins (whose rationale I don’t understand, and evidently neither did Francis Crick). Von Neumann gave an analytical solution for the model, and Gamow found that the observed distribution didn’t match the theoretical one. From other considerations, Gamow concluded that the deviation from randomness must be due to a nonrandom distribution of nucleotide triplets in DNA, and he used this as support for his non-overlapping, triplet, combinatorial code hypothesis. Gamow made this argument to von Neumann, and von Neumann responded with the quote above. Gamow’s specific hypothesis turned out to be wrong (particularly the combinatorial part)—but of course there is a non-random distribution of nucleotides in codons (which are indeed triplet and non-overlapping).
So von Neumann’s statement has nothing to do with protein evolution, but rather deals with how amino acids are coded for in the translation apparatus. Obviously, neither the genetic code nor translation in general is a predominantly random process.
There are two other similar quotes I have seen recounted by creationists, one from Harold F. Blum’s book Time’s Arrow and Evolution (Harper, 1962) and another from A. G. Cairns-Smith’s book Seven Clues to the Origin of Life. On page 178G Blum writes, regarding abiogenesis theories:
As the late John von Neumann pointed out, a machine that replicates itself can, with some difficulty, be imagined; but such a machine that could originate itself offers a baffling problem which no one has yet solved.
Similarly, Cairns-Smith says on page 15:
Is it any wonder that Von Neumann himself, and many others, have found the origin of life to be utterly perplexing?
Both Blum and Cairns-Smith are respectable sources, and I don’t doubt their word, but neither gives a reference. I have not seen anything in which von Neumann specifically criticized abiogenesis per se; however, the following source may be what Blum and Cairns-Smith are referring to.
In the passage below, von Neumann shows without question his acceptance of evolutionary theory, though there are hints that he may have had trouble seeing how a self-replicating and evolvable machine (i.e., an organism) could arise de novo. I quote it at length, as it may be of use in refuting creationist claims (and because it could easily be misunderstood or quote-mined out of context).
In 1949 von Neumann gave a series of lectures at the University of Illinois on self-replicating machines. They were published posthumously in 1966 under the title Theory and Organization of Complicated Automata. Much of this may sound fuzzy, coming as it does from a lecture transcription, but von Neumann also published several papers (and a posthumous book) in which self-replicating automata were formalized (see von Neumann cellular automata and von Neumann universal constructor for more info).
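Von Neumann’s actual construction was a two-dimensional cellular automaton with 29 states per cell, far too large to reproduce here, but a one-dimensional elementary cellular automaton gives the flavor of a paper automaton “described completely by some finite set of logical axioms.” A minimal sketch (my illustration, not von Neumann’s construction; the rule number 110 is an arbitrary but well-known choice):

```python
def step(cells, rule=110):
    """Apply one synchronous update of an elementary cellular automaton.

    cells is a list of 0/1 values; cells beyond the edges are treated as 0.
    The 8-bit integer `rule` IS the automaton's complete axiom set: bit k
    gives the next state for the three-cell neighborhood with value k.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right  # 0..7
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and run a few generations.
row = [0] * 15 + [1]
for _ in range(5):
    row = step(row)
```

The entire automaton is specified by a single integer: each cell’s next state is read off from the bits of the rule number, indexed by the three-cell neighborhood, so the dynamics are fully determined by a finite description.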
From the fifth lecture, entitled “Re-evaluation of the problems of complicated automata—Problems of hierarchy and evolution”:
Anybody who looks at living organisms knows perfectly well that they can produce other organisms like themselves. This is their normal function, they wouldn’t exist if they didn’t do this, and it’s plausible that this is the reason why they abound in the world. In other words, living organisms are very complicated aggregations of elementary parts, and by any reasonable theory of probability or thermodynamics highly improbable. That they should occur in the world at all is a miracle of the first magnitude; the only thing which removes, or mitigates, this miracle is that they reproduce themselves. Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable. But a reasonable milieu is already a thermodynamically much less improbable thing. So, the operations of probability somehow leave a loophole at this point, and it is by the process of self-reproduction that they are pierced.
Furthermore, it’s equally evident that what goes on is actually one degree better than self-reproduction, for organisms appear to have gotten more elaborate in the course of time. Today’s organisms are phylogenetically descended from others which were vastly simpler than they are, so much simpler, in fact, that it’s inconceivable how any kind of description of the later, complex organisms could have existed in the earlier one. It’s not easy to imagine in what sense a gene, which is probably a low order affair, can contain a description of the human being which will come from it. But in this case you can say that since the gene has its effect only within another human organism, it probably need not contain a complete description of what is to happen, but only a few cues for a few alternatives. However, this is not so in phylogenetic evolution. That starts from simple entities, surrounded by an unliving amorphous milieu, and produces something more complicated. Evidently, these organisms have the ability to produce something more complicated than themselves.
The other line of argument, which leads to the opposite conclusion, arises from looking at artificial automata. Everyone knows that a machine tool is more complicated than the elements which can be made with it, and that, generally speaking, an automaton A, which can make an automaton B, must contain a complete description of B and also rules on how to behave while effecting the synthesis. So, one gets a very strong impression that complication, or productive potentiality in an organization, is degenerative, that an organization which synthesizes something is necessarily more complicated, of a higher order, than the organization it synthesizes. This conclusion, arrived at by considering artificial automata, is clearly opposite to our earlier conclusion, arrived at by considering living organisms.
I think that some relatively simple combinatorial discussions of artificial automata can contribute to mitigating this dilemma. Appealing to the organic, living world does not help us greatly, because we do not understand enough about how natural organisms function. We will stick to automata which we know completely because we made them, either actual artificial automata or paper automata described completely by some finite set of logical axioms. It is possible in this domain to describe automata which can reproduce themselves. So at least one can show that on the site where one would expect complication to be degenerative it is not necessarily degenerative at all, and, in fact, the production of a more complicated object from a less complicated object is possible.
The conclusion one should draw from this is that complication is degenerative below a certain minimum level. This conclusion is quite in harmony with other results in formal logics, to which I have referred a few times earlier during these lectures. . . . There is a minimum number of parts below which complication is degenerative, in the sense that if one automaton makes another the second is less complex than the first, but above which it is possible for an automaton to construct other automata of equal or higher complexity. . . .
There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where synthesis of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.
Reproduced in Papers of John von Neumann on Computing and Computer Theory, W. Aspray and A. Burks, eds., MIT Press, pp. 481-482.
Von Neumann goes on to explain how automata can mutate, replicate, and pass their mutations on. He was obviously convinced of both the power of natural selection and the fact of phylogenetic evolution.
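The logical trick that lets von Neumann’s automaton copy itself without an infinite regress of descriptions (the blueprint is used twice, once interpreted as instructions and once copied verbatim as data) is the same trick behind a software quine, a program that prints its own source. A minimal Python sketch (my illustration; von Neumann worked with formal automata, not programs):

```python
# A quine: its output is its own source code. The string s is used
# twice, once executed as the program's structure and once copied
# verbatim (via %r) as data, mirroring von Neumann's blueprint trick.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the two code lines prints those same two lines exactly (the comments are not part of the quine), so feeding the output back to the interpreter reproduces it indefinitely: self-reproduction with no “larger” machine doing the copying.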
[1] Von Neumann’s very existence has even been used as evidence for extraterrestrials. Enrico Fermi once famously asked, concerning the potential existence of aliens, “Where are they?” Fermi reckoned that, given the size and age of the universe, many technologically advanced civilizations must exist, and that the odds are they should have visited us by now—an argument dubbed the “Fermi Paradox”. Leo Szilard supposedly provided an answer: “Maybe they’re already here, and you just call them Hungarians.” (Of course there are other stellar Hungarian mathematicians and physicists, like Erdős, Wigner, Pólya, and Szilard himself, but Fermi and Szilard were both good friends of von Neumann, and of the same age.)