Information in biology


In the Stanford Encyclopedia of Philosophy, Peter Godfrey-Smith, Professor of Philosophy at Harvard University, provides a useful definition of Shannon information as it applies to biology.

Professor Godfrey-Smith is also the author of “Information and the Argument from Design,” which was part of the collection edited by Robert Pennock titled Intelligent Design Creationism and Its Critics.

In the weaker sense, informational connections between events or variables involve no more than ordinary correlations (or perhaps correlations that are “non-accidental” in some physical sense involving causation or natural laws). This sense of information is associated with Claude Shannon (1948), who showed how the concept of information could be used to quantify facts about contingency and correlation in a useful way, initially for communication technology. For Shannon, anything is a source of information if it has a number of alternative states that might be realized on a particular occasion. And any other variable carries information about the source if its state is correlated with that of the source. This is a matter of degree; a signal carries more information about a source if its state is a better predictor of the source, less information if it is a worse predictor.

This way of thinking about contingency and correlation has turned out to be useful in many areas outside of the original technological applications that Shannon had in mind, and genetics is one example. There are interesting questions that can be asked about this sense of information (Dretske 1981), but the initially important point is that when a biologist introduces information in this sense to a description of gene action or other processes, she is not introducing some new and special kind of relation or property. She is just adopting a particular quantitative framework for describing ordinary correlations or causal connections. Consequently, philosophical discussions have sometimes set the issue up by saying that there is one kind of “information” appealed to in biology, Shannon’s kind, that is unproblematic and does not require much philosophical attention. The term “causal” information is sometimes used to refer to this kind, though this term is not ideal. Whatever it is called, this kind of information exists whenever there is ordinary contingency and correlation. So we can say that genes contain information about the proteins they make, and also that genes contain information about the whole-organism phenotype. But when we say that, we are saying no more than what we are saying when we say that there is an informational connection between smoke and fire, or between tree rings and a tree’s age

Godfrey-Smith also pointed out, as have many ID critics before him, how Dembski and other proponents of “Intelligent Design” creationism “appeal to information theory to make their arguments look more rigorous.”

For instance, Dembski likes to use the term “information” rather than “probability,” because the former lets him appeal to information theory even though all he does is apply a transformation to a probability:

To assign a measure of information to the event, you just mathematically transform its probability. You find the logarithm to the base 2 of that probability, and take the negative of that logarithm. A probability of 1/4 becomes 2 bits of information, as the logarithm to the base 2 of 1/4 is -2. A probability of 1/32 becomes 5 bits of information, and so on. In saying these things, we are doing no more than applying a mathematical transformation to the probabilities. Because the term “information” is now being used, it might seem that we have done something important. But we have just re-scaled the probabilities that we already had.

That’s the full extent of ID’s appeal to information theory: take the negative base 2 logarithm of a probability.

Despite all the detail that Dembski gives in describing information theory, information is not making any essential contribution to his argument. What is doing the work is just the idea of objective probability. We have objective probabilities associated with events or states of affairs, and we are re-expressing these probabilities with a mathematical transformation.
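The “mathematical transformation” Godfrey-Smith describes really is a one-liner. Here is a minimal Python sketch (the function name is mine, not Dembski’s):

```python
import math

def surprisal_bits(p):
    """Information (in bits) assigned to an event of probability p:
    just the negative base-2 logarithm of p, nothing more."""
    return -math.log2(p)

print(surprisal_bits(1 / 4))   # 2.0 bits
print(surprisal_bits(1 / 32))  # 5.0 bits
```

A probability of 1/4 maps to 2 bits and 1/32 to 5 bits, exactly as in the quoted passage; nothing beyond re-scaling the probability has happened.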

So what about some of the other terms used by ID proponents such as complexity? Surely that means something?

So far I have discussed Dembski’s use of the term “information.” Something should be said also about “complexity” and “specification,” as Dembski claims that the problem for Darwinism is found in cases of “complex specified information” (CSI). Do these concepts add anything important to Dembski’s argument? “Complexity” as used by Dembski does not add anything, as by “complex information” Dembski just means “lots of information.”

Again we find that ID’s usage of terminology adds nothing and only leads its followers into confusion as they have come to believe that these terms mean something else.

In other words, complex specified information (CSI) all boils down to the following:

That completes the outline of Dembski’s information-theoretic framework. Dembski goes on to claim that life contains CSI – complex specified information. This looks like an interesting and theoretically rich property, but in fact it is nothing special. Dembski’s use of the term “information” should not be taken to suggest that meaning or representation is involved. His use of the term “complex” should not be taken to suggest that things with CSI must be complex in either the everyday sense of the term or a biologist’s sense. Anything which is unlikely to have arisen by chance (in a sense which does not involve hindsight) contains CSI, as Dembski has defined it.

Or, in other words:

So, Dembski’s use of information theory provides a roundabout way of talking about probability.

Back to the age-old creationist argument from improbability.

Richard Wein, Mark Perakh, Wesley Elsberry and many others have shown how Dembski’s ‘novel’ approach is neither novel nor particularly relevant as we lack sufficient resources to calculate the probabilities involved. In other words, the reason why ID is scientifically vacuous is because all it can contribute is a calculation of the negative base 2 logarithm of a probability and it cannot calculate said probability.

Comments

I’ve been hammering the creationist argument about information a lot. It gets brought up over and over, and they never seem to acknowledge that people have addressed their concern before.

Check out the blog:

http://aigbusted.blogspot.com

-Ryan

proponents of “Intelligent Design” creationism appeal to information theory to make their arguments look more rigorous

One of the ways IDers like to use information theory is to claim that information cannot arise on its own from random assortments of molecules (and not just IDers - I think Paul Davies uses a similar argument). Although I am a biologist with no training in information theory, it has always seemed to me that this argument (like all of theirs) is bogus.

There is more information inherent in a random collection of molecules than a highly ordered one. Using water as an example, given the position of one molecule in an ice cube of a given size, you could specify the position of all the other water molecules with a minimum amount of information that specified the parameters of the crystal structure and distance between molecules. Given the same amount of water in a steam vapor and the position of one molecule, it would require much more information to specify the position of any other water molecule. You could encode a much more complex message using the position of the water molecules in the vapor than in the ice crystal.

Similarly, there is more total information in a random collection of six billion nucleotides in a beaker than there is in my genome. The only difference is that I have RNA polymerases and ribosomes to decode what little information remains.

As I said, I have no background in information theory, so please let me know if I have missed something.
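Tex’s intuition matches what general-purpose compressors report: a highly ordered arrangement admits a short description, while a disordered one does not. A rough sketch (zlib’s compressed size is only a crude proxy for information content, not a formal measure):

```python
import os
import zlib

ordered = b"H2O " * 10_000    # highly ordered, like the ice crystal
random_ = os.urandom(40_000)  # disordered, like the vapor

# The compressed size roughly tracks how much information is present:
print(len(zlib.compress(ordered)))  # tiny: the pattern is fully redundant
print(len(zlib.compress(random_)))  # nearly 40,000: almost nothing to squeeze out
```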

I think I have a different perspective. Specified information, which was defined originally by Leslie Orgel (as William Dembski himself says) is perfectly meaningful once we agree on what scale to rank individuals (and the relevant one is fitness). Dembski’s critics point to his vagueness about what the scale is – but defining it as fitness is the way to rescue that part of his argument. The rest of his argument, that deterministic and random functions can’t create it, is where he is wrong, as I explain in my article in the current issue of Reports of the NCSE (the issue that isn’t quite up on their web page yet).

The idea that random stuff has lots of information is mistaken. It has lots of things that need explaining but that is different from saying that it has a lot of specified information. It has none. If I ask you for some good hot tips on the coming horse races, and you say sure and send me random numbers, I ought to be royally ticked off, and should not congratulate you on sending me so much information. Because what you sent me does not help me at all.

If I ask you for some good hot tips on the coming horse races, and you say sure and send me random numbers, I ought to be royally ticked off, and should not congratulate you on sending me so much information.

Sigh. How many times does it have to be said that Shannon information is not the same as semantic content? The sequence of random numbers contains a lot of information because it takes a lot of bits to transmit it.

That’s the full extent of ID’s appeal to information theory, take the negative base 2 logarithm of a probability.

Why the negative base 2 logarithm of a probability?

In other words, why not the positive base 3? Why not divide by 112 before taking the negative base 2 logarithm? Why not just add 42?

Joe, the whole concept of “specified information” is nonsense. A ‘specification’ means the object conforms to a pattern, i.e., is easy to describe. But ‘information’ in the sense it is understood by mathematicians and computer scientists measures to what extent an object is hard to describe; that is, how much it does not conform to a pattern.

I recommend reading Kolmogorov complexity and its applications by my colleagues Li and Vitanyi, or my long paper with Elsberry debunking the notion of ‘specified complexity’.

Semantic content. I think that’s in a way what WAD is trying to say, that life has meaning. But that’s a philosophical argument, and he’s trying to make it sound scientific by calling it ‘information’ instead of ‘meaning’… or ‘intent’… or ‘purpose’…or ‘design’. Circular and pointless.

JuliaL:

Why the negative base 2 logarithm of a probability?

because base 2 gives you the number of bits required, a “bit” being a 2-state function. If you used base 3 or base 4, you’d get “tribits” or “quadbits,” which aren’t pretty to work with.
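The choice of base is just a choice of unit; what matters is that a logarithm makes the information of independent events add up. Dividing by 112 or adding 42 would destroy that property, but switching bases merely rescales by a constant. A quick Python check:

```python
import math

p = 1 / 32
bits = -math.log2(p)     # base 2: bits
nats = -math.log(p)      # base e: nats
trits = -math.log(p, 3)  # base 3 digits

# All three measure the same thing up to a constant factor (change of base):
print(bits)                 # 5.0
print(nats / math.log(2))   # 5.0 again
print(trits * math.log2(3)) # 5.0 again
```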

The Stanford Encyclopedia of Philosophy also has an article on “Creationism,” which contains the following statement in its conclusion:

“Scientifically Creationism is worthless, philosophically it is confused, and theologically it is blinkered beyond repair. The same is true of its offspring, Intelligent Design Theory.”

I forget–how did Dembski claim to specify information? It is so easy to look at some thing, and see a pattern in it. How much complex specified information can I detect in an ink blot, or in clouds? I can see patterns in biology, some that require a lot of words to describe, but many are easily explained by common descent or contingency.

mark:

I forget–how did Dembski claim to specify information? It is so easy to look at some thing, and see a pattern in it. How much complex specified information can I detect in an ink blot, or in clouds? I can see patterns in biology, some that require a lot of words to describe, but many are easily explained by common descent or contingency.

He alleged to have devised an “Explanatory Filter” with which to detect “Design.” However, neither he nor any of his followers or associates has ever actually demonstrated, theoretically or physically, exactly how the “Explanatory Filter” works.

Dembski’s No Free Lunch explains csi in a way that I could eventually understand. It was quite convincing and I think most of you have missed it.

On an unrelated issue: Dawkins could not give an example of a mutation that increased information (by any definition). Is there an example of a mutation that improves the function of an enzyme or makes a new structure or is evolution in a positive direction?

Merlin

Mark asked

I forget–how did Dembski claim to specify information? It is so easy to look at some thing, and see a pattern in it.

That’s essentially how Dembski does it. There is no principled (= mechanical) way to specify the pattern to which the object under analysis conforms. Is the pattern to which a bacterial flagellum conforms an outboard motor (as ID creationists typically characterize it), or a helicopter’s rotor, or a flexible stick poking out of a blob, or an antenna whipping in the breeze, or what?

Function also (occasionally?) enters into the identification of the (alleged) specification. But one is still in the definitional soup: Is the function of the flagellum appropriately described as to provide motility via a rotary motor and flexible rod (the usual ID creationist functional specification), or to provide motility, full stop, or to function so as to raise the probability that the organism will escape crowding, or will find food, or what? Again, there’s no principled way to describe the function in Dembski’s blathering about it.

Merlin Perkins:

Dembski’s No Free Lunch explains csi in a way that I could eventually understand. It was quite convincing and I think most of you have missed it.

On an unrelated issue: Dawkins could not give an example of a mutation that increased information (by any definition). Is there an example of a mutation that improves the function of an enzyme or makes a new structure or is evolution in a positive direction?

Merlin

Then, please demonstrate how to use “csi” to detect design in, say, the heteromorph ammonite Nipponites mirabilis.

Dawkins appeared to have been unable to give an example of a mutation that “increased information” because his interviewers were creationists who lied in order to interview him, and they edited the footage in order to make him look foolish. In fact, Dawkins was actually contemplating having the interviewers thrown out of his office.

http://www.talkorigins.org/indexcc/[…]CB102_1.html

If you actually knew how to do research, rather than unflinchingly swallow all of the lies creationists tell you, you would realize that there are countless research papers done on the positive identification of positive mutations, such as the three different versions of the enzyme nylonase in 2 different strains of Flavobacterium and 1 strain of Pseudomonas aeruginosa, the studies done on the appearance and evolution of the “antifreeze” gene in the Antarctic icefish of the suborder Notothenioidei, or even how heterozygous carriers of sickle cell anemia are capable of surviving virulent strains of malaria.

Merlin Perkins asked

On an unrelated issue: Dawkins could not give an example of a mutation that increased information (by any definition). Is there an example of a mutation that improves the function of an enzyme or makes a new structure or is evolution in a positive direction?

See the evolution of lactase (extending its functioning into adulthood in some populations) for an example of such an improvement in an enzyme. And see the Milano mutation for one that’s particularly interesting to me, since I have coronary artery disease and have had an M.I. already. And quit blathering creationist bullshit.

Dembski’s No Free Lunch explains csi in a way that I could eventually understand. It was quite convincing and I think most of you have missed it.

Why do ignorant people think that their being convinced of something is of any import?

Why the negative base 2 logarithm of a probability?

See http://cm.bell-labs.com/cm/ms/what/[…]y/paper.html

In other words, why not the positive base 3? Why not divide by 112 before taking the negative base 2 logarithm? Why not just add 42?

Because none of those operations would yield a measure of information in the sense of Shannon’s theory.

The idea that random stuff has lots of information is mistaken. … Because what you sent me does not help me at all.

Perhaps mining your quote like this might indicate what is so very wrong about it. The world does not revolve around you. Information cannot be defined in terms of how it helps one specific recipient; Shannon information is an objective measure, and thus cannot be a matter of what a message tells you about horses or any other specific subject matter. The information measure pertains to the message itself as an abstract mathematical object. Since a random sequence is less redundant than a repetitive sequence, it has a higher information measure.

Why is someone who is so ignorant of information theory that he can say “The idea that random stuff has lots of information is mistaken” writing articles about it for Reports of the NCSE?
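The point about redundancy in the comment above can be checked directly with empirical per-symbol Shannon entropy (a sketch; the variable names and example sequences are mine):

```python
import math
import random
from collections import Counter

def entropy_bits_per_symbol(seq):
    """Empirical Shannon entropy of a sequence, in bits per symbol."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

repetitive = "AB" * 500  # maximally redundant: a strict repeating pattern
random.seed(0)           # fixed seed so the sketch is repeatable
random_seq = "".join(random.choice("ABCDEFGH") for _ in range(1000))

print(entropy_bits_per_symbol(repetitive))  # 1.0 bit/symbol (A and B, equally often)
print(entropy_bits_per_symbol(random_seq))  # close to 3.0 bits/symbol
```

The less redundant sequence comes out with roughly three times the information measure per symbol, exactly as PG says, regardless of whether it tells anyone anything about horses.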

PG nails it.

* NO SEMANTIC CONTENT IN INFORMATION THEORY

* INFORMATION THEORY IS USELESS TO ID and everything else he said on this too.

Surprisingly there are even a couple of articles on one of the IDist websites saying the same thing. It makes me think the author was trying to get through to someone. It obviously didn’t work, but that’s what happens when people have concrete in their heads.

I have not been impressed with Dembski’s work with regards to CSI. Behe’s idea about IC seems a more plausible angle, if it could be positively demonstrated that there are no chemical pathways to certain structures. I ask the crowd here for your thoughts: if design is present in nature, is there any way to detect it, except for blatant messages to us in the DNA?

Muldoon: You’d have to give us an exact definition of “design” before anyone could objectively verify whether or not it is “detected” in nature. “It looks designed (to me), therefore it is designed” doesn’t cut it, especially for an artsy type like me who sees “design” of sorts in ice crystals.

Some of the confusion and difficulties associated with attempting to apply information theory to biology can be illustrated with a simple analogy to dendritic growth (e.g., icicles, stalactites, or other mineral or neuronal growth). Initially, before a specified branch of a dendrite has developed, the probability of its development would appear to be very low. However, this misconception arises because the branch is specified, and it ignores the contingencies that could just as easily have produced others.

In reality, millions of possibilities are available before a particular branch develops, and once a branch starts (because of contingencies existing in the system or its environment), the probabilities of its continuation may be quite large. This is just another way of saying that the probability of reaching certain states is dependent on what nearby states are available. Singling out a particular branch and asserting that its probability is low relative to the background states from which it developed (and therefore special in some way) is misleading because millions of other branches could just as easily have developed.

It may be meaningful in some way to calculate the probability (or, equivalently, the entropy or “information”) of that particular branch relative to nearby states, but that doesn’t take into account the millions of other branches that could have but didn’t develop. Given the right conditions and energy throughput, dendritic growth may be inevitable for a given system; it’s just the particular configuration that appears improbable if one ignores the broader picture.

Similarly, singling out particular features of an organism and suggesting that they are improbable (and therefore must be designed) ignores the record of evolutionary history. The fact that there are so many varied life forms that exist, and have existed, suggests that a large number of life forms are possible within the energy ranges and conditions on this planet. Just because other forms didn’t develop doesn’t mean that, given suitable contingencies, they couldn’t have developed. The variety of life on this planet appears to be in a constant state of flux. And the evolutionary tree does appear to be much like constantly changing dendritic growth. Species and their characteristics arise against a background of contingencies, and once they are established and selected, they form a template for further development along a particular branch (at least temporarily until further contingencies wipe out the branch).

When Dembski asserts something is improbable (and hides this assertion in a negative logarithm to base 2), he is arrogantly assuming he knows specifically how the current state of an organism was achieved and that there were no alternative organisms that could have developed. It’s like he gets dealt a 52-card hand from a shuffled deck of cards and concludes that the amount of information contained in his hand is minus log2(1/52!).
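The card-deck figure works out to roughly 226 bits (a quick check of the arithmetic, nothing from Dembski’s own writings):

```python
import math

# Information, in Dembski's transformed sense, of one particular ordering
# of a shuffled 52-card deck: -log2 of its probability, i.e. log2(52!)
bits = math.log2(math.factorial(52))
print(bits)  # about 225.6 bits
```

By Dembski’s transformation every shuffled ordering carries this much “information,” even though nothing remarkable has occurred, which is exactly the comment’s point about singling out one branch after the fact.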

I think some of your terminology is muddled, muldoon; however, I think your question, if I am interpreting it correctly, has some merit. The problem lies in proving there are no viable mutation pathways from point A to point B. This is not impossible, just incredibly difficult to do in a truly rigorous sense. You have to account for an impractical number of possible contingencies if your initial set of conditions fails to be viable, as well as the wide variety of possible pathways. On the other hand, in cases where researchers have tried to fill in potential intermediates, they have found a fair amount of success, suggesting that in practice we will find molecular intermediates for a wide variety of systems, given a reasonable amount of time and resources. It gets much dicier if you want to talk about any gene, because many of them are not nearly as well suited to experimental characterization.

Behe’s idea about IC seems more a plausible angle, if it could be positively demonstrated that there is no chemical pathways to certain structures.

How are you going to positively demonstrate a universal negative? Behe tried to present a logical argument, but it’s based on a ridiculous strawman version of evolution as a strictly additive process.

Tex,

There is more information inherent in a random collection of molecules than a highly ordered one. Using water as an example, given the position of one molecule in an ice cube of a given size, you could specify the position of all the other water molecules with a minimum amount of information that specified the parameters of the crystal structure and distance between molecules. Given the same amount of water in a steam vapor and the position of one molecule, it would require much more information to specify the position of any other water molecule. You could encode a much more complex message using the position of the water molecules in the vapor than in the ice crystal.

Similarly, there is more total information in a random collection of six billion nucleotides in a beaker than there is in my genome. The only difference is that I have RNA polymerases and ribosomes to decode what little information remains.

As I said, I have no background in information theory, so please let me know if I have missed something.

I’d think that to be relevant, the information has to be at least somewhat persistent. Positions of molecules in a vapor change constantly, so there’s no persistence there.

———–

Muldoon,

if design is present in nature, is there any way to detect it,

Don’t look for “design”, look for side effects of the engineering methods used to implement it.

Henry

The problem lies in proving there are no viable mutation pathways from point A to point B.

No, there’s no point A; the claim is that IC systems couldn’t have evolved.

And all this time I’d been saddled with the impression that the word “specified” within the expression “CSI” had been put in there by the usual suspects in order to indirectly introduce the concept of a “designer” at the ground floor of the whole gallimaufry.

After all, something can’t be “specified” unless some specifier has engaged in specifying. The CSI in the bacterial genome that produces a flagellum is clearly exactly equal, functionally, to a CNC tape for a milling machine or a set of cards for a Jacquard loom. Both of those are complex, informative, and would have been specified by some engineer or designer.

Henry J.

I’d think that to be relevant, the information has to be at least somewhat persistent. Positions of molecules in a vapor change constantly, so there’s no persistence there.

This does not argue against my point that there is a wealth of information there, no matter how useful you may find it.

If, however, you insist on persistence, then just change my analogy to a pile of 1000 bricks stacked in a 10 X 10 X 10 array versus the same number of bricks randomly scattered over a lawn, where they will persist in this arrangement until serious effort is expended to move them.

If, however, you insist on persistence, then just change my analogy to a pile of 1000 bricks stacked in a 10 X 10 X 10 array versus the same number of bricks randomly scattered over a lawn, where they will persist in this arrangement until serious effort is expended to move them.

Yes, it takes a lot less information to precisely specify the former configuration than the latter.

Excellent. Sorry for not seeing the subsequent posting.

Possibly because of the dog awful software at this site that hides posts (I now habitually check for them after hitting preview, before posting). Any chance it will ever get fixed?

Of course, deleterious mutations TEND to get weeded out, and the worst are weeded out the best, but with new ones arising continually and beneficial ones rare, the net effect would seem to be negative.

Yeah, it seems so much more likely that populations would become less and less fit over time – despite natural selection favoring fit over unfit organisms – if it weren’t for the constant insertion of space aliens’ pudgy digits into the process.

Mutations that destroy function don’t reproduce (even if they manage to survive one generation, they will perish in subsequent ones). The remaining mutations introduce variation. If you think in terms of populations and very large numbers of generations, you might be able to get your intuitions in line with the observed facts – which you won’t find “on some creationist or ID website” or in the work of “Demski”. You talk about “assuming the evolution that I am questioning”. You might as well talk about “assuming” the Earth is round – there’s no need to assume what has been observed.

Demski, using work by Douglas Axe, describes small islands of functioning genes surrounded by a vast ocean of non-functioning possible sequences of amino acids.

Hey, I thought ID predicts that all DNA is functional?

The implication being that you cannot get from one function to another by descent with modification.

Hunh? How in the heck is that an implication of the above?

The fact is that we have some rather detailed pictures of how one function evolved from another, including ID’s favorite targets, the bacterial flagellum and the human blood clotting system.

Merlin:

A quick search gave me Kimura’s neutral theory of mutation. Is that the one? It seems to me that it all depends on the ratios of deleterious, neutral and beneficial mutations. I have heard estimates of one out of 100, one out of 1000, and fewer beneficial mutations relative to all others. What are the current estimates?

Of course, deleterious mutations TEND to get weeded out, and the worst are weeded out the best, but with new ones arising continually and beneficial ones rare, the net effect would seem to be negative. Could this be resolved by a computer model?

No, it is not the neutral theory, but Kimura’s (1962) publication of the probability of fixation of a beneficial or deleterious mutant. It will be found in this paper of his:

Kimura, M. 1962. On the probability of fixation of mutant genes in a population. Genetics 47: 713-719.

which is made available freely here. The relevant equations are (8) and (15). Another place to find it is in Chapter VII of my online population genetics book.

Use of these formulas will show that the proposed “swamping” is not going to happen, and will show how much more common deleterious mutants have to be than advantageous ones to have a greater effect. Let us know what you discover.
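Kimura’s fixation probability can be sketched as follows. This is the standard diffusion-approximation form of his 1962 result, with the simplifying assumption that effective and census population sizes coincide; the function is my illustration, not code from the paper:

```python
import math

def fixation_prob(s, N, p=None):
    """Kimura's (1962) diffusion approximation for the probability that a
    mutant with selection coefficient s is fixed in a population of size N.
    A new mutant starts at initial frequency p = 1/(2N). Assumes Ne = N."""
    if p is None:
        p = 1 / (2 * N)
    if s == 0:
        return p  # neutral case: fixation probability equals initial frequency
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

N = 10_000
print(fixation_prob(0.01, N))   # beneficial: close to 2s, about 0.02
print(fixation_prob(-0.01, N))  # deleterious: vanishingly small
print(fixation_prob(0.0, N))    # neutral: 1/(2N)
```

Plugging in numbers shows why the proposed “swamping” fails: a beneficial mutant with s = 0.01 fixes with probability near 2s, while an equally deleterious one essentially never fixes, so deleterious mutations would have to be astronomically more common to outweigh the advantageous ones.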

I believe somewhere in the related posts is also a note by Hawks that the observed rate short-term violates Haldane’s limits by a factor 10 - 100, making a joke of another creationist scam claim.

Ehrm, always check before posting. [Hangs head in shame.]

Hawks et al work blows Haldane’s limit out of the water:

What about the Haldane limit? I thought such rapid evolution was impossible!

J. B. S. Haldane famously estimated that the substitution cost of new alleles in humans limited the rate of adaptive evolution. In his estimate, the slow rate of human reproduction limited substitutions to one every 300 generations. This became known as the “Haldane limit”.

Motoo Kimura used Haldane’s argument as a reason why selection could not explain the substitution rate, and he asserted that as support for his neutral theory of molecular evolution. We do not challenge neutralism, but it is clear that Haldane’s limit is a problem, since every estimate says that humans have been evolving at many times that rate.

Maynard Smith (1968, Nature) showed that Haldane’s argument depended on the unrealistic assumption of independence among all selected loci, so that the substitution load depends critically on the fitness of the optimal genotype among all selected loci. If selection on many loci is non-independent, then a very large number of genes may be selected with the same substitution load as a single gene under Haldane’s assumptions. Later, Ewens (e.g., 1972, Am Naturalist) made a similar argument. Ewens (2004, Mathematical Population Genetics) reviewed this problem, pointing out an additional weakness of Haldane’s argument: it depends solely on mortality selection, while many genes may be under fertility selection.

These considerations show that Haldane’s limit does not constrain the adaptive substitution rate in humans to 1/300 generations, and our estimated rate of 13 per generation is not excluded. Moreover, considering the high infant and juvenile mortality evidenced in Neolithic and later populations, much of that death rate resulting directly from disease and dietary deficiencies, the number of selective deaths available to drive substitutions has clearly been high.

So not only is the Haldane limit observably surpassed by a factor of > ~ 1 000, it can be a persistent violation in their model.

Another IDC claim eats the dust for good.
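The factor involved is simple arithmetic on the figures quoted from Hawks above:

```python
# Haldane's limit: one substitution per 300 generations.
haldane_rate = 1 / 300

# Hawks et al.'s estimate quoted above: ~13 adaptive substitutions per generation.
hawks_rate = 13

print(hawks_rate / haldane_rate)  # roughly 3900-fold above Haldane's limit
```

That is comfortably past the “> ~ 1 000” figure cited in the comment.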

@ Merlin:

And you may be correct on everything else if you assume the evolution that I am questioning.

I don’t think so. I’m not a biologist, but Hawks says they test for linkage disequilibrium with an LDD test:

In population genetics, linkage disequilibrium is the non-random association of alleles at two or more loci, not necessarily on the same chromosome. …
Linkage disequilibrium is generally caused by genetic linkage and the rate of recombination; mutation rate; random drift or non-random mating; and population structure.

Which of these observable effects are dependent on an assumption of evolution theory in your eyes? I don’t think any are [again, not a biologist], and as they do a test, they get an independent test of evolution theory: that hereditary changes happen in populations over longer periods of time.

But even if they didn’t, evolutionary theory isn’t something biologists merely assume; it is a basic theory long since validated by numerous other tests.

Where are the new structures/new functions? ….

You can answer that question yourself by reading the paper for the data on observed change.

Torbjörn Larsson, OM:

I believe somewhere in the related posts there is also a note by Hawks that the observed short-term rate violates Haldane’s limit by a factor of 10–100, making a joke of another creationist scam claim.

Ehrm, always check before posting. [Hangs head in shame.]

Hawks et al.’s work blows Haldane’s limit out of the water:

What about the Haldane limit? I thought such rapid evolution was impossible!

J. B. S. Haldane famously estimated that the substitution cost of new alleles in humans limited the rate of adaptive evolution. In his estimate, the slow rate of human reproduction limited substitutions to one every 300 generations. This became known as the “Haldane limit”.

Motoo Kimura used Haldane’s argument as a reason why selection could not explain the substitution rate, and he asserted that as support for his neutral theory of molecular evolution. We do not challenge neutralism, but it is clear that Haldane’s limit is a problem, since every estimate says that humans have been evolving at many times that rate.
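The arithmetic behind Haldane’s cost can be illustrated with a toy deterministic calculation (my own haploid simplification, not Haldane’s exact diploid model, which gives larger numbers, on the order of 30): iterate the allele frequency under selection and accumulate the selective deaths each generation. The classic result is that the total cost depends mainly on the starting frequency, hardly at all on the selection coefficient.

```python
# Toy illustration of Haldane's "cost of substitution", haploid model.
# Cost per generation = (w_max - w_mean) / w_mean; summed over the sweep
# it comes out close to ln(1/p0) regardless of s.
import math

def substitution_cost(p0, s, p_end=0.999):
    """Total selective deaths (per capita) to substitute a favored allele
    starting at frequency p0, with fitnesses 1 and 1 - s."""
    p, total = p0, 0.0
    while p < p_end:
        w_mean = 1.0 - s * (1.0 - p)       # population mean fitness
        total += s * (1.0 - p) / w_mean    # selective deaths this generation
        p = p / w_mean                     # deterministic frequency change
    return total

for s in (0.01, 0.1):
    print(s, substitution_cost(p0=1e-4, s=s))  # both near ln(1/p0) ~ 9.2
print(math.log(1e4))
```

Stronger selection finishes the sweep in fewer generations but pays more per generation, so the totals nearly coincide; Haldane then divided a tolerable reproductive excess by this total cost to get his one-substitution-per-300-generations figure.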

(Larsson’s second block of quotation was not from himself but from John Hawks’s web site, which he had referenced).

I wrote a paper in 1971 (in American Naturalist) in the debate about the meaning of Haldane’s limit, a debate Kimura had set off by using it as an argument that most substitutions had to be neutral. In my paper I pointed out that Haldane’s model was one in which the environment deteriorates, and then the population is rescued by a substitution, after a time. I redid Haldane’s argument (more exactly). The Haldane limit is set by the reproductive excess of the population, how many offspring there are above 1 per parent. If the reproductive excess is not sufficient, the population will be driven extinct by the deteriorations of the environment. However, if some substitutions are not deleterious, but advantageous, they do not pose any threat to the continued existence of the population. In fact, they increase the ability of the population to bear the cost of the deteriorations of the environment. That is another reason why Haldane’s limit is not a problem.

People accept the Haldane calculation too uncritically without thinking of this issue. Scenarios involving gene interaction are not necessary to escape the Haldane “limit”.

Torbjorn Larsson and Joe Felsenstein

I will try to get through the materials you have referenced. Time and the math may be a problem. Thanks. I can see that I am not well enough informed to discuss this with you. Nonetheless, the problem is getting new information/structures/function, by whatever definition. I understand that one can design a good radio antenna using an evolutionary algorithm, but one will never happen upon a bicycle wheel using the same algorithm.

Mike Elzinga

The fossil record shows little change, once a species shows up. Isn’t this why the theory of punctuated equilibrium was proposed; to explain the stasis?

Popper’s Ghost

Dembski, of course. Thanks, my mistake; I should have said amino acids when I said genes. It is in No Free Lunch. For a functioning enzyme of 1000 amino acids, there are 20 to the 1000th possible sequences of that length. Sequences that vary slightly from the functioning one will perform the same function, but as more changes are made, function drops off and ceases, and the number of remaining possible sequences is so huge that it is impossible to get to another functioning sequence by chance.

I understand that one can design a good radio antenna using an evolutionary algorithm, but one will never happen upon a bicycle wheel using the same algorithm.

This would be a consequence of GAs “lacking a target”; so you couldn’t expect a GA to arrive at a specific solution, like a bicycle wheel. But I don’t think that’s what you mean: I think you mean that for some reason GAs can’t produce something as Platonic as a perfect circle. Can you clarify why not? I can imagine evolutionary algorithms that would be able to produce something close to a bicycle wheel.

Merlin:

Dembski, of course. Thanks, my mistake; I should have said amino acids when I said genes. It is in No Free Lunch. For a functioning enzyme of 1000 amino acids, there are 20 to the 1000th possible sequences of that length. Sequences that vary slightly from the functioning one will perform the same function, but as more changes are made, function drops off and ceases, and the number of remaining possible sequences is so huge that it is impossible to get to another functioning sequence by chance.

This is utter nonsense. Dembski knows absolutely nothing about genetics. He conveniently neglects to factor in things like silent (synonymous) mutations, where an altered codon still codes for the same amino acid, or neutral substitutions, where a changed amino acid does not change the function of the protein. Silent mutations occur all the time in nature. Dembski also blatantly ignores the fact that advantageous mutations have been observed, both in nature and in the laboratory.
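The redundancy Dembski ignores is easy to quantify from the standard codon table alone. A minimal sketch counting what fraction of single-nucleotide changes to a sense codon are silent (known to be roughly a quarter):

```python
# How redundant is the standard genetic code? Count the fraction of
# single-nucleotide changes to a sense codon that leave the amino acid
# unchanged (synonymous / "silent" changes). Standard table 1.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: AA[16 * i + 4 * j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

silent = total = 0
for codon, aa in CODE.items():
    if aa == "*":
        continue                        # skip stop codons as starting points
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            if CODE[mutant] == aa:      # synonymous change
                silent += 1
print(silent, total, silent / total)    # roughly a quarter are silent
```

So even before considering conservative amino-acid replacements or neutral networks through sequence space, a substantial fraction of all point mutations cannot affect the protein at all.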

http://en.wikipedia.org/wiki/Silent_mutation

I understand that one can design a good radio antenna using an evolutionary algorithm, but one will never happen upon a bicycle wheel using the same algorithm.

Yes, very good.

Evolution, or its analogs, the GAs, won’t make a wheel, or at least it’s quite unlikely (intermediates are difficult to imagine, and there are severe physiological constraints). Design, by contrast, is likely to produce a wheel.

That’s why wheels have been designed by humans, and are not found in biology. Because biology has not been designed (except for a few tweaks by ourselves).

Glen D http://tinyurl.com/2kxyc7

Merlin:

Torbjorn Larsson and Joe Felsenstein

I will try to get through the materials you have referenced. Time and the math may be a problem. Thanks. I can see that I am not well enough informed to discuss this with you. Nonetheless, the problem is getting new information/structures/function, by whatever definition. I understand that one can design a good radio antenna using an evolutionary algorithm, but one will never happen upon a bicycle wheel using the same algorithm.

The problem that you raised was whether the large number of deleterious mutations would “swamp” the lower rate of advantageous mutations and cause fitness to decline. I predict that when you do the calculations you will find that this won’t happen. The issue of whether the favorable mutations will be truly novel is a completely separate one, and not what I was responding to. I await with interest your use of the equations I cited, and the conclusions from them about whether swamping will occur. If you can’t do it, let us know and I will do an example and post it here.

Merlin Wrote:

The fossil record shows little change, once a species shows up. Isn’t this why the theory of punctuated equilibrium was proposed; to explain the stasis?

There can be a number of reasons for “stasis”, including an imperfect fossil record.

But, in general, stasis has its analogs in all kinds of physical systems, so it would not be surprising to find it in complex living systems. Eigenmodes of energy or vibration states, in systems with several degrees of freedom, exhibit relatively stable configurations that can jump suddenly from one state to another as a result of a relatively small perturbation. How these flips occur depends on the kind of couplings that exist between different states and the nature of the perturbations.

In very complicated systems, the couplings among states may be extremely small (this is necessary for a system to be complicated at all; otherwise it would be chaotic and unstable and, in the case of living systems, perhaps unable to survive, or alternatively very adaptable). So a relatively large perturbation would be required to flip it from one state to another. In the case of a living system, periods of stasis would simply mean that the environmental perturbations aren’t large enough to cause a detectable change in the phenotype that gets fossilized.

On the other hand, if flipping to another state requires a perturbation that is more than the system can handle, the system is destroyed (goes extinct). So complicated systems that don’t have a sufficiently close distribution of states into which they can flip are more vulnerable to extinction if the environment in which they are immersed makes a large enough change.

So there is nothing esoteric going on here.
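The flip-versus-stasis picture above can be made concrete with a toy bistable system (entirely illustrative, nothing biological about the equations): overdamped motion in the double-well potential V(x) = (x² − 1)². Small perturbations relax back to the same stable state; only a push over the barrier flips the system to the other state.

```python
# Toy bistable system illustrating stasis punctuated by flips (illustrative
# only): gradient descent on the double-well potential V(x) = (x^2 - 1)^2.
# The states x = -1 and x = +1 are stable; the barrier sits at x = 0.

def relax(x, steps=2000, dt=0.01):
    """Overdamped relaxation: dx/dt = -V'(x), with V'(x) = 4x(x^2 - 1)."""
    for _ in range(steps):
        x -= dt * 4 * x * (x * x - 1)
    return x

x = relax(-1.0 + 0.3)   # small perturbation: relaxes back to the -1 state
print(round(x, 3))
x = relax(-1.0 + 1.5)   # perturbation past the barrier: flips to the +1 state
print(round(x, 3))
```

Nothing changes for any perturbation below the barrier height, then everything changes at once just above it, which is the qualitative signature being attributed to stasis here.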

About this Entry

This page contains a single entry by PvM published on January 2, 2008 5:25 PM.

NOMA in Ohio, Redux was the previous entry in this blog.

T(h)resholds on Comer is the next entry in this blog.
