Promiscuity in Evolution


No, this is not another post about the sexual habits of female apes. This is about enzymes, and their ability to catalyze different reactions with different substrates, even those that aren’t found in nature. It’s a property known as “promiscuity”, one that’s being increasingly recognized as important in enzymology and enzyme evolution.

The usefulness of enzymes derives in part from their specificity, in that they don’t just catalyze any old reaction with any old substrate. It would be hard for cells to maintain homeostasis if enzymes were highly nonspecific; helpful reactions would be coupled with harmful side reactions, regulation would be impossible, and things would get messy real quick. So it’s useful for enzymes to specialize in certain functions so that they can be applied for specific tasks at specific times. But because nature is a bit sloppy, enzymes are often able to catalyze many reactions weakly in addition to the “native” functions that they specialize in. These additional weak activities are referred to as promiscuous activities, and they’re potentially very important in enzyme evolution. Now a recent study (subscription required) published in Nature Genetics by Amir Aharoni and coworkers sheds some light on why enzymes are promiscuous, and what it means for their evolvability. (There is some good non-technical commentary on the paper here and here.) It also badly knocks down some bold claims made by leading ID proponents.

Enzyme promiscuity facilitates evolution because new catalytic functions can evolve from activities that already exist weakly in existing enzymes. One notable example is TCHQ dehalogenase, an enzyme involved in the degradation of the synthetic pesticide pentachlorophenol (PCP), which evolved very recently from maleylacetoacetate isomerase. Even though TCHQ, a breakdown product of PCP, did not exist in nature until humans began manufacturing PCP, bacteria quickly evolved an enzyme that digests it.

Aharoni and coworkers decided to look at three enzymes that weakly catalyze extra reactions, and to apply a technique known as “directed evolution” to see if these promiscuous activities could be increased. Directed evolution is just Darwinian evolution in a test tube: you apply random mutagenesis to a gene, apply selection criteria to the resulting variants, and repeat. DNA shuffling, which mimics the natural process of recombination, is also frequently added in. The enzymes they used were serum paraoxonase (PON1), a bacterial phosphotriesterase (PTE), and carbonic anhydrase II (CAII). Each one can catalyze a variety of promiscuous activities in addition to its native activity. The researchers attempted to evolve improved activity towards different substrates for each, using nine substrates in total.
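For readers who want the loop spelled out, here is a minimal sketch of directed evolution in Python. Everything in it is a toy assumption of mine, not anything from the paper: the “enzyme” is just a string of residues, the activity() function is a made-up stand-in for a real assay, and the arbitrary optimum has no biochemical meaning. (DNA shuffling is omitted for brevity.)

```python
import random

# Toy directed-evolution loop: random mutagenesis, screen the library,
# keep the best clone, repeat. activity() is a cartoon of an assay.

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids
OPTIMUM = [random.choice(ALPHABET) for _ in range(50)]  # arbitrary "ideal" sequence

def activity(seq):
    """Hypothetical assay: fraction of residues matching the arbitrary optimum."""
    return sum(a == b for a, b in zip(seq, OPTIMUM)) / len(seq)

def mutagenize(seq, rate=0.02):
    """Error-prone PCR, cartoon version: each residue may be substituted."""
    return [random.choice(ALPHABET) if random.random() < rate else r for r in seq]

def evolve(parent, rounds=15, library_size=500):
    for _ in range(rounds):
        library = [mutagenize(parent) for _ in range(library_size)]
        parent = max(library, key=activity)  # selection: keep the best clone
    return parent

start = [random.choice(ALPHABET) for _ in range(50)]
end = evolve(start)
print(f"activity: {activity(start):.2f} -> {activity(end):.2f}")
```

The loop is the whole method: the experimenter never specifies which mutations to make, only which clones get to parent the next round.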

What the authors discovered isn’t too surprising: they were able to substantially improve the level of activity for each of the promiscuous functions they studied. Lots of other directed evolution experiments have done the same. What is somewhat surprising, however, is what happened to the native functions. It’s important to note that in each of their experiments, the researchers selected only for a given promiscuous activity – they did not select for the other promiscuous activities, nor did they select for maintenance of the native activity. This is crucial, because they found that in some cases activity had changed substantially for the other promiscuous activities (indicating plasticity), yet had decreased very little for the native activity (indicating robustness). To put it another way, the promiscuous activities were easily perturbed (increased or decreased) by a few mutations, but the native function was not so easily perturbed.

This may seem rather fortuitous, but aside from structural considerations, there is one simple explanation for the phenomenon: native functions have been under selection, which should favor robustness. Promiscuous activities have not been selected for, so they tend to be more plastic. While plenty of previous studies have demonstrated the prevalence of promiscuity among various enzymes, it had not been well established that the native function can remain undisturbed while the promiscuous functions are greatly improved. Aharoni and coworkers looked through the literature and found many other examples of this phenomenon. (I’ll toss in yet another example which was just published.)
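To make the plasticity/robustness asymmetry concrete, here is a cartoon simulation with that asymmetry deliberately baked in as an assumption – the effect sizes below are invented for illustration, not taken from the paper. Each mutation is allowed to swing the promiscuous activity widely while barely nudging the native one, and selection acts only on the promiscuous activity, mirroring the experimental design.

```python
import random

# Cartoon of "robust native function, plastic promiscuous function".
# The asymmetry is an assumption of this sketch, not data from the paper:
# mutations swing the promiscuous activity widely (sd 0.30) but barely
# nudge the native activity (sd 0.02). Selection sees only the
# promiscuous activity, as in the actual experiments.

def mutant_effects():
    delta_promiscuous = random.gauss(0, 0.30)  # plastic: big swings either way
    delta_native = random.gauss(0, 0.02)       # robust: barely perturbed
    return delta_promiscuous, delta_native

native, promiscuous = 1.00, 0.01
for _ in range(10):
    library = [mutant_effects() for _ in range(200)]  # mutant library
    best = max(library, key=lambda d: d[0])           # screen on promiscuous only
    promiscuous = max(promiscuous + best[0], 0.0)
    native = max(native + best[1], 0.0)

print(f"promiscuous activity: 0.01 -> {promiscuous:.2f}")
print(f"native activity:      1.00 -> {native:.2f}")
```

Under these assumed effect sizes, the selected promiscuous activity climbs a hundredfold or more while the native activity drifts only slightly from its starting level – the same pattern Aharoni and coworkers observed.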

When it comes to evolvability, these two properties – robustness of native function and plasticity towards promiscuous functions – are great facilitators. When an organism is faced with new challenges, an enzyme can improve its activity towards a new substrate (or new reaction) while maintaining a high level of native function. This greatly increases the chances of successfully achieving a novel function without disrupting the old one. Picture a simplified fitness landscape: an enzyme evolving a new function must maintain a high level of fitness throughout its evolution, otherwise its path will be blocked by selection. Selection will keep the enzyme near the “peaks” in sequence space and away from the “valleys”. If the old function must be lost or severely degraded before the new one can evolve, then protein evolution is limited by negative trade-offs.

But the results of Aharoni et al. show that enzymes can indeed maintain their old functions while adapting to new ones, so they need never enter a valley before finding a new peak. When coupled with gene duplication, this presents us with a generalized model of how new functions are continuously added to the genome. The authors give a simple account in their text:

The divergence of new proteins could follow this route: initially, a gene acquires a beneficial mutation that renders it generalized by increasing the protein’s promiscuous activity to a level sufficient for survival while maintaining the original activity largely intact. Gene duplication, and the divergence of a completely new gene (with respect to sequence and function), then follow.

This is likely to be a fairly general route of novel protein evolution, and quite interesting in its own right. But it also refutes some key claims made by ID advocates. In particular, it pertains to our previous critique of a paper by ID advocates Michael Behe and David Snoke, which attempted to model the evolution of new functions following gene duplication using a simple “neo-functionalization” assumption. Their model assumes that gene duplication occurs first, and that it is then a race against time for beneficial mutations to appear before the duplicate gene is rendered nonfunctional by deleterious mutations. But as we see here, a likely route towards novel gene evolution, which they did not account for, is one in which the new activity exists prior to gene duplication, with duplication simply allowing specialization. Hence there is no race against the clock, since the selectable function is already present.
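A quick back-of-envelope calculation shows why the order of events matters so much. The rates below are hypothetical numbers I’ve chosen purely for illustration; the point is their relative magnitudes, since inactivating mutations vastly outnumber any one specific gain-of-function mutation.

```python
# Behe and Snoke's "race against the clock", in toy form. Under pure
# neo-functionalization, a redundant duplicate drifts neutrally and must
# pick up a rare gain-of-function mutation before a far more common null
# mutation silences it. Treating both as competing Poisson processes,
# P(beneficial arrives first) = u_beneficial / (u_beneficial + u_null).
# Both rates below are made up, chosen only for illustration.

u_beneficial = 1e-9  # hypothetical rate of the specific new-function mutation
u_null = 1e-6        # hypothetical rate of inactivating (null) mutations

p_wins_race = u_beneficial / (u_beneficial + u_null)
print(f"P(duplicate gains the new function before being silenced) ~ {p_wins_race:.2%}")
# ~0.10% under these toy numbers. If instead a weak promiscuous activity
# already exists before duplication, there is no race at all: the function
# is selectable from the start, and duplication merely lets the two
# copies specialize.
```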

More striking, however, is how this research flies in the face of a prediction made by William Dembski:

In section 5.10 of NFL [No Free Lunch], I indicated how perturbation probabilities apply to individual enzymes and how experimental evidence promises shortly to nail down the improbabilities of these systems. The beauty of work being done by ID theorists on these systems is that they are much more tractable than multiprotein molecular machines. What’s more, preliminary findings of this research indicate that islands of functionality are not only extremely isolated but completely surrounded by a sea of nonfunctionality (not merely polypeptides having different functions but polypeptides incapable of function on thermodynamic grounds – in particular, they can’t fold). For such extremely isolated islands of functionality, there is no way for [Richard] Wein’s method of co-evolving functions to work. Prediction: Within the next two years work on certain enzymes will demonstrate overwhelmingly that they are extremely isolated functionally, making it effectively impossible for Darwinian and other gradualistic pathways to evolve into or out of them. This will provide convincing evidence for specified complexity as a principled way to detect design and not merely as a cloak for ignorance.

William Dembski, Obsessively Criticized but Scarcely Refuted: A Response to Richard Wein (Emphases original)

Well, it’s been over two years since Dembski wrote this, and not only has the promised evidence for “isolated functionality” of enzymes failed to materialize, the precise opposite has been discovered – protein functions overlap. (Note that the “preliminary” findings Dembski refers to are never cited, and most protein chemists wouldn’t have bought the claim in the first place.) So we have here a bold yet failed prediction, the sort of thing we’ll undoubtedly be seeing lots more of.

Notice also that Dembski is positing this property of “isolated functionality” in order to shore up his concept of “specified complexity”. Richard Wein has written a lengthy critique of specified complexity as presented by Dembski in No Free Lunch, and the above quote is part of Dembski’s reply. As Wein points out in response, Dembski is hanging his grand method of detecting design in biology on the speculative outcome of future research, which basically proves the point that specified complexity is indeed an argument from ignorance. Here we see the dangers of making revolutionary claims based on what might be discovered some time in the future – you often end up being wrong.

Reference:

Aharoni A, Gaidukov L, Khersonsky O, McQ Gould S, Roodveldt C, Tawfik DS (2005). The ‘evolvability’ of promiscuous protein functions. Nat Genet 37(1): 73–76.

11 Comments

Nice stuff, Steve.

I might add that the “function” linchpin of Dembski’s inane “theory” was, and remains, its most blatant and naive flaw, even after we allow ourselves to pretend with Dembski that mysterious alien beings with unprecedented powers actually exist. Of course, that’s not where the problems end, but looking much further begs the question.

But as we see here, a likely route towards novel gene evolution, which [Dembski and Co.] did not account for, is one in which the new activity exists prior to gene duplication, with duplication simply allowing specialization.

Insofar as the concept of proteins having multiple functions – including functions that have not yet been defined – is obvious to any high school student who has taken a decent class in cell biology, the failure to deal with the issue responsibly has always been an unforgivable oversight on Dembski’s part. It’s not reasonable to make such “mistakes” under the circumstances. Rather, such mistakes suggest selective ignorance of the facts. It’s pathetic, actually.

Very interesting, although not all that surprising to a biochemist. And, as usual, it’s a case study of how the IDC’ers take a textbook generalization

“Enzymes are highly specific catalysts.”

and turn it into a rigid rule

“Enzymes only can catalyze a single reaction.”

This allows them to reach the demonstrably false conclusion

“Within the next two years work on certain enzymes will demonstrate overwhelmingly that they are extremely isolated functionally, making it effectively impossible for Darwinian and other gradualistic pathways to evolve into or out of them.”

I’m not concerned about Dembski; he’s obviously incorrigible and has amply demonstrated the ability to ignore data that oppose his model (in this, among other things, he clearly is not a scientist). But as a textbook author myself, I have often found myself having to make general statements to avoid a bewildering mass of disclaimers.

I wonder about how we can teach the real lesson: that science draws verifiably true but non-rigid conclusions. It’s a real problem, and it comes back to bite us sometimes.

Scientists usually shy away from making predictions. Recall that false prophets were to be stoned to death, as Dembski and Co.’s favored book tells us, and the risk of becoming a false prophet calls for caution in making predictions.

Martin Gardner described the typical crank in a book first published about 50 years ago. A crank often has a penchant for introducing new terms (like Dembski’s “unsimplifiability”), claims to have discovered important new scientific laws (like Dembski’s “law of conservation of information”), suggests new, allegedly powerful methods of research (like Dembski’s explanatory filter), and so on. Behind this impressive facade there is in fact nothing of value. A crank may display enormous erudition and ingenuity in coming up with seemingly sophisticated arguments, and typically is absolutely convinced that his critics are stupid and either do not understand his great breakthroughs or willfully try to suppress his discoveries for some non-kosher reasons. Gardner should have added one more feature typical of cranks – a penchant for dabbling in prophecies.

In this contribution Steve Reuland gives a fine example of the abysmal failure of one such bold prediction, made by Dembski with his typical unbounded self-confidence. I doubt that the failure of this prediction will teach him a lesson; we can expect more such predictions from the “Isaac Newton of information theory”. Perhaps there will be all kinds of arguments attempting to justify Dembski’s incautious wrong prediction, but no spin can deny the obvious.

Another gap is closed for ID to hide in. While science uncovers more and more evidence and mechanisms, ID is doomed to remain ignorant.

While some people try to portray ID as scientific, the reality is that there is NO scientific theory of ID. All there is, is a belief that science cannot explain all features of life, and that this failure is evidence of ‘design’. While admitting that ID was formulated to avoid addressing the question of who or what designed, the ID movement has made it clear that its designer is God. ID proponents often claim that ID adds to science by providing the means to detect design, something they say science lacks, while at the same time arguing that design detection has already been applied successfully in science. In other words, the design ID refers to cannot be the kind science already handles, since science can already detect design; this supposed addition to science is, as expected, supernatural.

Not only is ID scientifically flawed and meaningless, but it proposes that science can falsify God, who is forced to hide in the gaps of our ignorance. That is a theologically risky and flawed position, one which may cause much damage to the Christian faith.

Here we have a good example where ID is found to be wrong in its claims. Does this mean that ID has been falsified?

To investigate what kind of evolutionary advantage promiscuity offers, the team created a speeded-up version of evolution in the lab. Mutations were introduced into the genes coding for various proteins in a completely random manner. Evolutionary pressure was then simulated by selecting those mutants with higher levels of activity in one of the promiscuous traits.

After several rounds of mutation and selection, the scientists looked at their enzymes to see what had changed. As expected, they had managed to increase the activity they were selecting for by as much as a hundredfold and more.

So we generate mutations in a random manner, then intelligently select those mutations that increase the activity of the promiscuous traits, and then guess what? Surprise! Surprise! They increased the activity of what they were selecting for by as much as a hundredfold and more.

This appears to be no different than simulating evolution with a computer program and simply shows what one can accomplish using intelligence.

I have a hard time believing anyone could buy such an illogical argument. The researchers did not “design” the proteins. They simply applied a selection criterion without any knowledge of the actual mutations involved. The results are proof of principle that mutation plus selection can create new protein functions, the very thing that IDists have said couldn’t happen. There is nothing stopping this from happening in the wild so long as there is selective pressure to encourage it (which is precisely the case with PCP degradation, for example).

By your reasoning, there is no such thing as a laboratory experiment that is not “intelligent design”. Any variables that the researchers control means that they’re using “intelligence”, so therefore it’s impossible to study any natural process whatsoever.

Speaking of promiscuous behavior, could someone explain to me what exactly is meant by the term “100% human brain” in the following context:

Mice With Human Brains

Weissman has already created mice with brains that are about one percent human.

Later this year he may conduct another experiment where the mice have 100 percent human brains. This would be done, he said, by injecting human neurons into the brains of embryonic mice.

Before being born, the mice would be killed and dissected to see if the architecture of a human brain had formed. If it did, he’d look for traces of human cognitive behavior.

Weissman said he’s not a mad scientist trying to create a human in an animal body. He hopes the experiment leads to a better understanding of how the brain works, which would be useful in treating diseases like Alzheimer’s or Parkinson’s disease.

The test has not yet begun. Weissman is waiting to read the National Academy’s report, due out in March.

http://news.nationalgeographic.com/[…]himeras.html

I can only assume they mean mice with brains that consist entirely of human neurons.

What “traces” of “human” cognitive behavior does he expect to observe in his little mouse dude?

I’ll be frank here: the remote possibility of benefits flowing from this research doesn’t outweigh the sickening feeling.

From my perspective, that sort of experiment is no different from creating a human embryo and allowing the embryo to develop “traces” of “human” cognitive behavior so Dr. Weissman can experiment on it.

I don’t really know, but my guess is that they’d like to see if the neurons form the same kinds of connections that human neurons do, or if the brain specializes the way human brains do, or if they act more “mouse-like”. That would tell us if the factors responsible for human cognition (or at least human brain structure) are contained within the neurons themselves, or are influenced by other factors within the organism.

But this stuff isn’t really my field. I prefer enzymes. (Hint.)

I put my follow-up thoughts on the “Little Mouse Dude” sub-thread up on the Bathroom Wall, in case anyone is remotely interested. ;)
