Developmental buffering, or how to live with your bad genes

Mutate. Select. Repeat. Mutate. Select. Repeat. You can’t understand evolutionary biology if you don’t get the significance of that process. And yet, if you think that’s all there is to it, you’re way off track. PZ explained this very nicely here last week. Let’s focus on one simple point that he made, and look at some recent and significant work on that subject that shows just how misleading some of the common simplifications of evolutionary biology can become.

Here’s PZ on simple views of mutation and selection:

Stop thinking of mutations as unitary events that either get swiftly culled, because they’re deleterious, or get swiftly hauled into prominence by the uplifting crane of natural selection. Mutations are usually negligible changes that get tossed into the stewpot of the gene pool, where they simmer mostly unnoticed and invisible to selection.

I think this is an extremely important point, both for those seeking to answer creationist propaganda and for anyone else trying to understand the process of evolutionary change. The common picture, painted all too often by commentators of various stripes, depicts a world in which mutations run a harrowing gauntlet of selection that is likely to foolishly discard both the gems and the proto-gems of biological function. Oh sure, the cream eventually rises to the top, but only through the magic of seemingly endless eons and limitless opportunities. I hope that most readers of the Panda’s Thumb are annoyed by this crude caricature, but it’s the standard tale, and when the narrator only has a paragraph, it’s the one we’re most likely to hear.

To improve the situation, we might first add the concept of random drift. And that helps a lot. Then we would emphasize the selective neutrality of the vast majority of all mutations, as PZ did. And that helps a lot, too. Let’s look at another helpful concept, one from the evo-devo playbook, almost crazy at first glance but remarkably interesting and important.

Suppose that one reason many mutations are selectively near-neutral is because genetic systems are able to tolerate mutations that have the capacity to be strongly deleterious. Suppose, in other words, that organisms are robust enough to live with seriously nasty genetic problems. This would mean that such mutations could escape selection, and that populations could harbor even more genetic diversity than our simplistic account would seem to suggest.

Some very nice work in the fruit fly (“Phenotypic robustness conferred by apparently redundant transcriptional enhancers”), performed by Frankel and colleagues and published in Nature in July, shows us one way this sort of thing can work. The authors were studying genetic control elements (called enhancers) that turn genes on and off. Specifically, they were looking at how the expression of a gene called shavenbaby was affected by a set of enhancers. (The shavenbaby gene controls the development of hair-like structures on the surface of the fly larva - i.e., maggot - and so alterations in the embryo’s patterning that result from changes in shavenbaby function are easily detectable by simple microscopy.) Now, like many genes that control development, shavenbaby is regulated by a few different enhancers, some that are close to the gene and others that are apparently redundant and are further away. These latter elements are called “shadow” enhancers, as they are remote and distinct from the primary enhancers but highly similar in activity.

Why all this redundancy? Others had proposed that shadow enhancers might confer “phenotypic robustness” - i.e., developmental or functional robustness - by maintaining function in the face of significant challenges (environmental changes, for example), and Frankel et al. set out to test that hypothesis. First they deleted the shadow enhancer region, and this had a very mild effect, consistent with the idea that the shadow enhancers are redundant with respect to the function of the primary enhancers. But then they examined development in the absence of the shadow enhancers, now introducing environmental stress (extremes of temperature), and found dramatic developmental defects. They concluded that the shavenbaby shadow enhancers normally contribute to phenotypic robustness through what they term “developmental buffering.” In other words, the animal’s critical developmental pathways are buffered against many disastrous alterations, in part through the action of redundant control systems.

That’s interesting all by itself, but the authors went one crucial step further. What if the redundant enhancers can also buffer against genetic disasters? The experiment was straightforward: they deleted one copy of a major developmental control gene (called wingless). Those animals are just fine, until they lose the buffering of the shavenbaby shadow enhancers. Without the redundant system, the loss of one wingless gene leads to a significant change in developmental patterning. The conclusion, I think, is quite interesting: the impact of the shadow enhancers only becomes apparent when the system is stressed, by environmental challenges and even by genetic problems elsewhere in the genome.

Such developmental buffering systems are thought to be common in animal genomes, and this means that animal development is capable of tolerating significant genetic dysfunction. It means, I think, that simplistic stories about deleterious mutants being readily discarded from populations are even less useful than we already should have realized, and that’s without the deliberate misuse of such outlines by anti-evolution spinmeisters.

And one last thing. Why my little comment about the evo-devo playbook? Well, one concept championed by evo-devo thinkers is the notion of “evolvability.” The idea (roughly) is that the ability to generate diversity is something that we should expect to see in evolution. Like most other evo-devo proposals, it’s been savaged by some smart critics. But phenotypic buffering by redundant developmental control elements is just the kind of thing that “evolvability” was meant to encompass when it was discussed by Kirschner and Gerhart more than a decade ago. So I say we give credit where it’s due. Anyone else?

31 Comments

Nice post Steve.

Putting on my geneticist hat, another major factor in evolution that routinely gets overlooked is that sexual reproduction allows for very efficient recombination. Recombination is a huge driver in generating new combinations of alleles, some combinations of which will be more adaptive than others. Variation arising through recombination greatly outstrips that of de novo mutation alone.
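The combinatorial reach of recombination is easy to illustrate with a toy model (a hypothetical sketch, not from any study mentioned here): freely recombining two fully distinct 20-locus parental haplotypes yields, among a thousand offspring, nearly a thousand haplotypes that neither parent carried, with no new mutation at all.

```python
import random

def recombine(mom, dad, rng):
    """Free recombination: each locus drawn independently from either parent."""
    return tuple(rng.choice(pair) for pair in zip(mom, dad))

rng = random.Random(42)
mom = (0,) * 20          # one parental haplotype
dad = (1,) * 20          # the other
offspring = {recombine(mom, dad, rng) for _ in range(1000)}
print(len(offspring))    # nearly 1000 distinct haplotypes, zero new mutations
```

With 20 loci there are 2^20 (about a million) possible haplotypes, so a thousand random draws almost never collide.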

I think it is worth pointing out that part of the ambiguity could easily result from not explicitly distinguishing between mutation as an observable phenotype and mutation from a strictly genic point of view. To an organismal biologist, neutral mutations are of less utility, and the natural emphasis would be to focus on the short-and-sweet version, because that is the data they are more familiar with handling.

To put a finer point on it: the original work of Haldane and Fisher underemphasized the importance of recombination by focusing on single, non-interacting loci. I’m not sure exactly to what extent Wright worked on that specific type of problem, but the shifting-balance idea is in the ballpark of the importance of recombination.

Dennis Venema said:

Nice post Steve.

Putting on my geneticist hat, another major factor in evolution that routinely gets overlooked is that sexual reproduction allows for very efficient recombination. Recombination is a huge driver in generating new combinations of alleles, some combinations of which will be more adaptive than others. Variation arising through recombination greatly outstrips that of de novo mutation alone.

This fits nicely into a more general context in which the size and “richness” of a complex system tends to enhance system stability in the face of fluctuations and perturbations.

Just looking at large systems containing a very large number, N, of particles maintained near some relatively constant energy reveals that the relative fluctuations about the mean value of some chosen parameter that describes the system are on the order of 1/√N (i.e., the reciprocal of the square root of N).
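That scaling is easy to check numerically. The sketch below (an illustration, using sums of uniform random “energies” as the system parameter) shows the relative fluctuation of the total falling off in proportion to 1/√N.

```python
import random
import statistics

def relative_fluctuation(n, trials=2000, seed=0):
    """Std. dev. / mean of a sum of n i.i.d. random terms."""
    rng = random.Random(seed)
    totals = [sum(rng.random() for _ in range(n)) for _ in range(trials)]
    return statistics.pstdev(totals) / statistics.fmean(totals)

for n in (10, 100, 1000):
    print(n, relative_fluctuation(n))   # shrinks roughly as 1/sqrt(n)
```

Going from N = 10 to N = 1000 particles shrinks the relative fluctuation by about a factor of √100 = 10.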

Simple systems consisting of relatively few parts tend to have large relative fluctuations, and tend to be sensitive to relatively small perturbations that can destroy them.

The enormous sizes and complexities as well as the diversities of large systems usually provide a relatively stable environment or “bath” in which energy driven organization and coordination can take place.

This is a general rule in nature.

I am less sure than Steve M or PZ that most mutations at coding loci are neutral. Just because you can’t see their effect in the lab does not mean nature can’t. If the population size is (say) 1 million, population-genetic theory says that selection coefficients (fractional differences of fitness) as small as 1 part in 4 million can have an effect different from neutrality.

Another phenomenon to think of is that if developmental buffering reduces the selection coefficient against a mutant (say) fourfold, for a dominant or partially dominant mutation the equilibrium frequency in the population will rise until a new equilibrium is reached, with the mutation four times more frequent. The decrease in fitness caused by recurrent mutations at that locus then ends up being about the same! So the notion that buffering eliminates the deleterious effect of mutation is not correct, in the long run.
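The arithmetic here follows from the standard deterministic mutation-selection-balance formulas. Below is a sketch with illustrative parameter values (μ, h, and s are made up, not taken from any study): for a partially dominant deleterious allele, the equilibrium frequency is roughly μ/(hs), so quartering s quadruples the frequency while the load, about 2μ, stays put.

```python
def equilibrium_freq(mu, h, s):
    """Mutation-selection balance for a partially dominant
    deleterious allele: q_hat ~ mu / (h * s)."""
    return mu / (h * s)

def mutation_load(mu, h, s):
    """Reduction in mean population fitness: ~ 2*h*s*q_hat = 2*mu,
    independent of the selection coefficient (Haldane)."""
    return 2 * h * s * equilibrium_freq(mu, h, s)

mu, h, s = 1e-6, 0.5, 0.01               # illustrative values
print(equilibrium_freq(mu, h, s))        # baseline equilibrium frequency
print(equilibrium_freq(mu, h, s / 4))    # fourfold higher when s is quartered
print(mutation_load(mu, h, s))           # load ~ 2*mu
print(mutation_load(mu, h, s / 4))       # unchanged: buffering doesn't erase it
```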

This post and PZ’s last post, together with the commentary on both, are the reason I keep coming back to the Thumb. Clear, concise, intelligent dialogue on a vitally important subject. Keep it up.

Joe isn’t there a kinetic effect where smaller and smaller coefficients allow mutations to persist for longer periods of time? Longer persistence would increase the efficacy of other recombination effects at sampling a wider territory.

JGB said:

Joe isn’t there a kinetic effect where smaller and smaller coefficients allow mutations to persist for longer periods of time? Longer persistence would increase the efficacy of other recombination effects at sampling a wider territory.

Yes, descendants of each mutational event persist proportionately longer, which means the frequency of that allele in the population is proportionately higher. We are really talking about the same thing.
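The persistence effect can be seen in a minimal Wright-Fisher simulation (a haploid toy model with made-up parameters): weakening selection against a new mutant lengthens its average time spent segregating in the population.

```python
import random

def mean_sojourn(N, s, reps=1000, seed=7):
    """Mean generations a single new mutant segregates before loss or
    fixation in a haploid Wright-Fisher population of size N."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        count, gens = 1, 0
        while 0 < count < N:
            # expected frequency after selection, then binomial resampling
            p = count * (1 + s) / (count * (1 + s) + (N - count))
            count = sum(rng.random() < p for _ in range(N))
            gens += 1
        total += gens
    return total / reps

print(mean_sojourn(100, -0.05))  # strongly selected against: gone fast
print(mean_sojourn(100, 0.0))    # neutral: hangs around longer on average
```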

… and then while the shadow enhancers are doing their job the original enhancer can run off and do something else…

Mike E, I’m always entertained in the best possible way by your ability to restate biological reality in the language of physics.

We had a paper recently accepted that looks at the consequence of recurrent mutations and “moderate” selection coefficients. (I’ll explain in better detail when it is published.)

If selection is very weak, then the mutants will act like neutral alleles. If selection is very strong, then the mutant will either be rapidly fixed or rapidly lost. Between these regions, there is a “twilight” area in which selection is strong enough to be non neutral, but not strong enough to be deterministic or rapid.

This ends up allowing selective variants to persist in the population for a long time, producing complex dynamics.
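The paper isn’t out yet, but the three regimes described here already fall out of Kimura’s classic diffusion approximation for the fixation probability of a new mutant (a sketch; haploid, with an illustrative population size): when Ns ≪ 1 the probability is about 1/N, as for a neutral allele; when Ns ≫ 1 it approaches roughly 2s; and in between lies the “twilight” zone.

```python
import math

def fixation_prob(N, s):
    """Kimura's diffusion approximation for a new mutant at initial
    frequency 1/N in a haploid population of size N."""
    if s == 0:
        return 1 / N
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

N = 1000
for s in (1e-5, 1e-3, 5e-2):
    print(s, fixation_prob(N, s))
# Ns = 0.01 -> effectively neutral, ~1/N
# Ns = 1    -> "twilight" zone, between 1/N and 2s
# Ns = 50   -> near-deterministic regime, ~2s
```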

fnxtr said:

… and then while the shadow enhancers are doing their job the original enhancer can run off and do something else…

Mike E, I’m always entertained in the best possible way by your ability to restate biological reality in the language of physics.

And I hope I am not appearing to be arbitrarily stretching metaphors beyond recognition.

Biological systems contain all sorts of hints of their underlying chemistry and physics. It is easier to see this when one has considerable familiarity with complex condensed matter systems rather than the austere elementary particle or highly idealized simple systems in which the fundamental laws of physics are isolated and studied.

This latest generation of biologists, biochemists, biophysicists, geneticists, are extremely fortunate to have the tools to study biological systems directly and through the kinds of modeling that Joe does.

I may not live long enough to see it, but I have this strong feeling that some of this younger generation will see some of the major breakthroughs we have long anticipated in determining how life works and came about.

I’m a science nerd all the way through; I love this stuff, and I never get tired of it. :-)

That’s really interesting… there are degrees of neutrality?

Sort of like a shiftless transmission as opposed to a 5-speed.

fnxtr said:

That’s really interesting… there are degrees of neutrality?

Sort of like a shiftless transmission as opposed to a 5-speed.

Sort of like that old Chrysler fluid-drive transmission that got the name “Slush-O-Matic.” I remember those.

I’m a science nerd all the way through; I love this stuff, and I never get tired of it. :-)

And one doesn’t even have to be a scientist to suffer that affliction;-)

On a more topical note, don’t the OP and the comments all point to an obvious conclusion: life doesn’t look anywhere near designed; it looks just like the makeshift hodgepodge you’d expect from natural processes?

I think it would be useful to look at this from an information theoretical perspective for a second. The information in a string is inversely related to how compressible it is. A repetitive string is the most compressible kind. Thus, repetitive elements in a string add no new information. In this case, the repetitive element is the shadow regulator. From Hong et al:

We suggest that shadow enhancers might arise from duplication, comparable to the duplication and divergence of protein-coding sequences.

But wait a minute: something that adds no information yet clearly has function? What gives? The problem is a deep-seated assumption in ID: that information is intrinsic only to the genome. In this case the information is also extrinsic; it can come from the environment as well. The buffering function of shavenbaby is evident only when the fruit fly is environmentally stressed; when not stressed, it just acts normally. Again from Hong et al:

Shadow enhancers have the potential to evolve novel binding sites and achieve new regulatory activities without disrupting the core patterning functions of critical developmental control genes.

What this means is that, in order to calculate the information, you need to know not only the genome of what you are studying but the entire environmental history of the species! So-called specified complexity is already incalculable, but this adds another dimension to it.
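The compressibility point can be made concrete with a standard compressor (a toy illustration; the byte strings are made up and have nothing to do with real genome sequence): a duplicated element adds almost nothing to the compressed size, while the same number of random bytes is essentially incompressible.

```python
import random
import zlib

random.seed(0)
repetitive = b"ACGT" * 250                                  # duplicated element
unique = bytes(random.randrange(256) for _ in range(1000))  # same length, random

print(len(zlib.compress(repetitive)))  # a few dozen bytes: little information
print(len(zlib.compress(unique)))      # ~1000 bytes: incompressible
```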

Redundancy is a design technique to improve reliability which is usually removed by management because it’s too expensive, viz. the BP disaster. So, even if you assume that shadow regulators are an example of front-loaded design you still have the problem of detectability through the information route. In fact, it’s examples like this that cause many of us to conclude that detecting design through any complexity-based approach is most likely impossible. Examples coming out of ID also bolster this case since they come nowhere near a solid design inference and they have been trying for decades.

Not only are Steve and I both evangelicals we both come from the Reformed camp. There’s a doctrine in Reformed theology known as concursus where God cooperates with the actions of free agents to achieve His will. This is sometimes also known as the secret will of God. As the name implies, we do not have access to it and the assumption that we can always determine the purpose or design of the details is arrogant. From a scientific standpoint, however, you don’t have to know the small-scale purpose. That’s why Steve and PZ can be in agreement here.

For me, the design inference is a theological/philosophical one and not a scientific one. This means it cannot legally be taught in science classes and this causes the rub with the Intelligent Design movement because thinking like mine blocks trying to sneak creationism into the public schools. When I brought all this up on Uncommon Descent a couple years ago I was banned in less than 24 hours. I wonder why.

Oops. Please replace all instances of shadow regulator with shadow enhancer above. Must. not. post. before. coffee.

Steve concludes by asking: “But phenotypic buffering by redundant developmental control elements is just the kind of thing that ‘evolvability’ was meant to encompass when it was discussed by Kirschner and Gerhart more than a decade ago. So I say we give credit where it’s due. Anyone else?”

Not me, thanks. But I will admit that it is an interesting subject, well worth discussing. For the sake of argument, we will assume that PBbRDC really is the kind of thing that K&G meant by “evolvability”. Has PBbRDC arisen by evolution? Yes of course. But is it an adaptation? Has it evolved by natural selection? Aye, there’s the rub. If it is, in some sense, an adaptation, but it did not arise by NS, just why did it arise? Concursus?

Before Darwin, people found two kinds of “fitness” or “adaptation” in nature. First, organisms seemed well fitted to their environments, in that the organisms could tolerate and exploit those environments. Second, environments seemed well fitted to their inhabitants; they were tolerable (in fact, almost “fine-tuned”) and they provided bountiful exploitable resources. But today, we only worry about the first kind of fitness: fitness to the environment. The second kind, fitness of the environment, is seen as something which doesn’t really require explanation. So the environment is usually treated as an exogenous variable in evolutionary theorizing; the evolution (i.e. change over time) of the environment is mostly explained by geochemistry and plate tectonics and such. So, if this evolution leads to an oxygen-rich atmosphere with moderate temperatures suitable to be exploited by animals, and even intelligent animals, we tell ourselves that we “just got lucky”.

Similarly, I think that the cited “smart critic,” Michael Lynch, nicely explains the genetic “environment” (or at least its surprisingly large size) as a side effect of small population sizes, which may itself be explained as something that large organisms just have to expect.

So, why do we have phenotypes which are buffered by redundant developmental controls? The answer that Lynch would give would involve an expansion of “junk” genome size for non-adaptive reasons, coupled with an exploitation of that genetic-environmental resource by the adaptive, non-junk portion of the genome.

In other words, we “just got lucky”. I think he is right, but I can certainly understand why TEs continue to exist.

Joe Felsenstein said:

I am less sure than Steve M or PZ that most mutations at coding loci are neutral.

Given the evo-devo focus, I suspect that the implied denominator is not coding loci, but rather coding, plus regulatory, plus potentially regulatory loci. Perhaps enough additional loci so that “most” is no longer hyperbole.

Another phenomenon to think of is that if developmental buffering reduces the selection coefficient against a mutant (say) fourfold, for a dominant or partially dominant mutation the equilibrium frequency in the population will rise until a new equilibrium is reached, with the mutation four times more frequent. The decrease in fitness caused by recurrent mutations at that locus then ends up being about the same! So the notion that buffering eliminates the deleterious effect of mutation is not correct, in the long run.

Hmmmm. But when you take recombination into account, doesn’t the efficient removal of organisms carrying multiple deleterious mutations hold down the genetic load and restore some of the buffering? Same logic as with sex and diploidy?

I have thought of neutral mutations as those which make no difference in phenotypic expression, and thus are invisible to selective processes. Because selection is statistical in nature, and statistics don’t work well for small populations, I am beginning to think genetic drift in its various forms can be more important than selection in determining the genetic makeup of small populations.

I am beginning to think genetic drift in its various forms can be more important than selection in determining the genetic makeup of small populations.

Here’s an article that seems to address that: http://en.wikipedia.org/wiki/Founder_effect

The cost of extra DNA to a multicellular organism is a pretty small part of its total energy budget. So on the face of it, while an individual may not benefit directly from having buffering capacity, his descendants could realize some benefit as the gene pool starts to shuffle things. As long as the only cost was the energy of replicating the extra chunk of DNA, the benefit would not have to be large to come out ahead.

Another way to envision some of these more complex systems of interacting genes and benefits only in some genetic backgrounds is that they could be modeled as a special kind of frequency dependent selection.

If you consider the normal environmental fluctuations as well, I think this is an underappreciated way of generating more polymorphism. The alleles wouldn’t be neutral in some senses; they would just tend to have slightly different effects and be best in slightly different conditions. The periodic fluctuations would tend to drive very small changes back and forth in the gene pool, preserving the diversity in a dynamic equilibrium.

The impression I got from PZ’s It’s more than just genes posting was that ‘mutate’ is the wrong word. It should be Variate. Select. Repeat. Variate. Select. Repeat.

Your posting reinforces that impression. Is ‘variate’ the better word?

JGB said:

So on the face of it while an individual may not benefit directly from having buffering capacity his descendants could realize some benefit as the gene pool starts to shuffle things. As long as the only cost was the energy of replicating the extra DNA chunk the benefit would not have to be large to come out ahead.

But if the benefit was to the whole population, the allele causing increased buffering capacity might not benefit more than average.

If you consider the normal environmental fluctuations as well, I think this is an underappreciated way of generating more polymorphism. The alleles wouldn’t be neutral in some senses; they would just tend to have slightly different effects and be best in slightly different conditions. The periodic fluctuations would tend to drive very small changes back and forth in the gene pool, preserving the diversity in a dynamic equilibrium.

Variation of selection coefficients does not necessarily preserve genetic diversity. It’s tricky. For example if two alleles exist in a population, and the heterozygotes between them are intermediate between the two homozygotes on a log scale, then variation in selection coefficients will generally cause one allele to win out. For the theory see section II.10, “Temporal Variation in Fitnesses” in my free online e-textbook of theoretical population genetics.
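A haploid analogue of that result is easy to simulate (a sketch with made-up fitnesses; the diploid theory is in the section cited above): when fitnesses fluctuate over time, the allele with the higher geometric mean fitness takes over, so the fluctuations do not, by themselves, preserve the polymorphism.

```python
import random

def final_freq(p, generations=2000, seed=1):
    """Two haploid alleles; A's fitness flips between good and bad
    years with equal probability, B's is fixed at 1. A's geometric
    mean fitness, sqrt(1.2 * 0.9) ~ 1.04, exceeds B's, so A wins."""
    rng = random.Random(seed)
    for _ in range(generations):
        wA = 1.2 if rng.random() < 0.5 else 0.9   # fluctuating fitness of A
        p = p * wA / (p * wA + (1 - p) * 1.0)     # one generation of selection
    return p

print(final_freq(0.5))   # ~1.0: polymorphism lost despite the fluctuation
```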

It’s tempting to just conclude that one variation must cause another, but it’s not that simple.

Mike Elzinga said:

This fits nicely into a more general context in which the size and “richness” of a complex system tends to enhance system stability in the face of fluctuations and perturbations. …

Simple systems consisting of relatively few parts tend to have large relative fluctuations, and tend to be sensitive to relatively small perturbations that can destroy them.

The enormous sizes and complexities as well as the diversities of large systems usually provide a relatively stable environment or “bath” in which energy driven organization and coordination can take place.

This is a general rule in nature.

This paper seems to disagree.

Perplexed in Peoria said:

This paper seems to disagree.

I think this sentence from the paper may be getting at an important point that could explain the apparent discrepancy.

… this lead to the conclusion that the network’s topology, rather than the kinetic and biochemical details, was primarily responsible for the properties of robustness. Because the Drosophila segmentation network is both robust and sparse it may be possible, given sufficient computational resources, to reverse engineer this network structure with evolutionary simulations.

Not only is connectivity important, but the energetics of the system can single out connected chains and either enhance their effects or diminish them depending on how those connections respond to the magnitude and frequencies of the perturbations flowing throughout the entire system or network.

This gets back to what I was referring to when I compared the entirety of the set of interactions (the super-system, if you will) to a “heat bath” that supplies the mean energy to the system and, in effect, stabilizes it against overall perturbations.

However, that doesn’t mean that everything within the system is stabilized equally. Some subsystems, because of their interconnectivity as well as the strength of their coupling to the larger system, may “resonate” and respond dramatically to relatively small perturbations. Other subsystems may be dampened in their responses to the background perturbations.

Another source of confusion may also go right back to that old “lottery winner’s fallacy” by focusing on the persistence of individual subsystems as a measure of robustness when in fact it is the robustness of the system as a whole that is more important. In other words, the fact that the entire interconnected system is constantly capable of supporting viable subsystems even though those subsystems are free to evolve within the larger complex would be what we mean by “robust” rather than what a given subsystem is capable of doing.

But these seem to be fruitful lines of work. I suspect that as the concepts and definitions become clearer, we will begin to see some of the same universality of behaviors in biological systems that we see in other complex systems.

By the way, this is also why the “quiet, nutrient-rich pond” is not necessarily a good model for abiogenesis.

Again, the general rule of the emergence of stable systems is that they form in an energy cascade and the “products” then get shuttled into a relatively benign environment, or stabilizing heat bath, and have a chance to relax into “ground states” that are not re-excited or ripped apart by the more energetic environment in which they were originally formed.

I don’t know what this is called in biology, but in physics it is often referred to as “pumping.”

Mike Elzinga said:

By the way, this is also why the “quiet, nutrient-rich pond” is not necessarily a good model for abiogenesis.

Again, the general rule of the emergence of stable systems is that they form in an energy cascade and the “products” then get shuttled into a relatively benign environment, or stabilizing heat bath, and have a chance to relax into “ground states” that are not re-excited or ripped apart by the more energetic environment in which they were originally formed.

I don’t know what this is called in biology, but in physics it is often referred to as “pumping.”

From Imanaka and Smith: The following figure is the current understanding of the Nitrogen chemistry in the upper atmosphere of Titan. Double boxes are stable species and the arrows represent photolysis.

http://www.pnas.org/content/107/28/[…]pansion.html

Why is this relevant? From the paper where I got the figure:

Nitrogen is an essential biotic element, yet difficult to fix into biotic/prebiotic molecules directly from N2. Despite N2 being the most abundant constituent in the Earth’s atmosphere through most of its history, only limited fixed nitrogen is available for the biosphere. In the thick N2 atmosphere of Titan with a trace amount of methane, atmospheric chemistry leads eventually to heavy organic gaseous species and aerosol particles. Understanding the formation chemistry and the resulting chemical structure of possible nitrogenated organic aerosols in the Titan atmosphere might help in constraining the atmospheric contribution in abiotic nitrogen fixation processes relevant to the origin and evolution of early life.

Rich Blinne said:

Why is this relevant? From the paper where I got the figure:

Precursors also appear to be floating around in outer space. It’s not clear where they come from. They could be the result of element-rich stars exploding and spewing atoms into the surrounding space where they remain energized by radiation and where they can combine with other atoms to form such molecules in the dense clouds that form through gravitational attraction.

Or they could be formed on moons or planets under UV or gamma bombardment where they get shuttled by convection into protected environments only to be blasted out into space by a meteor impact.

There are also extreme conditions deep within the Earth’s mantle. If there are enough of the right atoms and molecules in those extreme environments, this may also be a place where energy cascades can produce precursors, and perhaps life itself.

Many of the molecules that are necessary for life are in metastable states in which atoms sit in shallow wells at the top of somewhat larger potential hills. They require an activation energy to trigger their breakup or their interaction with other atoms and molecules. In other words, they have to be given an energetic kick to get them out of “a dimple at the top of a hill.”

Getting atoms or compounds into such configurations is why “pumping” or energy cascades are necessary. These assemblies are built by “tumbling down” from higher energy states with the products then finding their way into an environment in which they are protected from the higher energies in those cascades.

In any case, many of these kinds of cascading processes are the heart of industrial chemical processes and experimental processes in which materials and compounds are constructed for a particular purpose. It is not surprising that they also occur in nature.

Steve: In other words, the animal’s critical developmental pathways are buffered against many disastrous alterations, in part through the action of redundant control systems.

The question though: how on earth did these animals end up with such advanced capabilities?

Berend de Boer said:

The question though: how on earth did these animals end up with such advanced capabilities?

Well, Berend, that really wasn’t the question. But lots of people are working on that one too.

At a guess, the ones with less capabilities tended to leave fewer descendants, and the ones with more tended to leave more?

About this Entry

This page contains a single entry by Steve Matheson published on August 3, 2010 4:41 PM.
