Making Babies - How Simple Rules Build Complex Bodies


The New York Academy of Sciences provides us with access to a talk by Nobel Laureate Christiane Nüsslein-Volhard on how genes drive development: no need for unspecified ‘Intelligent Designers’, no need for miracles, just hard work by scientists who are committed to discovering the details of how, what, when, and so on. Compare this with how ID explains the development of the embryo.

Click on the Flash presentation


I also suggest that interested readers get their hands on her book “Coming to Life: How Genes Drive Development” by Christiane Nüsslein-Volhard or read an excerpt of the book: Chapter IX — Evolution, Body Plans, and Genomes

34 Comments

Get access to the NYAS Podcasts

Excellent talks by many scientists…

This echoes a theme emphasized by Richard Feynman and developed by scientists and mathematicians under the name “chaos”: that simple rules can give rise to complex phenomena.

Feynman often used the example of chess: the rules of the game are simple (they fit on a single leaf of paper) but in application the rules are used over and over again to make a complex game (the applications fill a library).

Another example is the Mandelbrot set. The rules for the Mandelbrot set are simple in the extreme. The set itself has often been described as “infinitely complex”.

http://www.youtube.com/watch?v=gEw8xpb1aRA
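To make the point concrete: the entire rule behind the Mandelbrot set fits in a couple of lines of Python, yet the boundary it traces is famously intricate. A minimal sketch (the iteration cap and test points are illustrative choices, not canonical values):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z**2 + c from z = 0; c belongs to the set if |z| stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the orbit is guaranteed to diverge
            return False
    return True

# The whole rule is the single line `z = z * z + c`:
print(in_mandelbrot(0))   # True  (orbit stays at 0)
print(in_mandelbrot(1))   # False (0, 1, 2, 5, ... escapes)
print(in_mandelbrot(-1))  # True  (orbit cycles -1, 0, -1, 0, ...)
```

Coloring each point of the complex plane by how quickly it escapes produces the familiar infinitely detailed images, all from that one line.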

To the extent that I understand the work of William Dembski, his claims imply that neither chess nor the Mandelbrot set can exist.

Or possibly Dembski claims that he can become a grand master …, erhm, excuse me, the Isaac Newton of chess, by studying the rule book.

I’m not familiar with the context of Feynman’s statement, but along with evolution, this definition of chaos is surely one of the underlying themes of all science. Every time you take things up a level, subatomic particles to atoms to compounds to cells to organisms to ecosystems to planets to solar systems and the large-scale structure of the universe, things attain a level of complexity not handled well by the smaller-scale rules. It’s not something really conveyed as a message in any class I’ve ever taken, but it surely is part of the beauty of the entire endeavor.

JGB Wrote:

I’m not familiar with the context of Feynman’s statement, but along with evolution, this definition of chaos is surely one of the underlying themes of all science. Every time you take things up a level, subatomic particles to atoms to compounds to cells to organisms to ecosystems to planets to solar systems and the large-scale structure of the universe, things attain a level of complexity not handled well by the smaller-scale rules. It’s not something really conveyed as a message in any class I’ve ever taken, but it surely is part of the beauty of the entire endeavor.

It is interesting, however, that widely separated energy scales at successive levels of complexity can simplify the problems of analysis. For example, most of chemistry (energy levels on the order of electron volts, or eV) usually doesn’t need to take into consideration what goes on in the nuclei of the atoms (energy ranges on the order of millions of electron volts, or MeV).

Emergent phenomena can often be the dominating forces in determining further development of complex systems. With living things, however, it is much more complicated. Emergent phenomena have energies on the order of a few tenths of an electron volt (protein folding, binding of solids, stiffness of solids and tissues, viscosity, compressibility, and all those other macroscopic features of solids and liquids). When the energy inputs from the surrounding environment are of the same order, things can get very hard to predict. Light in the visible range has energies of the order of an eV. Infrared, and heat associated with molecular vibrations, are on the order of a few tenths of an eV. These energies do not tap into the ionization potentials of atoms (on the order of eV).

What makes the task harder for biology is that, at the level of complexity of biological systems, the number of degrees of freedom a system has in which to develop has already increased enormously. Jockeying around among these degrees of freedom becomes hard to predict. It is remarkable that some of these systems remain relatively stable for long periods of time. But under stress, they do change, and sometimes quite rapidly (relative to geological time). Under more severe stresses (on the order of a few eV) they are torn apart or vaporized. So life, as we know it on this planet, exists in a relatively narrow window of a few tenths of an eV, but a lot sure happens in that window.

Hm, without having listened to the podcast (which I will surely do), I’m not sure what your point is relative to evolution/ID. I thought IDers had no problem whatsoever with developmental biology since the high conservation of e.g. the Hox genes “point to design”…

”… since the high conservation of e.g. the Hox genes “point to design”…”

Actually, the high conservation of the Hox genes in their spatial expression patterns is a defining characteristic of Animalia, and clearly demonstrates evolution by descent. Evidence for design would be independent creation–the use of entirely different genes and/or expression patterns to program the axial patterning of vertebrates and invertebrates, for example. Hox gene structural conservation and conserved patterns of expression flow directly from the hypothesis of evolution by descent. It is entirely unanticipated by special creation and its bastard relation, ID.

I thought IDers had no problem whatsoever with developmental biology

ID-creationists have severe problems with any biology. For example William Dembski in a thread on whale evolution:

Pardon my naivete, but don’t really really big animals (like whales that can weigh 200 tons) need all sorts of different homeostatic mechanisms to control body heat, etc. compared with much smaller deer/rat-like creatures?

The podcast shows how science works in expanding our knowledge of how embryos develop AND how changes in development can influence evolutionary changes. Any time simple rules give rise to complexity, ID is at odds with science, since it cannot accept that such CSI is created by natural processes alone. It requires a supernatural designer at all costs.

@ dvrvm:

I didn’t listen to the whole podcast, but in it Nüsslein-Volhard remarks that the recent availability of data in the field opens the way to exploring the evolutionary context.

@ PvM:

CSI

Sigh. Please define “CSI” rigorously, explain how it relates to biological science, and specifically how complexities of processes such as development relate to complexities of said “information”.

Mike Elzinga said:

“Emergent phenomena can often be the dominating forces in determining further development of complex systems.”

Ernst Mayr, if I recall correctly, describes life as an emergent property in his book “This is Biology”. This makes sense when one thinks of life as an ordered hierarchical system, which is something that all freshman biology students should be familiar with. Heck, I even teach “emergence” to my non-majors, which makes teaching about the evolution of complex phenomena much easier for them to grasp.

I thought the IDC position on birth was pretty straight forward. The stork did it.

Sigh. Please define “CSI” rigorously, explain how it relates to biological science, and specifically how complexities of processes such as development relate to complexities of said “information”.

Why?

Vince Wrote:

Ernst Mayr, if I recall correctly, describes life as an emergent property in his book “This is Biology”. This makes sense when one thinks of life as an ordered hierarchical system, which is something that all freshman biology students should be familiar with. Heck, I even teach “emergence” to my non-majors, which makes teaching about the evolution of complex phenomena much easier for them to grasp.

This idea of life being an emergent property has always intrigued me. And the fact that relatively simple rules can lead to very complex outcomes is also relevant.

In my earlier comment about separate energy domains not figuring into what goes on within a particular domain, I was alluding to the fact that this is going on within a very narrow band of energies. If one just looks at the energies represented by temperatures between the freezing and boiling points of water (that turns out to be 0.0235 eV to 0.0322 eV), one begins to realize that a lot of simplification must be taking place in living systems in order to exclude the complications that would be brought in from other energy ranges.

There are a few exceptions in the cases of responses to, say, visible light (approximately 1 to 3 eV) where electrons are lifted out of lower energy levels in organic solids to participate in those energy ranges in which living things operate and rearrange themselves. But things that seem extreme to living systems, such as the melting of iron (about 0.1 eV), tell a lot about what a rich realm develops in the right constituents when they are placed in a narrow energy range a little below that. Mind boggling.

Hi Mike,

I found your comments really fascinating. Could you point me to any references that describe the relationship(s) between energy scales, emergent properties and life?

Cheers Che

Che Wrote:

Could you point me to any references that describe the relationship(s) between energy scales, emergent properties and life?

I don’t know very many popular works that get into this, but I know lots of physics textbooks on quantum mechanics, statistical mechanics, and solid state physics (more generally, condensed matter physics).

I don’t know what level you are interested in, but here are a few undergraduate and beginning graduate level classics. These may be a little scary if you aren’t familiar with the mathematics necessary to read these.

Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles by Robert Eisberg and Robert Resnick.

Solid State Physics by Ashcroft and Mermin.

Statistical and Thermal Physics by Frederick Reif.

General Chemistry by Linus Pauling.

Check in any physics section of a good library to find many others.

I’m sure the biologists here can point you to works in biology.

There are a few simple tools for maneuvering around in the various domains and making ballpark estimates.

Common flashlight cells rely on chemical processes, showing that the outer-shell (valence) electrons are removable with energies on the order of 1.5 eV.

The energy needed to completely ionize a hydrogen atom is 13.6 eV.

The energies needed to take apart the nucleus of atoms are on the order of millions of electron volts (MeV).

Melting and vaporization temperatures give estimates of the molecular binding energies of solids and liquids by way of the formula E = kT, where E is the energy in electron volts, k is Boltzmann’s constant (8.6E-5 eV/K), and T is the absolute temperature in Kelvin. The velocity of sound in these materials also gives estimates of these molecular binding energies which are on the order of 0.1 to about 0.2 eV for inorganic materials such as iron, and on the order of a few hundredths of an eV for softer materials and organic compounds.
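For readers who want to try these ballpark estimates themselves, here is a minimal Python sketch of the E = kT conversion (the 1811 K melting point of iron is an assumed round figure; all outputs are order-of-magnitude estimates, as in the text):

```python
k_B = 8.617e-5  # Boltzmann's constant in eV per kelvin

def thermal_energy_eV(T_kelvin):
    """E = kT: characteristic thermal energy at absolute temperature T."""
    return k_B * T_kelvin

# Liquid water spans roughly 273 K to 373 K:
print(thermal_energy_eV(273))   # ~0.0235 eV (freezing point)
print(thermal_energy_eV(373))   # ~0.0321 eV (boiling point)
# Melting point of iron, taken as ~1811 K:
print(thermal_energy_eV(1811))  # ~0.156 eV
```

These numbers reproduce the “few hundredths of an eV” window for liquid water and the roughly 0.1 to 0.2 eV scale for inorganic solids mentioned above.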

The relationship between photon energy and photon wavelength is E = hc/lambda, where E is the energy in electron volts, h is Planck’s constant (4.1E-15 eV*s), c is the velocity of light (3.0E8 m/s, or 3.0E17nm/s; nm = nanometers), and lambda is the wavelength in nanometers if you use c = 3.0E17 nm/s.
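The photon formula is just as easy to script; a minimal Python sketch (the 400 nm and 700 nm wavelengths are rough endpoints of the visible range, chosen for illustration):

```python
h = 4.136e-15  # Planck's constant in eV*s
c = 3.0e17     # speed of light in nm/s

def photon_energy_eV(wavelength_nm):
    """E = hc/lambda: photon energy in eV for a wavelength given in nanometers."""
    return h * c / wavelength_nm

print(photon_energy_eV(700))  # red light, ~1.8 eV
print(photon_energy_eV(400))  # violet light, ~3.1 eV
```

This confirms the claim above that visible light carries on the order of 1 to 3 eV per photon, right at the scale of valence-electron chemistry.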

Other phenomena, due to quantum mechanics, include the discrete energy levels within atoms, and the spreading out of energy levels into valence and conduction bands as atoms collect together to form solids and liquids.

Then there are the emergent properties (hardness, wetness, viscosity, coefficients of friction, “stickiness”, etc.) of macroscopic solids and liquids that are familiar to engineers. In the mesoscopic arena of small systems made up of a few atoms, there are overlapping domains in which many interesting things go on, but these are usually observed at very low temperatures below a few Kelvin. Living systems are made up of organic molecules that have lots of degrees of freedom, but which are still subject to quantized energy levels in their behaviors.

Feynman often used the example of chess: the rules of the game are simple (they fit on a single leaf of paper) but in application the rules are used over and over again to make a complex game (the applications fill a library).

Another example is the Mandelbrot set. The rules for the Mandelbrot set are simple in the extreme. The set itself has often been described as “infinitely complex”.

Consider all of the implications of the Peano Axioms, or of the Euclidean Axioms.

@ PvM:

Why?

Good question. Another good question is “Why not?”.

Look, it seems to be your style to look at creationism philosophically. And I recognize that one shouldn’t argue about personal style as such.

But while this blog is decidedly anti-antievolution (as described in the About tab) there is also an underlying science we try to defend.

As there is currently no defensible, usable definition of CSI, and there are good reasons to expect there never will be one even if someone wanted to make one (as IDC doesn’t, since testability would give up their scam), it is confusing to mention it as if it existed in connection with science topics.

Isn’t science hard enough for interested blog readers without philosophical “what if” games? That is my 2p, with small change included, and I will continue to put them on the table. :-P

Besides, speaking of styles, as I would never discuss invalid concepts in connection with empirical areas, you more or less exclude people like me from your argument, such as it is. We are reduced to pointing out that the argument isn’t valid, and to following up on those that are.

Mike Elzinga Wrote:

The energies needed to take apart the nucleus of atoms are on the order of millions of electron volts (MeV)

We can get a handle on this by looking at radioactive decay.

The lowest-energy beta emitter of which I am aware is tritium, whose beta particles possess about 17 keV (less energy than the electrons in a standard CRT from a television set). The highest energy beta emitters (e.g. 32P) are in the region of about 1.5 to 2 MeV. Alpha particles are emitted (e.g. from Americium-241, at about 5.5 MeV) with energies typically between 4 and 9 MeV. Gamma energies run from tens of keV up to a few MeV, the same order of magnitude as alpha particle energies.

Beta decay is a phenomenon of the weak nuclear force; alpha decay is governed by the strong force (via quantum tunneling through the Coulomb barrier). The strong nuclear force, as one might imagine, is stronger and has higher energies associated with it.

JGB Wrote:

I’m not familiar with the context of Feynman’s statement, but along with evolution, this definition of chaos is surely one of the underlying themes of all science. Every time you take things up a level, subatomic particles to atoms to compounds to cells to organisms to ecosystems to planets to solar systems and the large-scale structure of the universe, things attain a level of complexity not handled well by the smaller-scale rules. It’s not something really conveyed as a message in any class I’ve ever taken, but it surely is part of the beauty of the entire endeavor.

This was neatly illustrated to me when, in a final-year chemistry lecture, the lecturer mentioned ab initio molecular orbital calculations. Those in the audience who had encountered this term before (mostly those doing straight chemistry degrees) groaned loudly.

This is because ab initio molecular orbital calculations, arising from the application of quantum mechanical theory to chemistry, are exceedingly hard. So much so that, at the time I encountered the term, no-one had ever completed them for anything more complicated than a hydrogen molecule (H2).

If the orbitals of a hydrogen molecule are bad, just imagine doing it for something like dihydrogen monoxide!

Henry, yes. And we thought water was a simple molecule!

In chapter IX, Nuesslein-Volhard says fossils are rare. Really? In what sense?

I will assume she is referring to the very low probability that any one individual organism will fossilize.

While waiting for the experts to provide the facts, I found a description of taphonomy:

Taphonomy, from the Greek taphos (burial), is concerned with the processes responsible for any organism becoming part of the fossil record and how these processes influence information in the fossil record.

Not every organism that ever lived could become part of the fossil record. If you eat an average of three meals a day, you test and prove this hypothesis daily. A large percentage of all biological entities end up as food for other organisms higher on the food chain. This fact alone may prevent these organisms from being preserved. Even those organisms that avoid being eaten have a low probability of becoming fossilized because most of them undergo decay and recycling of their chemical components. For example, you can examine any forest-floor litter and find that beneath the top layer of leaves, the organic matter has been degraded to an unrecognizable form (humus – not hummus, the garlic-laden spread served in health-food restaurants). This recycling keeps the carbon, nitrogen, and sulfur cycles operating. In fact, many taphonomic biases impact the odds of any organism being preserved. [My emphasis.]

Any organism must successfully pass through three distinct, and separate, stages in order to be seen in a museum display. These stages span the entire time from death of the organism to collection.

Necrology is the first stage, and involves the death or loss of a part of the organism. The vast majority of animals must die before they can become introduced to the next phase. […]

Once an organism has died or sheds a part, all the interactions involving its transferral from the living world to the inorganic world (including burial) constitute the second taphonomic stage. This is the Biostratinomy stage. […]

Once buried, the organic material is subjected to the third taphonomic phase, or Diagenesis. Diagenesis involves all of the processes responsible for lithification of the sediment and chemical interactions with waters residing between clasts. The processes of fossilization appear to be site specific with respect to depositional settings, resulting in a mosaic of preservational traits in the terrestrial and marine realm.

Although the fossil record is incomplete, it still provides a useful survey of the history of life because of the vast amounts of time represented within the rock record. Even if the conditions for preserving organic matter existed only once every 10,000 years in each contemporaneous depositional environment around the globe, a lithology that was 100 meters thick (330 feet) and encompassing 1 million years of time would contain 100 fossil assemblages. Such conditions are not unrealistic, particularly within the ocean basins.

If we then consider contemporaneous depositional settings around the globe, the number of fossil assemblages that would be preserved during this 1 million years of time increases dramatically. [My emphasis.]

If ~10,000 years pass between fossils in a given place, the probability that any particular organism fossilizes is rather low, since many generations and individuals of small species will pass in that time.
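The arithmetic behind the quoted estimate is worth making explicit; a tiny Python sketch (the numbers come from the quoted passage, not from a primary source):

```python
# One preservation event every 10,000 years, over 1 million years of deposition
# recorded in a single 100 m thick lithology (per the quoted taphonomy text).
duration_years = 1_000_000
interval_years = 10_000

assemblages = duration_years // interval_years
print(assemblages)  # 100 fossil assemblages in one section
```

Multiplying by the number of contemporaneous depositional settings around the globe is what makes the total fossil record large despite the rarity of preservation at any one spot.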

I haven’t read her book – yet. But I did listen to the slide show.

My own feeble attempt of transcription from her talk on the relevant – short, one sentence – part (snippet #43) is this:

And then I have a little thing on the evolution of hominids and humans, which came out of molecular studies comparing DNA sequences of existing species, and also some from fossils, which I think is also quite exciting.

AFAICT, this is the only point where she references fossils at all. Incidentally, I shall defer to Torbjörn Larsson, OM :-)

In chapter IX, Nuesslein-Volhard says fossils are rare. Really? In what sense?

I will assume she is referring to the very low probability that any one individual organism will fossilize.

That fits with what I recall reading - the number of fossils that have been studied is on the order of a third to half a billion. I don’t know what the best estimate is for the number of species that have existed since the Cambrian, but I’d guess it to be around that order of magnitude as well. And since a lot of those known fossils belong to a few species for which lots of fossils are known, I’d guess that less than 10% of extinct species are represented at all.

Henry

Besides, speaking of styles, as I would never discuss invalid concepts in connection with empirical areas, you more or less exclude people like me from your argument, such as it is. We are reduced to pointing out that the argument isn’t valid, and to following up on those that are.

While I agree, I also think that this is not the only approach we should take. Information in the genome needs to be explained, even though, as we all know, it has little relationship to how ID defines information.

Dembski seems to have accepted Shannon information as a valid form of CSI, so it seems valid to point out that this shows how natural processes can trivially create information, even CSI as somewhat incompetently defined by ID.

@ PvM:

Information in the genome needs to be explained even though as we all know it has little relationship to how ID defines information.

I heartily agree, sort of. :-P

It seems the concept of traits (alleles) is sufficient to give the bulk of biology, while concepts such as information would perhaps add a little understanding in places. The same picture paints itself in signal theory (Shannon), algorithmic theory (Kolmogorov), and physics (entropy).

I think that would be an appropriate message to the public. Think increase of adaptive traits, not increase of information.

[Especially if it is suspected that the earth’s populations don’t increase in complexity long-term. It is IMHO much more straightforward to grasp that species have to adapt to a changing environment than that they will lose (changed environment) and gain (adaptation) information during the process.]

Dembski seems to have accepted Shannon information as a valid form of CSI

And here I don’t agree. As there is no measurable definition of CSI, why should we care? Moreover, on the Information in biology thread the difference between Shannon information and information measures that capture complexity was mentioned.

No single measure can capture all structural information (which is why CSI would fail, even if IDCers would attempt to make a positive testable definition for once), but some can capture parts. Mutual information is one such - mutual information maximizes between random and perfect order, and is suitable for describing for example neuronal complexity, i.e. instances of biological apparent design.

But it is of course widely different from Shannon information, that for example Dawkins use to describe information change under variation and selection, i.e. instances of biological processes.
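For what it’s worth, both quantities have rigorous, computable definitions, which is precisely what “CSI” lacks. A toy Python sketch (the four-letter sequences are made-up illustrations, not biological data):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy H(X) in bits for a sequence of symbols."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for two aligned sequences."""
    joint = list(zip(xs, ys))
    return shannon_entropy(xs) + shannon_entropy(ys) - shannon_entropy(joint)

print(shannon_entropy("ACGTACGT"))          # 2.0 bits: all four symbols equally likely
print(shannon_entropy("AAAAAAAA"))          # 0.0 bits: no uncertainty at all
print(mutual_information("AACC", "TTGG"))   # 1.0 bit: perfectly correlated pair
print(mutual_information("ACAC", "AACC"))   # 0.0 bits: independent pair
```

Note how the two measures diverge: a perfectly correlated pair has high mutual information while an independent pair has none, regardless of each sequence’s own Shannon entropy, which is the distinction drawn above.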

I understand that it is tempting to equate between a fictional concept of CSI and apparent design. But there is a reason why it is called apparent.

Merlin asks, “In chapter IX, Nuesslein-Volhard says fossils are rare. Really? In what sense?”

I’m reading Coming to Life right now. On page 129 (Chapter IX) Nuesslein-Volhard says, “…most of what we know about ancient human evolution rests on fossil evidence, which is naturally shaky due to the scarcity of human fossils.” I can’t find any mention of fossils in general being scarce.

I’m not even sure if Nuesslein-Volhard is still correct about the scarcity of human fossils. There have been a lot of new discoveries lately. But she is certainly correct that historically the field was hampered by the scarcity of human fossils. Anyway, that was just her setup for her main point, which is that DNA analysis has been enormously helpful in understanding human evolution.

Oh, dear. Another mention, on page 119, that fossils “are very rare.…”

I don’t know what that’s about. She goes on to say that fossils “tell us almost nothing about the embryonic development or the genes of… animals.” So maybe Nuesslein-Volhard just wasn’t interested enough to learn how many fossils there really are. Fossils, in any case, are not what her book is about.

hoary puccoon:

Oh, dear. Another mention, on page 119, that fossils “are very rare.…”

I don’t know what that’s about. She goes on to say that fossils “tell us almost nothing about the embryonic development or the genes of… animals.” So maybe Nuesslein-Volhard just wasn’t interested enough to learn how many fossils there really are. Fossils, in any case, are not what her book is about.

She writes: “The answer to this question is difficult because in most cases, neither the intermediate forms nor the common ancestors exist anymore. Fossils, which are the only clue to the animals of the past, are very rare and tell us almost nothing about the embryonic development or the genes of these animals.”

She has a very valid point.

Yes, and to elaborate a little more:

Transitional species exist for only short periods of time (as predicted by Darwin in TOOS). Since fossilisation is rare in terms of: (1) the number of individuals in a particular population that are fossilised, and (2) the relative proportion of those fossils that survive to the present day; there are relatively few fossils to illuminate our own evolution.

Additionally, w.r.t. embryonic patterning, since most vertebrate embryos are rather delicate, and since the most durable components of a vertebrate tend to be bones and teeth, fossilisation of embryos is likely to be under-represented compared with adults. There may also be a cause-of-death bias, because a higher proportion of embryos (i.e. eggs; except, obviously, for placental mammals) might fall to predation than is the case for adults.

She is quite right to indicate that fossils tell us nothing about genes (except in terms of gross anatomy), but this is what makes the fossil record independent of the genetic record. Arguably, this makes the case for common descent stronger. I do not understand why she is complaining that fossils tell us nothing about genes.

Transitional species exist for only short periods of time (as predicted by Darwin in TOOS). Since fossilisation is rare in terms of: (1) the number of individuals in a particular population that are fossilised, and (2) the relative proportion of those fossils that survive to the present day; there are relatively few fossils to illuminate our own evolution.

IOW, Darwin described punctuated equilibrium, he just didn’t give it a name.

Henry

Henry J, I don’t think that’s what he was getting at in the chapter I had in mind. He was talking about two daughter species diverging from one another as a result of competition, and a transitional population being out-competed by both daughter species. In part, this is related to the size of each population, because larger populations are more robust and more adaptable than smaller ones. It is also down to the nature of competition - if two species are very similar, and have the same food, the same range, choose the same nesting sites etc., they will be competing with one another all of the time. However, the more they differ from one another, the less likely it is that they will be competing directly with one another all of the time. Thus, daughter species will diverge after separating from one another. Now, this is a thought experiment based on populations that diverge for no specified reason and that can still interact. Obviously, it does not pertain to the case of divergence due to geographical isolation. However, in the latter case, the species will diverge as a consequence of (1) adaptation to different environments, and (2) genetic drift.

Incidentally, the divergence of daughter species is a prediction of Darwin’s that has been borne out by study of the fossil record.

About this Entry

This page contains a single entry by PvM published on January 5, 2008 1:41 PM.
