Uncommon Dissent

by Joe Felsenstein
http://evolution.gs.washington.edu/felsenstein.html

Over at Uncommon Descent an unusual discussion has erupted. A commenter named “MathGrrl”, who has been occasionally active there as a critic of ID, has actually been allowed to make a guest posting. She gave several examples of situations where one could make a specification of what were the best genotypes, and asked how Complex Specified Information could be defined in these cases. She has handled the discussion with great restraint. Several hundred comments later, no consensus has emerged. Commenters at anti-ID blogs (here, here, here, here, and here) have concluded from this that the concept of CSI is vacuous.

I’d like to give a perspective that may be unpopular here. I don’t think Complex Specified Information is a vacuous concept, though we usually do not have enough information to actually calculate numbers for it.

Simply put, birds fly and fish swim. They do so a lot better than organisms coded by random strings of DNA (formed by mutation without natural selection – organisms coded for by monkeys typing with four-key typewriters). If we could imagine looking at all possible such organisms with the same length genome as (say) a bird, the fraction of them that would fly as well as a bird, or better, would be incredibly tiny. So tiny that if every particle in the universe were a monkey with an ATGC typewriter, there would not have been enough time by now to produce anything as good as a bird even once since the time of the Big Bang. That is the essence of William Dembski’s argument. Note that getting technical about information theory is not required. People love to contradict each other about information theory, but we can set most of that part of the argument aside.

A simple definition of Specified Information would be that it is the negative log (to the base 2) of the fraction of those sequences that are better at flying than a bird. We don’t have enough information to actually calculate it, but we can be sure that it is big enough to pass Dembski’s threshold of 500 bits, and thus CSI is present.
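For concreteness, here is a minimal sketch of that definition in Python. The fraction used is purely hypothetical, since, as noted, we cannot actually compute it for real organisms:

    import math

    def specified_information(fraction):
        """Specified information in bits: the negative log (base 2) of the
        fraction of sequences that meet or beat the specification."""
        return -math.log2(fraction)

    # Hypothetical figure for illustration only: if 1 in 2^600 random
    # genomes flew as well as a bird or better, SI would be 600 bits,
    # clearing Dembski's 500-bit threshold.
    si = specified_information(2.0 ** -600)
    print(si, si > 500)   # 600.0 True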

So am I saying that CSI is present in examples of life? Yes, I am. So does that mean that it follows that design is present in those cases? No, it does not. As I have explained before (here), Dembski draws the conclusion that the presence of CSI proves design because he has a theorem, the Law of Conservation of Complex Specified Information (LCCSI), which supposedly proves that an amount of specified information large enough to constitute CSI cannot arise by natural processes, even once in the history of the universe. In fact, he is wrong, for two reasons:

* His theorem is not proven. Jeffrey Shallit and Wesley Elsberry pointed out (here) that Dembski violated one of the conditions of his own theorem when he gave his proof that this large an amount of SI could not arise by deterministic processes.

* In any event, to use his theorem (even if it were proven) to rule out natural selection, you have to use the same specification (say “flies as well as or better than this bird”) both before and after evolutionary processes act. And this Dembski does not do. His conservation theorem involves changing the specification in midstream. When you require that the specification stay the same, you can immediately see that the amount of SI is not conserved. Natural processes such as natural selection can improve the flight of birds.

Advocates of ID endlessly repeat the mantra that the presence of CSI proves that design is present. They are relying on Dembski’s LCCSI, whether they know it or not. But natural selection can put Specified Information into genomes, and when it acts repeatedly, can easily exceed the threshold that Dembski uses to define CSI. The issue is not CSI, it is the conservation law, one that has not been proven in any form that is relevant to detecting design.

170 Comments

In a way this is a flashback to the reason “Febble,” a British neuroscientist, was banned from UD four years ago. She argued that on Dembski’s definition of the behavior of an “intelligent” designer, natural selection qualifies as an intelligent designer.

The take-home is that to the extent that ID notions like “intelligence” or “complex specified information” can be made operationally explicit and testable, naturalistic processes are entirely adequate to produce the phenomena those notions are invoked by creationists to explain (where “explain” in the latter case is used very loosely, of course).

The biggest problem is that design is quite easily detected without it having to be complex at all. Handaxes are not very complex, yet are readily understood to have been designed–by humans, oddly enough, not God (really, if God were busily operating in the environment, what right would we have to ascribe intelligently-made ancient artifacts to humans?).

That’s why Dembski attempts to redefine simple design as being “complex,” because all life is actually complex, while design per se can be either simple or complex. Dembski wanted to conflate design and life, so he calls simple “unlikely” (via “natural” means) artifacts complex when they are in fact simple.

Only modern life can be said to be reliably complex in every known instance. “Elegance” and “simplicity” mark many designed objects, and these objects are typically as readily demonstrated to be designed as the latest computer chips are. What is strikingly obvious in most designed objects, regardless of their simplicity or complexity, is rationality in their construction, while known life that was neither engineered nor artificially selected is invariably both complex and without rational design and construction.

Design is not characterized by CSI, rather by rationality. Life is complex, but without rational design behind it (evolution can sometimes come close to what we might (wrongly, in fact) consider to be rational ends, yet the beginnings are obviously not rationally chosen–they are simply what is available to evolution).

ID’s conflation of design’s frequent simplicity with life’s complexity exists only to obscure the importance of rationality’s existence within designs, and its lack in all “wild-type” life.

Glen Davidson

The definition of CSI you provide was described in a 2007 PNAS paper by Jack Szostak (2009 Nobel laureate).

Hazen RM, Griffin PL, Carothers JM, Szostak JW. “Functional information and the emergence of biocomplexity.” Proc Natl Acad Sci USA 2007; 104 (Suppl 1): 8574–8581.

http://www.pnas.org/content/104/suppl.1/8574.long

Szostak calls it “functional information”. (It’s identical to a definition I came up with and submitted to the Journal of Theoretical Biology back in the early 90’s, but the paper was rejected – blah, blah, blah).

I’m getting very leery of the word “information”. The word is perfectly valid in an informal sense, of course: “Boyo, there’s a lot of good information in this book!” However, in a technical discussion its usage has to be defined relative to the subject at hand.

That is, unlike a concept like “energy”, it has little general applicability, and outside of the specific discussions for which “information” is variously defined, it simply causes confusion. We know that one of the basic features of life (as opposed to nonlife) is heredity; we know that heredity is embodied in the sequences of the genome. If we ask the question of whether there is “information” in the genome, what do we know when we get an answer that we didn’t before?

The CSI argument of ID seems identical to Paley’s notions of organized complexity, as in a watch, with the same conclusion, that it is a trademark of Design. Modern evolutionary theory disagrees; the CSI argument of ID is simply reiterating Paley’s assertion, under a smokescreen of technical verbosity, and claiming it as a proof when it hasn’t moved any of the old pieces on the chessboard.

Of course, any discussion of the merits or lack thereof of the use of “information” as a scientific term is beside the point when it comes to the dubious individuals who show up in Pandaland, and throw the term around as a measure of what in military terms would be called “noise jamming”.

mrg said:

(snip) The CSI argument of ID seems identical to Paley’s notions of organized complexity, as in a watch, with the same conclusion, that it is a trademark of Design. (snip) Of course, any discussion of the merits or lack thereof of the use of “information” as a scientific term is beside the point when it comes to the dubious individuals who show up in Pandaland, and throw the term around as a measure of what in military terms would be called “noise jamming”.

Exactly, mrg.

They’ve updated their jargon from “pocket watch” to “computer code” and from “elan vital” to “information”, but it’s still the same arguments from ignorance and incredulity.

SSDC: same shite, different century.

Don’t forget that they can’t even define ‘C’, ‘S’, or ‘I’ in a consistent way that makes sense. And just to reiterate what RBH pointed out, there is nothing in any aspect of “Intelligent Design Theory” that requires… well… intelligence.

One Pro-ID commenter has even claimed that termites are intelligent, at least as ID defines the term. (http://ogremk5.wordpress.com/2011/0[…]/#comment-81)

Further, I maintain that it is impossible, even in theory, to determine whether a genetic sequence or protein was designed or was the result of pure randomness or randomness + natural selection, which renders the entire ID ‘argument’ moot.

The stuff I’ve seen that provides some semblance of a positive argument for ID has zero difference from what would be expected if evolution were the designer.

They cannot detect design, even in theory. So, yeah, ID is vacuous nonsense. It’s totally not needed or useful.

Sorry, I’m still confused by how CSI is meaningful.

A simple definition of Specified Information would be that it is the negative log (to the base 2)

The log or negative log to the base 2 is a common function to come across in discussions of information that touch on the binary system, because, among other things, it relates the minimum number of digits of string length needed (number of “bits”) to convey numerical information. For example, the log base 2 of 8 is 3, and the log base 2 of 16 is 4. To express the numerical information “eight” through the numerical information “fifteen”, you need four binary bits (1000 through 1111). For fractions it works the same way, except with negative log 2.
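(A quick numerical check of those statements, added here as an illustration:)

    import math

    print(math.log2(8), math.log2(16))   # 3.0 4.0
    # "eight" through "fifteen" each need four binary digits:
    for n in range(8, 16):
        print(n, format(n, 'b'))         # 8 -> 1000 ... 15 -> 1111
    # For fractions the log is negative, hence the minus sign:
    print(-math.log2(1 / 8))             # 3.0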

of the fraction of those sequences that are better at flying than a bird.

I’m seriously not sure what you mean here. I’ll ask the easy questions first - which birds are we talking about, and how do we measure the quality of their flying in a reproducible way?

Which sequences are we talking about? I’m guessing you’re talking about randomly generated genome-sized sequences of DNA, and imagining what would happen if those sequences were substituted for the genome of a bird at earliest embryonic stage, or something. Is that right?

We don’t have enough information to actually calculate it, but we can be sure that it is big enough to pass Dembski’s threshold of 500 bits, and thus CSI is present.

At one level, obviously, we are in complete agreement that either CSI is a meaningless term, or it is a trivial term that refers to something, but that has no relevance to the theory of evolution.

I’m just not sure it is in any way a meaningful term.

“A simple definition of Specified Information would be that it is the negative log (to the base 2) of the fraction of those sequences that are better at flying than a bird. We don’t have enough information to actually calculate it, but we can be sure that it is big enough to pass Dembski’s threshold of 500 bits, and thus CSI is present.”

This is a better piece of work on CSI than any creationist has ever managed to produce.

Oh, unless the bird is a penguin. Or a moa.

Douglas Theobald said:

The definition of CSI you provide was described in a 2007 PNAS paper by Jack Szostak (2009 Nobel laureate).

(It’s identical to a definition I came up with and submitted to the Journal of Theoretical Biology back in the early 90’s, but the paper was rejected – blah, blah, blah).

… and a special case of it was described by me in a paper in American Naturalist in 1978. I wasn’t a reviewer on your 1990 submission but if I had been I might have noted that!

mrg said:

I’m getting very leery of the word “information”. The word is perfectly valid in an informal sense, of course: “Boyo, there’s a lot of good information in this book!” However, in a technical discussion its usage has to be defined relative to the subject at hand.

That is, unlike a concept like “energy”, it has little general applicability, and outside of the specific discussions for which “information” is variously defined, it simply causes confusion. We know that one of the basic features of life (as opposed to nonlife) is heredity; we know that heredity is embodied in the sequences of the genome. If we ask the question of whether there is “information” in the genome, what do we know when we get an answer that we didn’t before?

I agree that, in this case, calling it information does not actually accomplish much. One could as easily say that one is measuring, not Complex Specified Information, but “farness out into the upper tail of the distribution of fitness”. Organisms are manifestly nonrandomly extremely far out there, too far to be there by pure mutation. In Dembski’s argument the work is supposed to be done by the additional Law of Conservation. Except it doesn’t work. Taking logarithms and calling the result information is unnecessary.

The CSI argument of ID seems identical to Paley’s notions of organized complexity, as in a watch, with the same conclusion, that it is a trademark of Design. Modern evolutionary theory disagrees; the CSI argument of ID is simply reiterating Paley’s assertion, under a smokescreen of technical verbosity, and claiming it as a proof when it hasn’t moved any of the old pieces on the chessboard.

In this case it is degree of adaptation, not the way complexity is or is not organized, that is supposed to be decisive. So I don’t see a direct parallel to Paley. But I haven’t actually read Paley.

OgreMkV said:

Further, I maintain that it is impossible, even in theory, to determine whether a genetic sequence or protein was designed or was the result of pure randomness or randomness + natural selection, which renders the entire ID ‘argument’ moot.

The stuff I’ve seen that provides some semblance of a positive argument for ID has zero difference from what would be expected if evolution were the designer.

If the Law of Conservation of Complex Specified Information did the job it was intended to do, then the consequences would be huge, and we would be forced to acclaim Dembski as the greatest figure in evolutionary biology (perhaps ahead of Darwin).

Alas for him, the LCCSI doesn’t do this, and as a result you are correct – the nonrandom degree of adaptation of organisms is explicable from natural selection, as you note.

harold said:

Sorry, I’m still confused by how CSI is meaningful.

A simple definition of Specified Information would be that it is the negative log (to the base 2)

The log or negative log to the base 2 is a common function to come across in discussions of information that touch on the binary system, because, among other things, it relates the minimum number of digits of string length needed (number of “bits”) to convey numerical information. For example, the log base 2 of 8 is 3, and the log base 2 of 16 is 4. To express the numerical information “eight” through the numerical information “fifteen”, you need four binary bits (1000 through 1111). For fractions it works the same way, except with negative log 2.

of the fraction of those sequences that are better at flying than a bird.

I’m seriously not sure what you mean here. I’ll ask the easy questions first - which birds are we talking about, and how do we measure the quality of their flying in a reproducible way?

Which sequences are we talking about? I’m guessing you’re talking about randomly generated genome-sized sequences of DNA, and imagining what would happen if those sequences were substituted for the genome of a bird at earliest embryonic stage, or something. Is that right?

We don’t have enough information to actually calculate it, but we can be sure that it is big enough to pass Dembski’s threshold of 500 bits, and thus CSI is present.

At one level, obviously, we are in complete agreement that either CSI is a meaningless term, or it is a trivial term that refers to something, but that has no relevance to the theory of evolution.

I’m just not sure it is in any way a meaningful term.

Their hypothesis (if you can call it that) seems to be that they can use CSI to detect design, and by extension a designer (i.e., God). Do they really want to make God a testable hypothesis which they would be forced to reject if/when their arguments fail?

Sorry for the preceding, folks: the comment system and I appended harold’s comment to my reply to OgreMkV, and named it a reply to harold. Aarghh!! Anyway …

harold said:

Sorry, I’m still confused by how CSI is meaningful.

…

me: of the fraction of those sequences that are better at flying than a bird.

I’m seriously not sure what you mean here. I’ll ask the easy questions first - which birds are we talking about, and how do we measure the quality of their flying in a reproducible way?

I’m being informal, but any old (flying) bird species will do. The point is that, even without going into detail about how to measure flying ability, they are way better at it than any bird whose genome was a random string of DNA.

Which sequences are we talking about? I’m guessing you’re talking about randomly generated genome-sized sequences of DNA, and imagining what would happen if those sequences were substituted for the genome of a bird at earliest embryonic stage, or something. Is that right?

Precisely, and almost all of the time we don’t even get one living cell.

me: We don’t have enough information to actually calculate it, but we can be sure that it is big enough to pass Dembski’s threshold of 500 bits, and thus CSI is present.

At one level, obviously, we are in complete agreement that either CSI is a meaningless term, or it is a trivial term that refers to something, but that has no relevance to the theory of evolution.

I’m just not sure it is in any way a meaningful term.

It simply is a way of saying that real organisms are so far out into a tail of fitness (or if you prefer, flying ability), that this good or better would occur less than 1 time in 2-to-the-500 sequences. And that much is fairly obviously true.
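That threshold is not arbitrary: 2-to-the-500 is roughly 3.3 × 10^150, just above the number Dembski gets by multiplying rough upper bounds on the probabilistic resources of the observable universe. A quick check in Python (the bounds are Dembski’s, not mine):

    # ~10^80 particles x ~10^45 state changes per second x ~10^25 seconds
    trials_bound = 10**80 * 10**45 * 10**25   # = 10^150
    print(2**500 > trials_bound)              # True
    print(float(2**500))                      # ~3.27e150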

Then Dembski trots out the Law of Conservation and concludes that you can’t get there by natural selection. But, the way he formulates that and uses it, he’s wrong.

CSI appears to be a designation applied to whatever the cdesign proponentsists have already determined is Designed, using some theological test never made explicit.

Matt G said:

Their hypothesis (if you can call it that) seems to be that they can use CSI to detect design, and by extension a designer (i.e., God). Do they really want to make God a testable hypothesis which they would be forced to reject if/when their arguments fail?

Well, that is an argument that might persuade a person of faith not to use a god-of-the-gaps approach.

However in this case they start by thinking that they have a Law of Conservation that does not allow adaptation by natural selection. And since there is tons of adaptation everywhere in life, they think they have a very big Gap. However, as the Law turns out not to do the job, all they are left with is that the adaptation could instead be due to natural selection. That doesn’t prove that it is natural selection, so the Designer is not refuted, just nowhere near proved to be at work.

Flint said:

CSI appears to be a designation applied to whatever the cdesign proponentsists have already determined is Designed, using some theological test never made explicit.

I disagree – I think it is just a way of arguing that there is so much adaptation that it could not be explained by pure mutation (without natural selection). So it is just a mathematicized version of the explosion-in-a-junkyard analogy. As many people here have noted, that analogy has no counterpart to natural selection. If Dembski’s Law of Conservation did the job, it would rule out natural selection. But, alas for them …

The purpose of the concept “Complex Specified Information” is to sneak the concept of a Specifier into the discussion. In the context in which it was inserted, the unstated assumption is that the Specifier is an active entity, i.e. The Designer.

As noted before, a more honest terminology would be “Complex Specifying Information”. But this can obviously arise from natural processes, and therefore is a non-starter as an IDC propagational tool.

Complex Specified Information could take practically any form, and have any number of purposes, the number of which could, practically speaking, be zero.

Examples: a grocery shopping list tucked into my shoe by my wife yesterday morning. The painting The Persistence of Memory by Salvador Dali.

Joe Felsenstein said:

Douglas Theobald said:

The definition of CSI you provide was described in a 2007 PNAS paper by Jack Szostak (2009 Nobel laureate).

(It’s identical to a definition I came up with and submitted to the Journal of Theoretical Biology back in the early 90’s, but the paper was rejected – blah, blah, blah).

… and a special case of it was described by me in a paper in American Naturalist in 1978. I wasn’t a reviewer on your 1990 submission but if I had been I might have noted that!

I imagine it’s been “invented” independently many times – it seems a natural measure of functional information to me. In my paper I actually argued that, from a biological perspective, a better definition would be the log of the fraction of sequences that provide the same absolute fitness (conditional on some specified finite sequence space). It’s then trivial to show that this type of “CSI” can increase due to natural selection.
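(To illustrate that last point, here is a toy sketch, my own construction: count-the-matches fitness on short binary “genomes”, where the fraction of sequences at least as fit can be computed exactly as a binomial tail sum. Selection is just keep-the-better-mutant hill climbing.)

    import math, random

    random.seed(1)
    L = 20                                 # short binary "genome"
    target = [1] * L

    def fitness(g):
        return sum(int(a == b) for a, b in zip(g, target))

    def functional_info(f):
        # -log2 of the fraction of all 2^L sequences with fitness >= f
        tail = sum(math.comb(L, k) for k in range(f, L + 1))
        return -math.log2(tail / 2 ** L)

    g = [random.randint(0, 1) for _ in range(L)]
    print("start:", fitness(g), round(functional_info(fitness(g)), 2))
    for _ in range(200):                   # mutation plus selection
        child = [b ^ int(random.random() < 0.05) for b in g]
        if fitness(child) >= fitness(g):
            g = child
    print("end:  ", fitness(g), round(functional_info(fitness(g)), 2))
    # Functional information climbs toward 20 bits as selection pushes
    # the genome into the far tail of the fitness distribution.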

Unless I’m mistaken, you can even see casual admissions that the stuff of evolution produces “active information”, at least in an attempt to critique the digital organism platform Avida:

“Mutation, fitness, and choosing the fittest of a number of mutated offspring [5] are additional sources of active information in Avida we have not explored in this paper.”

“Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism,” by Ewert, Dembski, and Marks

http://evoinfo.org/papers/2009_Evol[…]ynthesis.pdf

I can’t think of a compelling reason why, if those are sources of active information in Avida, they are not also sources of it in life.

CSI appears to be a designation applied to whatever the cdesign proponentsists have already determined is Designed, using some theological test never made explicit.

I disagree – I think it is just a way of arguing that there is so much adaptation that it could not be explained by pure mutation (without natural selection).

I don’t think this is a disagreement. Start with the assumption of Design, theologically required. Now, how do we determine whether the Designer did it? Well, if doctrine implies it was created as-is, then it was.

Evolution is, from this view, nothing more than an incorrect interpretation of what scripture makes obvious. How can we show it’s incorrect? Well, maybe it’s too complicated to have happened that way. Maybe there’s no obvious evolutionary pathway. But these aren’t more than ad hoc justifications in support of foregone conclusions.

Douglas Theobald said: I imagine it’s been “invented” independently many times – it seems a natural measure of functional information to me.

From what I’ve seen, Leslie Orgel is usually said to have invented the term “CSI”, though he didn’t have any “Design” agenda behind it.

The whole notion of “functional information” is a quagmire. It is possible to come up with ad-hoc definitions of it for specific circumstances, but there’s no general definition that allows a measure of it in one circumstance to be compared to a measure in another circumstance.

The CSI concept fails on the C, the S, and the I. But its greatest failing is the “Specified” – because gene sequences are not specific. Not just in theory, but observably so. There are many mutations which make no difference at all to a gene’s function. There are other mutations that reduce a gene’s function, but not enough to make it completely non-functional. There are genes that vary in function depending on whether they are homozygous or heterozygous (e.g., heterozygous sickle cell malaria resistance good, homozygous sickle cell anaemia bad). There are species that lack enzymes of related species but do just fine (jawless fish can form clots just fine despite having a much simpler clotting cascade than we do, with six enzymes instead of our ten).

The idea that the enormous variation in genetic sequences in the biosphere is *specified* is wrong from the start. Even if one posits for the sake of argument that God created life pretty much as it is today, it is demonstrable that this fictional God did not do so with specified genetic information.

Not that I mind Dembski trying to enumerate and measure the complexity of genetic information. It could be an interesting project if he wasn’t obsessed with squaring the circle to prove that God made circles.

Simply put, birds fly and fish swim. They do so a lot better than organisms coded by random strings of DNA (formed by mutation without natural selection – organisms coded for by monkeys typing with four-key typewriters). If we could imagine looking at all possible such organisms with the same length genome as (say) a bird, the fraction of them that would fly as well as a bird, or better, would be incredibly tiny. So tiny that if every particle in the universe were a monkey with an ATGC typewriter, there would not have been enough time by now to produce anything as good as a bird even once since the time of the Big Bang.

Do you really know how small that fraction is? How did you estimate it? Or are you just making this up?

a better definition would be the log of the fraction of sequences that provide the same absolute fitness

That should read “the same or greater”.

Joe Felsenstein said:

Flint said:

CSI appears to be a designation applied to whatever the cdesign proponentsists have already determined is Designed, using some theological test never made explicit.

I disagree – I think it is just a way of arguing that there is so much adaptation that it could not be explained by pure mutation (without natural selection). So it is just a mathematicized version of the explosion-in-a-junkyard analogy. As many people here have noted, that analogy has no counterpart to natural selection. If Dembski’s Law of Conservation did the job, it would rule out natural selection. But, alas for them …

Of course they argue that purely “random” mutation can’t do the job, neglecting the importance of prior phylogenetic history in constraining the types of mutations that would be possible, or, more broadly, neglecting something akin to Gould’s notion of contingency. This is the fundamental problem I see with Meyer’s absurd attempt at “testing” deviations from a “true” Design via data in the fossil record, since there is no prior accounting of that phylogenetic history.

Joe Felsenstein -

Would it be reasonable to say that you are pointing out that the term “CSI” as used by Dembski is essentially synonymous with the more common terms “fitness” and “adaptation” (which are themselves more or less synonymous in many contexts)?

To put it another way, Dembski invented the term CSI, claimed that it is a feature of modern living organisms (presumably of those pesky plants that nobody in creationism cares about, too, as well as “sexy” mammals and motile bacteria), and then claimed that it was a feature that could not have evolved.

There are two possible ways to process this, in the light of the strong evidence for the theory of evolution and Dembski’s failure to rebut any of the positive evidence.

1) The term is meaningless and there is no reason to think organism or anything else “have” it, or

2) The term is meaningful but the claim that evolution could not have created the feature is false.

You make a fairly good case for “2)”, but it is dependent on a coherent and reproducible definition of “CSI”. If ID/creationists start dissembling about the definition of CSI, and my dime says they will, because you have drawn attention to it, that might be a weak argument in favor of “1)”.

I think the real crux to this thing is whether or not a certain type of “information”, let’s call it “Dembskian Information” can be created or must be conserved.

How does Dembski address the effects of selection?

We all know, and I think that even Dembski himself cannot successfully gloss over, the fact that selection actually exists.

A population of gazelle is born. They are all somewhat different. Some of them are better at surviving to maturity than others. Though there are statistical blips ( some otherwise very fit gazelle will still be struck by lightning, for example ) in general the environment makes a decent decision about what works.

This works, that doesn’t. Speed, yes. Tendency to stand their ground against the lions… not so much.

The environment has made a selection based on experimentation with what works, therefore adding information to the genome. Certain choices worked, and were kept, certain choices didn’t and were discarded in a pile of bloody fur.

Did Dembski ever get around to addressing what effect selection might have on Dembskian Information?

Bullshit.

This is the definition of a scientifically vacuous concept. You can’t precisely define (let alone measure) any of the terms involved in his equations. He’s asserting that something is “impossible” by assigning an arbitrary threshold to an unmeasurable quantity. Dembski is just inconsistently flinging technical terms around to create an intellectual smoke screen. In the end, he’s made a decades-long career out of ineptly obfuscating an argument from ignorance.

Think about it, Joe. You wouldn’t even let an undergrad get away with this kind of crap.

OgreMkV said:

Depends on what you mean by ‘information’. It does take slightly more effort to transmit two copies of something, even using compression, than it does to transmit one copy of the same thing (using the same compression).

It’s not only that; sequence duplications can change the amount of protein produced, which can dramatically change development. IIRC, there are a couple of birth defects that result from having either too few or too many repeats.

In this case the DNA = recipe analogy is useful…if you remember that your cells are very dumb bakers. When they see “add a cup of flour add a cup of flour” they aren’t always smart enough to interpret the repeat as a typo that should be ignored. Instead, sometimes they read “add a cup of flour add a cup of flour” and add two cups of flour. Which can have a very dramatic effect on the cake. :)

Is that what one might call flour power?

Get a haircut, hippie.

What I’m wondering is, whatever has happened at UD that they would allow a thread like mathgrrl’s? Any theories?

It was the only way they could get a female (other than Denyse (blanking on last name) from Toronto) to associate with the site. And I’m not 100% sure that Denyse posts there.

Mary H said:

The egg is a self-contained unit with sufficient nutrition to develop a chick. The incubation temperature is needed for enzyme function until late in the incubation, when the temperature must be dropped a little because the chick begins to make its own heat. The weight loss is primarily evaporation through the shell. Of course the shell is a gas-exchange medium; how else would the aerobic chick breathe?

I’ve been traveling, so it has been hard for me to keep up with the thread. Now that I am home, I should clarify an apparent disagreement about the egg thing.

My thinking about the energy cascades in which life exists and functions led me to say that the egg-to-chick transition was endothermic.

I have no argument with Douglas Theobald’s point that there are exothermic reactions going on with stored materials in the egg. But I wonder if a chick would really be produced if the egg were kept in a calorimeter.

It seems to me that, while there are certainly exothermic reactions that generate heat, this isn’t sufficient to sustain the process through to the complete development of a chick. That egg really does need to be held within a certain temperature range, and oxygen has to be brought in and any waste gases removed. After that the chick has to grow.

Now, the elementary physics of the first and second laws of thermodynamics, applied to any energy-using device or organism, requires that energy be conserved and spread around according to the second law. We can’t get around that.

How that energy input gets divided up in triggering exothermic reactions that make the energy available from already stored chemicals, for bringing in matter that will be used in further processes later as well as adding to the growth of the system, for eliminating waste materials, for stimulating the electrical activity that coordinates interactions among subsystems, etc., does of course depend on the particular system.

My argument is that all living systems require a net energy input over their lifetimes, thus giving them the net appearance of being endothermic over that period. I recognize that I can be temporarily strictly exothermic as I hold my breath and type, for example. But that is not sustainable. The egg-to-chick transition might be one of these, but I’m not up on the details.

Any living organism that endures ultimately takes in more energy than it uses in growth, physical activity, shuttling food and waste, and triggering stored energy releases. Any heat that is generated as a result of those energy dumps from stored chemicals can, of course, be used in any such processes as well as contributing to energizing other electrical signals and processes of coordination; and then the rest goes off as heat into the surrounding environment.

The details are obviously system dependent; but the first and second laws hold for every individual living system as well as the entire collection of living systems taken as a whole system in itself.

As I mentioned on a previous comment, I wouldn’t apply the terms endothermic or exothermic to a living system. Those terms apply to specific subsystems; and they change with time.

And, related to the topic of the thread, I agree that the uses of entropy, “information,” and probability by the ID/creationists are some of the most egregious mistakes by anyone pretending to speak as scientists.

Entropy, information, order/disorder, probability, and the asserted need for some “higher law” to countermand the laws of chemistry and physics are at the heart of all ID/creationists’ “scientific” arguments. Their constant word-gaming with medieval literature and authority keeps their thinking medieval. Apparently nothing of the Enlightenment has ever penetrated their thinking.

Mike -

My argument is that all living systems require a net energy input over their lifetimes

This is 100% correct.

It has been years since I was required to calculate things like Gibbs free energy (I enjoyed that stuff at the time).

I do retain a very strong interest in the “economy” of the biosphere.

All forms of life, over any reasonable time scale, net consume energy. That is unequivocal. All cellular life net consumes either direct solar energy, or chemical energy, or both.

There is a great deal of inefficiency and a great deal of not-perfectly-efficient transformation from one form of energy to another along the way. It is very common for organisms to exude heat, but this should not be mistaken for net energy production.

In fact, the better basic physical measurement for understanding the energy economy of the biosphere is power.

A very large amount of solar energy hits the earth’s surface per unit time. A proportion of that is harvested by photosynthesis, much of which is transformed into chemical energy (with imperfect efficiency). Although only a fraction of solar energy is consumed, photosynthetic organisms must compete with each other for that fraction. The chemical energy is then consumed by what can be very crudely conceived of as a vast number of “pyramids” of organisms, often with a high biomass of relatively small organisms that directly consume photosynthetic organisms on the bottom, and then layers of decreasing biomass of larger organisms that consume solar energy less and less directly. However, large organisms that directly consume photosynthetic material are also present, so the concept of a pyramid is useful but very crude. Proportion of biomass tends to be related to individual size rather than directness of consumption, and even this approximation may only be true of multicellular organisms.

At any rate, all life needs a power source for sustainability.

Life has other requirements, such as water supply and ambient temperature ranges, which are independent of the need for power supply from photosynthesis or “food”.

Obviously, life cannot directly “consume” heat energy, but only solar energy and certain types of chemical energy. Ambient temperature can only have an impact through its effect on the chemical reactions which keep organisms alive. It can be “ideal”, be “suboptimal” (causing them to suspend metabolism/growth/reproduction and/or to consume MORE energy to fuel adaptations to the temperature), or it can be fatal. While it is true that a human with a limited set of clothing and shelter options needs to consume more food per unit time, all else being equal, if working in cold weather versus working at an “ideal temperature”, this is because extra energy (actually power) is required to maintain the human body at an internal temperature that is compatible with life. It is not at all because heat energy from the environment can be harvested to drive energy-input-requiring biochemical reactions.

Mike -

Also note that Douglas Theobald is, as well, 100% correct.

Exothermic reactions release heat (I’m bothering to make obvious statements because someone other than the regular posters may read my comments). Many, many biochemical processes are unequivocally net exothermic. This is not at all at odds with the fact that the biochemistry of the biosphere ALSO requires a power source, and that all living cells, studied at any reasonable scale of space and time, consume, rather than generate, power.

Again, those aliens in the Matrix movies are idiots. Of course you can feed a human being (or other homeotherm) and then use the human as a weak heat source. However, it would be far more efficient just to burn the food directly.

That should be “net consume” rather than “net generate”, of course.

Shorter Theobald: if you put a charged battery in a calorimeter, it can release stored energy into that closed system.

Shorter Elzinga: right, but since no battery is 100% efficient at storing energy, it’s always going to take more energy to charge the battery than you get out of it.

eric -

Shorter Theobald: if you put a charged battery in a calorimeter, it can release stored energy into that closed system.

Your summary is great and insightful, but I would change “stored energy” to the more specific “heat”, since Theobald is talking about whether reactions are endothermic or exothermic.

But basically, great summary.

Now to stick it to the ID/creationists one final time for this thread -

1) We’re talking about thermodynamics and energy because we find their false claims about information, the actual topic here, to be highly analogous to their false claims about entropy.

2) There is no reason to think that biological evolution represents a net decrease of the entropy of anything. (It is actually borderline meaningless to make statements about something so impossible to determine.)

3) If it did, it wouldn’t matter, because local decreases in entropy are common and do not at all require magic to occur.

eric said:

Shorter Theobald: if you put a charged battery in a calorimeter, it can release stored energy into that closed system.

Shorter Elzinga: right, but since no battery is 100% efficient at storing energy, it’s always going to take more energy to charge the battery than you get out of it.

Very nice summary! Thank you. :-)

Mike Elzinga said:

eric said:

Shorter Theobald: if you put a charged battery in a calorimeter, it can release stored energy into that closed system.

Shorter Elzinga: right, but since no battery is 100% efficient at storing energy, it’s always going to take more energy to charge the battery than you get out of it.

Very nice summary! Thank you. :-)

No, that’s not a good summary at all.

Many people here, including Elzinga, are confusing enthalpy (H) with energy (E) and with Gibbs free energy (G). Contrary to popular belief, spontaneous reactions do not require energy input – what they require is free energy (and I’m using this in the technical sense). For any and all reactions, including the development of a chick in an egg in a calorimeter, energy is always conserved – that’s the first law of thermo. The total energy is the same at the beginning and at the end of the reaction. Period.

On the other hand, a charged battery can release free energy to power some reaction, but that DOES NOT mean that the reaction is releasing heat (i.e., that it is undergoing an exothermic process). Endothermic processes can be spontaneous, and they can release just as much free energy as an exothermic process (or more).
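(A worked illustration of that point, using standard handbook values for the melting of ice, an endothermic process that is nonetheless spontaneous above 0 °C:)

    dH = 6010.0   # J/mol, enthalpy of fusion of ice (positive: endothermic)
    dS = 22.0     # J/(mol*K), entropy of fusion
    for T in (263.15, 283.15):            # -10 C and +10 C
        dG = dH - T * dS                  # Gibbs free energy change
        print(T, round(dG, 1), "spontaneous" if dG < 0 else "not spontaneous")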

Douglas Theobald said:

Mike Elzinga said:

eric said:

Shorter Theobald: if you put a charged battery in a calorimeter, it can release stored energy into that closed system.

Shorter Elzinga: right, but since no battery is 100% efficient at storing energy, it’s always going to take more energy to charge the battery than you get out of it.

Very nice summary! Thank you. :-)

No, that’s not a good summary at all.

Many people here, including Elzinga, are confusing enthalpy (H) with energy (E) and with Gibbs free energy (G). Contrary to popular belief, spontaneous reactions do not require energy input – what they require is free energy (and I’m using this in the technical sense). For any and all reactions, including the development of a chick in an egg in a calorimeter, energy is always conserved – that’s the first law of thermo. The total energy is the same at the beginning and at the end of the reaction. Period.

On the other hand, a charged battery can release free energy to power some reaction, but that DOES NOT mean that the reaction is releasing heat (i.e., that it is undergoing an exothermic process). Endothermic processes can be spontaneous, and they can release just as much free energy as an exothermic process (or more).

First law: Energy is conserved.

Second law: Energy gets spread around.

Enthalpy and Gibbs free energy are convenient constructions that allow for work that is done against ambient pressure as well as for changes in entropy and other phase changes in which energy is stored or released in the rearrangement and/or binding of atoms. One can also account for particles going in and out of a system. There is an entire set of Maxwell relations one can use depending on which state variables one is dealing with and what proportion of the total energy one is attempting to get a handle on. They allow one to use various system state variables in measurements that one can actually do in the lab. Chemists make good use of these things; and they should.
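For reference, the textbook constructions in question, with one sample Maxwell relation:

    H = U + PV              (enthalpy)
    F = U − TS              (Helmholtz free energy)
    G = U + PV − TS         (Gibbs free energy)

    From the exactness of dG = −S dT + V dP it follows that
    (∂S/∂P) at constant T = −(∂V/∂T) at constant P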

I know the difference; and I meant energy.

Physicists tend to think in terms of the energy and are aware of that part of it that goes into molecular bonding and rearrangements, or escapes through radiation, conduction, and convection, or works against the surrounding environment. Those various Maxwell relations simply allow one to get at the proportions of the energy sent into whatever channels are available.

But in the end, it’s all about the energy. The energy is contained in the kinetic energies of particle motions and stored in the fields with which these particles interact. Every system is comprised of matter (particles) interacting with fields. And the laws of thermodynamics apply to any such system no matter the level at which one looks.

C.P. Snow had something to say about thermodynamics:

Zeroth: “You must play the game.” First: “You can’t win.” Second: “You can’t break even.” Third: “You can’t quit the game.”

JimNorth said:

C.P. Snow had something to say about thermodynamics:

So did Arnold Sommerfeld:

“Thermodynamics is a funny subject. The first time you go through it, you don’t understand it at all. The second time you go through it, you think you understand it, except for one or two small points. The third time you go through it, you know you don’t understand it, but by that time you are so used to it, it doesn’t bother you anymore.”

But what he forgot to mention is that the process cycles :)

Douglas Theobald said:

JimNorth said:

C.P. Snow had something to say about thermodynamics:

So did Arnold Sommerfeld:

“Thermodynamics is a funny subject. The first time you go through it, you don’t understand it at all. The second time you go through it, you think you understand it, except for one or two small points. The third time you go through it, you know you don’t understand it, but by that time you are so used to it, it doesn’t bother you anymore.”

But what he forgot to mention is that the process cycles :)

:-)

Many years ago, thermodynamics and statistical mechanics were separate courses in most physics departments.

One of the frequent comments by students was, “Thermodynamics made no sense to me until I took statistical mechanics.”

But, as it turned out, it often didn’t matter which order one took those two courses; so students who took statistical mechanics first would say, “Statistical mechanics made no sense to me until I took thermo.”

But, as it turned out, it often didn’t matter which order one took those two courses; so students who took statistical mechanics first would say, “Statistical mechanics made no sense to me until I took thermo.”

Interesting! It sounds like one of those courses focuses on the theory, and the other on the methods used in dealing with it.

Henry J

Henry J said:

But, as it turned out, it often didn’t matter which order one took those two courses; so students who took statistical mechanics first would say, “Statistical mechanics made no sense to me until I took thermo.”

Interesting! It sounds like one of those courses focuses on the theory, and the other on the methods used in dealing with it.

Henry J

That was approximately the case. The courses have since been combined into a single course (with separate undergraduate-level and graduate-level versions), with the result that it generally makes more sense.

Thermodynamics courses were often taught axiomatically; and such courses tended to leave out applications. Those thermodynamics courses that did teach applications often left students wondering what all those thermodynamic potentials were all about. It was often the case that students were encountering multivariable functions and partial derivatives for the first time in this course.

Without the statistical mechanics insights, many of those state variables and Legendre transformations were very mysterious.

On the other hand, without the thermodynamics applications, there was no reference to what was being clarified by the statistical mechanics derivations and concepts.

And often the thermodynamics that chemists learned was pretty much restricted to what they would typically use in the lab. So there was a lot of emphasis on things like enthalpy and the Gibbs and Helmholtz free energies, but little insight into how those connected through statistical mechanics to the atomic and molecular level.

And as we have noted before, Frank L. Lambert has been active in getting misconceptions about entropy out of the chemistry textbooks.

The current major weakness in these courses in physics appears to be relating all this to the elementary concepts of kinetic energy, potential energy, total energy, and matter-matter interactions.

It is implicit in the derivations and development of concepts that matter interacts with matter and with fields, but my own opinion is that many textbooks treat this almost as an aside without pointing out the fundamental importance of it. One doesn’t really become thoroughly aware of this fact unless one has to get into the lab and deal with eliminating as many of these interactions as possible while still being able to probe the system to learn what is going on.

Douglas Theobald said:

SWT said: To get a handle on the entropy production of a subject in a calorimeter (in this case the egg and its inhabitant), you have to work simultaneously with the mass, energy, and entropy balances for the system.

I’m really skeptical – but if you are doing something exceptionally clever, I’m quite interested. As I understand it, the entropy change of the system is never directly measured, only inferred from other state function changes that are directly measured (from, for instance, ΔG or Keq and ΔH). If we have a calorimetric enthalpy change, and we can get the equilibrium constant for a reaction (and there’s usually some experimentally tractable way to do that), then getting the entropy change of the system is easy. The problem here is getting an equilibrium constant or something analogous – it’s hard to even imagine what it could be in the case of an egg hatching a chick.
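(A concrete sketch of the inference Theobald describes, with made-up numbers for a hypothetical reaction:)

    import math

    R, T = 8.314, 298.15         # J/(mol*K), K
    Keq = 50.0                   # hypothetical measured equilibrium constant
    dH = -30000.0                # J/mol, hypothetical calorimetric enthalpy
    dG = -R * T * math.log(Keq)  # ΔG = −RT ln K    -> about −9.7 kJ/mol
    dS = (dH - dG) / T           # from ΔG = ΔH − TΔS -> about −68 J/(mol*K)
    print(dG, dS)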

I want to expand a little on my previous comments about this, add a little information, and in the process modify my position a bit.

My original comment that one should be able to measure the change in the entropy of an egg during gestation was based on knowing that measuring the entropy production during such a process is well-documented. (It is in fact the entropy production that is calculated by merging material, energy, and entropy balances.) Engineer that I am, my natural response was that if I know the entropy production, I can integrate that over the course of the process to get the entropy change for the system.

I spent some time this week looking into this more thoroughly. I am reasonably familiar with the basic NEQ development of the dissipation function, but not with this particular application – I knew that a former colleague did some work with entropy production, calorimetry, and aging, but never got into the details.

For processes that are changing slowly during the calorimetric measurement, I think the entropy production measurements are fine for their intended uses. However, there is an implicit pseudo-steady-state assumption for the total rate of entropy change built into the analysis that makes the methodology inappropriate for measuring the total entropy change of the system as I’d envisioned. I’m sort of bummed about this, because I think it would be a cool study to do.

I did want to make a comment about the calorimetric measurements in the literature. The few gestation experiments I found were done in isothermal flow calorimeters, which are very well controlled open systems. It would be difficult to do these studies in a closed, adiabatic calorimeter since the gestating chick needs oxygen and will overheat if the excess metabolic heat isn’t dumped somehow.

I’m not convinced that the experiment can’t be done, but I have to admit that I haven’t found a way through the analysis to the result of interest. More to come if I need more distraction from things I’m actually supposed to be working on …

SWT said:

I’m not convinced that the experiment can’t be done, but I have to admit that I haven’t found a way through the analysis to the result of interest. More to come if I need more distraction from things I’m actually supposed to be working on …

Between my traveling and getting behind on everything else, I haven’t had much time to comment.

Living organisms are difficult to study not only because they are so complicated, but also because they are much more “delicate” (i.e., bound together by much smaller potential energies). One not only has to account for all matter and energy flowing into and out of the system, the system itself has to be kept operational within rather narrow temperature and pressure limits.

Those narrow temperature and pressure limits are to keep the internal processes functioning. So this suggests approximately isothermal and isobaric measurements; and these measurements are going to have to measure flow rates of gases, food, and waste (their energy equivalents), as well as heat flow and temperatures at a designated “input” and “output” of the system. The need to provide an ambient temperature and pressure makes those heat flow and temperature difference measurements very difficult.

Temperature differences between “input” and “output” are going to be small; and this will require sufficient sampling rates along with proper averaging and standard deviation measurements. The same can be said for flow rates.

There are other issues to consider as well. Many processes in living systems are thermally driven. There is nothing strange about thermally driven processes in condensed matter systems; they happen at nearly every level; even in some of the simplest condensed matter systems. Thermocouples and any phonon driven voltage gradients depend on temperature gradients. Within polarized molecular assemblies, electrons and charged or polarized molecules can be made to flow just by providing an ambient temperature and tiny gradient. Any slight differences in mobility from one region to another can result in thermally activated flow of matter.

And because the depths of the mutual potential wells in condensed matter are so shallow (on the order of tenths of an eV for solid materials at room temperature down to a few hundredths of an eV for matter near its liquid state at room temperature), there are many ways that processes can be initiated or driven just by maintaining them within a narrow temperature range.

We have the additional issue of energy being released from stored chemicals previously shuttled into such a system as well as any activation energies required to release that energy. Those activation energies can come from thermally driven processes if the barrier heights are on the order of binding energies of the constituents of the system. For living systems, those will be on the order of a few hundredths of an eV.

The calorimetric measurements done in a chemistry lab typically deal with far higher energies; especially if they are used in measuring chemical reactions which take place in the range of an eV or so. The kind of calorimetry one has to bring to bear on processes taking place in the range of a few hundredths of an eV is going to have to be much more delicate and clever.

Some of these kinds of measurements are already done with animals and humans that can cooperate with the experimental arrangements that measure the flow of matter and energy into and out of their systems. But still, these measurements are relatively crude.

Alternatively, one can study individual subsystems of living organisms and then reconstruct the energy flows of the assembled total organism.
