Life’s Ratchet: book review


This is a biology book written by a physicist who performs research in cellular biology. I picked it up and started reading it, but I got stuck somewhere in the chapter on thermodynamics. So I contacted Mike Elzinga, a frequent commenter on PT, and asked him to explain something. In the process, I somehow conned him into writing a review of the book for PT. I will add only that the figures in the book are less than desirable on a Kindle (and some are fairly crude hand sketches), and I indeed intend to read Chapter 7 again when I get a chance.

Here, with appreciation, is the book review.

by Mike Elzinga

This book, by Peter M. Hoffmann of Wayne State University, explains “How molecular machines extract order from chaos,” in the words of its subtitle.

That chaos of which Hoffmann speaks is the thermal motion of molecules that we call heat. Yet our historical understanding of the randomness of thermal motions suggests that it is counterintuitive to expect order to emerge out of such chaos, let alone any regularity that has the characteristics of life.

There is also a persistent meme in our popular culture that propagates the misconception that the second law of thermodynamics says everything tends toward disorder and disintegration. Life is perceived to be the “opposite” of disorder; therefore, living organisms rely on some external agency that bucks the trend toward chaos. Not only is that meme completely wrong; evolution found a way to harness the chaos. Hoffmann, while describing the use of tools like atomic force microscopy, shows us how molecules in the cells of living organisms do it.

The question “What is life?” has been around since at least the time of the early Greeks. The formulations of the answer to this question have ranged from animism, in which the entire universe is imbued with a “life force” that causes everything to move and change, to “vitalism,” which adds something extra to living organisms that animates and distinguishes them from non-living objects, to “atomism” which attributes the actions of life to the ceaseless motions of unseen particles that are the ultimate building blocks of everything.

Chapter 1. Hoffmann’s book is divided into nine chapters plus an introduction and an epilogue. He opens Chapter 1, “The Life Force,” with a compressed historical overview of vitalism and atomism, starting with Aristotle, Democritus, and other early Greeks.

He traces atomism and vitalism through early medicine and magic, until they collided with mechanical philosophy beginning around the 1600s. At this point, the issue of what animates life was brought into much sharper focus. Throughout the seventeenth, eighteenth, and nineteenth centuries, questions about the role of blood, air, and food in the animation of living organisms began to change the way research was done. The discoveries of oxygen, carbon dioxide, the nature of combustion, animal heat, electricity, physics, and chemistry all entered the picture, as did “irritability” and Mary Shelley’s Frankenstein.

Irritability was one of five “vital forces” that the German biologist Carl Friedrich Kielmeyer thought were acting on living beings. He called these vital forces “the physics of the animal realm.” Irritability was the ability of muscles to move on their own when disconnected from their animal hosts. This phenomenon attracted considerable ghoulish attention from scientists who engaged in increasingly gruesome experiments using high-voltage electricity applied to dead animals and humans. These activities prompted Shelley’s horror story of a scientist creating life from dead flesh.

The experiments demonstrating conservation of energy by Hermann von Helmholtz vanquished the vital force and put biology back on the track of mechanism by the end of the nineteenth century, at least temporarily.

But now the problem of complexity and order from chaos is brought back into focus. Darwin’s theory of evolution had, at that time, no mechanism to produce variation and novelty. Mendel’s work was discovered only later. Blending of traits didn’t work. Did evolution need an injection of a life force after all?

Chapter 1 is a whirlwind sketch of a much larger history of “the life force,” and it is by no means complete. However, for those who may not have read much of the history of science, Chapter 1 sets the stage for the questions that Hoffmann addresses later in his book. What is a life force? How is it attached to the body? Where is it attached? How does it animate the body? How can atomism work in the face of thermal chaos?

Chapter 2, “Chance and Necessity,” discusses the concepts of probability, statistics, and randomness. Hoffmann again delves into some history here. The central question of chance versus necessity is highlighted, specifically, how can regularity and law emerge out of chaos? We see again the case being made - for example, by Teilhard de Chardin - for some external guiding force that nudges the underlying chaos toward some purposeful goal.

Erwin Schrödinger, one of the discoverers of quantum mechanics, puzzled over how molecules can withstand thermal motions, and he came to essentially the wrong conclusion that cells were made stable by strong chemical bonds.

This conclusion presented a conundrum: the bonds would have to be so strong that the cell would essentially be a solid. There are, however, no solids in the cell. This conundrum is taken up later in the book. If cells are soft and squishy, why are they not torn apart by thermal motion? This important question leads to how processes within the cell actually work.

Chapter 3, “The Entropy of a Late-Night Robber,” is a short chapter on energy, entropy, and the laws of thermodynamics, particularly the second law.

Hoffmann relates a personal incident, in which he was robbed of ten dollars at gunpoint, and uses it to illustrate the spreading around of money as an analogy for the spreading around of energy. He also uses that old analogy of a messy room, in which clothing and other items in various locations represent energy microstates.

The messy room analogy has been partly responsible for the frequent misconception that entropy means “disorder.” Hoffmann repeatedly makes it very clear that entropy is not disorder, a point that should be noted carefully. He also makes the physicist’s proper connection of entropy with “information.”

Hoffmann introduces one of the “free energies” of thermodynamics, specifically the Helmholtz free energy, F = E - TS. In this equation, E is the internal energy of the thermodynamic system, T is its temperature, and S is its entropy. The Helmholtz free energy is chosen because it applies to thermodynamic processes taking place at constant temperature and volume, and all the processes going on in cells take place at constant temperature.

Hoffmann shows a simplified free energy vs. temperature graph for water transitioning through its freezing point, and he explains that the free energy is minimized during this transition. The basic idea is that a competition between the export of entropy and the lowering of internal energy results in making this free energy a minimum.

I found Hoffmann’s explanation a little unclear, and I did a double-take after reading the caption under that graph. There is nothing in the graph that shows the free energy going through a minimum, and I suspect this may cause some confusion for the reader.

The Helmholtz free energy remains constant for thermodynamic transitions taking place at fixed temperature and volume, in this example, right at the freezing point. So when a system is at its freezing point, the Helmholtz free energy remains constant while the material freezes, and there is no discontinuity in the graph, merely a change in slope coming out of the freezing point. The entropy is the negative of the slope of a Helmholtz free energy vs. temperature curve, the smaller slope being on the solid side, the bigger slope on the liquid side.
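
To make the slope picture concrete, here is a small numerical sketch (illustrative per-mole numbers, not figures from the book) of the two Helmholtz free-energy branches meeting at the freezing point; the entropy of each phase is minus the slope of its branch.

    # Toy Helmholtz free-energy branches for a freezing transition (illustrative numbers).
    Tm = 273.15         # freezing point, K
    S_solid = 41.0      # entropy of the solid branch, J/(mol K)
    S_liquid = 63.0     # entropy of the liquid branch, J/(mol K)
    F_at_Tm = -10000.0  # common free-energy value where the branches meet, J/mol (arbitrary)

    def F_solid(T):
        return F_at_Tm - S_solid * (T - Tm)   # slope is -S_solid

    def F_liquid(T):
        return F_at_Tm - S_liquid * (T - Tm)  # slope is -S_liquid

    for T in (263.15, 273.15, 283.15):
        # The stable phase is the branch with the lower free energy:
        # the solid below Tm, the liquid above Tm; both coincide at Tm.
        print(T, F_solid(T), F_liquid(T), min(F_solid(T), F_liquid(T)))

The last column is the equilibrium free energy; it has no jump at Tm, only the change in slope from -S_solid to -S_liquid described above.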

My own personal preference in describing thermodynamic processes to general audiences is to avoid the various thermodynamic potentials and go directly to basic mechanisms that a high school student would be familiar with.

For example, in the case of a freezing system such as this, molecules are falling deeper into mutual potential energy wells and staying there. To stay there requires that energy be shed from the molecules, and that energy leaves the system in the form of radiation or by way of momentum transfers to molecules that make up a containing vessel.

That departing energy spreads out among many more energy microstates in the surrounding environment; the entropy of the environment increases. Meanwhile, the molecules that are now bound together have not only less energy, they have access to fewer energy microstates because they can only vibrate in position; their entropy has therefore decreased. The combined entropy changes end up as a net overall increase in the entropy of the universe.
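
As a rough numerical check of this bookkeeping (standard textbook values for water, not figures from the book), consider one kilogram of water freezing at 0 degrees Celsius while the surroundings sit a few degrees colder:

    # Entropy bookkeeping for 1 kg of water freezing at 0 C,
    # with the released latent heat absorbed by surroundings at -10 C.
    L = 3.34e5         # latent heat of fusion of water, J/kg
    T_freeze = 273.15  # K
    T_env = 263.15     # K

    dS_water = -L / T_freeze  # the freezing water loses entropy
    dS_env = L / T_env        # the surroundings gain entropy absorbing the heat
    print(dS_water, dS_env, dS_water + dS_env)
    # roughly -1223 + 1269 = +46 J/K: the net entropy of the universe increases
    # even though the water itself has become more tightly bound.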

It is a general property of matter in the universe that it clumps, and clumping requires the shedding of energy. This is the second law of thermodynamics at work.

I don’t think Hoffmann’s explanations in this chapter will detract from later parts of the book, especially when he discusses enzymes or other catalysts mechanically distorting molecules, thereby pulling down potential energy barriers to molecular binding, or ATP energy dumps that are used to separate bound molecules.

The rest of Chapter 3 introduces Hoffmann’s use of the expression “molecular storm,” which he uses in later chapters. It is an apt description, and he finishes off the chapter emphasizing that “life is a near-equilibrium, tightly controlled, open, dissipative, complex system.” “Near-equilibrium” is an important modifier here. Systems far from equilibrium tear things apart; the processes in cells are highly efficient compared to man-made engines and nature’s tornadoes.

Chapter 4, “On a Very Small Scale,” describes the realm in which the cell processes take place. This is the nanometer scale, in which electrical, mechanical, and chemical energies compete with the kinetic energies of thermal noise. But it is in this realm, as Hoffmann points out, that we have any hope of answering the question of what life is.

There is a good discussion here about time and energy scales, energy landscapes, collective behavior, entropic forces, bonding, exclusion zones, and a number of other important phenomena that contribute to the behaviors of molecules at this scale. These turn out to be important in solving the riddle of life. The solution to Schrödinger’s riddle is also addressed.

To emphasize more graphically the points Hoffmann is making about this nanoworld, I would recommend a little high school level physics/chemistry exercise that scales up the charge-to-mass ratios of protons and electrons to kilogram-sized masses separated by distances on the order of 1 meter. When one calculates the energies of interaction of such masses, one comes up with energies on the order of 10^26 joules, or 10^10 megatons of TNT.

Imagining oneself sitting among such scaled-up charges and masses gives some perspective on the energy-to-mass ratios at the molecular level, and what it would be like to be the size of a molecule sitting among atoms and molecules. This perspective shows the silliness of the creationist’s tornado-in-a-junkyard argument against molecular evolution. Junkyard tornadoes are puny compared to the interaction energies of masses with the scaled up charge-to-mass ratios of protons and electrons; the kinetic energy of a kilogram mass moving at the maximum wind speed of an EF5 tornado is about 10^4 joules.
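
For readers who want to check this scaling exercise, here is a short sketch; the constants are standard values, and the particular 1-kilogram, 1-meter setup is simply the one described above:

    k_e = 8.988e9    # Coulomb constant, N m^2 / C^2
    e = 1.602e-19    # elementary charge, C
    m_p = 1.673e-27  # proton mass, kg

    # Give a 1 kg mass the proton's charge-to-mass ratio (~9.6e7 C per kg),
    # then compute the Coulomb energy of two such masses 1 m apart.
    q_scaled = (e / m_p) * 1.0
    U = k_e * q_scaled**2 / 1.0
    print(U)             # ~8e25 J, i.e. on the order of 10^26 J
    print(U / 4.184e15)  # ~2e10 megatons of TNT (1 megaton = 4.184e15 J)

    # Compare with a 1 kg object moving at EF5 tornado wind speeds (~90 m/s):
    print(0.5 * 1.0 * 90.0**2)  # ~4e3 J, on the order of 10^4 J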

If we scale ourselves down to the size of a molecule inside the cell, we are dealing with energies on the order of a few hundredths to a few tenths of an electronvolt, very large energies compared to that of our now tiny selves. Atomic force microscopy can measure the force with which a “walking” molecule can pull. These forces are on the order of piconewtons (10^-12 newton). A potential energy gradient of about 0.01 electronvolt over a distance of 1 nanometer is a force on the order of 1 piconewton. Advocates of “vital forces” and “intelligent intervention” will have to explain how these non-physical “forces” escape detection even though they can still push atoms and molecules around.
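
The piconewton arithmetic in the paragraph above is easy to verify; here is a quick sketch using standard constants:

    eV = 1.602e-19   # joules per electronvolt
    nm = 1.0e-9      # meters per nanometer
    k_B = 1.381e-23  # Boltzmann constant, J/K

    # Thermal energy scale at body temperature, for comparison:
    print(k_B * 310.0 / eV)  # ~0.027 eV, i.e. a few hundredths of an electronvolt

    # A 0.01 eV change in potential energy over 1 nm corresponds to a force of:
    print(0.01 * eV / nm)    # ~1.6e-12 N, about 1.6 piconewtons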

One of the important points that Hoffmann makes in this chapter is the relationship between quantum mechanical effects and thermal noise. In the realm and temperature of the cell, quantum mechanical effects are completely swamped by thermal noise. Much of the research on quantum computing has to take place at very low temperatures, far too low for any living system. As Hoffmann makes clear at this point, essentially all of molecular biology can be explained using classical physics.

Chapter 5, “Maxwell’s Demon and Feynman’s Ratchet,” deals precisely with these topics. The question, of course, is whether or not it is possible to violate the second law of thermodynamics. Does a vital force have to be introduced? What does it take to make a ratchet, bombarded by the impacts of photons and particles, turn in only one direction?

A clear understanding of the demon and the ratchet is the key to understanding the mechanism behind the directionality of processes taking place within the cell. What is that mechanism that extracts order out of chaos?

Hoffmann tantalizingly leaves the question open in preparation for the next chapter.

Chapter 6, “The Mystery of Life,” starts out with the declaration, “Thou shalt not violate the second law.” Here Hoffmann lays out some of the details of what has to be done, and how it is done by enzymes, allostery, and the release of energy into a ratchet at just the proper time. As you would expect, it takes energy input to make a ratchet being bombarded by a molecular storm turn in one direction; there is no violation of the second law. The molecular storm does most of the work, and a small input of energy from a molecule like ATP keeps the ratchet from “slipping backward” in what Hoffmann refers to as a “reset step.”

This chapter describes in considerable detail just how molecules like kinesin can walk along a microtubule rail; this is where we actually see the physics of how these molecules extract order out of chaos.

Hoffmann contrasts here the difference between a robust “molecular Hummer” plowing through a molecular storm, and a floppy molecular system that harnesses the energy of the storm. He shows how Nature favors the latter.

Near the end of the chapter, Hoffmann shows a very interesting graph of the speed of a molecular motor as a function of temperature. He explains the importance of thermal motion in making it work. As someone who has worked in the field of condensed matter physics, I could certainly appreciate this explicit example of the very sensitive dependence of life on temperature. The phenomena of hypothermia, hyperthermia, and the chirp rates of crickets and cicadas have been known for at least a century. To a physicist or a chemist, temperature-dependent phenomena such as these are very strong clues about the mechanisms of life.

Chapter 7, “Twist and Route,” is the longest chapter in the book; it is packed full of details about a number of other molecular systems that waddle, walk, stride, rotate, pump, and replenish ATP. The reader may want to reread this chapter a number of times to get the details.

Some of the experimental techniques are described in this chapter. One of the stories that particularly caught my attention was the section entitled “ATP Synthase and the Amazing Spinning Baton.” In order to prove that the ATP synthase is a rotary machine, biochemist Hiroyuki Noji and coworkers at the Tokyo Institute of Technology actually attached a magnetic bead to one of the units of the synthase and used a rotating magnetic field to turn the “handle.” When rotated clockwise, the synthase produced ADP; when rotated counterclockwise, it produced ATP!

Chapter 8, “The Watch and the Ribosome,” gets into the question of how such molecules came to be. The watch reference is, of course, to William Paley.

Hoffmann recounts Paley’s musings on what would be the nature of watches that could reproduce; he follows these musings to point out that Paley was misguided on the topic of reproduction. Would reproducing watches evolve? If they did, what does that say about the similarities among watches and their ability to “improve”? Evolution introduces chance. What if watches evolved an internal “purpose” of efficient reproduction? You would still need an external agent to select the best watches from among all the offspring of those that reproduced. Without an external agent, a watch would eventually stop being a watch. What does this then say about the artificer that designed the first watch or watches? Pursuing this line of thinking illuminates some of the major problems with locating the original intent of a designer that is now lost in the mists of the evolutionary history of watches.

The basic question is, “How do molecules evolve?” Is there a form of natural selection that applies to molecules in the way that natural selection applies to populations of living organisms? Hoffmann makes a good case for molecules evolving; in fact, given their environment, there is no other way.

Chapter 9, “Making a Living,” takes us into some broader issues having to do with regulation, systems biology and regulatory networks, emergence, reductionism, and the relationships between biologists and physicists.

Basically the issue about cells is that no cell is an island; they work in conjunction with other cells, within large systems of cells, and within a larger environment.

On the issue of holism versus reductionism, Hoffmann sees this philosophical debate as a nonissue for most scientists. I agree. As Hoffmann also observes, holism and reductionism are two sides of the same coin; we take things apart to find out how they work, but we have to put them back together again in order for them to have the properties by which we identify and describe them. A cow may be ultimately reducible to quarks, but quarks are not a cow, and you cannot predict a cow from knowing the properties of quarks. You can’t even predict a house from the properties of bricks or stones. There are no formulas for cows based on particle physics just as there are no formulas for a house based on the properties of bricks or stones.

Most complex systems, such as those we find in biology, are momentarily frozen accidents resulting from a vast sequence of contingencies. Another way to state this is that the unique properties of complex things emerge from their contingent development, the contingent relationships within themselves, and their relationships with their environment and with the other contingent things that see them and describe them.

Epilogue. Readers of Douglas Adams’s Hitchhiker’s Guide to the Galaxy will recognize the title of the Epilogue, “Life, the universe, and everything.”

Hoffmann notes that we are changing our entire perspective on the structure of the universe as a result of what we are learning about the processes in the cell. He notes also that “the universe is not a victim of the second law of thermodynamics. If this were so, the universe would just contain diffuse nebulae of hydrogen and helium.” In other words, if matter did not continue to condense, the universe would contain no stars, no planets, and no life.

Summary. Other features of the book include a good index, a useful glossary, a nice list of additional sources, and a suggested reading list. The sources are listed as they come up chapter-by-chapter, and the reading list is organized by topic. These provide additional resources for anyone who wants to dig deeper into this subject.

All in all, I found the book an enjoyable read. Despite my physicist quibbles about the description of the Helmholtz free energy in Chapter 3, I highly recommend the book as a good overview of the history, the physics, and some of the philosophical issues surrounding the question, “What is life?” I am a layman in biology, but I found Hoffmann’s physical descriptions and the physical mechanics of these “ratchets” easy to grasp.

P.S. In addition to the book, you may find an online video of Hoffmann giving a talk covering some of its topics.

Mike Elzinga is a retired physicist whose career was in pure and applied research. He worked in low temperature physics and superconductivity, optics and holography, ultrasonic imaging, solid state devices in extreme radiation environments, and the development of infrared detecting CCD imagers.

38 Comments

I’m a high school biology teacher and this book was the best one I read all summer. Hoffmann is very skilled when it comes to explaining complex concepts in simple terms. He also uses a plethora of analogies and examples that I plan to use in teaching these concepts in my AP Biology classes.

There is also a persistent meme in our popular culture that propagates the misconception that the second law of thermodynamics says everything tends toward disorder and disintegration.

Ah, but over what time frame? A planet around a yellow dwarf would have some fraction of that star’s yellow dwarf phase in which to produce results analogous to what we have here. For the universe as a whole, I reckon that running out of fusible nuclei would lead to a state of equilibrium, though not really disorder.

If we ignore the dark energy thing, that would lead to the remains of stars and planets getting very cold, but continuing to be distinct objects, with most of the heat energy in the form of photons in space. Say, would that mean that the objects formerly known as stars and planets would then have low entropy, with most of the entropy having been carried off by photons?

(Of course, with the dark energy thing, we would get an even more extreme form of equilibrium, but that’s a different subject.)

Henry

According to Boltzmann’s definition, entropy is a measure of the number of possible microstates of a system in thermodynamic equilibrium. Using this definition, Hoffmann addresses the common misconception that “organized” systems necessarily have less entropy using a jar of marbles as an analogy.

If the marbles are neatly stacked inside the jar, each marble can jiggle somewhat, meaning that the number of possible microstates of the system is comparatively high. This means that an apparently “organized” system actually has a high level of entropy.

If we compare this to a jar in which the marbles are randomly stacked, some of the marbles are likely jammed in place and therefore cannot move. As a result, this second jar has a smaller number of possible microstates than the first, meaning it actually has a lower entropy.

Living systems have both a high degree of order AND a high degree of entropy. Hoffmann masterfully explains that BOTH of these conditions are required for living things to maintain dynamic equilibrium with their environment.

Henry J said:

There is also a persistent meme in our popular culture that propagates the misconception that the second law of thermodynamics says everything tends toward disorder and disintegration.

Ah, but over what time frame?

Henry

Indeed; we live in interesting times.

So, to the sourpusses among us, this means they should stop kvetching and enjoy the show. We will all be gone before it ends anyway.

Thanks Mike for the review. I’ll have to read the book as soon as I can. If anyone is interested in more material on the 2nd law of thermodynamics as it relates to evolutionary biology, I highly recommend this review that was published in Evolution: Education and Outreach: Evolution and the Second Law of Thermodynamics: Effectively Communicating to Non-technicians

I have a question to put out there: Although entropy is popularly described as the tendency to increase disorder, is it more generally true that entropy is the tendency to arrive at a minimum of free energy?

Thanks Mike. I found the video very interesting. I especially liked the analogy of evolution as a ratchet.

Awesome! I picked Hoffman’s book up a month ago but only just got around to it (beginning ch. 3 today). What a pleasant surprise to see it reviewed on PT. Nice work Mike.

Mark Sturtevant said:

I have a question to put out there: Although entropy is popularly described as the tendency to increase disorder, is it more generally true that entropy is the tendency to arrive at a minimum of free energy?

Entropy is the number of ENERGY microstates consistent with the macroscopic thermodynamic variables that describe the system.

Those variables are things like temperature, volume, pressure, magnetization, or whatever other characteristics of the system are related to its energy content.

If the system is in contact with a much larger system, the internal energy may not be distributed among those microstates with equal probability. However, when the system is isolated, it comes to equilibrium within itself because matter interacts with matter and the various parts of the system exchange energy until the energy is spread uniformly over all energy microstates. This means that all ENERGY microstates are equally probable. The entropy then reduces to being the logarithm of the number of ENERGY microstates consistent with the macroscopic variables describing the system.

The concept of “free energy” is really related to the conservation of energy, i.e., the first law of thermodynamics.

Whenever an experimentalist wants to measure heat, work, and internal energy of a system, the total energy must be constant; any change in the total must be zero. It comes down to bookkeeping: if you can measure part of the energy using techniques in the lab, and you make sure that no heat or work goes unaccounted for, then the difference between the total energy and what you measured is obviously what is left over; that may be the part you don’t have access to directly.

For example, measurements in the lab often take place at constant pressure. If the thermodynamic change involves an expansion of the system, then work is done against that pressure. One may want to set up the experiment to measure the amount of heat that went into the system or came out of the system. Knowing that allows you to conclude that the remainder went into changing the internal energy of the system AND into doing work on the environment. That remainder may be one of the thermodynamic potentials; in this particular example, the enthalpy.

The various “thermodynamic potentials” are ways of reorganizing the energy balance equation in a way that allows one to use the thermodynamic variables one can measure to calculate the ones that are inaccessible. The slopes of the curves of these thermodynamic potentials are often the quantities you are seeking.

I am trying to avoid using math here because I don’t know your background, but I can expand on this if you wish.

I found this book to be an excellent follow-on to ‘Into the Cool’, which discusses thermodynamics and life. Thanks to Richardthughes who recommended it on AtBC. I read it first and then “Life’s Ratchet.” They go well together, and I’d recommend ‘Into the Cool’ to anyone who found value in “Life’s Ratchet.”

You can read the bulk of chapter 8 of Life’s Ratchet at NCSE’s website here (PDF).

Sounds like a book to add to my reading list. What very little time I have had as of late, I have been drudging through Phillip Johnson’s Darwin on Trial. It will be good to get back to real science.

Mike Elzinga said: …The concept of “free energy” is really related to the conservation of energy, i.e., the first law of thermodynamics…

Thank you for answering. I am more or less a biologist, and one who at best has a superficial understanding of thermodynamics. I felt I understood about 3/4 of your response, but I would like to check on something. Are you saying that a closed system with ‘low free energy’ does not necessarily have ‘high entropy’? Are those qualities really different things, and so are not necessarily correlated?

It sounds as if the perfect companion reading to this one would be any article by Granville Sewell, so you could see what Sewell gets wrong. Sewell is at it again over at Evolution News and Views, claiming to have had his views suppressed. That post links on to his many articles available online. (I still don’t see why, if Sewell is correct, a seed can grow into a whole plant with multiple seeds. I assume Life’s Ratchet makes the matter clear).

Joe Felsenstein said: (I still don’t see why, if Sewell is correct, a seed can grow into a whole plant with multiple seeds. I assume Life’s Ratchet makes the matter clear).

I don’t see why an intelligent designer can bypass the laws of thermodynamics. Doesn’t that mean that a very clever person could design a perpetual motion machine? Actually, I guess, I could design a perpetual motion machine, but would have a few problems in building one.

Thanks, Mike. I just got the book last week for my Nook, and intend to read it RSN (real soon now).

Mark Sturtevant said:

Mike Elzinga said: …The concept of “free energy” is really related to the conservation of energy, i.e., the first law of thermodynamics…

Thank you for answering. I am more or less a biologist, and one who at best has a superficial understanding of thermodynamics.

The origin of the expression “free energy” lies somewhere back in the history of thermodynamics. A lot of people use it in different ways that make its use confusing.

Basically the idea is that whatever part of the total energy of a system did not go into internal states that are inaccessible to direct observation is “free” to do measurable work. It is a bookkeeping issue.

The macroscopic measurements we are able to make with a system determine which of these thermodynamic potentials we can use. Perhaps we can measure total heat flow and can control pressure and volume. Or perhaps we measure temperature and heat flow at constant volume. Or we measure change in temperature with change in volume when no heat can enter or leave the system.

We use different thermodynamic potentials to determine internal states and other properties of a thermodynamic system depending on what we can measure and what we can control.

All these measurements depend on the fact of conservation of energy. We just have to be sure in any of our measurements that no energy gets away and that it is all trapped within our experimental setup. We then measure what we can get a handle on, and the rest is calculated using these various thermodynamic potentials.

Are you saying that a closed system with ‘low free energy’ does not necessarily have ‘high entropy’? Are those qualities really different things, and so are not necessarily correlated?

Well, not quite. The terms “free energy” apply to two of the thermodynamic potentials: the Gibbs free energy, G = E - TS + pV, and the Helmholtz free energy, F = E - TS. There is another thermodynamic potential called “enthalpy”, H = E + pV.

The Gibbs free energy has the property that it remains constant in thermodynamic processes in which the temperature and pressure are constant. The Helmholtz free energy has the property that it remains constant when temperature and volume are constant. The enthalpy is constant when entropy and pressure are constant. The slopes of these curves under different conditions give us information we are often seeking.

How the entropy is related to energy or other thermodynamic variables, such as temperature, pressure, volume, magnetization, etc., depends on the system. I can show you what is called a two-state system in which the entropy increases from zero when all of its internal constituents are in the ground state, passes through a maximum when half the constituents are in their excited state, and returns to zero when all the constituents are in their excited state. This is an example in which the entropy does not always increase when the energy of the system increases; in fact, it can actually decrease with increasing energy.

So it depends on the nature of the system. The only general rules are conservation of energy (first law), the spreading around of energy (second law), energy flows from higher temperatures to lower temperatures (basically the “zeroth” law), and entropy goes to zero when all energy is drained out of the system (third law).
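
For concreteness, here is a minimal sketch of the two-state system Mike describes, assuming N identical two-level constituents and counting microstates with the binomial coefficient (an illustration, not taken from the book or the comment above):

    import math

    k_B = 1.381e-23  # Boltzmann constant, J/K
    N = 100          # number of two-level constituents
    eps = 1.0e-21    # energy of the excited state, J (arbitrary illustrative value)

    def entropy(n_excited):
        # S = k ln(number of ways to choose which n constituents are excited)
        return k_B * math.log(math.comb(N, n_excited))

    for n in (0, 25, 50, 75, 100):
        print(n * eps, entropy(n))
    # The entropy is zero at n = 0, peaks at n = N/2, and returns to zero at n = N,
    # so past the halfway point the entropy decreases as energy is added, which is
    # the point made above about entropy not always increasing with energy.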

TomS said:

I don’t see why an intelligent designer can bypass the laws of thermodynamics. Doesn’t that mean that a very clever person could design a perpetual motion machine? Actually, I guess, I could design a perpetual motion machine, but would have a few problems in building one.

But if you were omnipotent - which is the type of designer we’re really talking about - you would be able to build perpetual motion machines.

At least that would be my working assumption. Any entity that could manage the energies involved in whipping up an entire universe shouldn’t have problems with trivia such as gravity shielding and magnetic monopoles.

stevaroni said:

TomS said:

I don’t see why an intelligent designer can bypass the laws of thermodynamics. Doesn’t that mean that a very clever person could design a perpetual motion machine? Actually, I guess, I could design a perpetual motion machine, but would have a few problems in building one.

But if you were omnipotent - which is the type of designer we’re really talking about - you would be able to build perpetual motion machines.

At least that would be my working assumption. Any entity that could manage the energies involved in whipping up an entire universe shouldn’t have problems with trivia such as gravity shielding and magnetic monopoles.

I would not have this objection to a “Theory of Omnipotent Manufacture” that I have to a “Theory of Intelligent Design”.

Joe Felsenstein said:

It sounds as if the perfect companion reading to to this one would be any article by Granville Sewell, so you could see what Sewell gets wrong. Sewell is at it again over at Evolution News and Views, claiming to have had his views suppressed.

And then there is this long “discussion” over at UD. As I read through it, I could feel my brain dying, so I had to stop.

stevaroni said:

TomS said:

I don’t see why an intelligent designer can bypass the laws of thermodynamics. Doesn’t that mean that a very clever person could design a perpetual motion machine? Actually, I guess, I could design a perpetual motion machine, but would have a few problems in building one.

But if you were omnipotent - which is the type of designer we’re really talking about - you would be able to build perpetual motion machines.

At least that would be my working assumption. Any entity that could manage the energies involved in whipping up an entire universe shouldn’t have problems with trivia such as gravity shielding and magnetic monopoles.

But wouldn’t the machine need continuing support from its “designer” in order to keep working?

Mike Elzinga said:

Joe Felsenstein said:

It sounds as if the perfect companion reading to to this one would be any article by Granville Sewell, so you could see what Sewell gets wrong. Sewell is at it again over at Evolution News and Views, claiming to have had his views suppressed.

And then there is this long “discussion” over at UD. As I read through it, I could feel my brain dying, so I had to stop.

Are they running some sort of contest over there to see who can come up with the worst mangling of thermodynamics?

SWT said:

Are they running some sort of contest over there to see who can come up with the worst mangling of thermodynamics?

As I may have said on some other occasion, they seem to work far harder at getting things wrong than they would if they just sat down with a good textbook and learned.

But it has never been about getting the science concepts right; it has always been about getting a free ride and publicity by “debating.” UD is a place to practice the shtick.

I still think that the leaders of the ID/creationist movement sit back and watch as they allow their rubes to go out and get slaughtered in order to test debating points.

Henry J said:

But wouldn’t the machine need continuing support from its “designer” in order to keep working?

Dunno.

But I’m pretty sure that if I had access to a material that could shield (or divert) gravity, I could make a working perpetual motion machine, sort of a water-wheel thing with one side exposed to normal gravity, and the other side exposed to “gravity lite”.

And if we had that, people could fly to other planets without having to use rockets!

Well, if you would just imagine flying to other planets you could avoid all that tedious acceleration. Yes, you could just imagine it, like FL does, and avoid reality completely!

Henry J Wrote:

But wouldn’t the machine need continuing support from its “designer” in order to keep working?

I don’t know, but if that’s true, there’s no reason it has to be something we can detect. Ken Miller speculated (and made clear that it’s no more than personal speculation) that a designer could intervene imperceptibly via quantum indeterminacy.

But since FL was mentioned, I have to repeat that, of all ironies, he has provided one of the best arguments against ID ever. When I asked him years ago if human conception was an example of designer intervention, he said yes without hesitation. So much for the DI’s strategy to lead the audience - especially critics who ought to know better - to infer that those mysterious events occurred “long ago” (but don’t ask, don’t tell where or when, or whether they occurred in-vivo or by new origin-of-life events).

As you know, the DI conducts a “big tent” scam, so as long as they can peddle incredulity of evolution or arguments for design to their audience, they know that that audience will infer everything from geocentric YEC to aliens planting a cell on earth ~4 billion years ago that’s ancestral to all current life. And that those mutually contradictory alternate “theories” will get virtually no critical analysis - a real one or a phony one “designed” only to promote unreasonable doubt - like the one for evolution that they demand be taught at taxpayers’ expense.

Frank J said: one of the best arguments against ID ever

IMHO, the two best arguments against ID are:

(1) On the assumption that everything that the advocate of design/creationism says is correct, and there are major problems with evolutionary biology, what did happen, when and where, why and how, so that things turned out this way, rather than something else?

(2) Do not the arguments presented apply with at least as much force (if not more) against naturalistic/scientific accounts of reproduction (or development, genetics, “micro”evolution, … and this can go so far as to include geology, history, and on and on).

I think that what you brought up is an instance of (2). If I stand in a special relationship with my Creator and Redeemer, how is it that my body resulted from processes which can be studied by reproductive biology? And if one does not find an insurmountable difficulty in that, why would the origins of the abstraction “mankind” long ago and far away pose any more of a problem?

Mike Elzinga said:

Entropy is the number of ENERGY microstates consistent with the macroscopic thermodynamic variables that describe the system.

Those variables are things like temperature, volume, pressure, magnetization, or whatever other characteristics of the system are related to its energy content.

If the system is in contact with a much larger system, the internal energy may not be distributed among those microstates with equal probability. […]

Why the emphasis on ENERGY microstates? My understanding is that ALL microstates are relevant. For example if you have degenerate (equal-energy) states, each of those has the same probability as a single non-degenerate state (i.e. the population of the degenerate energy level is proportional to its degeneracy). Similarly, when computing entropy, degenerate states count as different states, not a single energy state.

gordon.davisson said:

Why the emphasis on ENERGY microstates? My understanding is that ALL microstates are relevant. For example if you have degenerate (equal-energy) states, each of those has the same probability as a single non-degenerate state (i.e. the population of the degenerate energy level is proportional to its degeneracy). Similarly, when computing entropy, degenerate states count as different states, not a single energy state.

Because it is the spreading around of energy among states that is relevant; not the spreading around of stuff (unless we are in the relativistic realm). The important point is enumerating the mechanisms that participate in the energy exchanges.

Three examples should illustrate the point.

(1) A two state system can be comprised of atoms that have a non-degenerate ground state and an accessible excited state. As energy is added to this system, its entropy increases from zero when all atoms are in the ground state, to a maximum when half the atoms are in the excited state, then back to zero when all atoms are in the excited state.

The atoms themselves may be fluorescent atoms embedded in a matrix of other atoms that don’t participate in the energy exchanges. The spatial arrangement of these atoms is irrelevant; they could be randomly distributed or in some crystalline order. The spatial arrangement plays no part in the entropy.

(2) A polycrystalline solid – e.g., copper – is heated from 0 degrees Celsius to 10 degrees Celsius. Its entropy increases, but there are no spatial rearrangements of atoms or crystals within the solid. The mean position of every atom remains fixed. However, the energy is spread around among more microstates that are the quantized, in-position vibrations of the molecules.

And this last one is one of the better examples I have seen that highlight the conflation of entropy with disorder. It is a little thought experiment.

(3) Consider an “ideal gas” made up of a bunch of perfectly elastic ball bearings within a perfectly rigid steel container in outer space far away from the effects of a gravitational force.

Now, at a particular instant in time, suppose we spray all the ball bearings in one half of the container green, and those in the other half red. Suppose the “paint” doesn’t change anything other than the color of the ball bearings. (This is a thought experiment).

From this point in time onward the entropy of the ball bearings remains just as it was before the coloring was added. However, the colors begin to mix.

The entropy of the ball bearings remains the same as it was because the energy and momentum exchanges taking place among the ball bearings have not changed.

Now, one could describe the color mixing with some expression that looks like the expression for entropy, but it would NOT apply to energy microstates; it would have nothing to do with the thermodynamic entropy of the system.

Part of the conflation of entropy with disorder comes from the frequent use of the ideal gas as an example for teaching about entropy. The entropy of an ideal gas does include the positions as part of the description of the “phase space” of energy microstates. But all this falls apart for tightly bound systems and other systems in which the positions of the constituents of the system are not part of the phase space.

Most of the better textbooks on thermodynamics and statistical mechanics start out by enumerating microstates in things like two-state systems and/or collections of Einstein oscillators. The ideal gas introduces too many apparent paradoxes.
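
As a side note on the remark above that positions do enter the ideal-gas phase space, the standard Sackur-Tetrode formula makes this explicit through its volume term; here is a quick sketch with standard constants (an example added for illustration, not from the thread):

    import math

    k_B = 1.381e-23  # Boltzmann constant, J/K
    h = 6.626e-34    # Planck constant, J s
    N_A = 6.022e23   # Avogadro's number, 1/mol

    def sackur_tetrode_molar_entropy(mass_kg, T, P):
        # S/N = k_B * ( ln( (V/N) / lambda^3 ) + 5/2 ), with lambda the thermal wavelength
        V_per_N = k_B * T / P
        lam = h / math.sqrt(2.0 * math.pi * mass_kg * k_B * T)
        return N_A * k_B * (math.log(V_per_N / lam**3) + 2.5)

    # Argon at 298.15 K and 1 atm: roughly 155 J/(mol K), close to the measured
    # standard entropy of argon gas.
    print(sackur_tetrode_molar_entropy(39.948 * 1.66054e-27, 298.15, 101325.0))

The ln(V/N) dependence is where the positions come in; for tightly bound systems that term is absent, which is the point being made here.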

TomS Wrote:

I think that what you brought up is an instance of (2).

Yes, and I should have credited you on what is an excellent line of argument that is too often neglected. And the “kind” of argument that at least has a shot at competing with the catchy but misleading anti-evolution sound bites, given the audience we need to inform. The technical refutations are necessary, but require more investment in time than most people have to fully understand.

What makes this argument especially nice is that it comes from an evolution-denier.

Mike Elzinga said:

gordon.davisson said:

Why the emphasis on ENERGY microstates? My understanding is that ALL microstates are relevant.

Because it is the spreading around of energy among states that is relevant; not the spreading around of stuff (unless we are in the relativistic realm). The important point is enumerating the mechanisms that participate in the energy exchanges.

Three examples should illustrate the point.

I certainly agree that the spreading around of energy is relevant, but I disagree about the “spreading around of stuff”. I’ll try to go through your examples and explain where & why I disagree.

(1) A two state system can be comprised of atoms that have a non-degenerate ground state and an accessible excited state. As energy is added to this system, its entropy increases from zero when all atoms are in the ground state, to a maximum when half the atoms are in the excited state, then back to zero when all atoms are in the excited state.

Agreed, with the caveat that only the thermal component of entropy is (necessarily) zero in the all-ground and all-excited states.

The atoms themselves may be fluorescent atoms embedded in a matrix of other atoms that don’t participate in the energy exchanges. The spatial arrangement of these atoms is irrelevant; they could be randomly distributed or in some crystalline order. The spatial arrangement plays no part in the entropy.

Here I strongly disagree. The simplest example I know of is the residual entropy of disordered crystals cooled to absolute zero. Carbon monoxide is the classic example: when cooled near absolute zero, it forms a crystalline solid with a mostly-random orientation of the molecules (C-O vs. O-C). This gives it a nonzero entropy even when all molecules are cooled into their ground states. And this isn’t just a theoretical argument: the residual entropy can be measured via the classical thermodynamic definition of entropy, by integrating dS = dQ/T as it’s cooled to absolute zero.

You may argue that, while the C-O and O-C orientations of the carbon monoxide molecules are nearly symmetric, they aren’t exactly, and that means that there’s a not-quite-zero energy difference between the orientations due to interactions with neighboring molecules. And you’d be right. But do you really want to claim that if the interaction energies were exactly (not just approximately) the same, the residual entropy would suddenly vanish? I don’t think that’s plausible.

For that matter, there’s actually a nonzero energy dependency on the position of the molecules anyway. Is the material in a nonzero gravitational field? If so, then there’s an energy associated with the vertical component of each atom’s position. Is there a nonuniform magnetic field, etc?

If you look really really closely, I’m pretty sure you’ll find that the positions of the atoms are energetic degrees of freedom, and hence by your definition do contribute to the entropy… unless some degree of freedom happens to have exactly zero associated energy, in which case it suddenly stops contributing to the entropy, and the system’s entropy abruptly drops. Which makes no sense at all that I can see.

(2) A polycrystalline solid – e.g., copper – is heated from 0 degrees Celsius to 10 degrees Celsius. Its entropy increases, but there are no spatial rearrangements of atoms or crystals within the solid. The mean position of every atom remains fixed. However, the energy is spread around among more microstates that are the quantized, in-position vibrations of the molecules.

Agreed, but since there are no relevant non-thermal degrees of freedom here, the non-thermal contributions to entropy aren’t going to be relevant anyway.

And this last one is one of the better examples I have seen that highlight the conflation of entropy with disorder. It is a little thought experiment.

(3) Consider an “ideal gas” made up of a bunch of perfectly elastic ball bearings within a perfectly rigid steel container in outer space far away from the effects of a gravitational force.

Now, at a particular instant in time, suppose we spray all the ball bearings in one half of the container green, and those in the other half red. Suppose the “paint” doesn’t change anything other than the color of the ball bearings. (This is a thought experiment).

From this point in time onward the entropy of the ball bearings remains just as it was before the coloring was added. However, the colors begin to mix.

The entropy of the ball bearings remains the same as it was because the energy and momentum exchanges taking place among the ball bearings have not changed.

Now, one could describe the color mixing with some expression that looks like the expression for entropy, but it would NOT apply to energy microstates; it would have nothing to do with the thermodynamic entropy of the system.

Are you seriously claiming that the entropy of mixing isn’t real? For real gasses, it certainly is real and measurable (and I don’t see why your painted ball bearings should be any different).

More specifically: can you give a procedure for unmixing the ball bearings that doesn’t produce at least k*T*N*ln(2) (where N is the number of ball bearings) of heat (or some other form of compensating entropy increase)? My understanding of the resolution of Maxwell’s demon implies that you won’t be able to do it. (And furthermore, if you do find such a procedure, I claim I can apply it to real gasses and violate the second law.)
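
For a sense of the magnitude of the k*T*N*ln(2) figure above, here is a quick back-of-envelope sketch with standard constants (an illustration of the number, not a ruling on the disagreement):

    import math

    k_B = 1.381e-23  # Boltzmann constant, J/K
    N_A = 6.022e23   # particles per mole
    T = 300.0        # K

    # Entropy of mixing two equal, distinguishable populations: dS = N * k * ln 2.
    dS = N_A * k_B * math.log(2.0)
    print(dS)      # ~5.8 J/K per mole
    print(T * dS)  # ~1700 J per mole: the minimum heat that would have to be
                   # dumped at 300 K to undo the mixing, per the figure above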

gordon.davisson said:

I certainly agree that the spreading around of energy is relevant, but I disagree about the “spreading around of stuff”. I’ll try to go through your examples and explain where & why I disagree.

Agreed, with the caveat that only the thermal component of entropy is (necessarily) zero in the all-ground and all-excited states.

I’m not sure what we are disagreeing about here, Gordon. The example of the two-state system didn’t have to be atoms with a non-degenerate ground state; I merely picked that for simplicity to illustrate the difference between a mechanism that contained the energy of the system and spatial arrangements of the atoms that had nothing to do with the energy content of the system.

If the ground states of the atoms were degenerate, then the entropy would not be zero. But it would still go through a maximum and then back to zero when all atoms are in the excited state; unless, of course, that excited state was also degenerate.

But, again, the point was to illustrate the difference between the mechanisms that contain the energy of the system and the spatial positions that have nothing to do with the energy content of the system. I could just as easily have picked for the two-state system a collection of spins in a magnetic field.

Of course degenerate energy states count in the entropy. Spatial rearrangements count only IF they come with different energies.

In trying to pick examples for a general audience, don’t throw in all the complexities that an experienced professional needs to know about. Avoid unnecessary pedantry.

(And by the way, although I didn’t say it explicitly, the above system is completely isolated. Any photons that pop out from a decay of an excited state are reflected back into the system and can produce an excited state in another atom. Energy is not leaving the system by way of photons leaving the system. I was avoiding unnecessary pedantry.)

You may argue that, while the C-O and O-C orientations of the carbon monoxide molecules are nearly symmetric, they aren’t exactly, and that means that there’s a not-quite-zero energy difference between the orientations due to interactions with neighboring molecules. And you’d be right. But do you really want to claim that if the interaction energies were exactly (not just approximately) the same, the residual entropy would suddenly vanish?

I have no idea how you got the impression I said anything of the sort in any of my examples. Of course they would be different states IF the reorientation exchanged a C with an O; these are different atoms. Their interactions with the neighboring atoms might very well be different. Interchanging an O with an O would not be another state. All interchanges of identical atoms represent the same state; interchanges of non-identical atoms produce a different state.

Are you seriously claiming that the entropy of mixing isn’t real? For real gasses, it certainly is real and measurable (and I don’t see why your painted ball bearings should be any different).

Are you seriously claiming that is what this example is all about?

What possible meaning does a chemical potential have for the colors in that example? What energy exchanges are being affected by the colors as a result of the mixing of those colors?

You missed the point entirely with the colored ball bearings. The colors in that thought experiment have nothing to do with the distribution of energy among states. That energy is contained in the kinetic energy of the ball bearings. The obvious implication with ball bearings is that they are identical; interchanging any pair of them doesn’t produce another energy state. The color in this example has nothing to do with the energy distribution among states in that example.

The point is that spatial rearrangements of things don’t necessarily come with different energies. What meaning does ∂S/∂E have if you apply the concept of entropy to the rearrangement of colors? It can’t be the reciprocal temperature. How would the temperature (average kinetic energy per degree of freedom) change when the colors mixed?

More specifically: can you give a procedure for unmixing the ball bearings that doesn’t produce at least k*T*N*ln(2) (where N is the number of ball bearings) of heat (or some other form of compensating entropy increase)? My understanding of the resolution of Maxwell’s demon implies that you won’t be able to do it. (And furthermore, if you do find such a procedure, I claim I can apply it to real gasses and violate the second law.)

If this is an “ideal gas” of ball bearings, the colors will unmix after a sufficient length of time (ergodic theorem).

Now if you want to get into all the paradoxes of an ideal gas, what do you think will happen to the ball bearings after a sufficient length of time? What happens to the entropy? Do the colors have anything to do with it?

Quantum mechanically, there is a huge difference between identical and nonidentical particles. In counting the number of states available to the system, for identical particles, all permutations of them count as ONE state. For nonidentical particles, permutations make many states. That affects S = k ln(N) if N is the number of states. So painting some balls red and other balls blue in Step 1 will not change entropy, but mixing them will increase it, and the final state will have higher entropy than it had pre-painting – because each swap of a red and a blue counts as a new state, but two grays counted as one state pre-painting.

diogeneslamp0 said:

Quantum mechanically, there is a huge difference between identical and nonidentical particles. In counting the number of states available to the system, for identical particles, all permutations of them count as ONE state. For nonidentical particles, permutations make many states. That affects S = k ln(N) if N is the number of states. So painting some balls red and other balls blue in Step 1 will not change entropy, but mixing them will increase it, and the final state will have higher entropy than it had pre-painting – because each swap of a red and a blue counts as a new state, but two grays counted as one state pre-painting.

How does the “paint” in this thought experiment change the energy of a ball bearing and its interaction with another ball bearing when they collide? How does it change the mass? How does it make the wave function of a ball bearing symmetric or antisymmetric?

What makes particles at the atomic level different? It is certainly more significant than the “paint” that marked a ball bearing in this example.

I should add that diogeneslamp’s comment gets at a key point about particles at the atomic level. There are no infinitesimal gradations among particles at the quantum level. This means that how they interact with other particles in their vicinity depends on what kind of particle they are; and those differences are discrete.

Spatial rearrangements of letters in a string of characters are not representations of rearrangements of different atoms or particles, because letters don’t interact among themselves.

You can’t use letters as stand-ins for atoms and molecules unless you are able to specify the interaction energies for each pair of letters as a function of their separation or how much energy is required to remove a letter from a collection of other letters (a “chemical potential”).

This is why rearrangements of junkyard parts, clothing, furniture, or coins cannot be used as stand-ins for atoms and molecules.

Mike Elzinga: I should add that diogeneslamp’s comment gets at a key point about particles at the atomic level. There are no infinitesimal gradations among particles at the quantum level. This means that how they interact with other particles in their vicinity depends on what kind of particle they are; and those differences are discrete.

Which makes physics simpler than biology! Discrete species, and the same species across time and space.

Oh, and that also would rule out evolution* for particles up through molecules. (Or maybe I shouldn’t say that; science deniers are apt to misconstrue that as a denial of evolution!)

*(in the sense in which the word “evolution” is used in biology)

Henry

Henry J said:

Which makes physics simpler than biology! Discrete species, and the same species across time and space.

Oh, and that also would rule out evolution* for particles up through molecules. (Or maybe I shouldn’t say that; science deniers are apt to misconstrue that as a denial of evolution!)

*(in the sense in which the word “evolution” is used in biology)

Henry

Egads! Kinds?

But, hey; one “kind” of atom gives birth to a different kind. Ya just hafta put the heat on in stars.

Duane Gish is turning in his grave.
