Evolution of complexity, information and entropy


Since we have seen some poorly argued claims about entropy and its relevance to evolution, I will explore the concepts of entropy as they apply to genome evolution and show that the evidence demonstrates that simple processes like variation and selection are sufficient to explain the evolution of complexity, or information/entropy, in the genome.

While various ID authors (here and elsewhere) have argued that such natural processes are unable to explain the evolution of information in the genome, it should be clear that the actual evidence contradicts any such suggestions.

In the past I have argued with various people on the topic of entropy. Jerry Don Bauer, a.k.a. Chronos, has exhibited some interesting confusions about the concept of entropy and believes that, using the laws of entropy, he has shown that macro-evolution could not have happened.

First some background information

Jerry defines entropy and shows that entropy is always positive (no surprise here, since entropy is the log of a number larger than or equal to 1). Based on the fact that entropy is positive, he concludes that the tendency is positive and thus that complex macro-evolution has been disproven:

S = log2W, S = log2(100,000,000), S = 26.5754247590989, therefore S is positive showing a tendency of disorder. Complex macroevolution would have violated one of the most basic and well proven laws of science. And since we know that nothing violates a law of science as a tendency, we can most assuredly conclude that complex macroevolution never occurred.

Link

Jerry can be seen backtracking in later responses:

I certainly do not mean to imply that this is my work: “if W, the number of states by some measure, is greater than 1 then S will be positive by your formula. Thus any number of states will be “showing a tendency of disorder.” This is not my work and was done much earlier by such greats as Boltzmann and Feynman et al.

Further backing up, and further obfuscating:

I did state that that if S is positive, entropy is increased. And this is not a tendency in this case. It’s a fact of this specific example. I would ask you to examine your logic. If entropy increases then disorder has occurred. If S is positive then entropy has increased because ‘S’ IS the entropy we are considering. If you are going to continue in this vein of logic, then I will have to ask you to show how that the tenets of thermodynamics is just wrong in that everyone has it backward. Rising entropy denotes order and decreasing entropy denotes disorder.

link

Another whopper

P1: With every generation in homo sapien, entropy increases in the genome.

P2: Complex macroevolution requires that the genome have a lower entropy over time through the generations.

Therefore, complex macroevolution did not occur

Link

Gedanken quickly exposes the fallacies in Chronos’s argument

By the way, Chronos has not demonstrated either of his premises P1 nor P2.

He has not demonstrated that the entropy must be increasing, simply because his argument confuses the positive value of entropy with a delta or change of entropy in a positive direction. Even if there were an argument that demonstrated this was a positive delta, Chronos has decided not to give such an argument and relies on the value being positive – an irrelevant issue.

Then Chronos has not demonstrated that change over time requires a decrease in entropy. (Or any particular change in entropy – for example changes occur and they are different, but they have the same number of informational or microstates and thus S has not changed.)

Link

Can anyone decipher this one?

Begging your pardon, but it’s not me saying that when entropy is positive it “tends” toward disorder. When entropy is positive there is no longer a tendency involved. It has already happened. The reaction is over and a passed event. Therefore the term tendency no longer applies. And anytime entropy is positive the system has disordered:

Link

Gedanken explains what is wrong with Chronos’s argument

So what is wrong with Jerry’s claims? Other than the confusion of tendency and value, that is.
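
To make the distinction concrete, here is a minimal Python sketch (the W values are arbitrary, chosen only for illustration). S is positive in both states, yet the change ΔS is negative:

```python
import math

# Jerry's formula S = log2(W) is positive for any W > 1,
# but the sign of S says nothing about the direction of change.
W_before = 100_000_000   # microstates before (arbitrary)
W_after = 1_000_000      # fewer microstates afterwards (arbitrary)

S_before = math.log2(W_before)   # about 26.58 bits, positive
S_after = math.log2(W_after)     # about 19.93 bits, also positive

delta_S = S_after - S_before     # about -6.64 bits: entropy decreased
print(S_before, S_after, delta_S)
```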

In fact, some excellent papers published by Adami and Schneider show how, contrary to Jerry’s claims, entropy in the genome can decrease through the simple processes of variation and selection.

Although Jerry seems to blame Feynman for his errors, it should be clear, or will soon become clear, that Jerry is wrong.

I encourage readers to read through the thread linked above, in which several people make a significant effort to address the confusions exhibited by Jerry. If anything, it shows why the abuse of mathematics appears to be so widespread.

As the discussion below shows in some detail, a correct application of entropy is not that complicated.

The following is a more in-depth introduction to the exciting findings about entropy and information/complexity.

Schneider provides us with some interesting data:

[Figure: information/entropy in the evolving population over time. Note how the information increases from zero to about 4 bits.]

From the PNAS paper we find the following figure caption:

Fig. 3. (A) Total entropy per program as a function of evolutionary time. (B) Fitness of the most abundant genotype as a function of time. Evolutionary transitions are identified with short periods in which the entropy drops sharply, and fitness jumps. Vertical dashed lines indicate the moments at which the genomes in Fig. 1 A and B were dominant.

In “Evolution of biological complexity”, Adami et al. show:

To make a case for or against a trend in the evolution of complexity in biological evolution, complexity needs to be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that, because natural selection forces genomes to behave as a natural “Maxwell Demon,” within a fixed environment, genomic complexity is forced to increase.

The approach is very simple: first assume a genome with site i which has the following probabilities for the four nucleotides involved:

pA(i), pC(i), pG(i), pT(i)

One can show that the entropy for this site can be calculated to be

H(i) = -Σj pj(i) log2 pj(i), with j running over A, C, G, T

which lies between 0 and 2 bits. And the entropy tendency or information can be defined as

I(i) = Hmax - H(i) = 2 - H(i)

Now sum over all sites i and you find that the complexity or information is given by

C = Σi I(i) = Σi [2 - H(i)]
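
For readers who want to see the arithmetic, here is a minimal Python sketch; the nucleotide probabilities are made up for illustration and are not from any real genome:

```python
import math

def site_entropy(probs):
    """Shannon entropy of one genome site, in bits (at most 2 for 4 nucleotides)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical probabilities for (A, C, G, T) at two sites
uniform_site = [0.25, 0.25, 0.25, 0.25]    # no selective constraint
conserved_site = [0.97, 0.01, 0.01, 0.01]  # strongly conserved site

for probs in (uniform_site, conserved_site):
    H = site_entropy(probs)
    I = 2.0 - H   # information = maximal entropy minus actual entropy
    print(f"H(i) = {H:.3f} bits, I(i) = {I:.3f} bits")

# Summing I(i) over all sites gives the complexity C defined above.
```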

Figure 3 above shows how entropy, after an initial increase, decreases at the same time the fitness increases. This information increase/entropy decrease is exactly what happens when selection and variation are combined. Figure 3 shows some beautiful examples of evolutionary transitions.
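
The same effect shows up in a toy simulation. The sketch below is my own (it is not the Avida or ev code, and the parameters are made up): mutation alone keeps per-site entropy near its maximum of two bits, while selection toward a fixed target sequence, standing in for the environment, drives it down:

```python
import math
import random

random.seed(1)
ALPHABET = "ACGT"
L, N, GENERATIONS, MU = 20, 200, 100, 0.01  # made-up parameters
TARGET = "".join(random.choice(ALPHABET) for _ in range(L))  # the "environment"

def fitness(seq):
    # correlation with the environment: number of sites matching the target
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    return "".join(random.choice(ALPHABET) if random.random() < MU else c
                   for c in seq)

def mean_site_entropy(pop):
    # average Shannon entropy per site across the population, in bits
    total = 0.0
    for i in range(L):
        for c in ALPHABET:
            n = sum(seq[i] == c for seq in pop)
            if n:
                p = n / len(pop)
                total -= p * math.log2(p)
    return total / L

pop = ["".join(random.choice(ALPHABET) for _ in range(L)) for _ in range(N)]
print(f"initial entropy: {mean_site_entropy(pop):.2f} bits/site")  # near 2.0
for _ in range(GENERATIONS):
    survivors = sorted(pop, key=fitness)[N // 2:]  # keep the fitter half
    pop = [mutate(random.choice(survivors)) for _ in range(N)]
print(f"final entropy:   {mean_site_entropy(pop):.2f} bits/site")  # far below 2.0
```

Remove the selection step (reproduce from the whole population instead of the fitter half) and the entropy stays near its maximum; variation plus selection is what produces the decrease.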

I am not the only one who has reached this obvious conclusion:

Andya Primanda addresses the question “Can mutations increase information content?” from Chapter 3 of The Evolution Deceit by Harun Yahya.

Some excellent websites that expand on the material presented here:

Adami: Evolutionary Biology and Biocomplexity

and

ev: Evolution of Biological Information

A recent paper which identifies some problems with Schneider’s approach can be found here. Despite the problems, the authors recover most of the same conclusions.

Empirically, it has been observed in several cases that the information content of transcription factor binding site sequences (Rsequence) approximately equals the information content of binding site positions (Rfrequency). A general framework for formal models of transcription factors and binding sites is developed to address this issue. Measures for information content in transcription factor binding sites are revisited and theoretic analyses are compared on this basis. These analyses do not lead to consistent results. A comparative review reveals that these inconsistent approaches do not include a transcription factor state space. Therefore, a state space for mathematically representing transcription factors with respect to their binding site recognition properties is introduced into the modelling framework. Analysis of the resulting comprehensive model shows that the structure of genome state space favours equality of Rsequence and Rfrequency indeed, but the relation between the two information quantities also depends on the structure of the transcription factor state space. This might lead to significant deviations between Rsequence and Rfrequency. However, further investigation and biological arguments show that the effects of the structure of the transcription factor state space on the relation of Rsequence and Rfrequency are strongly limited for systems which are autonomous in the sense that all DNA binding proteins operating on the genome are encoded in the genome itself. This provides a theoretical explanation for the empirically observed equality.
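
To give a feel for what Rfrequency measures: Schneider defines it as the number of bits needed to single out the γ binding sites among the G possible positions in the genome, Rfrequency = log2(G/γ). A back-of-the-envelope sketch, with a genome size roughly that of E. coli and a made-up number of binding sites:

```python
import math

G = 4.7e6    # possible binding positions (roughly the E. coli genome size)
gamma = 50   # hypothetical number of sites the transcription factor must find

R_frequency = math.log2(G / gamma)  # bits needed to locate the sites
print(f"Rfrequency = {R_frequency:.1f} bits")  # about 16.5 bits
```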


121 Comments

TY, that was excellent!

Pim wrote:

Since we have seen some poorly argued claims about entropy and its relevance to evolution, I will explore the concepts of entropy as they apply to genome evolution and show that the evidence demonstrates that simple processes like variation and selection are sufficient to explain the evolution of complexity, or information/entropy, in the genome.

Unfortunately, neither entropy, complexity, nor information has anything at all to do with evolution. So I guess you could call this a “red herring”. If you explained the evolution of *organization* (not order), now that would really be something!

As for the rest of your post, very interesting. Instead of answering each point, allow me to refer you to my prior responses to this issue:

http://tinyurl.com/2sk5c

The search term is “Nelson’s Law”

Contrary to Charlie’s suggestion that the issues are a red herring, let me point out that it is the ID movement that is claiming that evolutionary mechanisms cannot explain the origin of information and complexity. Is Charlie suggesting that we blame Dembski for introducing the concept of entropy/information? Or, in this case, is Charlie arguing that Jerry’s comments are irrelevant?

As far as organization is concerned, scale-free networks and gene duplication all help us understand such issues as modularity, degeneracy, robustness, and evolvability.

The real red herring may be the suggestion that the issue is one of ‘organization’. But let’s focus on the issue at hand in this thread which addresses the arguments by ID proponents about information/entropy.

Unfortunately, neither entropy, complexity, nor information has anything at all to do with evolution. So I guess you could call this a “red herring”.

Thermodynamics has nothing to do with biology? Call me an uninformed, ignorant layman–and you should, because I am–but that bald assertion strikes me as more than a little, um, radical.

Oh, and replace “biology” with “evolution.” Duh.

Pim wrote:

let me point out that it is the ID movement that is claiming that evolutionary mechanisms cannot explain the origin of information and complexity.

I don’t represent the “ID movement”, I don’t speak for the “ID movement”, I don’t defend anyone’s views but my own. I don’t know Bill Dembski, I’ve never read his books and I could care less what he thinks. I speak on my own behalf, and I defend what *I* say.

Charlie, on Nelson’s Law, Wrote:

Which is exactly why I have put forth Nelson’s Law. It separates out this problem and allows it to stand alone on its own merits. Nelson’s law involves logical entropy and separates it from thermodynamic entropy. It measures the disorder of a system and is a pure number, with no units. Life can be described as organization and Nelson’s Law states that “things do not organize themselves”. The evolution of life involves an increase in organization, a decrease in logical entropy. Nelson’s law forbids this. Things cannot organize themselves without input from outside. You cannot seek refuge from this dilemma by taking advantage of the confusion between the two forms of entropy.

Nor can one take refuge from the logical and practical answer to Nelson’s law, namely that the input from outside is exactly what natural selection is all about. In fact the entropy in the genome can be shown to be linked to the correlation between the genome and the environment, and natural selection tends to increase this correlation, thus reducing the entropy.

Simply arguing that it looks like a machine and that it requires input from the outside does not help eliminate evolutionary processes as ‘designers’. In fact it strengthens the proposed evolutionary mechanisms.

Nelson’s law is nothing much different from an appeal to entropy, with the same fallacies. Its ‘prediction’ that complex machines require intelligent design is meaningless when intelligent design cannot exclude natural processes as its designer.

charlie Wrote:

I don’t represent the “ID movement”, I don’t speak for the “ID movement”, I don’t defend anyone’s views but my own. I don’t know Bill Dembski, I’ve never read his books and I could care less what he thinks. I speak on my own behalf, and I defend what *I* say.

That is all nice, but this thread is *not* about Charlie; it is about the confusions exhibited by ID proponents when it comes to entropy and evolution. If you want to be included, fine, and I appreciate that your comments help make my point that ID proponents such as Dembski and Jerry are ‘misguided’.

I was fairly amazed by Chronos’ assertion that “With every generation in homo sapien, entropy increases in the genome.”

Now my thermodynamics is a bit rusty, but wouldn’t that mean our offspring would progressively degenerate into piles of primordial goo? Isn’t the continued presence of life predicated on at least a zero net change in “entropy”?

And does anyone actually take this seriously?

A good explanation of problems with common creationist arguments based on the SLOT can be found here or here

Pim wrote:

Nor can one take refuge from the logical and practical answer to Nelson’s law, namely that the input from outside is exactly what natural selection is all about. In fact the entropy in the genome can be shown to be linked to the correlation between the genome and the environment, and natural selection tends to increase this correlation, thus reducing the entropy.

With all due respect, this is not true. How do you measure the entropy in the genome and how do you demonstrate that natural selection reduces this entropy? There is no evidence that natural selection can “design” anything. All it can do is change the frequency of already existing variation.

Its ‘prediction’ that complex machines require intelligent design is meaningless when intelligent design cannot exclude natural processes as its designer.

There is no empirical evidence, either observational or experimental that supports the idea that natural processes are capable of design.

I think I have to agree that the issue of entropy in evolution is a bit of a red herring. First of all (and most importantly), the order of nucleic acids in the genome is not actually subject to the laws of thermodynamics. It certainly has aspects that bear a resemblance to familiar concepts in thermo, but others are unfamiliar. A current major area of research in physics is to develop a framework to describe certain nonequilibrium phenomena (of which evolution is an example) with familiar concepts of thermodynamics and statistical mechanics.

I want to be clear that I’m not saying there isn’t some definition of “entropy” that applies to evolution and maybe it always increases or decreases or whatever - I think the hope is that there *is* such a thing, actually - I’m only saying that you can’t just lift thermodynamics and apply it to anything you want.

Besides which, increasing entropy is not really associated with increasing disorder at all. That’s just an analogy. There are several famous examples of situations where increasing entropy INCREASES ORDER.

There is no empirical evidence, either observational or experimental that supports the idea that natural processes are capable of design.

For the love of aliens, Charlie, could you at least make the slightest effort to express your own beliefs accurately?

Numerous people have gone around the bend with you numerous times regarding the same issue.

What Charlie really means to say is “No evidence shown to me, alone or in combination with any or all other evidence, will convince me that natural selection is responsible for the diversity of organisms and structures, including all known extinct organisms and all known living organisms.”

This statement, Charlie, unlike yours, is honest and forthright.

The statement you made, highlighted above, merely begs numerous questions, such as what you mean when you use terms such as “empirical,” “evidence,” “observational,” “experimental,” “natural processes” and “design.”

Woe be unto those who would ask you to define these terms in an effort to understand how anyone could be so deluded to assert that **no evidence exists** to support evolution by natural selection.

Woe be unto those!!!

Charlie Wrote:

There is no empirical evidence, either observational or experimental that supports the idea that natural processes are capable of design.

Of course this is an argument based on personal incredulity, contradicted by the actual empirical evidence that shows how natural selection and variation (in other words, natural processes) are capable of ‘design’. Charlie may surely argue that he is unfamiliar with the scientific evidence, but that is hardly of any interest to me.

Charlie, continuing his strawman, Wrote:

There is no evidence that natural selection can “design” anything. All it can do is change the frequency of already existing variation.

Charlie, until you accurately represent evolutionary processes, that is, variation AND selection, your comments have to be rejected. In fact, your argument supports my claim, namely that by changing the frequency of already existing variation, natural selection does increase the correlation of the genome with the environment.

So yes, there is theoretical, empirical, observational, and experimental evidence to support these claims.

Chris Wrote:

I think I have to agree that the issue of entropy in evolution is a bit of a red herring. First of all (and most importantly), the order of nucleic acids in the genome is not actually subject to the laws of thermodynamics.

Which is why I use the Shannon entropy.

chris Wrote:

There are several famous examples of situations where increasing entropy INCREASES ORDER.

I would be interested in some references to these famous examples. But the discussion is not about order but rather about information/complexity and the claims made by ID proponents as to the limitations of natural processes to increase information/complexity.

Thank you, Town Crier. I was just about to slash my wrists. Christ I’m sick of this “design” crap.

Unless the designer of an organism can suspend the laws of nature, he or she remains subject to the Second Law. If naturally evolved organisms are impossible because of thermodynamic considerations, designed organisms are no less impossible.

There is much confusion over the claim that in Shannon information theory entropy/uncertainty is a measure of information. In reality, “information” is defined as “a measure of the decrease of uncertainty at a receiver.” (See here and here.)

Imagine that we are in communication and that we have agreed on an alphabet. Before I send you a bunch of characters, you are uncertain (Hbefore) as to what I’m about to send. After you receive a character, your uncertainty goes down (to Hafter). Hafter is never zero because of noise in the communication system. Your decrease in uncertainty is the information (R) that you gain.

Since Hbefore and Hafter are state functions, this makes R a function of state. It allows you to lose information (it’s called forgetting). You can put information into a computer and then remove it in a cycle.

Many of the statements in the early literature assumed a noiseless channel, so the uncertainty after receipt is zero (Hafter=0). This leads to the SPECIAL CASE where R = Hbefore. But Hbefore is NOT “the uncertainty”, it is the uncertainty of the receiver BEFORE RECEIVING THE MESSAGE.
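
A minimal numeric sketch of this point, with symbol probabilities invented for illustration:

```python
import math

def uncertainty(probs):
    # Shannon uncertainty H, in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Receiver's uncertainty about a 4-symbol alphabet before reception:
H_before = uncertainty([0.25, 0.25, 0.25, 0.25])  # 2.0 bits
# Residual uncertainty after reception, nonzero because of noise:
H_after = uncertainty([0.90, 0.04, 0.03, 0.03])   # about 0.63 bits

R = H_before - H_after  # information gained, about 1.37 bits
print(f"R = {R:.2f} bits")
# Only in the noiseless special case (H_after = 0) does R equal H_before.
```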

About “Nelson’s Law”:

From the URL (and its “destinations”) - “Life can be described as organization and Nelson’s Law states that “things do not organize themselves”. The evolution of life involves an increase in organization, a decrease in logical entropy. Nelson’s law forbids this. Things cannot organize themselves without input from outside. “

I don’t know what “Nelson’s Law” really is, but Charlie (or anyone else reading this) can refute the characterization seen in this thread in their own kitchen. Pour some salad oil in a bottle, add some water (or vinegar), and mix thoroughly. Then let the mixture (which should be the disorganized state that “Nelson’s Law” predicts will be the final state) sit - untouched, completely isolated from all influences. (Heck, we can be really anal and put it in total darkness.)

We all know what will happen - the completely disorganized mixture will spontaneously, completely of its own accord, without any input of energy, information, design, or any other influence, organize into two perfectly-separated phases. The “thing” will most definitely “organize itself”. It’ll happen each and every time, without fail.

What’s really neat is that the same chemical principles underlie the majority (IMO, at least) of organization in biology.

Art,

Nelson’s Law is a tautology that Charlie came up with and named after his middle name.

Art wrote:

We all know what will happen - the completely disorganized mixture will spontaneously, completely of its own accord, without any input of energy, information, design, or any other influence, organize into two perfectly-separated phases. The “thing” will most definitely “organize itself”

I don’t expect that you’ve read *all* of the messages I wrote in talk.origins, but if you had, you would have seen that I carefully explained the difference between “order” and “organization”. These terms are sometimes interchanged and some confusion occurs, which is why I carefully defined what I meant. Your example with the oil and vinegar is an example of an increase in order, which can and does occur naturally, as you point out, without intelligent intervention. In this case, order is defined as “a condition of logical or comprehensible arrangement among the separate elements of a group”. There are many such examples, from ice crystals forming to the separation of immiscible liquids by density. But organization is another matter. I define it as “a system made up of elements with varied functions that contribute to the whole and to collective functions of the system”. Here are two examples of my views:

“The only difference I see is between living systems and non-living systems. Non-living systems do not adapt means to ends, they do not adapt structure and process to function and they do not self-organize. And one must be careful not to confuse organization with order. There’s a lot of talk about ordered systems in the non-living world, snowflakes, tornadoes, etc. but this is not the issue. Living systems are beyond order, which is simply a condition of logical or comprehensible arrangement among the separate elements of a group. Like putting files in alphabetical order or using a sieve to separate items by size. Organization is a much different structure in which something is made up of elements with varied functions that contribute to the whole and to collective functions, such as exist in living organisms. Ordered systems can result from non-intelligent processes, as has been seen many times and cited by the examples given. But organized systems require intelligent guidance. They need to be put together with intent and their assembly requires insight. They need to be the product of intelligence because it is necessary to determine if they are functioning properly and that can only be achieved by insight. Since living systems display organization, they display means adapted to ends and structures and processes assembled to perform specific functions, it becomes self-evident that they are the product of a higher intelligence.”

And,

“a mousetrap has a quality called organization, which is much different from complexity or order. Each part of the mousetrap, the platform, the holding bar, the spring, the hammer and the catch each have specific functions. And each of these functions is organized in such a way that it supports the other functions and the overall function of the mousetrap, which is to catch mice. The function of the platform is to hold the parts, but it’s there ultimately to facilitate the process of mouse catching. The function of the spring is to exert a force on the hammer, but its ultimate goal is to enable the process of mouse catching. All of the parts have functions that not only support the other functions, but ultimately support the overall function of the device. This type of organization is not obtainable without insight, and insight always requires intelligence. There is no way that these parts could be assembled in such a manner without insight. A mousetrap is a simple machine, made up of several structures and processes and exists for a purpose. The construction of the mousetrap was initiated with intent, and fashioned for a purpose. Living organisms are similarly machines, with structures and processes that work together to create a function. In fact, all complex, highly organized machines in which means are adapted to ends are the product of intelligent design. The important point is that the adaptation of means to ends, the adaptation of structure and process to function, requires insight. A mousetrap is unevolvable without intelligent input, not because you can’t take it apart without it losing its function, it’s unevolvable because you can’t put it together in the first place using only random, non-directed, accidental occurrences. The selection of the parts, the configuration in which they’re aligned, the assembly into one unit all require intelligent decisions at every step of the way. Similarly, living organisms show the same characteristics. It’s not that you can’t remove parts and lose total function, it’s that you can’t explain why these particular parts were selected, why they’re integrated together in just such a way and how they were assembled from raw materials without invoking an intelligent agent.”

Reed wrote:

Nelson’s Law is a tautology that Charlie came up with and named after his middle name.

Go to my website and scroll down to the very bottom, right corner where you will find my middle name. It is not Nelson. Marshall Nelson is my pen name. I did, in fact, name the Law after myself. And Nelson’s Law is NOT a tautology, it is a falsifiable scientific law.

Charlie said:

Organization is a much different structure in which something is made up of elements with varied functions that contribute to the whole and to collective functions

Ah, such a clear and useful definition of “organization”! Come to think of it, Charlie, you never were able to explain why Great White Wonder’s swimming hole didn’t satisfy your definition of “organization.” Would you like to take a stab at that now?

Or you could simply endorse what I stated in my earlier post and spare us the backpedaling.

Charlie Wrote:

Go to my website and scroll down to the very bottom, right corner where you will find my middle name. It is not Nelson. Marshall Nelson is my pen name. I did, in fact, name the Law after myself.

My bad; when looking on T.O., I thought I saw you say that it was named after your middle name.

And Nelson’s Law is NOT a tautology, it is a falsifiable scientific law.

It is so a tautology: (to paraphrase your logic) “Everything that we’ve observed made by mankind did not arise without intelligent input.”

Your use of “falsifiability” is also flawed. “Falsifiability” does not make something true, only testable. Only after repeatedly failing to falsify a “falsifiable” claim is it considered valuable. I have yet to see “Nelson’s Law” tested on biological entities. Therefore, your attempts to say “Nelson’s Law prevents X” are invalid.

Reed wrote:

It is so a tautology: (to paraphrase your logic) “Everything that we’ve observed made by mankind did not arise without intelligent input.”

Reed, that’s an incorrect characterization of my “logic”. Perhaps it would be better to just quote me, rather than trying to paraphrase my logic. I said:

Nelson’s law is not an abstract conceptualization, it has clear and easily observable qualities, chief among which is the adaptation of means to ends. It also includes the relatedness of structure and function and the adaptation of structures and processes for the purpose of accomplishing the function. All of the aspects of a refrigerator meet these criteria. The physical construction of the cabinet, compressor, heat exchanger, etc. exists for the purpose of carrying out the cooling function. The process of compression and expansion of gases in the refrigerator is adapted to the function of cooling as well. All of these aspects are assembled in such a way as to work together, each part and process performing a specific function that contributes to the overall goal of cooling. All known entities that we have observed, that have the above-mentioned qualities, are the products of intelligent design. Living organisms also have these qualities and possess the same inherent properties as a refrigerator, or any other functional machine. My conclusion, therefore, is that they also were designed for a purpose. Nelson’s Law clearly states that entities of this nature, fitting this description, and having these properties and qualities, do not create themselves without the benefit of intelligent intervention.

Nelson’s law is not an abstract conceptualization, it has clear and easily observable qualities, chief among which is the adaptation of means to ends. It also includes the relatedness of structure and function and the adaptation of structures and processes for the purpose of accomplishing the function.

Charlie, you are clearly refusing to address the fact that your definition of organization means that swimming holes and beaches are intelligently designed (even those without tire swings or showers).

Your refusal to answer can only be taken as an admission that your theory is, at best, half-baked.

Charlie Wrote:

Reed, that’s an incorrect characterization of my “logic”. Perhaps it would be better to just quote me, rather than trying to paraphrase my logic.

Here is what you state in your quote:

All known entities that we have observed, that have the above-mentioned qualities, are the products of intelligent design.

The “all known entities” that you cite are human creations. Thus “Everything that we’ve observed made by mankind did not arise without intelligent input.” is an accurate paraphrase of your logic.

does it seem to anyone else that naming a made-up “law” after one’s own self-assigned nickname is an even more pretentious social faux pas than simply naming it after oneself?

not, of course, that that alone is any argument against the “law” in question. nonetheless, bad taste, n’est-ce pas?

as for the “law” itself, i cannot say with certainty that it is flawed - frankly, it’s too vaguely and verbosely stated for me to make a whole lot of sense of it. however, it does occur to me that - if it is to be of any use - it really ought to be able to determine whether or not the Oklo reactors were intelligently designed. could anybody with a better sense of what Charlie’s trying to say perhaps take a stab at working out whether it does or not?

Charlie,

Thanks for the clarification (I think). But I don’t buy your distinction between “order” and “organization”. At least in its entirety.

But even if I did - I think it’s easy to see how storms (tornadoes, hurricanes, etc.) are organized according to some of your rules (those that do not reduce your terminology to tautology), and I could probably dissect the oil-water system into an organized, as opposed to ordered, one as well.

So, either way, I don’t see how “Nelson’s Law” is of any particular use, except as a vehicle to wander into semantic hair-splitting. It’s much easier (and more correct) to accept that SLOTs, regardless of their derivation, simply cannot rule out evolution.

LOL … This has got to be the silliest thing I’ve ever seen posted on the Net on “thermodynamics”– To start with it’s genetics and has nothing to do with anything I have ever discussed concerning devolution of the genome.

What’s even more telling is the rest of the forum is like cool, go man, go . ….we are really doing some science here. PvM hasn’t done a thing here with this nonsensical ‘math’ and I’ve seen him laughed out of other forums with this same stuff. Observe:

*******The approach is very simple first assume a genome with site i which has the following probabilities for the four nucleotides involved******

I did not assume ANY genome. I introduced a study by evolutionary biologists where all deleterious mutations were already identified. There is no probability involved with that. Sheeeze.

******One can show that the entropy for this site can be calculated to be******

LOL … OK, what is it, PvM? People he didn’t calculate any entropy. He threw out an empty formula with no numbers in it and thinks he has calculated entropy. And everyone else: Hey good job, man. I understand this now.

*****And the entropy tendency or information can be defined as*****

You didn’t define anything and again you didn’t calculate anything because there are no numbers in your formula.

******Now sum over all sites i and you find that the complexity or information is given by******

You cannot sum over anything! You don’t have any figures in this formula to sum over.

Of course, he then graphs all of this stuff he never calculated to start with and he does this before he doesn’t calculate anything.

LOL … You people don’t know he is just cutting and pasting stuff he doesn’t even understand?

******I was fairly amazed by Chronos’ assertion that “With every generation in homo sapien, entropy increases in the genome.”*******

Don’t believe anything Francis (PvM, the very confused YECer) posts. He’ll have you so confused you won’t know whether you’re coming or going because he is totally confused.

This came not from me (Chronos) but from a peer-reviewed study published in Nature by two well-respected evolutionary biologists named Eyre-Walker of Sussex University and Keightley of Edinburgh.

http://homepages.ed.ac.uk/eang33/

They found in this study that in man’s evolutionary walk from Chimp, the genome devolved at the rate of 1.6 deleterious mutations per generation AFTER natural selection had weeded out other of these mutations. This is the number that accumulate in the genome. But normally when we discuss this study, the figures are rounded off to two.

Here is the abstract:

High genomic deleterious mutation rates in hominids.

Eyre-Walker A, Keightley PD.

Centre for the Study of Evolution and School of Biological Sciences, University of Sussex, Brighton, UK.

It has been suggested that humans may suffer a high genomic deleterious mutation rate. Here we test this hypothesis by applying a variant of a molecular approach to estimate the deleterious mutation rate in hominids from the level of selective constraint in DNA sequences. Under conservative assumptions, we estimate that an average of 4.2 amino-acid-altering mutations per diploid per generation have occurred in the human lineage since humans separated from chimpanzees. Of these mutations, we estimate that at least 38% have been eliminated by natural selection, indicating that there have been more than 1.6 new deleterious mutations per diploid genome per generation. Thus, the deleterious mutation rate specific to protein-coding sequences alone is close to the upper limit tolerable by a species such as humans that has a low reproductive rate, indicating that the effects of deleterious mutations may have combined synergistically. Furthermore, the level of selective constraint in hominid protein-coding sequences is atypically low. A large number of slightly deleterious mutations may therefore have become fixed in hominid lineages.

http://www.ncbi.nlm.nih.gov/entrez/[…]uids=9950425

Those who don’t care to read through the science can find a good read here:

http://www.open2.net/truthwillout/e[…]n_walker.htm

Donald A. Syvanen Wrote:

From an chemical engineering undergraduate student…

Now that an undergrad has deigned to tell the authors of his textbooks that they don’t understand thermodynamics, I’m sure we shall all sleep better at night.

Donald A. Syvanen Wrote:

God does not need to keep all the laws of thermodynamics like we do.

So, you admit that creationism is a priori untestable and therefore not a topic of scientific investigation. Glad to see that you agree with us.

Welcome to the bigs, Donald.

Neil

About this Entry

This page contains a single entry by PvM published on May 25, 2004 9:22 AM.
