A quick explanation of the Wasserstein metric


Note that Dembski has uploaded a revised manuscript which now correctly attributes the measure to Rényi and thanks the many critics for their contributions.

I am not a mathematician, but let me give it a try; others can amend and revise my comments.

The Kantorovich/Wasserstein distance metric is also known under such names as the Dudley, Fortet-Mourier, and Mallows metric, and is defined as follows:

$$d_p(F, G) = \left( \min E\,|X - Y|^p \right)^{1/p}$$

where $E$ refers to the expectation of the random variable and the minimum is sought over all random variables $X$ which take the distribution $F$ and random variables $Y$ which take the distribution $G$. Equivalently,

$$d_p(F, G) = \left( \min_{\mu \in \Gamma(F, G)} \int |x - y|^p \, d\mu(x, y) \right)^{1/p}$$

where $\Gamma(F, G)$ is the set of all joint distributions of random variables $X$ and $Y$ whose marginal distributions are $F$ and $G$.
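For one-dimensional empirical distributions this is straightforward to compute. Here is a minimal Python sketch (an illustration, not from the sources above), using SciPy's wasserstein_distance, which handles the $p = 1$ case, on two arbitrary normal samples:

```python
# A minimal sketch: the p = 1 Wasserstein (Mallows/Kantorovich) distance
# between two one-dimensional empirical distributions, via SciPy.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1000)  # sample with distribution F
y = rng.normal(loc=0.5, scale=1.0, size=1000)  # sample with distribution G

# In one dimension the optimal coupling pairs the sorted samples, so this
# equals the area between the two empirical CDFs.
print(wasserstein_distance(x, y))  # close to the mean shift of 0.5
```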

These metrics define a 'distance' between two probability distributions and are among the many such metrics that have been mathematically defined. There is a good survey of many of these metrics, On Choosing and Bounding Probability Metrics. Different circumstances call for different distance metrics.
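To illustrate the difference with a standard example (not from the post): let $F$ put all its mass at $0$ and $G$ put all its mass at a small $\epsilon > 0$. The total variation distance between $F$ and $G$ equals $1$ no matter how small $\epsilon$ is, while the Wasserstein distance equals exactly $\epsilon$, since the optimal coupling simply moves the unit mass a distance $\epsilon$. The Wasserstein metric thus measures how far mass has to move, which is why it is well suited to distributions that live on a common metric space.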

These metrics have found applicability in non-linear equations, variational approaches to entropy dissipation, phase transitions and symmetry breaking in singular diffusion, random walks, Markov processes, and more. Needless to say, these metrics are quite commonly applied in a variety of areas. Applications of this metric to Markov processes may be of interest to evolutionary theory.

Adapted from "Central limit theorem and convergence to stable laws in Mallows distance"

Another way of looking at this is by assuming one has two samples $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_n)$ of the same size $n$. The Mallows distance between the empirical distributions is

$$d_p(X, Y) = \left( \min_{\pi} \frac{1}{n} \sum_{i=1}^{n} |X_i - Y_{\pi(i)}|^p \right)^{1/p}$$

where the minimum is taken over all possible permutations $\pi$ of $\{1, \ldots, n\}$.
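As a sanity check (a standard fact about the one-dimensional case, not something from the sources above), the permutation minimum is attained by matching the sorted values. A small brute-force sketch in Python:

```python
# Brute-force the permutation minimum for a toy sample and confirm it
# matches the sorted pairing, which is optimal in one dimension.
from itertools import permutations

x = [0.3, 1.7, 0.9, 2.4]
y = [1.1, 0.2, 2.0, 1.5]
n, p = len(x), 2

brute = min(
    sum(abs(xi - y[j]) ** p for xi, j in zip(x, perm)) / n
    for perm in permutations(range(n))
) ** (1 / p)

sorted_match = (
    sum(abs(a - b) ** p for a, b in zip(sorted(x), sorted(y))) / n
) ** (1 / p)

assert abs(brute - sorted_match) < 1e-12
print(brute)  # the empirical Mallows distance for p = 2
```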

Rachev, S. T. (1984), The Monge-Kantorovich problem on mass transfer and its applications in stochastics, Theor. Probab. Appl., 29, 647-676.

As far as some interesting applications are concerned, see:

Minimal Entropy Probability Paths Between Genome Families

6 Comments

PvM,

Thanks for that explanation. I quit math after linear algebra and differential equations, which means that I am not a mathematician either. I am still trying to figure out why Dembski brought this up in his reply. I am also completely unsure why this method has more probabilistic power than the Rényi equation that was discussed. I know I'm a math idiot, but can one of you demi-gods out there help on this one?

MB

As I am starting to 'understand' these issues, the Wasserstein metric induces a weak topology on the probability space. While the original Rényi measure provides what Dembski calls "variational information", it needs to be tied in with the nature of the actual path(s) taken. Thus he attempts to "coordinate the variational information with the topology of the underlying probability space".

Where is Dembski going with this? I see this as working towards a measure that may be helpful in suggesting whether "probability paths" exist. Probably to show that there exist 'irreducibly complex' systems for which probability paths may be lacking. But so far I have failed to see how any of these measures would be helpful here.

Pim, I think the broad outline of the argument is:

1 The dead ends so vastly outnumber the workable arrangements that, if evolution searches randomly, it can't find the workable ones quickly enough

2 Evolution randomly searches all possibilities

3 Therefore, it couldn’t have found all these workable arrangements

His critics seem to have said that 1 isn’t certain, but 2 is just plain wrong. Since lots of smart people like Wolpert say his arguments are failures, I haven’t bothered to study them in depth myself. Life is short.

that paper on minimum entropy probability paths between genome families is pretty funny. (I figure you probably googled it and didn’t have the chance to read it).

those guys have a total lack of understanding of biology. their basic idea is to take a DNA sequence, compute an AGTC frequency vector, and then talk about “minimizing the entropy path integral” during a sequence of base pair substitutions en route to a destination DNA sequence.

“Minimizing the entropy path integral”?!?

where did that come from? it seems these guys really think that the overall composition is influenced *not* by horizontal transfer conditions, or, say, the rest of the genome's base-pair content (reasonable speculations)…but by the idea that base pair frequencies stay away from the entropy-maximizing (.25, .25, .25, .25) during the move from one sequence to another. DUBIOUS, to say the least. it's more likely that base pair frequencies will track the genome content as a whole, or that of the local area of the chromosome.

But the best part of the whole paper is when they claim that the *advantage* of their approach is that "this allows us to compare sequences based on their composition as a whole, rather than by sequence alignment". jeeez…as if throwing away all the order information is going to *improve* your understanding of evolutionary relationships between sequences! In these guys' world, AGTC and ACTG are equivalent. Forget sequencing the genome, guys…let's just go back to Chargaff's rules!
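For concreteness, here is a minimal Python sketch of the basic objects the comment above describes (the helper names are illustrative, not from the paper): reducing a DNA sequence to an A/C/G/T frequency vector and computing its Shannon entropy, which is maximized at the uniform (.25, .25, .25, .25):

```python
# A minimal sketch of the paper's basic objects as described above;
# the helper names are illustrative, not from the paper.
from collections import Counter
from math import log

def frequency_vector(seq: str) -> list[float]:
    """Reduce a DNA sequence to its A/C/G/T frequency vector."""
    counts = Counter(seq)
    return [counts[base] / len(seq) for base in "ACGT"]

def shannon_entropy(p: list[float]) -> float:
    """Shannon entropy in bits; maximized at (.25, .25, .25, .25)."""
    return -sum(f * log(f, 2) for f in p if f > 0)

vec = frequency_vector("ACGTACGTAAAA")
print(vec)                   # [0.5, 0.1667, 0.1667, 0.1667] (approximately)
print(shannon_entropy(vec))  # about 1.79 bits, below the maximum of 2
```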

I have looked at the paper and its approaches seem quite interesting.

They state:

The variational principle which we are formulating is that the most efficient transition between probability state vectors is the probability path which minimizes the line integral of the entropy function.

An interesting working assumption, which makes sense from the perspective that an admissible path is likely not going through a stage in which the genome is at maximum entropy. It seems to me that they are looking at a genome mostly under selective pressures.
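To get a feel for what minimizing the entropy line integral means, here is a rough numerical sketch (a construction for illustration, not the paper's actual algorithm): it integrates Shannon entropy along piecewise-linear paths between two composition vectors and shows that a detour through the maximum-entropy point accumulates a larger integral.

```python
# A rough numerical sketch, not the paper's actual algorithm: integrate
# Shannon entropy along piecewise-linear paths between composition vectors.
from math import dist, log

def entropy(p):
    return -sum(f * log(f, 2) for f in p if f > 0)

def lerp(p, q, t):
    """Point a fraction t of the way along the segment from p to q."""
    return [a + t * (b - a) for a, b in zip(p, q)]

def entropy_path_integral(waypoints, steps=1000):
    """Midpoint-rule approximation of the entropy line integral."""
    total = 0.0
    for p, q in zip(waypoints, waypoints[1:]):
        seg_len = dist(p, q)
        for i in range(steps):
            total += entropy(lerp(p, q, (i + 0.5) / steps)) * seg_len / steps
    return total

start   = [0.70, 0.10, 0.10, 0.10]
end     = [0.10, 0.10, 0.10, 0.70]
uniform = [0.25, 0.25, 0.25, 0.25]

print(entropy_path_integral([start, end]))           # direct path
print(entropy_path_integral([start, uniform, end]))  # detour: larger integral
```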

…measures. Preferred functions will take into account the 'conserved nature' of the distributions. A distribution is least conserved when each of the t possible alphabet characters occurs with frequency 1/t and more conserved when one or two alphabet characters dominate. Two distributions should be judged closer if they share the same conserved characters than otherwise. A simple example of a function which disregards conserved nature is variational distance, the sum of differences in frequency for each letter in the distributions.
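A quick numerical check of that last point, with made-up composition vectors: under variational distance, two strongly conserved distributions that conserve different characters come out farther apart than a conserved distribution and the uniform one.

```python
# A quick check of the quoted point; the composition vectors are made up.
def variational(a, b):
    """Variational distance: sum of per-letter frequency differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

p = [0.85, 0.05, 0.05, 0.05]  # conserved on the first character
q = [0.05, 0.85, 0.05, 0.05]  # conserved on the second character
u = [0.25, 0.25, 0.25, 0.25]  # least conserved (uniform)

print(variational(p, q))  # 1.6: two conserved distributions judged far apart
print(variational(p, u))  # 1.2: p judged *closer* to the uniform distribution
```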

They then point out that a variety of entropy-based distance measures have been proposed, but most of them seem to suffer from a variety of problems.

The idea that a probability path will avoid randomizing the genome is quite interesting. Perhaps Hmmm can help us understand his objections better?

See also "A new distance measure for comparing sequence profiles based on path lengths along an entropy surface" by Gary Benson, which explains the motivations for this new measure.

