The Privileged Planet Part 2: The failure of the ‘Design Inference’

[size=200%]The Design Inference[/size] Gonzalez et al appeal Dembski's Design Inference to show how the correlation of habitability and measurability shows evidence of 'purpose' in the universe. Various people such as Wein, or various authors on, have already shown what is wrong with the design inference. I will limit my comments to the claims by Gonzalez et al to show that their appeal to the design inference is inappropriate.
[size=150%]A quick review of the design inference[/size]

Basically, the 'Design Inference' attempts to identify events that combine low probability (or, in Dembski's somewhat confusing language, high complexity) with a specification (an independently given pattern). Such events are claimed to be 'designed'. Low probability is established through the elimination of regularity and chance processes; specification is provided through an independent description of the event. It is important to realize that the 'Design Inference' is a purely eliminative argument, with all the problems of such an approach, such as 'gap arguments' and 'arguments from ignorance'.

Del Ratzsch explains the limitations of the 'Design Inference':

"I do not wish to play down or denigrate what Dembski has done. There is much of value in the Design Inference. But I think that some aspects of even the limited task Dembski set for himself still remain to be tamed." "That Dembski is not employing the robust, standard, agency-derived conception of design that most of his supporters and many of his critics have assumed seems clear."

From: Del Ratzsch, "Nature, Design and Science"

Dembski's "No Free Lunch" received similar criticism from William Wimsatt (Paul Nelson's dissertation director, whose name was also found on the cover of Dembski's 'Design Inference'). Wimsatt responded to an announcement of Dembski's latest book, 'No Free Lunch', as follows:

I could not in conscience fail to respond to the ad for Bill Dembski's new book, "No Free Lunch", and to the general tenor of the political push generated either within or by others using the so-called "intelligent design theory". This is not a theory, but a denial of one, and a denial whose character is widely misrepresented, at least in the press.

and

Unfortunately "popular" presentations of "Intelligent Design" have tended to give the impression that it rested solely on mathematical demonstrations.
Anyone who could have succeeded in showing that natural selection is incapable of generating biological structures according to standards from mathematics or logic would have constructed a mathematical proof that would have dwarfed Gödel's famous undecidability theorem in importance. As one who read Dembski's original manuscript for his first book, found much to like in it, and had appreciative remarks on the dust jacket of the first printing, I can say categorically that Dembski surely has shown no such thing, and I call upon him as a mathematician to deny and clarify the implications of this advertising copy.

In addition to Ratzsch's and Wimsatt's critical remarks about the 'Design Inference', Richards himself seems to consider the 'Design Inference' to be of limited use:

We think there are lots of events and structures for which we are rational in concluding "intelligent design," but for which it is impossible (or really hard) to run a probability on them. If we had to do so to infer design, we would almost always be unjustified in inferring design. For instance, I still don't know how to run a probability on Stonehenge or the black monolith in 2001: A Space Odyssey. Still, I think both are designed, and I think we're rational in so concluding.

So in order to appeal to the 'Design Inference', Gonzalez et al. have to show that the events are of low probability and that there is an independent specification. My claim is that they have failed in all aspects to make a convincing case. In fact, I intend to show that the authors are aware of the limitations of their use of Dembski's 'Design Inference' yet still appeal to it to draw their conclusions of 'purpose'.

[size=150%]The Improbability Fallacy[/size]

Gonzalez et al. attempt to make their case for the design inference by first pointing out that "complexity is improbability" (finally, a clear definition of Dembski's version of complexity) and then arguing that:

1. The conditions that allow for habitability are improbable.
2. The conditions that allow for measurability are improbable.

[size=125%]Habitability and probability[/size]

But is this correct? Are the conditions that allow for habitability improbable? Dembski requires a probability below 10^-150 before a design inference can be triggered. In addition, Dembski requires the elimination of chance and regularity pathways, while Gonzalez et al. only consider chance pathways. Richards argues: "But what if you want to know if, say, the structure of the natural laws themselves, or the cosmos as a whole, is the result of purpose or design? Well, you'll at least need to modify Dembski's approach." In other words, when Gonzalez et al. infer 'purpose' they include natural law as the 'designer'.

Richards seems to recognize the challenges and limitations of the 'Design Inference' approach when stating on ISCID that:

We view Dembski's arguments as a valuable rational reconstruction for capturing an important subset of designed structures. We also think he makes a critical insight that design detection has as much to do with pattern recognition as with probabilities. However, Dembski's reconstruction is optimized to avoid false positives, not to allow a design inference for all discernibly designed structures. So it shouldn't be treated as a Procrustean Bed into which we have to fit everything that's discernibly designed.

In that light Richards states: "Also, while we use Dembski's criteria for detecting design, we don't depend on them exclusively. We also draw on the work of John Leslie, Del Ratzsch and others." My gripe with this is that despite these caveats, Dembski's 'Design Inference' is given a central position in arguing for 'purpose', both in public presentations and in the book itself.

Do we know enough about the parameters for habitability to be able to state that the conditions for habitability are improbable?
I claim that, at most, one can argue that with our present knowledge we simply do not know. It is certainly far too early to argue that habitability is improbable. In addition, habitability is defined in terms that treat our own situation preferentially. In other words, habitability describes a 'terrestrial' planet, not too warm, not too cold, with water and oxygen. Add some ad hoc requirements like those used by Gonzalez et al. (a large moon, plate tectonics, a large outer planet to deflect some of the incoming threats, etc.) and one can make a case for the uniqueness of the earth. But a similar argument could be made for almost anything. By tightening the requirements for habitability to describe the present situation of our earth, one can indeed reach a small probability, but then the question becomes one of prediction. Is the earth unique, or have we forced upon it enough criteria to make it appear unique? Has habitability become a description rather than a prediction? In other words, are the requirements set for habitability that lead to a low probability actual requirements, or 'ad hoc' descriptors of our own situation? Are we making our own earth 'privileged'? If that is the case, the outcome is not surprising, but any claim of 'purpose' seems to be leaping to conclusions. While one can certainly attempt to tighten the probabilities, one should not ignore those circumstances which would loosen them, such as the fact that we can only observe a fraction of the total universe, or that habitability requirements may need to be loosened to take other forms of life into consideration. And finally, habitability is not a concept easily quantified. Arguments of improbability thus become subjective claims.
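For reference, the 10^-150 figure mentioned above is not arbitrary: Dembski derives it as the reciprocal of a generous upper bound on the number of events that could ever have occurred in the observable universe. The arithmetic behind the bound (the three estimates are Dembski's own, from 'The Design Inference') can be checked in a few lines:

```python
import math

# Dembski's universal probability bound: the reciprocal of an upper bound on
# the number of events the observable universe could ever have produced.
particles_in_universe = 10**80      # rough count of elementary particles
transitions_per_second = 10**45     # roughly the inverse of the Planck time
age_upper_bound_seconds = 10**25    # generous upper bound on the universe's lifetime

max_events = particles_in_universe * transitions_per_second * age_upper_bound_seconds
probability_bound = 1 / max_events

print(math.log10(max_events))   # 150.0, i.e. 10^150 possible events
print(probability_bound)        # 1e-150
```

Note that the bound only bites once an event's probability is estimated below this value, which is precisely what Gonzalez et al. never establish for habitability.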
[size=125%]Measurability and probability[/size]

Definition: Measurability refers to those features of the universe as a whole, and especially to our particular position in it, both in space and time, which allow us to detect, observe, discover and determine such features as the size, age, history, laws and other properties of the physical universe.

Gonzalez et al. are careful to state that they do not argue that every single condition for measurability is optimized on the Earth's surface, or that it is easy to make measurements and scientific discoveries on Earth. The argument is that the Earth allows for a "stunning diversity of measurements, from cosmology and galactic astronomy to stellar astrophysics and geophysics; and it allows for this rich diversity of measurement much more so than if Earth were ideally suited for, say, just one of these sorts of measurements." In other words, the earth is 'optimal' in a weighted, statistical sense. But is that really supportable with quantifiable evidence? I argue that the authors have failed to show that this is the case, largely because the concepts of habitability and measurability have not been given quantifiable values. The authors argue that they have avoided 'cherry picking' by taking examples from every important discipline. As I will argue, they have done so at the cost of ignoring examples that show poor measurability, and have thus failed to show that the earth is somehow unique in this respect.

So what about measurability? As with habitability, measurability lacks a workable definition, but it also suffers from a selection bias. In the case of habitability, we may make the mistake of treating factors typical of our earth as relevant to habitability in general. With measurability, the same problem arises: what we cannot measure will remain unknown. So what is the full space of measurability versus what we can actually measure on earth? How do we know we are optimal if we cannot define measurability, due to our ignorance?
In other words, either measurability is trivially true (we measure what we can measure) or it is fraught with an appeal to ignorance, since we fail to consider that which we do not know. Gonzalez et al. argue that the moon is an important factor for measurability, since 'perfect solar eclipses' allowed verification of Einstein's theory of relativity and the study of the corona. As Gonzalez et al. themselves point out, these eclipses are not really perfect, due to the eccentricity of the Earth's and Moon's orbits. How do we know whether Einstein's theory would have fared worse or better if the earth had not had 'perfect' eclipses? And if the earth is such a special place, how come astronomy had to wait for space-based platforms before it really took off? I will look in more detail at some of the examples used by Gonzalez and show that they do not support their claims.

In summary, I argue that the authors have failed to show that measurability and/or habitability are improbable. In fact, I argue that given the present status of our scientific investigations and knowledge, any such claim of improbability runs the risk of an 'appeal to ignorance'.

[size=150%]The Specification Fallacy[/size]

In addition to improbability, the Design Inference also requires a specification. Remember that one of the requirements for this specification is that it be independent of the event. According to Gonzalez, "The correlation of habitability and measurability forms a meaningful pattern", or in other words, a specification. Are they correct? Once again I have to disagree with their claim, for a variety of reasons. First of all, they have not shown that the correlation between the two is independent. In fact, the existence of a correlation suggests that these two events are NOT independent. What if measurability is strongly correlated with habitability through some form of necessity?
In fact, Gonzalez argues that the correlation between the habitability and measurability effects of the Moon can be explained by natural law. Surely appealing to natural law as the designer of 'purpose' seems to weaken their argument. In addition, the authors base their claims on a sample size of one. While the authors try to defuse this criticism by arguing that they have many examples from a variety of disciplines, their correlation argument is based on a single data point, namely the earth. No other examples of a 'habitable' environment are described. Under such circumstances one should frown upon any claim of correlation, especially when measurability and habitability are poorly defined and unquantified.

[size=125%]The Designer Ice Creams[/size]

Let me give an example that I hope will clarify some of my objections. When studying data on ice-cream sales and the number of drownings, it was found that there existed a strong correlation between the two. Using the same Design Inference approach chosen by Gonzalez et al., one might argue that drownings are improbable and, in addition, that the correlation between drownings and ice-cream sales shows evidence of a specification. In other words, one should conclude, according to these authors, "design" or purpose. While I am not sure why one would expect design in something like drownings, this example shows the risks involved in the chosen approach. And in this example, ice-cream sales and drownings were actually quantifiable; there was not much ignorance on our part as to their exact numbers. So statistically the case is much stronger than the case made by Gonzalez et al. Of course, we all 'know' that the correlation was not evidence of 'design' but was caused by a third, independent factor, namely sunshine. Since more people go out swimming on sunny days, and more people eat ice-cream on sunny days, one quickly realizes that this correlation was due to a third factor.
So what if measurability and habitability are intrinsically correlated, or correlated through a third factor? I can think of various reasons why this may be the case. As an interesting side note: in addition to drownings, correlations have been found between ice-cream sales and murders, boating accidents, and shark attacks.

[size=150%]Conflation of design and purpose[/size]

My biggest gripe is with the authors' conflation of design and purpose. While science can show function, showing purpose requires a philosophical or theological assumption, or direct or indirect knowledge of the motivations of the designer. While Dembski denies that motives, opportunity, means, etc. are indispensable for a reliable design inference, actual design inferences in archaeology, criminology and SETI depend strongly on such assumptions. Would a designer be interested in correlating habitability and measurability, and that uniquely for the earth? Perhaps, but such an argument is more easily made from theological assumptions than from unbiased ones. Without much knowledge about motives, stating that the correlation between habitability and measurability is what we would expect from a designer seems somewhat circular. In other words, when purpose is inferred based on our ignorance, we enter the dangerous realm of 'gap arguments'. Gonzalez et al. argue that a designer would be interested in having his creation learn about the world and universe they live in, and hence that measurability and habitability would be expected to correlate. But then again, there may be regularity and chance processes that result in a similar correlation. Lacking clear definitions of both habitability and measurability, and given the inherent observer bias, it is hard to argue that we can reach a design and purpose inference. After all, how can we eliminate regularity and chance when we do not even understand what factors are required for habitability and measurability?
[size=150%]Irony alert[/size]

An interesting quote from Gonzalez and Ross's article "Home Alone in the Universe" (From: First Things 103, May 1, 2000):

It is difficult to quarrel with the simple physical interpretation of the WAP (Ed: Weak Anthropic Principle): it is just a type of observer selection bias. We should not be surprised to observe, for example, that we are living on a planet with an oxygen-rich atmosphere, for the simple reason that we require oxygen to live. The WAP "explains" why we should not observe ourselves to be living on, say, Titan, but it fails to account for the origin of the oxygen in our atmosphere and hence for the rarity of planets with oxygen-rich atmospheres. However, Barrow and Tipler, no doubt motivated by the philosophical CP, have burdened the basic physical interpretation of the WAP with unwarranted philosophical extrapolations. In considering the WAP with regard to the observable universe, they claim that we ought not be surprised at measuring a universe so finely tuned for life, for if it were different, we would not observe it.

The irony of this comment about motivation and burden should not escape the reader.

Relevant links:
The Privileged Planet Part 1: Where Purpose and Natural Law Freely Mix


I’m a bit mystified as to why “perfect solar eclipses” are necessary, as opposed to a “neat” coincidence.

Measuring the deflection of a star's image by the sun would only require an eclipse, not a perfect one. And even if there weren't a moon capable of producing total eclipses, eclipse measurements were only the most technically feasible way at the time (the 1910s) to confirm Einstein's General Relativity. Another was the theory's correct prediction of Mercury's anomalous orbital precession. There have been other confirming observations since, using atomic clocks and binary pulsars. Also, Gravity Probe B is scheduled to launch later this month.

I should also point out that there is an instrument known as a coronagraph. It is a solar telescope that uses an obscuring disk to produce an artificial eclipse. That way astronomers don't have to wait for the moon to cooperate, or move an entire observatory to the narrow path of an eclipse.

Seth, your comments are right on the mark. Indeed, larger moons would work as well, in fact perhaps better, since the area of the eclipse on earth would be larger. In my next posting I will address the Einstein validation claim. Indeed, in 1915 Einstein correctly predicted (retrodicted) the precession of Mercury. This showed him that he was on the right track. But Mercury's impact on earth's habitability is hardly that impressive.

Thanks for naming the coronagraph. I had been looking for this tool ever since I was told that such an instrument existed, but without a name it was not that easy to find.

I think a larger moon would be disadvantageous for measurement, in that the deflection is greatest closest to the sun, so a larger moon would make accurate detection of the effect more difficult, if not impossible. Further, a coronagraph obscures the sun's disk but does not eliminate atmospheric scattering, i.e., it does not eliminate the blue sky that obscures the stars. However, these problems would have delayed the confirmation of general relativity only until the deployment of orbital observatories, i.e., by 80 years at most. Hardly significant.

On a different point, there is a plausible suggestion that plate tectonics is implicated in the long-term thermal regulation of the atmosphere, by recycling carbon from sea sediments to the atmosphere following extensive glaciation. However, as supernovae produce heavy elements in abundance, the conditions for producing and distributing carbon, oxygen and nitrogen into space also distribute heavy metals in abundance. This suggests that the probability of a planet having carbon-based life, and of it having plate tectonics, are not independent.

Tom Curtis

Tom: This suggests that the probability of a planet having carbon-based life, and of it having plate tectonics, are not independent.

Indeed, and yet it appears that when Gonzalez et al. attempt to calculate probabilities, this is NOT taken into consideration.

The coronagraph, and my argument about a larger moon, were about observing the emission spectrum of the corona, which gave science a deeper insight into what makes the sun work. I believe the seminal work was done in the latter half of the 19th century by Young.

In addition, it seems that many of the recent advances in astronomy are due to space-based platforms. Perhaps the earth was not that suitable for scientific measurements after all?

Actually, a larger Moon would NOT instantaneously interpose itself between the Earth and the Sun. It would gradually do so, and around when totality begins and ends, there will be plenty of near-Sun sky visible.

So one will be able to see stars near the Sun without much trouble.

As to needing a space-based telescope to test GR, one can go part of the way with a coronagraph telescope carried by a balloon or an airplane.


It does not surprise me that Gonzalez et al. do not take dependent probabilities into account. Most fine-tuning arguments I have come across do not; and until we have a successful unification of General Relativity and quantum mechanics, we cannot calculate the dependent probabilities of the most essential parameters of such arguments. Therefore, at this stage of our knowledge, they are at best speculation, and frequently sloppily argued speculation.

With regard to the coronagraph, I thought the issue was testing the deflection of light by the sun by observing stars. Hence my comments. I certainly wouldn’t dispute the usefulness of coronagraphs for observing the corona.

Finally, I think the earth is a good platform for scientific observation, though not as good as space. We can easily imagine scenarios where astronomy would be a stillborn science: a solar system in a thick nebula, a dense atmosphere like Venus's, Asimov's "Nightfall" scenario, etc. Likewise we can imagine scenarios that would have aided astronomy, such as being located in one of the Magellanic Clouds. It is difficult to see that the earth is improbable in this regard; and most of the scenarios for reduced observational capacity are plausibly negatively correlated with conditions for life, i.e., the probability of high measurability is not independent of the probability of high habitability.

Tom Curtis


I agree with all your comments. My first point was that a larger moon would make detecting stellar displacement more difficult, not that it would make it impossible. I did indeed forget about balloons and planes. They, however, would only halve any time delay in confirmation of GR.

My second point was that, even allowing that some factors could reduce measurability, a point I think we should concede, such reductions are minimal for any realistic metric of measurability. An intuitive metric is the ratio between the delay in confirmation due to loss of measurability and the time since sufficient cultural development to propose a testable theory. Given this measure, even the largest moon consistent with life would not reduce the measurability of GR by more than 1% (i.e., 80 years divided by the time since the first city). Even an optimistic metric using the time since the origin of science (say 400 years) yields only a 20% loss of measurability. (I don't think these metrics are useful for any purpose other than clarifying my point, of course.)
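The metric above is easy to tabulate; a quick sketch, where the 80-year delay and the two baselines are the illustrative guesses from the comment, not measured quantities:

```python
# Measurability loss = (delay in confirming a theory due to reduced
# measurability) / (time available to propose and test the theory).
# All figures are illustrative guesses from the discussion above.
delay_years = 80  # worst-case wait for orbital observatories

baselines = {
    "time since the first cities": 8000,
    "time since the origin of modern science": 400,
}

for label, years in baselines.items():
    loss = delay_years / years
    print(f"{label}: {loss:.0%} loss of measurability")
```

Note that even the least favorable baseline (400 years) gives only a 20% loss, and the delay itself shrinks further once balloons and aircraft are allowed.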

I am sorry if I was insufficiently clear.

Tom Curtis

On a pretty mundane level, do arguments from probability have any relevance or value when applied to a current reality or to a previous occurrence? There are two absolutes involved in probability: a 0 in X probability, and an X in X probability. Any other equations would seem to me to be meaningless, at least after the fact.

Supernatural events would seem to me to be beyond the scope of probability equations, so it would follow, for me, that for ID or Divine Creation proponents to claim that probability favors their explanation, or works against natural cause, is simply nonsense.

I have an IDC friend, who has a friend - an “internationally renowned mathematician” - who claimed to have mathematically proven both the existence of God and the higher probability of Divine Creation than Natural Cause. I trust he had fun doing it, but I wonder why he bothered.

We’re here, as we are, precisely because of the narrow set of conditions which exist here - at least some of which are affected by off-planet forces. Any changes in those conditions and we wouldn’t be here, or if we were we’d be different. As a proof of Design or Purpose, this seems rather limp, though I guess it is easier to claim their role when looking back from the lofty perspective of end result.

I agree that these arguments for proof of Purpose are quite ‘limp’. But I foresee that “Privileged Planet” is going to follow the path of “Icons of evolution” as an apologetic and curricular tool. I will address some of these issues in a separate posting called the “Privileged Wedge”.

The point of the corona measurement was that it was a major confirmation of already existing theories on the spectra of stars.

For almost all stars, we see a continuum spectrum interrupted by thin absorption lines, which (as was known from Earth experiments) you could get by observing a hot solid or liquid (which emits the continuum spectrum) through a cooler gas (which absorbs light at discrete frequencies).

If you instead observe a hot gas (like interstellar nebulae), it emits light at the same (for the same composition) discrete frequencies.

The neat experiment that made use of the eclipse was to get the emission line spectrum of the solar corona (the cooler gas that surrounds the hot “photosphere”, which is all we normally see). It had the same spectrum in emission lines that can normally be seen in absorption lines from a solar spectrum, neatly confirming the model of where the emission lines came from.

A “perfect” eclipse is nice for that because by just blocking out the photosphere, you have the maximum amount of the corona visible, allowing a spectrum from less sensitive measuring equipment.

However, a coronagraph is not hard to build, and I expect it wouldn’t take long at all to make this same observation from a planet with no eclipses.

On top of that, if the obscuring body is bigger than the sun, you not only get longer eclipses with a broader area of ground covered (as mentioned), you also get to sample more of the corona. That is, the dimmer outer layers of the corona would also be available for observation during some parts of the eclipse, as they are not during eclipses seen from the Earth.

On confirmation of GR, we should also note that it's not much more difficult to confirm it through gravitational bending around Jupiter. The deflection at Jupiter's limb is roughly a hundredth of that at the Sun's, but you can make observations with much longer exposure times and from big, fixed observatories. There was a race on when the eclipse observations were made, and it would only have been a few years more (at most) until the Jupiter observations also confirmed the light-bending prediction.
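For a rough sense of scale, the weak-field light-bending formula theta = 4GM/(c^2 b) can be evaluated at the solar and Jovian limbs with standard constants (the printed values are approximate):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Weak-field light deflection, theta = 4GM / (c^2 b), in arcseconds."""
    theta_rad = 4 * G * mass_kg / (c**2 * impact_parameter_m)
    return math.degrees(theta_rad) * 3600

sun = deflection_arcsec(1.989e30, 6.96e8)      # light grazing the solar limb
jupiter = deflection_arcsec(1.898e27, 7.15e7)  # light grazing Jupiter's limb

print(round(sun, 2))      # about 1.75 arcsec
print(round(jupiter, 3))  # about 0.016 arcsec, roughly 1/100 of the solar value
```

The famous 1919 eclipse result was a measurement of that ~1.75 arcsecond figure; the much smaller Jovian deflection explains why longer exposures and fixed observatories would be needed there.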

On top of all that, GR would have been widely believed even if there had been no confirming observations, because it's such a neat theory, solving philosophical, not observational, problems with Newton's gravity. However, its predictions continue to be confirmed (on solar-system scales at least) by dozens of independent observations.

If phenomena advantageous to science are alleged to count as evidence in favour of ID, then any phenomena which are not maximally advantageous to science should count as evidence against it. It might be useful to construct a list of these. This would support a charge of “cherry-picking”.

Mr. Wein has a good point, though IDers might make arguments like:

* We have no idea why the Designer did that
* The Designer wanted the Universe to be challenging for us to discover
* The Designer set up those difficulties as a punishment for our sins
* We are not meant to know certain things

I’ll propose an example: the ultimate constituents of the Universe.

The first attempts were rather grotesquely wrong: theories like the earth-air-fire-water theory. This inability to start off on the right foot suggests a certain difficulty in discovering those constituents.

But it took a lot of chemical experimentation to demolish such theories and to come up with the modern conception of elements; Lavoisier's elements are essentially correct. It took quantitative analysis to come up with the Law of Definite Proportions, modern atomic theory, and valence-bond theory, and the discovery of many more elements to come up with the Periodic Table of the Elements.

So far, no mathematics more sophisticated than algebra was needed, but that was to change in the early twentieth century, when atoms were discovered to be incorrectly named and quantum mechanics was discovered. One needed to know how to solve partial differential equations, and some group theory, in order to apply QM to atomic structure, though the simpler cases do offer various shortcuts.

Atomic nuclei also turned out to be composite, and eventually protons and neutrons also. But finding that out has required more and more elaborate and expensive lab equipment, becoming highly computerized in recent decades. Currently, the maximum collision energy per particle is around 100 GeV, about 100 times the mass-energy of a proton (remember E = mc^2), and it will go up to 1 TeV (1000 GeV) over the next decade.
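The "about 100 times the mass-energy of a proton" figure is a direct application of E = mc^2; a quick check with standard constants:

```python
# Mass-energy of a proton via E = m * c^2, expressed in GeV.
m_proton = 1.6726e-27       # proton mass, kg
c = 2.998e8                 # speed of light, m/s
joules_per_gev = 1.602e-10  # 1 GeV in joules

energy_joules = m_proton * c**2
energy_gev = energy_joules / joules_per_gev

print(round(energy_gev, 3))        # about 0.938 GeV per proton
print(round(100 / energy_gev))     # so 100 GeV is ~107 proton mass-energies
```

That is why a ~100 GeV collision energy is quoted as "about 100 times" the proton's rest energy of roughly 0.94 GeV.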

Although such energies will help test some of the features and extensions of the Standard Model of particle physics, such energies are nowhere close to Grand Unified Theory energies, which are about 10^15 GeV. At such energies, the three interactions of the Standard Model become pieces of a single interaction.

Finding that out has required some very tricky mathematics, since if one tries to calculate self-energies and the like, one gets troublesome divergences. However, these can be dealt with by absorbing them into observed quantities, a trick called “renormalization”. Richard Feynman was one of the inventors of that trick.

The Grand Unified Theory energy of 10^15 GeV is tantalizingly close to the quantum-gravitational Planck energy of 10^19 GeV, which suggests some connection. But despite heroic efforts, it has been difficult to find a “Theory of Everything” that includes both gravity and the Standard Model as special cases.

It's also not understood why the familiar elementary particles have masses that are:

1. nonzero
2. much less than the GUT/Planck mass scales

But that curious circumstance has allowed our Universe to have the complexity it does; it allows large masses of familiar elementary particles to avoid becoming crushed by gravity.

About this Entry

This page contains a single entry by PvM published on April 11, 2004 1:59 PM.
