Intelligent Design and statistics - a troubled alliance


This is a guest appearance by Peter Olofsson. I have not contributed anything to this essay and am posting it as a courtesy to Peter. Olofsson, a professor of mathematics, is the author of two excellent books on probability and related subjects, one of which is a university textbook.

Peter’s text begins here:

William Dembski’s “explanatory filter” is a proposed method for inferring intelligent design in nature. While it has been criticized from many points of view, there does not yet seem to have been a comprehensive treatment within the framework of mathematical statistics. In this article I argue that even if many of Dembski’s assumptions are accepted, the filter still runs into serious trouble when it comes to biological applications.

Continue reading A troubled alliance at Talk Reason

26 Comments

Excellent observation

The inconsistency is striking. Design skeptics are required to make sure that a rejected hypothesis is superseded; design proponents are not. I am fairly certain that most biologists would consider Dembski’s shopping cart model “poor and unacceptable” and thus, appealing to logic, could safely eliminate it and go on to more important activities.

While Intelligent Design demands that scientists provide unreasonably detailed, step-by-step pathways, it has effectively isolated itself from scientific rigor. Requiring ID to be scientifically rigorous is, it seems, simply too much to ask…

Peter Olofsson also presents a good definition of randomness in his book Probability, Statistics, and Stochastic Processes:

Probability theory is the mathematics of randomness. This statement immediately invites the question “What is randomness?” This is a deep question that we cannot attempt to answer without invoking the disciplines of philosophy, psychology, mathematical complexity theory, and quantum physics, and still there would most likely be no completely satisfactory answer. For our purposes, an informal definition of randomness as “what happens in a situation where we cannot predict the outcome with certainty” is sufficient. In many cases, this might simply mean lack of information. For example, if we flip a coin, we might think of the outcome as random. It will be either heads or tails, but we cannot say which, and if the coin is fair, we believe that both outcomes are equally likely. However, if we knew the force from the fingers at the flip, weight and shape of the coin, material and shape of the table surface, and several other parameters, we would be able to predict the outcome with certainty, according to the laws of physics. In this case we use randomness as a way to describe uncertainty due to lack of information.

From PvM’s quotation of Olofsson:

It will be either heads or tails, but we cannot say which, and if the coin is fair, we believe that both outcomes are equally likely. However, if we knew the force from the fingers at the flip, weight and shape of the coin, material and shape of the table surface, and several other parameters, we would be able to predict the outcome with certainty, according to the laws of physics. In this case we use randomness as a way to describe uncertainty due to lack of information.

Back when I was in the Navy I used to win a fair amount of money betting on my own coin flips. With practice one can control an apparently well-flipped coin to show a pre-specified side a statistically (and profitably) non-50/50 proportion of the time. One must use the ‘catch the coin’ procedure, though, rather than the ‘let it fall to the floor’ procedure.
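
One can check a claim like this with a one-sided binomial test against the fair-coin null hypothesis. The Python sketch below is illustrative only; the 60-of-100 record is a hypothetical number, not RBH’s actual winnings.

from math import comb

def binom_pvalue(successes, trials, p=0.5):
    # One-sided p-value: P(X >= successes) for X ~ Binomial(trials, p).
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical record: 60 of 100 called flips land as predicted.
print(binom_pvalue(60, 100))  # ~0.028: enough to reject a fair coin at the 5% level

Anything much above 50% quickly becomes both statistically detectable and, as RBH notes, profitable.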

RBH

The question now becomes: Is design somehow different from randomness as defined by Olofsson?

“what happens in a situation where we cannot predict the outcome with certainty”

I argue that the distinction between random processes and design may be mostly of a linguistic nature and less of a scientific one. After all, what is design? Instantiation according to a plan, a plan which we often may not fully comprehend… In other words, design and randomness may converge. The level of randomness depends on our understanding of designers’ motivations, capabilities, etc. Once we can reduce these uncertainties, what appears to be ‘random’ may become inevitable.

With a nod to PvM, this sounds very similar to Behe’s attitude at Dover: that it was up to his critics to perform the experiments that might refute his theory. He had more “fruitful” pursuits on which to spend his time.

Ahhhhh, the comfort of absolute certainty of one’s position.

Also, how much do you want to bet we see the following comment about the explanatory filter quote-mined real soon:

It is logical, it applies to biological systems, and it is based on well-established principles from mathematics and statistics.

In comparison, the explanatory filter from The Design Inference has been less criticized.

I’m somewhat disturbed that Peter doesn’t reference Wilkins and Elsberry 2001 here, or my 1999 review of Dembski’s The Design Inference, both of which largely concerned criticism of the explanatory filter.

Also see this page for more links concerning The Design Inference.

I argue that the distinction between random processes and design may be mostly of a linguistic nature and less of a scientific one. After all, what is design? Instantiation according to a plan, a plan which we often may not fully comprehend… In other words, design and randomness may converge. The level of randomness depends on our understanding of designers’ motivations, capabilities, etc. Once we can reduce these uncertainties, what appears to be ‘random’ may become inevitable.

Agreed. And I’d add that it seems to be their thesis that design and randomness (well, what they frequently call randomness—evolution) do converge. They have no ability to distinguish between design and evolution (other than their supposed EF), there are no predictions that one differs in its derivative nature from the other, and what comes out is a “design” that is what evolution predicts.

But what I think may count for more is that the “design” that we see occurring, and the “randomness” that we observe happening in normal environmental processes, are not fundamental and pure processes. We “design” often enough in new areas by using somewhat random attempts to meet problems with solutions. And what we sometimes call “randomness” in the environment is anything but, as Darwin demonstrated.

Beneath it all, however, are the causal processes (in classical science) which are what actually allow us to differentiate between the “random”, the “designed”, and “natural processes”. Much randomness exists in cognition and intelligence, but signals propagate through the randomness and are rationally correlated to produce coherent phenomena. Yet what is unusual about that? Signals propagate through ecosystems, the atmosphere, and the oceans, and in much simpler ways these are rationalized into patterns and enduring forms. Causal processes do allow for much stricter regularity in organisms than in weather systems; however, the causality of the human differs not at all physically from the rest of the causal processes we observe. (And I’ll repeat parenthetically what I’ve noted elsewhere: regularity is the mark of design, as opposed to Dembski’s “eliminative induction”, and its absence is evidence of a lack of design. And we do find regularity in evolutionary processes and evidence.)

The real analogy with intelligence is that it seems to emerge out of a fair amount of randomness, as other self-organizing entities do. Shouldn’t IDists be asking why the brain has similarities with “natural processes”, rather than assuming that self-organizing entities like humans would be similar to the unseen gods? Why did self-organizing processes exist prior to life, and why do observed organizing entities appear only after much organization has already occurred? Have they no sense that life might be a part of the organizing processes of the universe (which may all be due to God, or not), and that it appears to be adapted to interact with and cognize the other organized and organizing processes?

We mimic nature. Laughably, some IDists want to make this out to show that nature was made like we make things. Well, in a sense they’re correct, as we simply are part and parcel of nature. They, of course, don’t mean that. Somehow the construction of the human mind in the modern sense, by our mimicking and adapting ourselves to the processes of nature, is still an imposition upon nature in their minds, not another part of the self-organizing capacity of nature (we may be short-sighted, but there’s nothing strange about that in evolution, with even our limited foresight being unusual).

The fact is that we can’t really do anything without doing what the universe was already doing, if in contexts which didn’t exist in the past. Design is a refinement in the self-organizing activities of “nature”, and quite noticeably is utterly incapable of producing vertebrate life at the present time.

If anyone asks why the IDists are trying to rip apart evolution instead of bolstering their “hypothesis”, look no further than the last sentence above. NO observable life or machine is capable of designing life, hence statistically trying to show that life is responsible for life’s existence would founder on the complete absence of evidence for life which is capable of designing, say, mongooses (this in addition to their inability to differentiate design from evolution, of course).

Glen D http://tinyurl.com/b8ykm

This is to point out that the commenter signing his posts as MarkP is NOT Mark Perakh. However, I agree with his last comment 141264. The quotation from Olofsson’s essay in MarkP’s comment shows, in my opinion, a regrettable misjudgment on Olofsson’s part (and I told Peter about it). Although I posted Peter’s essay as a guest appearance, that does not mean I share all of his views and statements. I thought, though, that the debatable elements of Peter’s essay (of which one, imho, is the statement quoted by MarkP) are secondary, while his statistically based analysis of the “explanatory filter” is a useful addition to the critique of Dembski’s alleged magical tool for inferring design.

I am convinced that the “explanatory filter” is an illogical and useless schema which can only mislead those who try to apply it to real-life problems. It ignores feedbacks and the superimposition of various causes, and it is in fact impossible to use because the first and second “nodes” in Dembski’s schema require one to determine the values of events’ probabilities prior to determining their causal history. This is usually impossible. Also, the third “node” requires determination of “specification” as allegedly a category independent of probability, which, I think, is contrary to any reasonable concept of specification. I have written about this at length in several publications and web postings, but Peter seems not to have been familiar with them.

His partial polite bows to Dembski may be due to the fact that he is new to the game, so he takes Dembski too seriously, not being familiar with the well-known history of Dembski’s unethical behavior, his supercilious attitude toward critics, and many other facts which show that in debating Dembski, the politeness characteristic of scientific debates is out of place. When I first started criticizing Dembski, I too tried to adhere to the norms of debate common in science, and even stated that he is a man “of many talents.” With time, my attitude gradually changed as I came to realize what the phenomenon named Dembski actually is. Perhaps, if Olofsson continues dealing with Dembski’s pseudo-mathematics, he will also come to regret the words quoted by MarkP.

As to quote-mining by ID advocates, there is nothing new in that, so quoting Olofsson’s passage in question will hardly make much of a difference, given IDists’ already well-documented passion for quote-mining.

Regularity is the mark of design

I hope I’m not being too pedantic in correcting what I wrote above, since really “regularity” is simply the mark of natural processes, including “design”. I wrote the bit above (which in some contexts is adequate, but not in that post) because Dembski’s “elimination of regularity” as the cause of, say, the flagellum, is completely the opposite of what would be needed to show “design”.

As it happens, though, there is nothing that we observe in the classical sciences which doesn’t more or less fit the term “regularity”, including “accident” or “chance”. The “elimination” of regularity as the cause of organisms is wrong in so many ways, including the fact that regularity is well known in biology, and it follows evolutionary patterns, not known design patterns.

I was a bit surprised as well that anyone would suggest that the EF has not been discussed much, and wrote a little about it in a post that got lost in the poor workings of this website (fading in and out). Like Wesley said, though…

Glen D http://tinyurl.com/b8ykm

Olofsson’s paper is well done, but I’m not sure that it teaches us very much beyond what we already know, i.e. that ID is crap. The worst thing about ID is not that it’s false–lots of interesting ideas are false–but that it is not a fertile error. The best that you can claim is that debunking the explanatory filter provides an opportunity to explain some aspects of hypothesis testing that aren’t generally understood–Olofsson’s good on that–but even that positive result is pedagogical, which is to say, no new statistical concepts or methods arise from confronting Dembski.

“Regularity is the mark of design”

Another view is that regularity is the mark of perception. We can mostly perceive only what is regular, because only regularity is compressible into representations which our finite minds can hold. Regularities can be seen and theorised about, and cause us to wonder how the universe can be so regular (i.e. mathematical). The great mass of random information is largely invisible to us.

This essay strikes me as both well-meaning and kind of tired. Dembski has clearly started with Absolute Truth. He already knows the answer, and has constructed a mathematical model as required to produce that answer. Since the answer itself is dead wrong, there are sure to be errors sprinkled liberally throughout any system of rationalizations.

So the roundabout suggestion that Dembski’s efforts would not pass any decent college course in experimental design pretty much goes without saying. But WHY would his Filter fail this course? Is it better to answer this ‘why’ question by identifying specific technical errors in the construction or application of the Filter, or is it better to point out that doing so misses Dembski’s entire purpose, which is exactly the opposite of designing an experiment most likely to illuminate the (initially unknown) correct result? Pretending that religious apologetics is really experiment-construction, and then analyzing these apologetics on that basis, is exasperatingly misguided.

For me, what matters is that Dembski has started with an Absolute Truth which is both totally wrong, and totally non-negotiable. He has done this for reasons having nothing to do with statistics. Digging into his statistics and finding those errors sure to be there is a sterile exercise. Finding that, gee, Dembski has *stacked the deck* is like finding that the ocean is full of water. It’s not until we can start to figure out how Dembski’s Faith has warped his brain that we can approach getting a handle on the underlying issue here.

Flint, it may be important to fence-sitters and the gullible to have it shown that Dembski’s work is completely vacuous.

If we’re going to show something, I think it might be more persuasive to the hypothetical (and maybe large?) “fence sitters and gullible” to show that Dembski’s errors have been highlighted from every possible perspective - mathematically, statistically, logically, philosophically, even theologically - yet Dembski has ignored every correction he hasn’t outright denied. In other words, his walks-on-water response to his own errors is probably more persuasive than the errors themselves.

This is true natural selection in action: those least capable of admitting errors are those who tend to accumulate the most of them. However religion might be properly applied, surely using it as a way of convincing yourself you’re infallible isn’t it.

Olofsson Wrote:

Thus far, I have not voiced any serious objections to the explanatory filter. It is logical, it applies to biological systems, and it is based on well-established principles and techniques from mathematics and statistics. But no more Mr. Nice Guy.

I think he did this to be “surgical”, and so that he will not be accused of being “obsessively critical”. This appeals to me; for example, I also like Wolpert’s Jello article, in which only one point, rather than several dozen, is made.

I feel torn between several different directions here. Indulging them all risks coming across as comment-bombing… No news there, I’m afraid. :-) So at it then:

First, I’m pleased that two of my countrymen (Olofsson, Häggström) are so active in probability theory and in debating creationism.

From Olofsson I learn that design isn’t well defined, since SC needs the EF to detect it and the EF fails.

From Häggström I learn that NFL is basically a very simple piece of probability reasoning about black-box algorithms, and that it fails for Dembski because he assumes that a fitness function is generated from a uniform distribution, while mutations easily show that functions are clustered. Otherwise nearly every mutation would most likely kill us, or possibly transform us into bacteria. ( http://www.math.chalmers.se/~olleh/Dembski_2.pdf ; according to his page, the text that Olofsson refers to.)
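
For readers who haven’t seen an NFL theorem stated, the black-box intuition can be verified by brute force on a toy search space. The Python sketch below is my own illustration, not Häggström’s derivation; it uses two fixed, non-adaptive sampling orders for simplicity (the NFL theorems also cover adaptive algorithms), and all names in it are made up for the example.

from itertools import product

points = [0, 1, 2]   # a tiny search space
values = [0, 1]      # possible fitness values

def best_found(order, f, k=2):
    # Best fitness seen after sampling the first k points in 'order'.
    return max(f[x] for x in order[:k])

algo_a = [0, 1, 2]   # two different search orders
algo_b = [2, 0, 1]

# Average performance over ALL fitness functions f: points -> values,
# i.e. the uniform distribution over landscapes that Dembski assumes.
all_fs = [dict(zip(points, vs)) for vs in product(values, repeat=len(points))]
avg_a = sum(best_found(algo_a, f) for f in all_fs) / len(all_fs)
avg_b = sum(best_found(algo_b, f) for f in all_fs) / len(all_fs)
print(avg_a, avg_b)  # both 0.75: averaged over all landscapes, no algorithm wins

Häggström’s point is that this uniform averaging over all landscapes is exactly what has no biological relevance: real fitness functions are clustered, and on a clustered family of landscapes some algorithms (such as mutation plus selection) certainly can beat blind search.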

(I also take home that hypothesis testing is logically most like a proof by contradiction. I like to save that for my pet peeve that science is not “inductive reasoning”. It is very much like falsifiability’s modus tollens, and whoever said “science doesn’t make proofs but disproofs” seems to have been more on the money than I thought. But that is another story…)
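
In case the parallel is unfamiliar, it can be put side by side (my paraphrase, not notation from Olofsson’s paper):

\[
\frac{H \Rightarrow \neg D \qquad D \text{ observed}}{\therefore\ \neg H}
\qquad\text{vs.}\qquad
\frac{P(D \mid H_0) \le \alpha \qquad D \text{ observed}}{\therefore\ \text{reject } H_0}
\]

The statistical version is not a strict deduction: it trades the certainty of modus tollens for a controlled error probability $\alpha$.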

My own criticism of Olofsson is that he doesn’t explicitly comment on Dembski’s famed misuse of a ‘universal probability bound’. He only notes that “Dembski explores various criteria for what exactly is a small enough probability, and although these criteria are debatable, the use of probability considerations per se is hardly controversial” and goes on to construct specific criteria instead.
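
For reference (my summary of Dembski’s published figure, not anything from Olofsson’s paper): Dembski obtains the bound by multiplying the number of elementary particles in the observable universe, the number of Planck-time state changes per second, and a generous age of the universe in seconds,

\[
10^{80} \times 10^{45} \times 10^{25} = 10^{150},
\]

and then declares any specified event of probability below $10^{-150}$ to be beyond the reach of chance anywhere in cosmic history.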

Mark: “Also, the third “node” requires determination of “specification” as allegedly a category independent of probability, which, I think, is contrary to any reasonable concept of specification.”

But it seems to be a good point that even with a reasonable concept of specification, the EF still fails miserably.

Flint: I basically agree with all that you said, but it must be noted that it isn’t a fault as such to start with a vaguely seen conclusion and work back to the premises. Einstein is famed, among other things, for his ability to do this.

But Dembski is no Einstein (either). The problem, as you say, is that if one starts with the conclusion, one must work out predictions and do the tests, as Einstein could often do by theory alone, and sort out the bad premises to see whether a consistent theory remains. One must aim to fail in order to succeed. One of Dembski’s many problems is that he wants to succeed so badly that he fails, by constantly changing definitions, obfuscation, double standards (as Olofsson so nicely caught), and refusal to test.

Second, reading Häggström’s various sources, I get some questions on evolution, and a suggestion for a thread.

It seems there is a confusion, maybe an urban legend, about NFL and coevolution. If so, perhaps it merits a revisit here.

It starts when I read the (unfortunately Swedish-only) magazine Folkvett. In issue 1/2006, Marianne Rasmuson, a professor emerita in genetics, states that coevolution is a problem for Dembski’s use of NFL. (Without references, at least in the web version.) She also says that coevolution refers to two species’ evolutionary struggle. ( http://www.vof.se/folkvett/20061rasmuson.html )

Häggström answers there that this claim comes from an article by Orr (H. Allen Orr, The New Yorker, May 30, 2005), and that it is wrong. He refers ( http://www.vof.se/folkvett/20062dembski.html ) to an article which is equivalent to the precursor of the one Olofsson mentions. ( http://www.math.chalmers.se/~olleh/Dembski.pdf )

In these articles Häggström notes that he can’t find any NFL theorem adapted to the situation, but that it is easy to devise one. He makes a simple derivation, and it seems obvious that it doesn’t complicate Dembski’s claim. (Which easily fails under Häggström’s main reasoning, as noted before.)

Now, Häggström published the precursor in March 2006. At about the same time, in February 2006, there was a comment on Talk Reason asking where Wolpert and Macready’s paper on NFL and coevolution was. It was apparently published late in 2005. ( http://www.talkreason.org/articles/jello.cfm ; http://www.talkreason.org/Forum.cfm?MESSAGEID=662 )

The abstract claims boldly: “In this paper, we present a general framework covering most optimization scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multiarmed bandit problems and evolution of multiple coevolving players. As a particular instance of the latter, it covers “self-play” problems.

In these problems, the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multiplayer game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. However, in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.” ( http://ieeexplore.ieee.org/search/w[…]mber=1545946 )

Now that Häggström has put his latest paper on NFL on the web and has already sent it off for publication, I have some questions, which might serve as the suggestion for a post:

- Wikipedia more or less conflates coevolution and molecular coevolution. Was Rasmuson too specific (or simply wrong)? Can we discuss NFL and molecular coevolution?

- Wolpert and Häggström seem to agree that real coevolution (at least between species) isn’t a problem as such for Dembski’s use of NFL. Rasmuson says something else, something I have myself read (here, I think) and used in comments. Is Orr’s article somehow the source, or is it a new urban legend in the making?

- Wolpert and Häggström disagree on self-play with champions. I can’t see why Häggström’s simple argument doesn’t hold generally. (But I haven’t read Wolpert et al. yet.) And is this possibly gene coevolution within a genome?

Third (though the second comment was held up),

PvM: “I argue that the distinction between random processes and design may be mostly of a linguistic nature and less of a scientific one. After all, what is design? Instantiation according to a plan, a plan which we often may not fully comprehend… In other words, design and randomness may converge. The level of randomness depends on our understanding of designers’ motivations, capabilities, etc.”

You seem to find evidence for a creator’s design in Olofsson’s definition of randomness. Interestingly, I find the converse.

What Olofsson says is that randomness is hard to define completely satisfactorily, but that an informal definition is sufficient. Why is it sufficient? Because we can define probability itself well by Kolmogorov’s axioms.
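
For reference, the axioms are short enough to state in full: a probability measure $P$ on a sample space $\Omega$ satisfies

\[
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\Bigl(\,\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
\]

for every event $A$ and every sequence of pairwise disjoint events $A_1, A_2, \ldots$ All of probability theory is built on these, with no need for a metaphysical account of what ‘randomness’ is.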

This example shows why Dembski’s failure with the EF, and thus with SC, implies that design by a nameless, unobserved creator is ill-defined, and that here no informal definition is sufficient. It is thus far a folk-psychology concept without meaning.

The only meaningful notion of design we seem able to use is that of constructions observed to be made by designers through some form of algorithm. (Animals, humans, computers.)

BTW, conflating randomness and design (“design and randomness may converge”, “level of randomness depends”) can only lead to confusion.

In comment 141355 Torbjörn Larsson quotes the following sentence from my comment 141272:

Also, the third “node” requires determination of “specification” as allegedly a category independent of probability, which, I think, is contrary to any reasonable concept of specification.

I guess that by quoting this sentence, Torbjörn intended to say that its validity was not clear.

A detailed discussion of that matter can be seen here, here, and here. It was also discussed in detail in my book Unintelligent Design (chapter 1).

Regarding Torbjörn’s praise of the papers by his compatriots Olofsson and Häggström, I share his very positive view of these papers. These two very good papers share, though, one shortcoming (as Wesley pointed out regarding Olofsson’s paper in his comment in this thread): both omit references to many earlier publications where the pertinent topics were discussed. In particular, Dembski’s misuse of the NFL theorems (which is the subject of Häggström’s paper) was the subject of this article (the online version of chapter 11 in the anthology Why Intelligent Design Fails, Rutgers Univ. Press, 2004) and, in a shorter version, of this essay, the online version of an article in Skeptic, No. 4, 2005.

Since there seem to be problems with PT’s handling of comments (some of them do not appear here despite being properly posted), this is just a test to see whether this comment appears as posted.

In comment 141359 Torbjörn Larsson wrote:

Is Orr’s article somehow the source, or is it a new urban legend in the making?

A discussion of Orr’s article from the standpoint of NFL theorems can be seen here.

Mark:

Thanks for answering!

“I guess that by quoting this sentence, Torbjörn intended to say that its validity was not clear.”

Actually yes and no, and I’m sorry if I did not make myself clearer, if that was at all possible. Olofsson seems to make a case that specification may be given a valid definition in his model of the EF, which seems on the face of it reasonable to me. Even if it isn’t crystal clear that this is invalid, his argument means the EF still fails. Of course I yield to your greater knowledge here, and I look forward to studying your references, which may make the invalidity clearer to me.

“Orr’s article”

Again, thanks! The link in your article (now broken, BTW) suggests an earlier article in Boston Review.

It seems Orr indeed makes the suggestion about coevolution, but from his position that NFL applies only to targeting algorithms, which, as you point out, is wrong. And both Häggström and, it seems, Wolpert et al. find typical coevolutionary scenarios to be no problem for NFL. Your own article says “the NFL theorems may be invalid for co-evolutionary algorithms, but this is a different story”, which is hereby noted - it isn’t exactly wrong, and furthermore “self-play” may indeed be a problem.

So Orr may still be repeating an erroneous argument, which has nothing to do with Wolpert’s work.

And Häggström is correct in his Folkvett article, but doesn’t find your article or Wolpert et al.’s papers. Seems to be a clear trend here. ;-)

“Self-play” isn’t clearly (to me, at least) a factor in molecular coevolution, so I need to drop the coevolution NFL argument for the time being. Again, thanks for the help in clearing this up.

REPLY FROM PETER OLOFSSON:

Thanks to Mark Perakh for posting my article, and thanks for the comments. My intent was to view the EF within the context of mathematical statistics, an approach that has been hinted at here and there in the anti-ID postings but not developed. If you think that I am too lenient on the logical structure of the EF, just think of my approach as putting the EF in the best possible light and arguing that it still has serious shortcomings when it comes to the relevant biological applications. I attempted to illustrate this by paralleling the Caputo example and the bacterial flagellum to show what is lacking in the latter case.
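
For readers unfamiliar with the Caputo case: a New Jersey county clerk drew the ballot order 41 times, and the Democrats received the favored top line in 40 of them. The one-sided p-value under the fair-drawing null hypothesis is a one-liner to compute; the Python snippet below is a sketch of the standard calculation, not code from the paper.

from math import comb

# P(X >= 40) for X ~ Binomial(41, 1/2): 40 or 41 Democrat-first drawings.
p_value = sum(comb(41, k) for k in (40, 41)) / 2**41
print(p_value)  # ~1.9e-11, so the "fair drawing" hypothesis is rejected

The point of the parallel is that for Caputo we can write down the chance hypothesis, its rejection region, and a sensible alternative; for the flagellum, those ingredients are what is lacking.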

Some individual replies:

WESLEY ELSBERRY and MARK PERAKH: I did not do the thorough search for references I would normally do for a “scholarly” paper. I am sorry that I overlooked some references but I will add them as I revise my paper. I know that both of you have published extensive criticisms of the filter and my contribution is to be viewed as a complement, developing the perspective of a mathematical statistician. GUYEFAUX: An insightful comment (#141320). You got it!

TORBJORN LARSSON: I decided against discussing the universal probability bound as I did not see it as crucial to my argument. Being a mathematical statistician, I just thought of it as a very low significance level and let it rest at that. There are certainly issues with the bound, and Dembski’s numbers and motivations have evolved (sorry, I mean “have been redesigned…”) over time.

Finally, regarding my “definition of randomness,” it is from an underground textbook where it works fine. It is not to be extended into the realm of philosophy of science.

WHOOPS! It should be “undergrad” textbook, not “underground.” A Freudian slip indeed!

Torbjörn Larsson Wrote:

In these articles Häggström notes that he can’t find any NFL theorem adapted to the situation, but that it is easy to devise one. He makes a simple derivation, and it seems obvious that it doesn’t complicate Dembski’s claim.

Häggström’s formalization of coevolution is very crude and does not seriously investigate coevolution. Wolpert and Macready’s formalization is neater, and they show one way of using their formalization to take into account the relation between fitness values and population dynamics. Ultimately, however, what matters is that Wolpert and Macready’s coevolution paper relaxes the condition (from the original NFL paper in the same journal) that the performance measure depend on the fitness function only via the sampled fitness values.

Erik Tellgren writes that “Häggström’s formalization of coevolution is very crude and does not seriously investigate coevolution. Wolpert and Macready’s formalization is neater, and they show one way of using their formalization to take into account the relation between fitness values and population dynamics.”

Here Erik seems to misunderstand the point of the formalization of coevolutionary NFL that I suggest (in a mere footnote of my original Dembski-debunking paper). Of course the formalization is “very crude”. And of course it “does not seriously investigate coevolution”. The whole point is that it is the obvious extension to coevolution of the NFL theorems (for evolution of a single species in a fixed or a varying environment) that Dembski does use in his book. It is exactly as crude, and exactly as relevant to evolutionary biology, as those that Dembski invokes - that is, extremely crude and utterly irrelevant. This shows that the argument put forth by Orr and others, namely that the problem with Dembski’s argument is that it fails to take coevolutionary aspects into account, misses the point.

Of course, sophisticated mathematical modelling of coevolution is an interesting and important topic in evolutionary biology. But to invoke it against Dembski’s argument strikes me as an example of hammering in a nail with a sledgehammer.
