Desperately Dissing Avida

| 63 Comments | 3 TrackBacks

Writing for the Discovery Institute, Casey Luskin has dissed evolutionary research performed using the Avida research platform. (Luskin is a new “program officer” for the DI.) As I wrote last year, computer models employing evolutionary mechanisms are a thorn (or maybe a dagger?) in the side of ID creationists. The models allow testing evolutionary hypotheses that in “real” life would take decades to accomplish or are impractical to run in wet lab or field. They also allow close control of relevant variables – mutation rates, kinds of mutations, the topography of the fitness landscape, and a number of others, enabling parametric studies of the effects of those variables on evolutionary dynamics. A number of publications using Avida (see also here) have established that it is a valuable complement to wet lab and field studies in doing research on evolutionary processes.
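
To make the “parametric study” point concrete, here is a deliberately tiny toy model (not Avida itself, just an illustrative Python sketch with invented numbers) in which the mutation rate and the steepness of the fitness landscape are explicit knobs one can turn between runs:

```python
import random

GENOME_LEN = 20

def fitness(genome, steepness=2.0):
    # Toy fitness landscape: reward the number of 1-bits, raised to a tunable
    # exponent.  "steepness" stands in for the topography of the landscape.
    return sum(genome) ** steepness

def evolve(pop_size=100, generations=200, mut_rate=0.01, steepness=2.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection (one of many possible schemes).
        weights = [fitness(g, steepness) + 1e-9 for g in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Random per-site mutation: some changes help, some hurt, some are neutral.
        pop = [[(1 - b) if rng.random() < mut_rate else b for b in g]
               for g in parents]
    return max(fitness(g, steepness) for g in pop)

# A minimal parametric study: vary the mutation rate, hold everything else fixed.
for mu in (0.001, 0.01, 0.1):
    print(mu, evolve(mut_rate=mu))
```

Avida itself is far richer (its organisms are self-replicating assembly-language programs), but the point stands: every one of those parameters can be varied and the run repeated at will, which is exactly what one cannot do with a field population.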

In his testimony at the Dover trial on September 28, Rob Pennock described a study that has particularly irritated ID creationists, The evolutionary origin of complex features, published in Nature in 2003. In that paper Lenski, Ofria, Pennock and Adami showed that there are circumstances under which structures that meet Behe’s operational criterion for irreducible complexity (IC) – loss of function due to knockout – can evolve by random mutations and selection. Since IC is the core negative argument of ID – IC structures and processes allegedly cannot evolve by incremental “Darwinian” processes – the demonstration that they can evolve by Darwinian processes knocks out IC as a marker of intelligent design. And since IC is a special case of Dembski’s Specified Complexity, it also weakens Dembski’s core argument.
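
Behe’s operational criterion is, at bottom, a knockout test: remove any single part and see whether the function survives. A rough sketch of that test (placeholder code, nothing Avida-specific) shows how mechanical the check is:

```python
def is_irreducibly_complex(genome, performs_function):
    # Knockout test in the spirit of Behe's operational criterion: the structure
    # counts as "IC" if it performs the function intact but loses the function
    # when any single component is deleted.
    if not performs_function(genome):
        return False
    for i in range(len(genome)):
        knockout = genome[:i] + genome[i + 1:]
        if performs_function(knockout):   # some single part was dispensable
            return False
    return True

# Example: a "structure" that needs all three of its parts to reach a threshold.
print(is_irreducibly_complex([1, 1, 1], lambda g: sum(g) >= 3))  # True
```

What Lenski, et al., showed is that structures which pass exactly this sort of test can nonetheless arise by random mutation and selection.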

Various ID creationists have criticized the Lenski, et al., study on a variety of specious grounds, and I’ve discussed those critiques in several places, including an extended discussion here. Luskin’s critique is shallower and less informed than some I’ve read. I’ll hit a few low points in his critique.

Luskin wrote

Pennock and his other co-authors claim the paper “demonstrate[s] the validity of the hypothesis, first articulated by Darwin and supported today by comparative and experimental evidence, that complex features generally evolve by modifying existing structures and functions” (internal citations removed). Today in court, Pennock discussed the paper today asserting that it was a “direct refutation” of irreducible complexity and a “general test” of Darwinian theory.

I do not have access to Pennock’s testimony at the moment, but that’s about what I’d have said. Co-option and modification of existing structures is a ubiquitous phenomenon in evolution at levels ranging from molecular mechanisms to high-level structures like wings. And in the Lenski, et al., study, sure enough, those same phenomena were observed occurring under the Darwinian mechanisms – reproduction, heritable variation, competition, and mutation.

Luskin spent a good deal of space exploring the conjecture that Pennock’s co-authorship of the Lenski, et al., paper was a conspiracy to get an expert on Behe’s irreducible complexity involved without directly citing Behe. He wrote

I can think of no reason why a philosopher, who otherwise never authors technical papers in scientific journals, whose career specializes in rebutting ID, should be a co-author a [sic] technical research paper in a top technical science journal on the evolutionary origin of biological complexity, a claim which ID challenges, unless that paper somehow required some expertise on ID. Indeed, this paper now appears strategically arranged: is it mere coincidence that this paper appeared as a primary exhibit in the first trial against teaching ID? The reality is that Avida study, in which Pennock was third author, has much to do with strategically rebutting ID.

Looks more to me like it empirically rebuts irreducible complexity.

In his conspiracy theorizing Luskin neglected to mention that in addition to his appointment in philosophy, Pennock is also a Member of the Digital Life Laboratory at Michigan State, along with two of the other authors, Charles Ofria and Richard Lenski. Chris Adami is also associated with the Devolab as a collaborator. The work published in the Lenski, et al., paper is well within Pennock’s professional purview: it’s by four colleagues associated with the same lab. It wouldn’t surprise me at all to find that Pennock suggested the study’s main outlines to his co-authors, since IDists use the notion of irreducible complexity as their primary weapon in their culture war against evolutionary theory and Pennock is interested in that effort. Knowing something about Avida, I have no problem imagining that Pennock saw the Avida platform, a main tool in the Devolab, as an excellent tool to do some research on the question of the evolvability of IC structures.

That they didn’t mention intelligent design isn’t amazing. As far back as Darwin the question of how those kinds of structures could evolve has been raised, so Behe contributed no new issue to address. Lenski, et al., anchored their paper directly to Darwin in the first paragraph. Since the ID creationists have not published anything in the professional literature of biology to which to refer in the context of the Lenski, et al., paper, it seems strange to complain that they didn’t reference ID. If ID had some actual professional literature to cite one might sympathize, but it doesn’t. Luskin’s conspiracy theory is more than a little incongruous coming from the socio-political movement that didn’t bat an eyelash when the ID-sympathetic editor of an obscure taxonomic journal slid around the publishing society’s editorial guidelines to get Meyer’s Hopeless Monster published. Do I detect projection here?

Then Luskin repeated a common ID creationist criticism by writing

Pennock asserted on the witness stand that this study accurately modeled biological reality. Well, if biological reality was pre-programmed by an intelligence to evolve certain simple logic functions, then he’s right. Avida programmers knew that EQU was easily evolvable from the proper combination of only 5 primitive logic operations before the simulation even began. This is called “evolution by intelligent design,” because the environment seems literally pre-programmed to evolve the desired phenotype.

In fact, Avida programmers had no idea whether digital critters capable of performing input-output mappings corresponding to EQU could evolve in Avida. That human programmers could write an Avida instruction string that performed the EQU mapping is irrelevant to the question of whether it could evolve via Darwinian mechanisms. Programmers writing code is ID’s position, not evolution’s. (Incidentally, Luskin misrepresents what actually evolved in the Lenski, et al., experiments. “EQU” didn’t evolve any more than flight evolved in animals like birds and bats. Morphological structures that enable flight evolved; flight didn’t. Similarly, assembly language programs that performed the input-output mapping corresponding to EQU evolved, not EQU itself. That’s not a trivial distinction. A given function can be performed by a number of different structures.)
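
For readers who haven’t looked at the paper: EQU is simply the bit-for-bit “equals” (XNOR) function on two inputs, and in the default Avida instruction set used in the study NAND is the only logic instruction, so every task has to be composed from NANDs. The function-versus-structure distinction is easy to see in ordinary code; here are two of the many possible ways (a sketch, not Avida code) to compute the same EQU mapping:

```python
def nand(a, b):
    # The sole logic primitive (operating here on single bits, 0 or 1).
    return 1 - (a & b)

def equ_v1(a, b):
    # EQU built from NANDs alone: (a AND b) OR (NOT a AND NOT b).
    not_a, not_b = nand(a, a), nand(b, b)
    a_and_b = nand(nand(a, b), nand(a, b))
    neither = nand(nand(not_a, not_b), nand(not_a, not_b))
    return nand(nand(a_and_b, a_and_b), nand(neither, neither))  # OR of the two cases

def equ_v2(a, b):
    # A structurally different route to the exact same input-output mapping.
    return 1 - (a ^ b)

assert all(equ_v1(a, b) == equ_v2(a, b) == int(a == b)
           for a in (0, 1) for b in (0, 1))
```

Both produce identical outputs for every input, yet they are different structures. The 23 evolved lineages differed from one another, and from anything a human wrote, in just that way.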

Further, while human programmers could write an Avidian critter to perform the input-output mapping corresponding to EQU using 5 “primitive” logic functions, the 23 lineages that evolved to perform that mapping in the main condition of the Lenski, et al., study did so in 23 different ways, and in addition to EQU they performed 17 different combinations of the “primitive” functions, anywhere from four to eight of them. None of them evolved the ‘EQU’ program the human programmers wrote. That phenomenon extends to other aspects of the Avida critters. For example, running Avida with no fitness landscape, so that selection is on replication efficiency alone, evolves critters that perform self-replication in fewer instructions than any human-written program. Recall Leslie Orgel’s Second Law: Evolution is smarter than [programmers] are.

Luskin then wrote a particularly error-filled paragraph.

Pennock seemed impressed that the digital organisms “invented” many creative ways of performing EQU. But the flaw of the simulation lies therein: EQU was destined to evolve because the addition of each logic function greatly improved the fitness of the digital organism. Pre-programmed into the Avida environment were functionally advantageous digital mutations which were guaranteed to keep the digital organisms alive and doing interesting things. Thus, if one assumes that anything more than extremely minor cases of irreducible complexity never exist, then Pennock’s program show evolution can work.

There are four main problems with Luskin’s representation in those four sentences. First, “functionally advantageous mutations” were not “pre-programmed” into the Avida environment. Random mutations occurred, some of which were deleterious (in the sense of decreasing reproductive fitness) or even lethal, some were neutral, and some were advantageous. Gee. That’s just what a slew of biological research teaches us: mutations come in three basic flavors.

Second, Luskin claimed that those mutations were “… guaranteed to keep the digital organisms alive …”. That’s flatly false. Tens of thousands of digital organisms die in the course of an Avida run under the conditions Lenski, et al., used. Some die because they fail to replicate due to lethal mutations in their replication code, and some die because they’re killed – over-written – by a reproducing neighbor regardless of their advantageous mutations. Thousands of species emerge, flourish for a while, and then go extinct, and thousands of lineages go extinct. There are no guarantees at all.

Third, it is not necessary that “…the addition of each logic function greatly improve[s] the fitness …”. While the fitness landscape defined by the values assigned to the various logic functions in the Lenski, et al., study was fairly steep from simple to complex logic functions, a number of Avida runs I have done with a flatter topography produce lineages that also perform the most complex logic functions. It just takes longer because the dynamics of lineages evolving on flatter landscapes are slower. So long as there is at least some net selective advantage for performing a more complex function, more complex functions can evolve.
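
The “it just takes longer” point is ordinary population genetics. A bare-bones Wright-Fisher sketch (nothing to do with Avida’s actual machinery, with parameter values invented purely for illustration) shows the effect: a strongly advantageous variant sweeps quickly, a weakly advantageous one sweeps slowly, but both can fix so long as there is some net advantage.

```python
import random

def generations_to_fixation(s, pop_size=1000, seed=1):
    # Toy Wright-Fisher sweep: a beneficial variant with selective advantage s,
    # starting at 5% frequency; count generations until it takes over the
    # population (None if drift eliminates it first).
    rng = random.Random(seed)
    mutants = pop_size // 20
    for gen in range(1, 100_000):
        p = mutants / pop_size
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # frequency after selection
        mutants = sum(rng.random() < p_sel for _ in range(pop_size))  # binomial drift
        if mutants == 0:
            return None
        if mutants == pop_size:
            return gen
    return None

print(generations_to_fixation(0.5))    # steep advantage: fixes fast
print(generations_to_fixation(0.05))   # shallow advantage: much slower (could even be lost to drift)
```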

Finally, do we now have a distinction between micro-IC and macro-IC? Luskin’s reference to “extremely minor cases of irreducible complexity” suggests that we have to make that distinction, but where the boundary might be is not clear. Do I hear echoes of “microevolution is fine, but not macroevolution”?

I’ll note here that what one might reasonably believe to be cases of irreducible complexity, like a three-legged stool which cannot function if any of the three legs or the seat is ‘knocked out’, are no longer IC. William Dembski has recently added two new operational tests for ICness. In addition to Behe’s original knockout operational criterion, now Dembski tells us we must also determine (1) that a simpler structure cannot perform the same function, and (2) after a successful knockout one must show that no adaptation and/or rearrangement of the remaining parts can perform the original function. As Dembski tells us, that means that a three-legged stool is not IC since a solid block of wood can keep one’s butt off the ground. I have argued elsewhere that Dembski’s additional operational criteria mean the Death of Irreducible Complexity (see also here and Mark Perakh’s post here). On Dembski’s new and improved definition, not even Behe’s mousetrap is IC.

Finally, Luskin claimed that a control condition in the Lenski, et al., paper, one that employed a fitness landscape flat across all logic functions except EQU, showed that

… when there is no selective advantage until you get the final function, the final function never evolved! This accurate modeling of irreducible complexity, for there is no functional advantage until you get some to some minimal level of complexity—that’s IC in a nutshell. Yet the study found that the evolution of such a structure was impossible.

Of course they claim this is what they “expected,” but without using the words “irreducible complexity,” they just demonstrated that high levels of irreducible complexity are unevolvable.

I’ll be darned. Luskin is back to “If it can evolve incrementally by indirect routes involving intermediates that are themselves functional, it ain’t IC. If it can’t evolve, it is IC, and by the way, IC shows that it can’t evolve.” In other words, we’re back to the “We don’t know how it could have, so it couldn’t have and therefore ID” style of argument. And there’s that “high levels of irreducible complexity” phrase – we’ve got to deal with microIC and macroIC again, too. Stuff that’s got just a little bit of IC can evolve, but stuff that’s got a whole lot of IC can’t. When did irreducible complexity become a scalar variable? Luskin must be resurrecting Behe’s even more question-begging “evolutionary” definition of irreducible complexity. I guess it’s handy to have a series of alternative definitions of a core concept ready to hand.

In a way I feel sorry for Luskin. It can’t be easy writing about genuine research when you have no clue what it did and what it means. On the other hand, he has plenty of role models for that behavior at the Discovery Institute, and I have no doubt that he’ll learn fast.

RBH

3 TrackBacks

Hoppe Fisks Luskin on AVIDA from Dispatches from the Culture Wars on September 30, 2005 1:49 PM

After Rob Pennock's testimony the other day, Casey Luskin - who now works for the Discovery Institute - wrote an attempted critique of Pennock's claims concerning digital evolution. Pennock is the co-author of a paper published in Nature based on... Read More

Here's our semi-regular roundup of technobiology-related (or otherwise interesting) links: Over at The Panda's Thumb, they ran a great article a few days ago debunking creationist/ID claims about some recent artificial life work with Avida. Here are Read More

Dunford Fisks Luskin from Dispatches from the Culture Wars on October 11, 2005 10:28 AM

Casey Luskin, formerly of the IDEA club and now working for the Discovery Institute, has been busily blogging the Dover trial over the last couple weeks, posting responses to the testimony of the expert witnesses. Unfortunately for the DI, it's... Read More

63 Comments

Finally, do we now have a distinction between micro-IC and macro-IC? Luskin’s reference to “extremely minor cases of irreducible complexity” suggests that we have to make that distinction, but where the boundary might be is not clear.

Surely something is IC or it isn’t? Isn’t that the whole point of IC?

I met some of the MSU/CalTech Digital Life Lab folks at ALife9 in Boston. They’re a nice bunch of folks.

Pennock is working on a version of Avida called Avida-ED which is designed for educational use.

http://www.msu.edu/~pennock5/resear[…]vida-ED.html

If he or one of the others is reading this, let me suggest the following:

Build the Avida-ED lesson plan specifically to refute, in sequence, each claim of ID. Don’t bother pointing this out explicitly… just write the lessons using ID’s antievolution tracts as a guide. It wouldn’t be hard, since all of ID’s basic claims (irreducible complexity, no free lunch, conservation of information, etc.) are easily refuted with a system like this.

It would also result in an excellent lesson about evolution, since many of ID’s claims rely on exploiting common misunderstandings. So, by debunking them, you would be sure to hit each major misconception.

Here’s another even simpler system that refutes a lot of Dembski’s nonsense about “conservation of information”:

http://www.lecb.ncifcrf.gov/~toms/paper/ev/

Luskin wrote:

Pennock asserted on the witness stand that this study accurately modeled biological reality. Well, if biological reality was pre-programmed by an intelligence to evolve certain simple logic functions, then he’s right. Avida programmers knew that EQU was easily evolvable from the proper combination of only 5 primitive logic operations before the simulation even began. This is called “evolution by intelligent design,” because the environment seems literally pre-programmed to evolve the desired phenotype.

This is going to be the major challenge to things like Avida. In some ways, it’s valid: Avida is not a real biological system. However, to levy this as a criticism of Avida, the IDers have to radically weaken their own arguments.

Behe and Dembski both make *strong* claims regarding the *inability* of evolutionary processes to generate certain results. Behe claims that evolutionary processes cannot generate structures that exhibit what he calls irreducible complexity. Dembski claims a kind of “law of conservation of information.” Trying to pin down what Dembski actually means is like nailing jelly to the wall as he constantly changes his definitions, but my understanding is that Dembski is claiming that no system can show an increase in ordered information without an external conscious designer manually adding information to the system. More specifically, Dembski defines something called complex specified information as information that lies beyond the universal probability bound. He claims that the presence of functional CSI implies design as the only explanation.
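
(For reference, the universal probability bound Dembski typically cites is 1 in 10^150; in information terms that is about 498 bits, since 150 × log2(10) ≈ 498, which is where his oft-quoted threshold of roughly 500 bits of specified information comes from.)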

Both of these constitute claims that there exists a scientific law forbidding the evolutionary origin of X. Dembski’s is actually the strongest claim: he is essentially claiming a law of conservation. A conservation law is a *very* strong claim about the nature of the universe… perhaps one of the strongest types of claims that science can make.

When you propose a universal scientific law restricting X, what you are saying is that absolutely no system, natural or synthetic, can do X. Example: the law of conservation of energy. Neither nature nor man can produce a perpetual motion machine.

So if there is a law of conservation of information or a law prohibiting the evolution of irreducibly complex causal structures, then no system, natural or synthetic, should be able to show information increase and/or the evolution of irreducible structures.

Artificial life systems (and even some simple genetic algorithms) have demonstrated both. This flatly disproves both of these as universal laws. Case closed. It doesn’t matter that the systems are synthetic. Natural laws apply to all systems.

In order to criticize artificial life as inapplicable to ID, the IDers have to weaken their arguments to say “oh, well, I guess some evolutionary systems can show these effects, but natural systems are highly unlikely to do so if they have not been designed.” This weakens their arguments from proposals of fundamental scientific law to arguments from incredulity. It also puts a strict time limit on their arguments: from now until someone successfully shows what artificial life has shown in real chemistry. Tick, tick, tick… (I expect to see this within 10 years, if it hasn’t happened already and I don’t know about it.)

Or, it puts them in the theistic evolution camp, since one of the claims of theistic evolution is “yes, evolution might be responsible for life, but it could not have done so without a guiding intelligence.” Theistic evolution is one of the things that the Discovery Institute was contracted by Howard Ahmanson et al. to specifically refute, so their financiers are not going to be happy if they end up as theistic evolutionists.

By the way: Avida is actually only the most well-publicized piece of artificial life work that is devastating to ID– in reality there is artificial life work going back to the early 90s that demolishes ID.

How about dissing this one? Casey Luskin might want to pick his battles carefully. The volume of scientific research coming out of even a single ongoing graduate research program is enough to swamp all the junk churned out by the DI since its inception.

Scientists Uncover Rules that Govern the Rate of Protein Evolution http://pr.caltech.edu/media/Press_R[…]PR12737.html

For William Dembski’s (and Luskin’s) claims to have any merit whatsoever, they MUST be able to submit the claims to a test like this:

Given a data set (i.e. a long string) that represents a computer program (such as Avida), and the inputs to that (a population of digital organisms and a fitness landscape), measure (algorithmically) the ‘CSI’ (or whatever he asserts is ‘conserved’) in that data set.

Given a 2nd data set that also represents a computer program (Avida), and any inputs to that program (a different population of organisms, and the same fitness landscape), measure (algorithmically) the ‘CSI’ in that data set.

For any two such data sets, if the measure of ‘CSI’ in the first data set is less than the measure of ‘CSI’ in the 2nd, then that implies that the 2nd data set can’t have been produced by merely running the first data set (as a program with inputs), since that would violate the ‘law of conservation’. If Dembski’s algorithm can always distinguish the evolutionary descendants from the ancestors (because their ‘CSI’ is always less), then, hey, he’s got something. Otherwise his claims are groundless. There’s no reason, other than CSI being an empty concept that is therefore impossible to measure, that Dembski shouldn’t meet this challenge.
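
In outline the test is trivial to state; the only missing ingredient is the measurement procedure itself, which has never been published as an algorithm. Something like this, with csi() as a placeholder for whatever Dembski supplies:

```python
def conservation_holds(ancestor_bits, descendant_bits, csi):
    # 'csi' is a placeholder: any function someone supplies that maps a bit
    # string to a number.  If data produced purely by running the ancestor ever
    # measures HIGHER, the claimed conservation law is falsified; if it never
    # does, the measure would at least let us pick ancestors out of descendants.
    return csi(descendant_bits) <= csi(ancestor_bits)

# There is no published csi() function to plug in here, which is the point.
```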

Adam Ierymenko wrote

In order to criticize artificial life as inapplicable to ID, the IDers have to weaken their arguments to say “oh, well, I guess some evolutionary systems can show these effects, but natural systems are highly unlikely to do so if they have not been designed.”

Another way to put that is that the various evolutionary simulators, Avida, Tom Schneider’s ev and others, have demonstrated that the purely mechanical processes invoked by evolutionary theory (random mutations, selection, etc.) can generate all the phenomena that IDists claim demonstrate intelligent design. That pushes them to the “Well, what designed those mechanisms?” question. That’s the theistic evolution move.

But that move is anathema to the core IDists (and to their straight creationist brethren) for several reasons. First, of course, there are stochastic contingencies associated with the evolutionary process that make the goal-directedness so beloved of IDists problematic. Evolution incorporating stochastic contingencies may come up with similar-functioning outcomes (e.g., the 23 different outcomes in the Lenski, et al., study), but can’t guarantee that one particular outcome will occur. Since IDists and their creationist brethren very badly want that one outcome – human beings – to be intended by the intelligent designer, a process incorporating stochastic contingencies can’t guarantee that a desired goal will occur and is therefore suspect for them.

Second, the principal IDists, with perhaps the exception of Behe, are committed to an interventionist intelligent designer, one that intermittently pokes a finger into the process to alter its course. Pushing the source of the “information” (whatever meaning that term has for IDists this week) in the biome out into the selective environment removes the interventions from (biological) view. You can’t look at biology and “see” evidence of interventions, you have to do physics or something to find evidence for interventions. That’s a chancier move, though the cosmological IDists (the “Privileged Planeteers”) make that move. But it still suffers from the stochastic contingency problem in biology: tinkering with the physical environment does not guarantee that human beings will result: the divine tinkering might have produced an intelligent T. rex in the image of … well, of what?

========================

JY wrote

Given a data set (i.e. a long string) that represents a computer program (such as Avida), and the inputs to that (a population of digital organisms and a fitness landscape), measure (algorithmically) the ‘CSI’ (or whatever he asserts is ‘conserved’) in that data set.

Given a 2nd data set that also represents a computer program (Avida), and any inputs to that program (a different population of organisms, and the same fitness landscape), measure (algorithmically) the ‘CSI’ in that data set.

I posed a similar sort of challenge to the IDists on ARN a few months ago. Not surprisingly, none took it up (including Salvador, who didn’t find it “interesting”). I find it very telling that IDists who advertise a methodology for detecting design (IC, SC, CSI, or whatever it’s called) decline the invitation to validate and calibrate their methodology on phenomena of known provenance. It almost looks like they don’t care whether it’s reliable and valid. Does that surprise anyone? Luskin was identified in a DI press release as a “scientist”. Perhaps that is a worthy extramural project he could do as a new employee to get in good with his new bosses. Or maybe not.

RBH

Maybe I’m missing something, but what does CSI say about information on a higher level, such as human creativity? For example, if I suddenly had an inspiration and wrote a symphony this weekend, where did that symphony come from? Does Dembski claim that it was pre-specified in the environment that produced me, by some kind of designer? Or was it planted in my brain by some kind of supernatural, (perhaps undetectable “quantum”) process? If so, then how can there be Free Will, a concept which, paradoxically, most creationists would say is god-given?

RBH Wrote:

I posed a similar sort of challenge to the IDists on ARN a few months ago. Not surprisingly, none took it up (including Salvador, who didn’t find it “interesting”). I find it very telling that IDists who advertise a methodology for detecting design (IC, SC, CSI, or whatever it’s called) decline the invitation to validate and calibrate their methodology on phenomena of known provenance. It almost looks like they don’t care whether it’s reliable and valid. Does that surprise anyone? Luskin was identified in a DI press release as a “scientist”. Perhaps that is a worthy extramural project he could do as a new employee to get in good with his new bosses. Or maybe not.

I work quite a bit on computer-based artificial life systems and contribute to a blog about that and related topics (click my name :).

I’m quite certain based on what I’ve seen that they have absolutely no interest in doing this… not because they wouldn’t want to test ID, but because at least a few of them understand things well enough to know that the result would… well… not be the kind of result that they would want. Doing an experiment that shows CSI *increasing* as a result of an externally unguided evolutionary process for several possible candidate ways of measuring CSI is not going to get you a lot of praise as an “intelligent design scientist.”

Another silly thing that they say repeatedly about things like Avida is that they can’t possibly work cause they violate the second law of thermodynamics. This is very silly. As far as I know, though I cannot personally verify this, the computers that ran Lenski et al.’s experiments were consuming energy from the power grid. I would guess this to be the case, as my own artificial life work has always required something to be plugged into the wall.

Interesting blog. Thanks for the link. See my brief bio.

RBH

Another silly thing that they say repeatedly about things like Avida is that they can’t possibly work cause they violate the second law of thermodynamics.

The ones who say that haven’t been informed that they’re supposed to be using code-names these days.

What I mean is that, as you pointed out in another comment above, what they need desperately for any of these arguments to hold water is some “law of conservation.” 2LoT used to be their law of choice in the good ol’ days.

IC, CSI, etc. are just place-holders for a long-ago demolished argument.

It is both heartening and astounding to me that some of us (especially the real scientists among us - I don’t ever wish to pretend to be more than an interested layperson here) - can discuss the POSSIBILITY that the ID crowd would ever allow their claims to be challenged by ANY sort of experiment, even one done by very sympathetic, but also rigorous and ethical, scientists outside of their control. We are all now agreed on one opinion, I will daringly venture to say: the people in the DI have motives that have nothing to do with any general definition of “science” and are motivated (putting aside the $$$) all-but-entirely (I would now say, entirely, myself) by their “faith.” Science as now practised is their openly avowed enemy; one that they have shown may be attacked without regard to fact or truth by any means necessary (well, no violence is being done or called for, of course, change that to “almost” perhaps).

It is heartening that, very often, regular posters at PT and elsewhere can actually suggest that the DI and other IDers could ever, under any circumstances, allow the fair testing of the fai.. scientific claims. That we can credit them with even the smallest reasonableness speaks well for our own position and goals. It is astounding because we know fully what these people are, and yet still postulate their being somehow within the grasp of reason, or honest debate.

I still recall my own sincere (naive) postings years ago, which often echoed this sort of confusion. I really thought (not a young man any longer) that, given honestly gathered and tested science against nothing but their own fixed ideas, non-loony Creationists would perhaps still be firm in their faith but acknowledge, as the earlier scientists, deeply Christian, who faced Darwin’s challenge mostly did, the current set of unrefutable facts. That no old timer bothered to respond, “You must be new ‘round here!” still amazes me.

darwinfinch,

The intended audience is not the presuppositionalist DI/ID stalwarts whose faith defends them from evidence. It is the wider group of people who hear the DI claim to be doing “science” and wonder, but who will read and comprehend the fact that what the IDists do bears only the most superficial resemblance to what genuine scientists do.

Apologists like Luskin will not be convinced by evidence. The only reason to spend time and effort rebutting their ill-informed misrepresentations is that wider audience.

RBH

Yeah, I’ve heard that. But the limits have really been reached, haven’t they? For myself, they have.

Simply linking to any of many well-written refutations of these people, with a gentle aside or update, would do the job far better, I believe. The people at talkorigins (and elsewhere) do exactly that very successfully. If anyone is truly convinced by the DI position, they are not lurking here at PT because they wish to engage the questions raised here, but to root for their side. As the worst sort of true fans, “fanatics,” they will never admire the efforts of the opposition, nor bother to read them, much less understand them. I certainly don’t wish to dissuade others from taking on the idiocies of the full-time ID trolls who pop up here, whether for the reason you suggest, to test their own grasp of the topics, to polish their style, or to simply entertain themselves. My comments were intended as frustrated (perhaps very frustrated) praise of such people’s efforts, which they, being at least as intelligent as myself and probably often much more sensible, must realize.

Jeff, an IDist would say that the symphony had come from an intelligent designer, in this case you.

For any two such data sets, if the measure of ‘CSI’ in the first data set is less than the measure of ‘CSI’ in the 2nd, then that implies that the 2nd data set can’t have been produced by merely running the first data set (as a program with inputs), since that would violate the ‘law of conservation’. If Dembski’s algorithm can always distinguish the evolutionary descendants from the ancestors (because their ‘CSI’ is always less), then, hey, he’s got something. Otherwise his claims are groundless. There’s no reason, other than CSI being an empty concept that is therefore impossible to measure, that Dembski shouldn’t meet this challenge.

You’ve slightly misunderstood the law proposed by Dembski: he doesn’t claim no information is produced, but that any information produced would never, in the history of the universe, exceed… well, it was 150 something, I forget what.

Hiya’ll,

Thanks for the explanation. But what criteria are used to establish the boundaries for these “data sets”? Whatever they are, they are apparently able to make an “objective” distinction (if that’s possible) between “intelligence” and “non-intelligence”. In other words, as a designer, I’m not a subset of the data sets - I’m outside the system. And I could also write an evolutionary computer program and call it “intelligent”, excluding it from the “data sets”, but not the data it generates. Sounds arbitrary to me.

Hey, Adam Ierymenko,

Would you (or anyone else here) be willing to take over an argument I really botched up here: http://www.uncommondescent.com/inde[…]archives/353

I only have a dab of experience with L-systems and some general reading on genetic algorithms and evolutionary programming and I started making mistakes.

The stuff about the data sets wasn’t written by me; it was written by someone else on this board, whom I was trying to quote and then correct with the third paragraph. I forgot to put quote marks around it. The comment is Comment #50339 (I suggest you read it before you read the rest of this post, otherwise the post won’t make sense). I think the boundaries of the data sets referred to would be determined by the fact that they were different programs (the author I was quoting was talking about a simulation). In terms of deciding which data set was which in real life (distinguishing the designer from the designed), the boundaries between the two would be set in the same way we determine when two rocks aren’t the same rock, i.e. chronological and spatial dissimilarity, as determined by a rational agent (the problem of coming up with a precise method for deciding when two objects are two different objects, and not part of one object, is a venerable problem of philosophy).

As a designer, in the original quotation, you would be the first data set (I know, it’s creepy to think of yourself as a set of data). In this sense I suppose that the intelligent designer of ID might also be a data set, albeit a transfinite one if he is the Christian God, as most IDists believe.

The computer program you talked about writing would only count as intelligent (in the narrow ID sense) if it was over a certain level of complexity. A better way to describe ID than “intelligent design” is actually CD (complex designer): the designer of ID need not be intelligent in the sense of everyday parlance, only complex.

Norman Doering

I really sympathise with you. The whole ID debate gets so petty: every time an IDist or a Darwinist sees his or her opponent making a simple mistake, the whole thread degenerates into an argument over that singular little factual mistake. It’s petty; there are a lot more important issues at stake in this debate than proving that we’re more erudite than our opponents.

By the way, that last post was meant to be addressed to Jeffw, but I forgot to put it in. Sorry, it’s just I assumed there would be no more posts before I had finished writing mine, how wrong I was, how very, very wrong.

Ok, here’s a little thought experiment which I hope will help me understand this CSI “law of conservation” of complexity (or whatever it’s called).

First you write a program that generates all possible bit combinations in 4GB of memory (or whatever memory your computer has). Fairly simple algorithm, probably less than ten lines of code (although it would generate enormous output). Call it “program A”.

Then you write a program that interprets those strings as Intel 80x86 instructions. Not a trivial program, but not too difficult (I’m sure Intel uses quite a few simulators like this to test their chip logic). Call this “program B”.

Define CSI “data set” A as the union of programs A and B.

Define CSI “data set” B as the output of all data generated by passing program A’s output to program B and executing it. Note that this is the output of running all possible programs in 4GB of memory on an Intel machine (without user input).

Now would Dembski’s conservation law tell me that data set B can’t be any more complex than data set A? Data set B seems vastly more complex to me.
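
A scaled-down version of the same thought experiment (a three-instruction toy machine of my own invention instead of an Intel chip, and 8-bit “memories” instead of 4GB) can actually be run:

```python
from itertools import product

def run_toy_vm(bits, max_steps=32):
    # "Program B", shrunken: interpret pairs of bits as instructions for a
    # one-register toy machine and return the trace of register values.
    ops = [tuple(bits[i:i + 2]) for i in range(0, len(bits), 2)]
    reg, pc, trace = 0, 0, []
    for _ in range(max_steps):
        if pc >= len(ops):
            break
        op = ops[pc]
        if op == (0, 0):
            reg += 1            # INC
        elif op == (0, 1):
            reg -= 1            # DEC
        elif op == (1, 0):
            reg *= 2            # DBL
        else:
            pc = 0              # JMP back to the start (bounded by max_steps)
            continue
        pc += 1
        trace.append(reg)
    return trace

# "Program A", shrunken: enumerate every 8-bit combination and feed it to the VM.
data_set_b = [run_toy_vm(bits) for bits in product((0, 1), repeat=8)]
print(len(data_set_b), "traces from a generator/interpreter only a few lines long")
```

Whether “data set B” contains more of whatever Dembski means by CSI than the few lines that generated it is exactly the question his measure would have to answer.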

jeffw wrote: “… write a program that generates all possible bit combinations in 4GB of memory …”

Not practically possible due to limitations of data storage in our universe. There are more bit combinations possible than there are electrons in the universe.

Remember the mathematical argument against 6 monkeys typing for a billion years ever writing Hamlet. (Or find it on the net if you don’t remember it.)

“jeffw wrote: “… write a program that generates all possible bit combinations in 4GB of memory …” Not practically possible due to limitations of data storage in our universe. There are more bit combinations possible than there are electrons in the universe.”

Well, this is a theoretical thought experiment so we don’t consider practicality, but actually it is even possible in practice, at least as far as storage goes. You just write some nested loops which generate each string combination, and then you pass it to program B before generating the next one, so nothing ever needs to be stored. It would certainly take an impractical amount of time, but again, this is a thought experiment. Practicality has nothing to do with it.

jeffw wrote: “… we don’t consider practicality … this is a thought experiment. Practicality has nothing to do with it.”

I don’t like that sort of angels-dancing-on-a-pin argument.

However, I think judging the complexity of programs A and B might be achieved with Kolmogorov’s ideas about “algorithmic information theory” and “algorithmic complexity.”

http://en.wikipedia.org/wiki/Algori[…]ation_theory
http://en.wikipedia.org/wiki/Andrey[…]h_Kolmogorov
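
Kolmogorov complexity itself is uncomputable, but a crude and commonly used stand-in is compressed length, which at least gives you a number to argue about:

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    # Crude upper bound on Kolmogorov complexity: length after compression.
    return len(zlib.compress(data, 9))

print(approx_complexity(b"A" * 1000))             # highly regular: compresses to almost nothing
print(approx_complexity(bytes(range(256)) * 4))   # structured, but less repetitive
print(approx_complexity(os.urandom(1000)))        # essentially incompressible, near 1000
```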

Norman, counting the number of angels on the head of a pin is very, very important.

“jeffw wrote: “… we don’t consider practicality … this is a thought experiment. Practicality has nothing to do with it.” I don’t like that sort of angels-dancing-on-a-pin argument.”

Then scale it down to a level that you feel comfortable with. Instead of 4GB, use 1Meg, 1K, or 128 bytes.

The main question posed by my example is this: Is a Turing machine + the “description” of a set of all possible programs for that machine (not the set itself) more or less “complex” than the output generated by the set of all the described programs?

I’m not sure that Kolmogorov complexity is useful here. All you have to do is change the description of the set of programs from 1K to “4GB”, and you will increase the functional complexity of CSI Set “B” enormously (factorially?), while the length of CSI Set A increases by only a few bytes.

jeffw wrote: “… Then scale it down to a level that you feel comfortable with. Instead of 4GB, use 1Meg, 1K, or 128 bytes.”

That changes everything. You asked: “Is a Turing machine + the “description” of a set of all possible programs for that machine (not the set itself), more or less ‘complex’ than the output generated by the set of all the described programs?”

I haven’t got a clue. I need a way to actually measure complexity and I can’t really figure out how to do that. That’s why I hate this kind of angels-on-pins argument.

What follows is my last post on Dembski’s site. I had the same problem with coming up with real answers, so I just reject the argument and find a different one: ———

Gumpngreen wrote: “Dembski considers your primary objection to be what he calls a ‘gatekeeper’ objection.”

I suppose it is. I don’t think, based on my limited reading, that ID qualifies as science. I don’t care about Karl Popper or other philosophical arguments because I have my own intuitive “science detector.” It works this way: Real science engages the real world whenever it can. Miller and Urey engage the chemicals of life, fossil hunters engage fossils, programmers write genetic algorithms…

Dembski’s ideas might be used to engage the real world in other ways, but they are not being used to do so.

For example, if Dembski can really detect specified complexity then he should get some people to go out into the real world and actually measure the amount of specified complexity in, perhaps, animal communications. There is a controversy about whether dolphins have a language:

Engage that controversy: do dolphins have a language? Shouldn’t Dembski’s concepts have a value there? http://www.dauphinlibre.be/langintro.htm

Compare the specified complexity of dolphin language, bird songs, whales, octopi, etc. Make it at least a real scalar value (if not a multidimensional one) by testing the concept against the real world. If you don’t, you’re arguing about how many angels can dance on the head of a pin. (I sketch at the end of this comment what even a crude scalar measure might look like.)

Once you do that the real world will challenge your ideas with its reality, just like a good theory about mitochondrial DNA can get shot down by one little fact.

Gumpngreen wrote: “…objections are made in attempts to find fault with design because of the threat that design is claimed to pose to ‘science’…”

It is a threat to science if done the way it currently seems to be done, by fighting court battles because of religious motivations.

Gumpngreen wrote: “… philosophies improperly equated with being science.”

My philosophy is simple: engage the real world and stop sounding like you’re arguing about angels dancing on pins.

Gumpngreen wrote: “These objections are not made because the theoretical or empirical case for design is scientificially substandard.”

A lot of scientists say it is substandard and I’m inclined to take their word for it because it agrees with my own subjective evaluation.

Gumpngreen wrote: “I suggest you try reading Dembski’s books before you attempt further critiques.”

I should, you’re right. But I’m not that motivated to. In the end my opinion will not matter.

Before I take more interest in ID than I do now, I have to see it engage the real world. I have to see scientists using it on something other than a negative argument against evolution.
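
To make the “real scalar value” suggestion above concrete, here is the sort of thing I mean: Shannon entropy per symbol, which is emphatically not Dembski’s specified complexity, just an example of a quantity you can actually compute from coded recordings of signals and compare across species:

```python
import math
from collections import Counter

def entropy_per_symbol(sequence):
    # Shannon entropy in bits per symbol of a sequence of discrete signal units.
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical coded recordings: each letter stands for one classified call type.
print(entropy_per_symbol("ABABABABAB"))        # low: highly repetitive signalling
print(entropy_per_symbol("ABCDEFGHIJKLMNOP"))  # high: every unit distinct
```

Whatever Dembski’s measure is, it would have to be at least that concrete before anyone could apply it to dolphins, bird song, or anything else.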

“Before I take more interest in ID than I do now, I have to see it engage the real world. I have to see scientists using it on something other than a negative argument against evolution.”

My theory of ID has it as neither a negative argument nor a theory. Basically I think it is the consequence of one theory and one observation. The theory is that all complex objects have a designer; the observation is that the universe, and life, is a complex thing. This leads to the syllogism:

1- All complex things are designed (theory)
2- The universe is a complex thing (observation)
3- Therefore the universe is designed

The main points at which my idea can be criticised are as follows

1- We have no evidence for (1) on a cosmic scale. (Hume’s arguments against the teleological argument are relevant at this point.)
2- Darwinian evolution (and/or the anthropic principle) makes an exception to (1).

I would argue against (1) that we have no reason not to expand the inference, and clearly we would accept it in many expanded forms (i.e. the SETI analogy), and I would say to (2): time will tell. That’s what’s so exciting about evolutionary simulations: the whole thing is going to be settled beyond reasonable doubt one way or another soon. Once the first extremely complex AI lifeforms have evolved, under conditions which rule out the possibility that information has been frontloaded into the system, then Selectionist Evolution wins; alternately, if they don’t evolve…

Hiya’ll wrote: “… This leads to the syllogism… 1- All complex things are designed (theory)…”

I don’t buy that first item as it is worded. A rock you find in a forest is complex (at least in the sense of having lots of different atoms and molecules and crystal structures - maybe even a fossil); it even contains a lot of “specified” information about its history if you know how to interpret it. But does anyone think it’s designed? – I suppose an IDer might. Is a rock designed to give us information about the past?

Norman Doering

The rock isn’t complex enough to qualify under (1). How do we define “complex enough”? I don’t know, but we have to assume there’s a definition somewhere because the idea of “complex enough” works so well in practice (i.e. archeology and the Search for Extraterrestrial Intelligence, to use the usual analogies). Maybe Dembski’s filter is applicable; I don’t know whether it works in practice or not. My interests are philosophy and psychology, and I don’t know a lot about maths. I am under the impression that a bunch of prestigious maths guys think that, at least in certain circumstances, it’s a good idea (Dembski’s design inference book was published by Cambridge, after all). But I really don’t have a clue whether or not you could apply it to biology.

Quoth Norman Doering: “A rock you find in a forest is complex (at least in the sense of having lots of different atoms and molecules and crystal structures - maybe even a fossil); it even contains a lot of “specified” information about its history if you know how to interpret it. But does anyone think it’s designed? – I suppose an IDer might.” (Emphasis mine)

But that’s sort of the point isn’t it? The ID’er doesn’t, but should. Then again I never really thought the complexity argument helped creatio… I mean ID much.

Edin Najetovic wrote: “But that’s sort of the point isn’t it?”

Yes, but go here: http://www.uncommondescent.com/inde[…]archives/353

And make that point and see how far you get. There is an unbridgeable chasm between the conceptions of the world.

Re “That’s like saying weather simulations are irrelevant to predicting the weather.” Sometimes they are. ;) LOL

They’re good enough to keep the weather channel in business ;)

Sometimes they are. ;) LOL

You are confusing “relevant” with “accurate”.

Re “That’s like saying weather simulations are irrelevant to predicting the weather.”

Sometimes they are. ;) LOL

But I notice that when the computer models predict a hurricane is going to approach them, people move out of the way. Quickly. ;>

About this Entry

This page contains a single entry by Richard B. Hoppe published on September 30, 2005 1:35 PM.

