Unacknowledged Errors in “Unacknowledged Costs”


Back over the summer, William Dembski was talking up “Baylor’s Evolutionary Informatics Laboratory”, and one of the features there was a PDF of an essay critiquing Tom Schneider’s “ev” evolutionary computation program. Titled “Unacknowledged Information Costs in Evolutionary Computing”, the essay by Robert J. Marks and William A. Dembski made some pretty stunning claims about the “ev” program. Among them, it claimed that blind search was a more effective strategy than evolutionary computation for the problem at hand, and that the search structure in place was responsible for most of the information resulting from the program. The essay was pitched as being “in review”, publication unspecified. Dembski also made much of the fact that Tom Schneider had not, at that point, posted a response to the essay.

There are some things that Marks and Dembski did right, and others that were botched. Where they got it right was in posting the scripts that they used to come up with data for their conclusions, and in removing the paper from the “evolutionaryinformatics.org” site on notification of the errors. The posting of scripts allowed others to figure out where they got it wrong. What is surprising is just how trivial the error was, and how poor the scrutiny must have been to let things get to this point.

Now what remains to be seen is whether, in any future iteration of their paper, they bother to do the scholarly thing and acknowledge both the errors and those who brought the errors to their attention. Dembski at least has an exceedingly poor track record on this score, despite having written that one benefit of releasing materials online is that critics can help improve them. While Dembski has occasionally taken a clue from a critic, it is rather rarer that one sees Dembski acknowledge his debt to a critic.

In the current case, Marks and Dembski owe a debt to Tom Schneider, “After the Bar Closes” regular “2ndclass”, and “Good Math, Bad Math” commenter David vun Kannon. Schneider worked from properties of the “ev” simulation itself to demonstrate that the numbers in the Marks and Dembski critique cannot possibly be correct. “2ndclass” made a project out of examining the Matlab script provided with the Marks and Dembski paper to find the source of the bogus data used to form the conclusions of Marks and Dembski. vun Kannon suggested an easy way to use the Java version of “ev” to quickly check the claims by Marks and Dembski.

(Also posted at the Austringer)

The “Unacknowledged Costs” paper was originally hosted on a web server at Baylor University as part of what Dembski called “Baylor’s Evolutionary Informatics Laboratory”. Since Baylor had no official link to the “Evolutionary Informatics Laboratory”, Baylor pulled the plug on the site. There seem to be many different off-campus servers that now host the content formerly on the Baylor server, but the confusion over web-hosting had at least one effect: there was a period when the PDF of the essay was readily accessible, but the associated Matlab script links were broken. I recall looking at the essay some time back, thinking that the conclusions seemed obviously bogus, but being stymied by the lack of access to the Matlab scripts.

“After the Bar Closes” commenter “2ndclass”, though, got the Matlab scripts and proceeded to do some analysis. Here at Panda’s Thumb and at “Good Math, Bad Math”, he hinted that there was a basic error in the code resulting in the odd numbers that Marks and Dembski had come up with. He also sent me email around that time, and we corresponded. “2ndclass” did not have Matlab to work with (a license costs several hundred dollars), so he had translated the program into C# to do some work on it. I pointed to the GNU Octave software, which is largely Matlab-compatible, and “2ndclass” was then able to work with both his translation and the original scripts. I made some comments on obtaining a better figure for the “p_s” value that is at the basis of the critique made in the Marks and Dembski essay, and “2ndclass” took that to the next level.

There are, it turns out, two major errors identified by “2ndclass” in the original Matlab script from Marks and Dembski, both of them associated with forming a vector “t” representing whether an offset within an “ev” genome should be “recognized” or not. Errors can be either false positives (“recognized” when it should not be) or false negatives (unrecognized when it should be). In the code from Marks and Dembski, they were obviously trying to set up a situation where 16 targets were to be recognized, and the remainder of the 131 offsets in the genome were to remain unrecognized. The first error has to do with initializing the “t” variable. Matlab has functions that will fill a vector or matrix with zeros or ones, unsurprisingly called “zeros” and “ones”. “t” was set to be all ones, and then a for-loop set each of the 16 target offsets to one. In order for that to work, “t” should have either been initialized to zeros first, or the for-loop should have set the target sites as zero (which would have entailed an inversion of the logic elsewhere, so that appears not to have been the intent). The second major error was that the “t” and “targ” vectors were incorrectly specified, leading to a row-vs-column mismatch; only the first index in “t” could possibly be altered by values in “targ”. The upshot was that the Marks and Dembski script happily churned out copious quantities of numbers that had nothing at all to do with the actual situation in the “ev” program. It’s a classic case of “Garbage in, garbage out”. Correcting these defects produced estimates of “p_s” many, many orders of magnitude smaller than the uncorrected script, obviating the claims that blind search was more effective than evolutionary computation and that “ev”’s evolutionary computation contributed only a small fraction of the bits seen in the results. [“2ndclass” notes in the comments here that the Marks and Dembski use of a 131-base genome is also an error; “ev” uses 256 bases in its genome. This, too, makes “p_s” look larger than it really is. Chalk up yet another point on which M&D are indebted to critics. – WRE]
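For readers who want to see the shape of the problem, here is a minimal Octave/Matlab sketch of the sort of construction described above. The variable names “t” and “targ” follow the description of the script, but the lines themselves are illustrative rather than a copy of Histogram.m, and the particular target offsets are made up for the example.

    % Illustrative sketch only -- not the actual Histogram.m code.
    % "t" marks which of the 131 genome offsets count as recognized targets.
    targ = 10:8:130;                 % 16 hypothetical target offsets, not ev's real ones

    % The buggy pattern: t starts as all ones, so a loop setting the 16
    % targets to one changes nothing, and every offset ends up flagged.
    t = ones(131, 1);
    for i = 1:length(targ)
      t(targ(i)) = 1;                % a no-op; t was already all ones
    end

    % The fix: initialize to zeros (and keep t and targ in compatible shapes).
    t = zeros(131, 1);
    t(targ) = 1;                     % exactly 16 ones, 115 zeros

    % A one-line spot check that exposes the difference immediately:
    sum(t)                           % 16 for the fixed vector, 131 for the buggy one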

One of the advantages of Matlab for developing scripts such as the one used by Marks and Dembski is that it is an interpreted language, and the variables used in the script can be examined in the environment after the script runs, or via use of the debugger. Matlab makes it easy to check intermediate results. However easy the tools make checking results, though, if a programmer isn’t interested in accuracy, then even easy checks (“Hmmm, maybe I should see what t loaded with values from targ actually looks like…”) will be ignored. And that is a large issue here. Why would Marks and Dembski, given a preliminary set of numbers at odds with much existing research, fail to scrutinize how they obtained those numbers? At least, the question is interesting when applied to Robert Marks, who does have a serious record of scholarship in computer science. Given the way they close their paper, there is more than a slight amount of irony:

There is a wider lesson to be learned from this case study. To maintain its integrity, the field of evolutionary computing needs to institute the following control: all computer simulations of evolutionary search must make explicit (1) a numerical measure of the inherent difficulty of the problem to be solved (i.e. the endogenous information) and (2) a numerical measure of the information contributed by the search structure toward solving the problem (i.e., the active information). This control acts as a safeguard against inflated claims for the information-generating power of evolutionary algorithms.

What controls might have kept Marks and Dembski from making inflated claims of their own remain unspecified.

“2ndclass” went first class, though, in giving Robert Marks the first chance to deal with the obviously erroneous claims made in the essay. Here is his email to Marks, released by permission:

Date: Wed, 26 Sep 2007 10:08:19 -0600
To: “Marks, Robert J.”
Subject: Serious error in the paper, “Unacknowledged Information Costs”

Dr. Marks,

A few months ago I asked you some questions about your work in the Evolutionary Informatics Lab, and you graciously took the time to respond, which I appreciate.

The point of this email is to give you a heads up on a major problem in your response to Tom Schneider’s ev paper. The upshot is that your empirically-determined value for p_S is off by many orders of magnitude, and the cause for the discrepancy is a bug in Histogram.m. I’m telling you this in advance of the problem being reported by ID opponents, in case you would like to report the problem yourself first.

I have sent the following summary to others for verification, and I expect the problem to be reported shortly. Furthermore, there are easy ways to test your reported value for p_S that demonstrate it to be wrong, so I expect that others will eventually take notice. For instance, Tom Schneider’s site has a java GUI version of ev that allows you to change parameters such as population size. Changing the population size to 4390 should result in 10 perfect initial organisms if p_S = 1/439, but it doesn’t, even if you tweak the code so the target is the same as Schneider’s.

Obviously, this problem doesn’t invalidate the work of the EIL. The objective in reporting this problem will be to show that nobody has carefully read the paper in question, not even the ID proponents at Uncommon Descent who are making grandiose claims about the EIL.

Regards, *************

*** START OF SUMMARY ***

This is a quick response to a paper by William Dembski and Robert Marks, “Unacknowledged Information Costs in Evolutionary Computing: A Case Study on the Evolution of Nucleotide Binding Sites,” which is a response to Tom Schneider’s paper, “Evolution of Biological Information.” I explain why their empirically-determined value for p_S cannot possibly be correct, I point out the bug that led to their erroneous value, and I show how to come up with a better estimate.

D&M’s reported value of .00228 for p_S should raise questions in the minds of readers. Why would a targeted evolutionary algorithm require 100 times as many queries as random sampling? (See the last paragraph in section 1 of D&M’s paper.) Why would random sampling be able to find the target in 439 queries while parallel random walks, starting at the target, are unable to find the target again in 64000 queries? (See the random walk w/o selection starting at generation 1000 in Figure 2a in Schneider’s paper.)

But the surest evidence of a problem is Figure 3 in D&M’s paper, which shows that the perceptron heavily favors vectors with very few binding sites or very few non-binding sites. D&M legitimately use this fact to explain why p_S is higher than 2^-131, but the same fact also puts a limit on how high p_S can be. For example, if every vector with 15 binding sites is more probable than any vector with 16 binding sites, then p_S for a vector with 16 binding sites can’t possibly be 1/439. There are {131 \choose 15} = 2e19 unique vectors with 15 binding sites, and it’s impossible for each of those vectors to have a probability of more than 1/439. p_S has to be lower than 2e-19 if every vector with 15 binding sites has a probability higher than p_S.

To find the actual value of p_S, we first look at the bug that resulted in the erroneous value. If we look at the vector t after running Histogram.m, we see that it consists of all 1’s, while the obvious intent of lines 25-27 is that it contain 1’s only at the targeted binding sites and zeros elsewhere. The problem is obviously in line 24. (In Octave, there is also a problem in constructing the vertical t vector with a horizontal targ vector. I don’t know if this is also the case for MATLAB.) Once these problems are fixed and we insure that t is constructed correctly, the resulting histogram shows a p_S of zero even if we run a billion generations, so it would seem that p_S is too small to be determined empirically.

But it can be roughly determined indirectly. The histogram of errors in Figure 2 of D&M’s paper is useful in that it accurately represents the case in which every site is a targeted binding site. By symmetry, this is equivalent to the histogram for the case in which there are NO targeted binding sites. In the latter case, every positive is a false positive, so the number of errors is the number of binding sites recognized by the perceptron. From the histogram, it appears that about .6% of the queries yield vectors with 16 binding sites, which can be verified by looking at histt(17) after running the script. There are {131 \choose 16} = 1.4e20 unique vectors with 16 binding sites, so the average p_S for these vectors is .006/1.4e20 = 4e-23.

IF p_S for Schneider’s target vector is about average for all vectors with 16 binding sites, then it should be close to 4e-23. That’s a big IF, but it’s a premise that can be tested in a scaled-down problem. I scaled the problem down to 32 potential binding sites, ran a billion random queries, and looked at the counts for only the outcomes that had exactly 4 binding sites. Upon looking at these vectors, sorted in order of frequency, a clear pattern emerged: The vectors in which the binding sites overlap significantly were found at both extremes (which differed by less than an order of magnitude), while the vectors with little or no overlap were found in the middle, very close to the average frequency. Assuming that this pattern holds when the problem isn’t scaled down, p_S for Schneider’s non-overlapping target vector should be pretty close to the average p_S of all vectors with 16 binding sites, which is 4e-23, or 2^-74.

Even if this estimate is off by a large margin, the true p_S is obviously much smaller than the value reported by D&M, which invalidates some of D&M’s conclusions and leaves others intact. The conclusion that Schneider’s evolutionary algorithm is less efficient than blind search is false. The fact remains that the perceptron introduces “active information”, which I find insightful, although the active information is much less than D&M claim (if my estimate for p_S is in the right ballpark, 57 bits as opposed to 122 bits).

*** END OF SUMMARY ***
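The arithmetic in that summary is easy to double-check. Here is a rough Octave back-of-the-envelope version using the rounded numbers quoted in the email; the 0.006 figure is the one read off Figure 2 of the Marks and Dembski paper, as described above.

    % Rough check of the numbers in the summary above (Octave).
    n15 = nchoosek(131, 15);       % ~2e19 vectors with 15 binding sites
    n16 = nchoosek(131, 16);       % ~1.4e20 vectors with 16 binding sites
    frac16 = 0.006;                % fraction of random queries yielding 16 sites (per Figure 2)
    pS_avg = frac16 / n16          % ~4e-23, the average p_S over 16-site vectors
    bits = -log2(pS_avg)           % ~74 bits, versus the ~8.8 bits implied by p_S = 1/439
    active = 131 - bits            % ~57 bits of "active information", not the 122 claimed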

The PDF of the essay was taken down from the “evolutionaryinformatics.org” site shortly thereafter. However, other than that amount of reaction to the news that the essay’s conclusions have a basis somewhat less stable than Birnam Wood, Marks has not taken advantage of the opportunity provided by “2ndclass” to publicly retract the findings.

Tom Schneider’s critique of the criticism, though, appeared to have less impact on Marks and Dembski. Schneider responded to various criticisms on August 3rd and 4th. Back on August 12, 2007, Schneider had this response on his “ev” blog:

2007 Aug 12. In “Unacknowledged Information Costs in Evolutionary Computing: A Case Study on the Evolution of Nucleotide Binding Sites” Dembski claims that “Using repeated random sampling of the perceptron, inversion is therefore expected in 1/pS = 439 queries.” That is, according to Dembski random generation of sequences should give a solution in which there exists a creature with zero mistakes more quickly than the Ev program running with selection on. If this were true, then random generation of genomes would be more efficient than natural selection, which takes 675 generations for Rs to exceed Rf in the standard run. While anyone familiar with natural selection will immediately sense something wrong with Dembski’s numbers, it is reasonable to test his ‘hypothesis’. For the discussion of 2007 Aug 03 I used the running Ev program and counted each generation. This was a little unfair because there was only one mutation per generation. This might make a tighter Rs distribution around zero bits because each generation is similar to the last one. Thus the estimate may be too high. A more fair test is to generate a completely new random genome each time. What is the distribution? For 100 independent generations, the mean was -0.02114 bits and the standard deviation was 0.26227 bits. This is essentially the same standard deviation as before! so the probability of getting 4 bits is, again, 4/0.26227 = 1.11x10^-16. Considering 439 queries as 3 orders of magnitude, Dembski’s estimate is off by about 13 orders of magnitude.

While it is good that Marks and Dembski have begun the process of removing false claims about “ev” from their websites, that process is still incomplete, and leaves other commentary standing. For example, another paper still available on their website relies upon the bogus “ev” critique in making claims:

Using the information rich structure of ev, a random query will thus generate a successful result with the relatively large probability of pS = 0.00228. Interestingly, ev uses an evolutionary program to find the target solution with a smaller probability than is achievable with random search. Ev therefore takes poor advantage of the underlying search structure, induce negative active information with respect to it. With the adoption of the method sketched in this paper for assessing the information costs incurred by search algorithms, such improper claims for the power of evolutionary search would find their way into the literature less often.

As Schneider notes, Dembski has been criticizing Schneider’s work and person publicly based on the bogus data from the busted Matlab script. The process will not be complete until an effective retraction of the claims is made such that the people who heard the false information from Marks and Dembski are likely to also have heard of its demise, and any resulting publication notes the problem in the first attempt and credits those who put in the effort that Marks and Dembski did not.

51 Comments

I am sure that we will get a full and complete apology, Dembski style, and that all those involved will receive proper acknowledgement when the article is finally published in the DI newsletter.

But seriously, if all of the flawed creationist arguments were retracted when their errors were pointed out to them, there would be none left. Oh well, at least maybe now Dembski will quit whining that mathematicians don’t pay any attention to him.

Of course, this also raises the possibility that the “errors” were deliberate and that they were hoping that no one would notice. This is a common creationist tactic. After all, as Leonard McCoy once said: “I find that evil usually wins, unless good is very very careful”. There is no way to prove intention here, but the reaction to criticism may be at least an indicator of the original intent.

David Stanton:

After all, as Leonard McCoy once said: “I find that evil usually wins, unless good is very very careful”.

From The Omega Glory, where Kirk explains to the Yangs that they have lost the meaning of the Constitution. Rather appropriate considering the creationists’ repeated attempts to bypass the First Amendment.

Over at the Dembski-boosting Pleasurian blog, one may find this thread, useful for the graphic image showing that 1/3 of the EIL’s output is now self-retracted. There’s also a chiding comment.

William Brookfield Wrote:

Richard Wein, Mark Perakh, Jeff Shallit, Wesley Elsberry, Cosma Shalizi, Olle Haggstrom, David Wolpert, Tom Schneider.

These individuals are ID critics who have in the past promptly provided mathematical critiques of Dembski work. After numerous searches I have thus far been unable to find a critique anywhere. If anyone comes across a critique of any of Dembski’s three new papers please let me know, thanks. 8/31/2007 02:14:00 PM

I entered a comment earlier today there:

The “Unacknowledged Costs” paper critiquing Schneider’s “ev” has unacknowledged errors. The conclusions in another of the essays are tainted by reliance on the bogus numbers in the “ev” critique.

Tom Schneider had responses up back near the beginning of August in his usual place for such responses. It doesn’t seem that Brookfield’s search could be described as assiduous.

I’ve been awaiting substantive replies from Dembski on several of my critiques for years. Will Brookfield draw a conclusion about Dembski from that datum?

Wesley R. Elsberry

Let’s see how long it sits in the moderation queue… that either will be a pleasant surprise if short, or should make a nice counterpoint to the criticism on promptness.

Of course, this also raises the possibility that the “errors” were deliberate and that they were hoping that no one would notice.

I think that the posting of the Matlab scripts is evidence against that. If they were aware of the error there, it seems odd that they would provide the scripts for public scrutiny.

No, I think that they simply were unmotivated to thoroughly check their script’s validity; after all, it was producing numbers that they liked. And when I say “they”, I primarily mean Robert Marks. I suspect that it is Marks who has the Matlab skills, such as they are, of the pair.

There is no way to prove intention here, but the reaction to criticism may be at least an indicator of the original intent.

Yes, absolutely. And in this regard, Marks’s prompt removal of the essay from one website shows far better responsiveness than we have seen from a major IDC advocate in many a year. But that is a ridiculously low bar. That’s why I’ve noted what steps would actually need to be taken for an error of this sort.

Which document were you quoting to show that references to the faulty conclusions were still in circulation? I know they are in the slides of a keynote presentation, which is where they first caught my eye.

I look forward to seeing a revised and improved paper.

This one, which is still prominently listed on “evolutionaryinformatics.org” as a “Publication”, though with the parenthetic notice of “(in review)”.

David, I think that you may be in for a long wait. The entire premise of the “ev” critique evaporates if one uses a value for “p_s” that is within a centi-dembski (a “dembski” here being an error of 65 orders of magnitude) of the actual value: evolutionary computation is more effective for the “ev” scenario than blind search, and contributes a substantial fraction of the information seen in the result. An honest revision would not support Dembski’s agenda, and therefore I am not expecting Dembski to ever publish a revised version. You may color me surprised if it happens.

Of course, looking at another of the papers that is listed on the EIL site, I noticed that it includes a clear error that I informed Dembski of long ago.

In fact, today is the seventh anniversary of the unregarded notification of that error. This is the standard for unacknowledged errors that Dembski has set. Time will tell as to whether Robert Marks will be an apt pupil…

Happy Anniversary, Wesley.

As Dembski misunderstood Dawkins’ WEASEL problem, he and Marks also misunderstood Schneider’s ev. They assumed that only the last 131 bases were candidates for recognition. If you run the java version of ev (downloadable from Schneider’s site) and you turn off selection, it’s immediately apparent that this assumption is false. All bases in the sequence are subject to recognition, so M&D’s error histogram should go from 0 to 256, not 0 to 131.

IOW, they have more problems to fix than just the bugs in the scripts.

These errors in D&M’s work remind me of an incident many years ago when an irate student wanted to prove to me that his answer on a physics exam was correct and not the complete nonsense anyone looking at it would recognize.

He punched numbers into his calculator (this was an early TI calculator that held only four pending operations) and proceeded to show me that his calculator gave the “correct” answer and that I was wrong for grading him off for his answer.

That the answer itself was obvious nonsense (an impossible number given the problem being solved) was not sufficient to alert this student that something was not right.

Using a calculator (or computer program) without knowing what needs to be considered and checked is inexcusable, especially if major claims are being based on computational results. This is so basic that it should never be overlooked. Results should be checked using data that can be calculated by independent means.

One of the characteristics I have seen in Dembski’s work over the years is his complete lack of comprehension of the underlying physics of what he is attempting to do. He apparently has no sense whatsoever of what makes sense and what is nonsense. He simply takes the answer that satisfies him and seems to think it must be right because it is what his religion tells him is correct.

Thanks for the link, Wesley.

Wrt the chance of seeing an improved paper, color me hopelessly optimistic. Bob Marks has done solid work in the past; I can only wish for him that he avoids the Behe swoon in output.

I commented on UD (before being banned (thanks, DaveScot!)) that Dembski’s research approach should be “what evolution can’t do”, viz. a careful analysis of what resources an evolutionary algorithm needs to solve certain problems. Does speciation need the concept of space and isolation? What does it take to evolve sex? Then show that biological reality falls into that class. If you can’t evolve sex, then you can argue “Male and female He created them.”

In re “ev”, they should give up attacking the evolutionary algorithm. Attack the idea of a perceptron adequately modeling reality, if anything. Marks is a NN expert, they should have brought those insights to bear, not his Matlab skills.

Wait a minute. If a Dembski is an error of 65 orders of magnitude, wouldn’t a centiDembski be an error of 63 orders of magnitude, since it’d take a hundred of them to make a Dembski? Perhaps, since a Dembski is an exponential value, a centiDembski is an error of only 0.65 orders of magnitude (a factor of 4.47 or so). Still pretty big, but that’d make a milliDembski an error of just 16%, and a microDembski would be an error of just 0.015%, getting us into the realm of respectability (depending on one’s field, of course).
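For anyone who wants to check that scaling, the arithmetic behind the exponential reading is a one-minute exercise in Octave:

    % The exponential reading of the unit, spelled out.
    orders = 65;                    % one Dembski, in orders of magnitude
    centiDembski = 10^(orders/100)  % ~4.47, i.e. 0.65 orders of magnitude
    milliDembski = 10^(orders/1000) % ~1.16, i.e. an error of about 16%
    microDembski = 10^(orders/1e6)  % ~1.00015, i.e. about 0.015%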

While I agree that Dembski’s work is mathematically and scientifically vacuous, I sometimes wonder whether the best tack is to simply ignore the man. Every time he spews some junk mathematics, it gets a lot more attention than the idea merits.

In short, do these constant debunkings of Dembski’s work help or hurt the cause of good science?

Perhaps, since a Dembski is an exponential value, a centiDembski is an error of only 0.65 orders of magnitude (a factor of 4.47 or so).

That was my intent, but as you note there is some ambiguity to the bare term.

In short, do these constant debunkings of Dembski’s work help or hurt the cause of good science?

We’re in a fix, all right. If Dembski’s ideas are given the attention that they deserve, then Dembski is free to claim, as he all too often does anyway, that it means that his brilliance has baffled his critics into saying nothing. If we give them more attention than they deserve, that is, some attention, then Dembski is free to claim, as he inconsistently does anyway, that critical engagement means that he is on to something big. Richard Wein has already covered this territory nicely in his introduction to his critique of “No Free Lunch”.

I think that when dealing with fringe ideas like IDC that it is important that technical critiques are available for the readers, who may be uncommitted or only partly predisposed to agree with the anti-science side. The professional spin-masters are going to say whatever looks good anyway, but reasonably astute people will be able to see and evaluate the technical criticism for themselves, and not just a dismissal based on an argument from authority. I think that is part of what Randy Olson’s “Flock of Dodos” film was trying to get across, that the good character of scientific discourse is something we throw away whenever a critic simply says that someone is wrong without pointing out an explicit discussion that demonstrates the error.

One of the characteristics I have seen in Dembski’s work over the years is his complete lack of comprehension of the underlying physics of what he is attempting to do. He apparently has no sense whatsoever of what makes sense and what is nonsense. He simply takes the answer that satisfies him and seems to think it must be right because it is what his religion tells him is correct.

Yes, this point is pretty well understood - in order to do a basic sanity check, there’s one characteristic you absolutely must possess. As I read it, the reason M&D did NOT see the manifest nonsense of their result, is that they used their religion, rather than their math, as the context within which the result should make sense. And in that context, it DID make sense. One can only wonder how many runs, using different data and different sets of errors, M&D tried before they got “sensible” religious results.

I’ll repeat here a comment I left on The Austringer:

D&M’s reported value of .00228 for p_S should raise questions in the minds of readers. Why would a targeted evolutionary algorithm require 100 times as many queries as random sampling?

Why indeed? This is symptomatic of people running a “research” program that has a pre-defined desired outcome. When a counter-intuitive (to the rest of the world) result is observed that’s consistent with the preconception, that’s a clear signal to look very very closely at the model. In 15 years of modeling market systems with evolutionary algorithms I’ve run into a number of such outcomes — results consistent with what I wanted that were too good to be true. In every instance so far they were due to a bug in the model. It’s when data most closely agree with one’s preconceptions that they require the closest scrutiny.

It’s also often symptomatic of ignorance on the part of the people doing the “research.” Not knowing the substantive domain of inquiry (in this case, evolutionary models), they can’t detect when a result is at least superficially weird and therefore requires close examination.

RBH

Wait a minute. If a Dembski is an error of 65 orders of magnitude, wouldn’t a centiDembski be an error of 63 orders of magnitude, since it’d take a hundred of them to make a Dembski?

Perhaps, as Erik Tellgren’s post the other day observes, since we are dealing with Dembski we are obliged to take whatever it was we were talking about and take the negative logarithm, to make it look more like information.

Dembski’s apology:

“I apparently made a MATLAB error with heathen Tom Schneider’s Ev program”

Golly, the first thing you learn in programming is to properly initialize your variables. With many systems and compilers, variables are not initialized to zero automatically and may actually hold values depending on whatever flotsam and jetsam happen to exist in memory.

I have made this error myself, I must say, but fortunately I was circumspect enough not to publish a paper claiming the Navier-Stokes equations were wrong.

Stuart

They did attempt to initialize the variable in question; they just botched the job and didn’t check it.

Richard Wein has already covered this territory nicely in his introduction to his critique of “No Free Lunch”.

I think that when dealing with fringe ideas like IDC that it is important that technical critiques are available for the readers, who may be uncommitted or only partly predisposed to agree with the anti-science side.

Couldn’t agree more.

It’s absolutely essential to have the technical debunks available for those who seek more info. Speaking from my own experience, I fell into the “uncommitted” category, having never heard of ID and not being a biologist. One of the first things I found while poking around a bit re ID was the Wein-Dembski correspondence, which pretty much dispelled all my doubt about the real content of Dembski’s arguments.

Sending a copy of “No Free Lunch” to Richard Wein may have been about the most productive $50 I’ve spent.

So what you’re saying, Dr. Elsberry, is that

D = |ln(E/R)|/B

Where D is the error in Dembskis,

E is an erroneous value,

R is the correct value,

and B is “Bill’s Constant” of 149.6680310446129694611694445545

If my math is correct.
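Taken at face value, the formula drops straight into the same Octave environment used to check the script. A throwaway sketch, with “Bill’s Constant” as given above and, as a calibration check, the two NFL numbers quoted a few comments further down (Dembski’s reported 10^-288 versus the calculator’s 5.555117e-223):

    % Dave W.'s proposed unit as a one-liner (Octave); illustrative only.
    B = 149.6680310446129694611694445545;      % "Bill's Constant", as given above
    dembskis = @(E, R) abs(log(E ./ R)) / B;   % E = erroneous value, R = correct value

    dembskis(1e-288, 5.555117e-223)            % ~1.01, close to the intended 1.0 Dembski

The small excess over 1.0 comes from rounding the defining error to a flat 10^65, a point the thread comes back to below.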

I would like to thank all involved for their tenacity in pursuing this. I had completely missed Tom Schneider’s response log, so I’m glad to retroactively see that he recognized D&M’s problems.

My view of Marks output has changed dramatically in the last 24 h.

Yesterday, it became apparent that Marks coauthored a paper with Dembski (according to Dembski) that openly uses misdirection on Häggström’s critique of their “active information” methods.

Häggström’s critique, in short, is that DNA sequences trivially show clustering (or the handful of mutations we all have would likely kill us), and that a uniform distribution is an illegitimate model for evolution. (And that, in general, it is also an illegitimate prior in cases of large sets.) D&M claim that this observation is an assumption made by Häggström.

[On the lighter side, Dembski concedes the efficiency of selection in evolution. “… the random search is 2.9387x10^41 per cent worse than partitioned search. Partitioned search contributes an enormous amount of information.”]

Now this.

“Let me explain: when I was a physicist, people would come to me from time to time with problems in mathematics they couldn’t solve. They wanted me to check their numbers for them. But after a while I learned not to waste my time checking the numbers – because the numbers were almost always right. However, if I checked the assumptions, they were almost always wrong.”

Eliyahu M. Goldratt, “The Goal”

Oops, a misquote. It should be “… the random search is 2.9387x10^41 per cent worse than partitioned search. Partitioned search contributes an enormous amount of information.”

On another note, the reason I personally didn’t react to D&M’s claim that ev makes a worse search than random was that they (in fig. 2, IIRC) used a wider parameter space than ev is designed for in their comparison. Schneider mentions that a constricted space is used when simulating mutations as i.i.d.

So I assumed that it was a non sequitur claim based on a consequence of their non-realistic method, not their mistakes. Never assume without testing the assumption. :-P

[Yay. I think PT has fixed the preview script parsing UTF-8 characters erroneously, in all text boxes. Checking: åäö.

Thanks, Reed!]

Sending a copy of “No Free Lunch” to Richard Wein may have been about the most productive $50 I’ve spent.

So what you’re saying is I should comp you.

Almost. The preview parsing leaves UTF-8 characters, but now the final parsing in Submit doesn’t get, or changes, the text in the name box.

/Torbjörn

Dave W.,

Yes, that appears to be an excellent way to render in excruciating detail a moment of my flippant rhetoric.

I may have to add it to my Finite Improbability Calculator.

There is a discrepancy between the result which Dembski reports for his example calculation of an M/N ratio on p.297 and what the Finite Improbability Calculator reports. Plug in symbols=30, length=1000, identity=0.2 and the result comes out as 5.555117e-223, whereas Dembski reports 10^-288, or a factor of 10^-65 off. Jeff Shallit noted this error in Dembski’s text some time back.

See, we have the same problem as in some ScienceBlogs, the name box seems to put out ISO-8859-1 code for mysterious reasons. The solution for me is to use that in the browser, then everything looks OK. But UTF-8 which has twice the amount of characters is more web-friendly. :-)

So what you’re saying is I should comp you.

Er, I wasn’t trying to be grasping. But if you think you know of someone who will put in time to do a critique of something if only they got a copy, I’d suggest “passing it forward”.

Then again, the lupine pest at the door could use some discouragement…

:-)

Let me test that once more, I’m not sure it was an ISO ‘ö’ input to my name.

To make this on topic, can we expect that 10^65 constitutes an Upper Dembski Bound (UDB), or are his errors tending to the infinite?

Oddly, Torbjorn’s umlaut renders just fine in the “recent comments” box on the front page, but not here in the comments sections. Huh.

Please discuss the encoding issues on the new encoding post.

Reed,

Thanks for your work!

I should have looked up the old encoding post (or better, looked for the new one) for my tests, but you know how it is with boys and their new toys…

Torbjorn the Unprintable said:

My view of Marks output has changed dramatically in the last 24 h.

… da da

… da da da

… da

Now this.

Torbjörn, don’t forget the copper sweater. Granted that didn’t involve Marks, just his new sidekick. But a post on copper sweaters is worth remembering anyway.

re: Dave W’s equation

Something struck me when I read the description of the equation. Isn’t 10^150 Dembski’s Universal Probability Bound? Seems like if we set B to 150, that gives additional justification for choosing that constant. Setting B to exactly 150 gives the original error a value of .994 Dembskis, but since Dembski undoubtedly rounded his reported value, I think we’re justified to use only one significant figure in the calculation of his original error.

Bumpage—Last encoding issue fixed.

Speaking from my own experience, I fell into the “uncommitted” category, having never heard of ID and not being a biologist. One of the first things I found while poking around a bit re ID was the Wein-Dembski correspondence, which pretty much dispelled all my doubt about the real content of Dembski’s arguments.

Glad to be of service.

Sending a copy of “No Free Lunch” to Richard Wein may have been about the most productive $50 I’ve spent.

It may have been only $50 to you, but it was hours of laborious debunking to me!

Only joking. You’ve put many more hours into critiquing ID than I have. I took a brief look at the Dembski-Marks papers, and could see that they were basically the same old Dembskian nonsense. That was enough for me, but I’m glad that you and others are still willing to work through the ugly details.

Torbjörn Larsson wrote

My view of Marks output has changed dramatically in the last 24 h.

Dembski is a professional tar baby who contaminates all he touches. It’s not the ID that Dembski pushes that’s the main problem in that respect, it’s the incompetence with which he pushes it. I’m afraid that Marks is allowing himself to slide into professional irrelevance by lending his reputation to that contamination. He’s following in Behe’s footsteps in losing the respect of his peers. And it’s a little sad to see Tom English associating himself with that train wreck.

RBH

“I think that when dealing with fringe ideas like IDC that it is important that technical critiques are available for the readers, who may be uncommitted or only partly predisposed to agree with the anti-science side.”

It’s a good argument, and it is more or less what I expected to hear. Given that many “technical critiques” are already available, I wonder whether making new ones, as opposed to referring people to the record, just feeds the beast…

Dr. Elsberry wrote:

Yes, that appears to be an excellent way to render in excruciating detail a moment of my flippant rhetoric.

We need more metrics, dammit! Now, for example, we can state without hesitation that young-Earth creationists are off by nearly a tenth of a Dembski in their estimation of the age of the universe. Plus, I have an untested (as of yet) hypothesis that the median errors of the IDists are far larger than those of the “classic” YECs - a hypothesis for which the Dembski seems to be perfect.

Oh, and Dembski’s error in “Unacknowledged Information Costs…” would appear to be about 0.2 Dembskis. Given his 1.0-Dembski gaffe in NFL, he’s up to an average of 0.6 Dembskis already.

W. Kevin Vicklund wrote:

Something struck me when I read the description of the equation. Isn’t 10^150 Dembski’s Universal Probability Bound? Seems like if we set B to 150, that gives additional justification for choosing that constant. Setting B to exactly 150 gives the original error a value of .994 Dembskis, but since Dembski undoubtedly rounded his reported value, I think we’re justified to use only one significant figure in the calculation of his original error.

I knew that the 149.something figure sounded suspiciously close to some other Dembski-associated number, but I couldn’t put my finger on it. Thanks.

The original value for B I posted was itself rounded a little bit (near the precision limits of Windows calculator), introducing an error in which the answers are about 7.22x10^-34 Dembskis too high. Rounding it to 150 makes its answers low by about 40 microDembskis.

I think we might be forgiven for a 40 microdembski slop given that “150” is somewhat easier to recall and apply than 149.6680310446129694611694445545.

Has anyone considered making a second Wikipedia article for Dembski as a unit of measurement and error?

We need an SI abbreviation. I propose Dmb.

Whoops. Wait a second. I just noticed that the original, correct figure used to derive B shouldn’t have been 10^65 but instead 5.555117x10^65. So B should really be 151.3827505295889703551314588227, and all previous measurements in Dembskis are high by 75 μDmb (for the 149.668… value of B). Using 150 for B will generate 61 μDmb errors.

Of course, one has to consider that Dembski rounded his answer to the nearest order of magnitude, which can introduce an error between 0-1.07 cDmb. If his actual calculated value was about 3.98x10^-288, then B=150 is almost exact.
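For reference, both candidate constants fall straight out of the two values already quoted in the thread (another quick Octave check):

    % Reproducing the two candidate constants from the thread's own numbers.
    B_original = 65 * log(10)                      % 149.6680..., from a flat factor of 10^65
    B_revised  = abs(log(1e-288 / 5.555117e-223))  % 151.3828..., from the exact values above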

Considered and accepted. And just think, originally I was doing the math with log10, and B was 65. I switched to natural log on a whim, and by sheer coincidence got close to the exponent for the UPB.

…Or maybe it was all designed that way…

I suggest the following as the symbol for dembski units: Δ.

It represents the ‘d’ in dembski. It is already often used to denote distance and perhaps error. And it looks like those hats that teachers put on students who screw up one too many times.

It’s when data most closely agree with one’s preconceptions that they require the closest scrutiny.

My corporeal form would agree. It’s this attitude that separates science from its pseudo-scientific wannabes.

It looks like those hats that teachers put on students who screw up one too many times.

So the abbreviation for “dembski units” would be “duns”.

Today’s Orwellian moment is brought to you by Dr. William Dembski.

Here is part of Dembski’s interview with Mario Lopez, as displayed in the Google cache on Oct. 20, 2007:

CA: Are you evading the tough questions?

WD: Of course not. But tough questions take time to answer, and I have been patiently answering them. I find it interesting now that I have started answering the critics’ questions with full mathematical rigor (see http://web.ecs.baylor.edu/faculty/m[…]cations.html) that they are strangely silent. Jeff Shallit, for instance, when I informed him of some work of mine on the conservation of information told me that he refuse to address it because I had not adequately addressed his previous objections to my work, though the work on conservation of information about which I was informing him was precisely in response to his concerns. Likewise, I’ve interacted with Wolpert. Once I started filling in the mathematical details of my work, however, he fell silent. Perhaps the most striking instance of silence is that of Thomas Schneider, whose article on the evolution of biological information in Nucleic Acids Research (2000) claims to refute my colleague Michael Behe. When Robert Marks and I recently showed that his evolutionary program was equivalent to a neural network and that it works worse than pure chance (http://web.ecs.baylor.edu/faculty/marks/T/ev2.pdf), he too fell silent though in the past he would reply in a day’s time on his own website to any challenge from me. I have found that Darwinists make a habit of staying quiet about problems with their theory and ignore the best criticisms of it.

CA: Are there any major universities supporting the work of ID proponents? If not, why not?

WD: I wouldn’t say that universities as such support ID. They tolerate it if the faculty member doing ID research has tenure. And if they don’t have tenure, the university makes sure that they don’t get tenure (the tenure denial of Guillermo Gonzalez at Iowa State University is latest instance). Why this opposition? Darwinists have been very successful at demonizing anyone who dissents from their materialistic view of evolution. They have essentially established a Stalinist regime over the western academy.

CA: I know about the Biologic Institute and the work of Dr. Minnich. Are there any other laboratories currently doing ID work?

WD: Baylor’s Evolutionary Informatics Lab: www.evolutionaryinformatics.org. I understand another ID lab at Baylor is on the way.

(Emphasis mine)

If you look at the interview as it is reported now at theideacenter.org, the same part reads as follows:

CA: Are you evading the tough questions?

WD: Of course not. But tough questions take time to answer, and I have been patiently answering them. I find it interesting now that I have started answering the critics’ questions with full mathematical rigor (see the publications page at www.EvoInfo.org) that they are largely silent. Jeff Shallit, for instance, when I informed him of some work of mine on the conservation of information told me that he refuse to address it because I had not adequately addressed his previous objections to my work, though the work on conservation of information about which I was informing him was precisely in response to his concerns. Likewise, I’ve interacted with Wolpert. Once I started filling in the mathematical details of my work, however, he fell silent.

CA: Are there any major universities supporting the work of ID proponents? If not, why not?

WD: Previously I would have said that universities don’t so much support ID as tolerate it if the faculty member doing ID research has tenure. But I can’t say that any longer. Robert Marks’s Evolutionary Informatics Lab had a presence on the Baylor server until the work of the lab was linked to ID (there had been anonymous complaints), at which point the Baylor administration went into Marks’s webspace and, without his permission, removed the EIL site from his space on the Baylor server. For the whole sordid story, which gained national media attention and will be featured in the upcoming Ben Stein documentary (www.expelledthemovie.com), go to my blog Uncommon Descent (http://www.uncommondescent.com/inte[…]rmatics-lab/). Mind you, Robert Marks’s title is Distinguished Professor of Electrical and Computer Engineering—he doesn’t just have tenure but he is (or was) a star professor at Baylor. In any case, Marks still remains at his university. Untenured faculty are not so fortunate. In the case of faculty members who support ID and don’t have tenure, most universities make sure that they don’t get tenure (the tenure denial of Guillermo Gonzalez at Iowa State University is latest instance). Why this opposition? Darwinists have been very successful at demonizing anyone who dissents from their materialistic view of evolution. They have essentially established a Stalinist regime over the western academy.

CA: I know about the Biologic Institute and the work of Dr. Minnich. Are there any other laboratories currently doing ID work?

WD: The Evolutionary Informatics Lab: www.EvoInfo.org. I knew of another ID lab that another faculty member at Baylor (not Robert Marks) was intent on starting, but with the witch-hunt against Marks, that’s not going to happen any time soon.

Most interesting is the bolded part, the passage about Thomas Schneider and ev, which is missing in the latest version of the interview. Dembski has gone back in time and removed his false claim against Schneider’s ev program and his crowing about Schneider’s lack of response. But hey, that was just street theater anyway.

I suppose it’s Dembski’s prerogative to change what he said, but I’m not impressed with the fact that he excised a false claim without acknowledging that it’s false and noting the excision.
