Validating Designer Discrimination Methods


I. Introduction: Designer Discrimination Algorithms

A significant problem in developing a revolutionary new theory is the parallel development of methods and technologies appropriate to testing the theory. As I have said a number of times in response to criticisms of Multiple Designers Theory, the absolutely necessary first step in the MDT research program must be the development and validation of designer discrimination methodologies. In the Introduction to Multiple Designers Theory above I said that in developing a design discrimination methodology, MDT has the same task as mainstream ID. First the methodology must be systematized and formalized. Then it must be empirically validated on test materials whose histories we already know. The first task of MDT is therefore to develop a formalized, researcher-independent methodology that, when it is eventually applied to phenomena whose provenance and history we do not know, can legitimately be expected to tell us something reliable and interesting about those phenomena. Mainstream intelligent design has so far avoided that task: there are no validation data at all on its principal design detection methods. MDT, however, has begun the task of validating and calibrating its methodologies.


8 Comments

Bravo! Bravo! A most impressive performance.

Tease; no information on the really interesting question: what are the characteristics or features of the candidate objects that are selected for analysis by the DDA? Darn it.

Based on your ‘runs’ so far, have you identified a minimum sample size, or an algorithm to determine that size, that appears to ‘confirm’ a particular designer?

Richard,

I would advise seeking patent protection for your software methods lest the “mainstream” IDers start stealing your ideas. Of course, you’ll be prevented from obtaining protection in most non-US countries, because of their strict publication requirements. In the US, you have one year from public disclosure to file.

Not that any of those characters would ever do anything which violated one of the Ten Commandments …

Two questions enter my mind:

Question #1. Exactly which statistical problem have you set out to solve? I’m looking for an answer formulated in the same terms as descriptions of learning, classification, generalization, etc. tend to be written in. A few examples of this form:

Example A: Given N samples, each sample shall be classified as belonging to exactly one of C categories (with category #17 meaning “designer #17 generated this sample”).

Example B: Given M training samples, with known classifications, and N other samples of unknown classifications, all the unknown classifications should be inferred.

Example C: Given N samples, determine the number of categories they should be classified into. (Or, in plain terms, determine the total number of designers that were involved in generating the samples.)
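To make the difference between these formulations concrete, here is a minimal Python sketch of the closed-set setting in Examples A and B. Everything in it is invented for illustration (feature vectors, designer labels, and parameter values); none of it comes from the DDA itself. Labeled samples from C known designers are used to classify new samples by a simple nearest-centroid rule.

```python
# Minimal sketch of Examples A/B: closed-set classification of samples into
# one of C known "designer" categories via a nearest-centroid rule.
# All data here are simulated; nothing is taken from the DDA.
import numpy as np

rng = np.random.default_rng(0)

C = 3          # number of known designers (categories), a made-up value
M_per = 20     # labeled training samples per designer (hypothetical)
N = 10         # unlabeled samples to classify
D = 5          # features measured on each artifact (hypothetical)

# Simulated training data: each designer gets its own feature "signature".
true_centroids = rng.normal(0.0, 3.0, size=(C, D))
X_train = np.vstack([true_centroids[c] + rng.normal(0.0, 1.0, size=(M_per, D))
                     for c in range(C)])
y_train = np.repeat(np.arange(C), M_per)

# Unlabeled samples, all actually generated by designer 1 in this simulation.
X_new = true_centroids[1] + rng.normal(0.0, 1.0, size=(N, D))

# Example A/B: assign each unlabeled sample to exactly one of the C categories,
# here by distance to the per-designer centroid estimated from training data.
est_centroids = np.array([X_train[y_train == c].mean(axis=0) for c in range(C)])
dists = np.linalg.norm(X_new[:, None, :] - est_centroids[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)
print("predicted designers:", y_pred)   # ideally all 1s
```

Example C would drop the known labels entirely and become a clustering or model-selection problem: estimate how many distinct "signatures" are present rather than assign samples to known ones.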

Question #2: What constraints must be placed on the input for it to be meaningful to apply your algorithm? And how much a priori knowledge about the kind of data is exploited? For example, you reported an application to samples of texts by Darwin and by Dembski. Suppose some of these texts had been typeset using only upper-case letters. Or suppose that some of the texts were adapted to pages with very large margins, so that the ASCII representations contained an excessive number of newline characters. Would you then have to preprocess the texts to get meaningful results?

Or, to formulate essentially the same question differently: suppose not all of the texts were written in English, but that some of the texts were in German and Chinese. Would you then appeal to your a priori knowledge that the data you’re studying are written human languages, and stipulate that applying your algorithms to sample texts in different languages is invalid?
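To illustrate the kind of preprocessing the first part of this question is pointing at, here is a minimal Python sketch (my own illustration, not part of RBH's pipeline): case-folding and whitespace-collapsing so that all-caps typesetting or hard-wrapped margins do not leak into whatever features a discrimination method computes.

```python
# Sketch of a normalization step that removes typesetting artifacts
# (case, line-wrapping) before any designer-discrimination features
# are computed. Purely illustrative.
import re

def normalize(text: str) -> str:
    text = text.lower()                 # discard case information
    text = re.sub(r"\s+", " ", text)    # collapse newlines and runs of spaces
    return text.strip()

raw = "ON THE ORIGIN\nOF   SPECIES\n"
print(normalize(raw))   # -> "on the origin of species"
```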

Another point arises: how can the criteria be ‘generalized’ between slightly variant artifacts?

Rilke’s Grand-daughter asked whether minimum sample sizes had been established. That awaits better estimates of the effect sizes one can expect and of the variability associated with the scores assigned by the DDA as it is refined.
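For concreteness, once such estimates exist, a back-of-the-envelope sample-size calculation of the usual kind might look like the sketch below. The effect size, alpha, and power are placeholder values, not estimates from the DDA.

```python
# Rough sketch: the standard normal-approximation formula for the sample size
# needed to detect a given difference in mean scores between two designers.
# All numbers are hypothetical.
from scipy.stats import norm

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group to detect a difference of `effect_size`
    standard deviations between two designers' mean scores."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# If two designers differed by 0.8 SD in mean score (a guess, not a measurement):
print(round(samples_per_group(0.8)))   # roughly 25 artifacts per designer
```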

Erik asked about the specific statistical question MDT attempts to answer, giving three examples. Initially, the task is most similar to Example A: “Given N samples, each sample shall be classified as belonging to exactly one of C categories (with category #17 meaning ‘designer #17 generated this sample’).” That is the current stage of affairs: validating the discrimination methods and then testing their reliability. Later, when some clear hypothesis about the level of analysis appropriate to biology is at least tentatively identified, the questions illustrated by Erik’s Examples B and C come into play. I do not plan to rush into biological unknowns before the methodology is under good control.

Erik’s second question, about constraints on the inputs, is still at a very early stage of research planning. I plain don’t know yet. I will say that the representational method should, at least in principle, make appeals to prior knowledge about the language of the texts unnecessary. The goal of the methodology development program is to render the role of prior knowledge (at least to the extent that it can’t be applied mechanically) as small as possible.

Rilke’s Grand-daughter also asked about generalizing the criteria (for classification?) to variant artifacts, apparently meaning variant designs from the same designer. That’s a question yet to be addressed under the ‘reliability’ heading. Validity is the first question; reliability is the second.

RBH

Have you used your method to test whether or not multiple instances from a single designer score differently from the way multiple instances from a pair of designers score? For instance, if you had run D1,1 D1,2 and D1,3 against D1,5 D1,8 and D1,10 (the three smallest numbers in D1 against the three largest numbers in D1), would you get a statistically insignificant difference between the means? Or would it come up as statistically significant, and therefore be indicated as the result of different designers?

(It’s been entirely too long since I’ve had a statistical methods course, which is why I haven’t just used your numbers to do it myself. *sheepish*)
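For illustration only, the comparison Calzaer describes amounts to a two-sample test on the group means. A minimal Python sketch with invented scores (not the actual D1 numbers from the tables) would be:

```python
# Sketch of the test Calzaer describes: compare mean scores of two subsets of
# artifacts attributed to the same designer and ask whether the difference is
# statistically significant. Scores below are invented for illustration.
from scipy.stats import ttest_ind

d1_low  = [0.42, 0.47, 0.45]   # e.g. scores for D1,1  D1,2  D1,3 (hypothetical)
d1_high = [0.44, 0.49, 0.43]   # e.g. scores for D1,5  D1,8  D1,10 (hypothetical)

t_stat, p_value = ttest_ind(d1_low, d1_high)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# A large p-value (no significant difference) is what one would hope to see for
# artifacts from a single designer. Note that with only three scores per group
# the test has very little power, so a non-significant result is weak evidence.
```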

Also, what were the products you compared in Table 2?

Calzaer,

Nope. Those are good questions, as are the questions you and RGD raised in the MDT thread, but due to the press of other business-related issues I’ve been on hiatus from MDT stuff for a while. I hope to get back to it in February, when my fond hope is that the pressure on my machinery (both silicon and wetware) will ease a bit.

RBH

About this Entry

This page contains a single entry by Richard B. Hoppe published on September 23, 2004 8:21 PM.
