The Skeptic paper online


Many events result from a combination of more than one cause. The notion of only three separate causes does not jibe with reality. Consider Dembski’s own favorite case of an archer shooting arrows at a target painted on a wall. According to Dembski, if the archer hits the bull’s-eye, it is a result of design (the archer’s skill) and only of design.

I took archery in high school for one semester. There are a lot of factors which Dembski ignores, and which, actually, you didn’t bother to include in your critique, factors which would only make Dembski’s argument even weaker.

For example, fatigue counts: after a while you get tired, and sometimes you loose the arrow before you’re ready. Sometimes you’re hot and sweating and loose the arrow before you’re ready. Every arrow, unless you’re into competition-balanced arrows, flies differently, and that can make a difference. Sometimes you’re just plain lucky, because your ability to hit the target is as much a matter of random chance as of skill.

Anyway, great article. Great deconstruction. Great explanation.

Mark Claims:

“Dembski’s definition of information is I(E) = -log2 p(E)”

He says he got it from page 127 of Dembski’s book. Well, what does it say on page 127 of Dembski’s book?

“the amount of information in an arbitrary event E as I(E) = -log2p(E)”

Ahhhhh, but there is a subtle sleight of hand here by Mark Perakh. Dembski does not define Information as I(E) but as the AMOUNT of information.

For example, there is a difference between the information in a file and the amount of information in the file, usually specified in bytes (8 bits = 1 byte). For example, a text file might contain 80,000 bits of information. The AMOUNT of information in the file is not the same as the information in the file. This should be plainly obvious.
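To make the distinction concrete, here is a minimal Python sketch of the quantity I(E) = -log2 p(E). It is my own illustration, not code from Dembski or Perakh, and the function name surprisal_bits is just an arbitrary label:

import math

def surprisal_bits(p):
    # Amount of information (surprisal) in bits for an event of probability p.
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

print(surprisal_bits(0.5))          # a fair coin landing heads: 1.0 bit
print(surprisal_bits(2.0 ** -500))  # one specific sequence of 500 fair flips: 500.0 bits

Note that the function returns only a number of bits; it says nothing about which event actually occurred.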

Yet Perakh tries to imply that Dembski is confusing the AMOUNT of information with information itself. That is not true as can be seen by Dembski’s own words.

Perakh has thus inaccurately represented Dembski and set up a strawman argument. Now, let’s see how many critics out there are going to call Perakh on his inaccurate representation?

Salvador

I would like to amend my wording when I said “sleight of hand”. I don’t mean to imply Mark deliberately made such an obviously egregious error. He made a mistake and stated something that was not accurate about Bill Dembski’s claims.

Mark does not have to agree with Dembski’s claims, but if he wishes to refute Dembski’s claims, it’s a good idea to state them accurately first.

However, it was a mistake nonetheless and something that negates a substantial portion of his argument.

Salvador Cordova

Hi, Sal!

As long as you’re back for your next inevitable comeuppance, are you ready to talk about, um, bamboo? You know, the structural grass with which multistory buildings get built?

Ready, at long last, to start in on Lenny’s very basic questions?

Oh, and how about addressing the enormous gaping holes in Dembski’s logic now on display in the “What else to expect from Dembski” thread?

Unfortunately, Sal, it negates nothing. Dembski has no argument whatever. Would you care to deal with the fact that he has

1) never demonstrated the actual existence of CSI in anything?

2) objected to Mark’s failure to deal with his mathematics when it is the applicability of the mathematics which is in question?

3) continued to ignore any and all criticism of his work in favor of sniping attacks at his critics over credentials?

As it stands, Dembski has no credibility to speak on this topic whatever: his ignorance of biology is profound (almost as great as yours) and his inability to understand that his mathematics is irrelevant to his argument is glaringly obvious.

Oh, and Sal, you might also deal with the fact that Dembski’s mathematics has so little credibility that no one doing actual information science pays any attention to it?

Not a really good endorsement for the “Newton” of information science.

But of course, you would have to display some actual intellectual integrity to deal with these issues. I won’t hold my breath.

Yet Perakh tries to imply that Dembski is confusing the AMOUNT of information with information itself. That is not true as can be seen by Dembski’s own words.

Perakh has thus inaccurately represented Dembski and set up a strawman argument. Now, let’s see how many critics out there are going to call Perakh on his inaccurate representation?

Right. Like all those scam artists who attempt to pull a fast one when they substitute “heat” for “temperature”, or “velocity” for “speed”. But take us a step further, if you would, Mr. Cordova. How does this distinction affect the validity of Dembski’s arguments? Your implication is pretty clear: that this represents a deliberate maneuver to misrepresent Dembski, one that any honest ID critic (apparently an oxymoron?) would deplore. To the untrained eye, it looks like a distinction without a difference.

Speaking of ‘distinction without a difference’, I would love to see a writeup about Dembski’s moronic claim that algorithms can introduce CSI, but it’s fake CSI.

BTW,

“Dembski’s definition of information is I(E) = -log2 p(E)”

looks perfectly fine to me as a ‘functional definition’.

Sal: as the self-professed armor bearer for Sir Bill, surely you can be counted upon to provide a list of favorable reviews of Dembski’s work by qualified academics.

Dembski maintains that average information = entropy. … I see no way to reconcile Dembski’s LCI with the second law of thermodynamics.

Isn’t entropy the opposite of average information, i.e., entropy is a measure of randomness or chaos? So, entropy increases as information decreases and vice versa. I’m unsure here whether Perakh is confused about what Dembski thinks, whether he (Perakh) is confused about what entropy means, or both. Perakh seems perfectly happy with the idea that entropy is information, so I think perhaps he is confused about what entropy means. Given that fact, Perakh’s claim that “I see no way to reconcile Dembski’s LCI with the second law of thermodynamics” is false if you say that entropy is the opposite of information. That isn’t to say that Dembski is right. I’m saying that Perakh’s criticism of Dembski on this issue is a result of a misunderstanding.

Salvador T. Cordova Wrote:

Yet Perakh tries to imply that Dembski is confusing the AMOUNT of information with information itself.

Actually, Perakh shows that Dembski refers to I(E) as information in one instance and as complexity in another. If two of the three components of CSI are the same thing, it certainly puts the definition of CSI on shaky ground.

The thing that Perakh does best in this paper is showing that all the allegedly different components of CSI simply boil down to improbability.

I’m saying that Perakh’s criticism of Dembski on this issue is a result of a misunderstanding.

I assure you Dr. Perakh has no trouble understanding the concept of “entropy”. What we have here is the kind of confusion that results when someone says “please turn up the air conditioner”. Does he mean increase the cooling power or increase the temperature? In the context of either shivering or sweating, the ambiguity is not significant. Likewise, when I read “information=entropy” what I understand the author to mean is that information is measured in terms of entropy, though anyone with enough background information to understand the term, “entropy”, would understand the implicit negative sign involved.

In other words, implicitness or explicitness of that negative sign does not affect the argument.

Isn’t entropy the opposite of average information, i.e., entropy is a measure of randomness or chaos? So, entropy increases as information decreases and vice versa. I’m unsure here whether Perakh is confused about what Dembski thinks, whether he (Perakh) is confused about what entropy means, or both. Perakh seems perfectly happy with the idea that entropy is information, so I think perhaps he is confused about what entropy means. Given that fact, Perakh’s claim that “I see no way to reconcile Dembski’s LCI with the second law of thermodynamics” is false if you say that entropy is the opposite of information. That isn’t to say that Dembski is right. I’m saying that Perakh’s criticism of Dembski on this issue is a result of a misunderstanding.

Perakh quotes Dembski directly: “[T]he average information per character in a string is given by entropy H.”

In information theory, this is valid. For example, a repeated string of zeroes has very little information, while 011010010001111000 has considerably more.
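As a rough illustration of that point (my own sketch, with the caveat that Shannon entropy is strictly a property of the source distribution rather than of a single string), the empirical per-character entropy of the two strings can be computed directly:

import math
from collections import Counter

def entropy_per_char(s):
    # Empirical Shannon entropy H = -sum p_i * log2(p_i), in bits per character.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_per_char("000000000000000000"))  # 0.0 bits per character
print(entropy_per_char("011010010001111000"))  # about 0.99 bits per character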

I predict that Dembski will not address Perakh’s paper in any detail.

To me it seems the crux of the problem is this:

Information and specification are somewhat opposite concepts. Information refers to a lack of uniformity, while specification refers to limitations. Yet, as Perakh quite convincingly shows, Dembski boils down both concepts to issues of improbability.

Somebody correct me if I’m wrong.

Just a quick aside:

How does Miller-Urey fold into Dembski’s arguments? Do the results fall out of his explanatory filter? Does the basic result (simple molecules + energy -> complex molecules) contradict his mathematical pie-in-the-sky?

It’s pretty common to use the expression “X” to represent “metric of X”. Barring some error that can be pointed out in formal calculations, this is no worse than the possibility that someone will confuse the phrase “one kilogram of steel” with one kilogram of steel.

In other words, implicitness or explicitness of that negative sign does not affect the argument.

But it does because when he says that “in a closed system entropy cannot spontaneously decrease”, is he talking about entropy=information (which is how he’s been talking about entropy in the paper, but isn’t a correct statement about the second law of thermodynamics), or is he talking about entropy=disorder (which is the only way that statement is true)?

When the second law of thermodynamics states that the total entropy of an isolated system can never decrease, it is saying that the total disorder of an isolated system can never decrease, which is saying that the total order of an isolated system can never increase, which is saying that the total information of an isolated system can never increase. It sounds to me like Dembski is pretty much just restating the second law of thermodynamics with different words. I don’t quite understand how Perakh is saying Dembski’s law is in opposition to the second law of thermodynamics. The area where Dembski is wrong is in claiming that information must always be decreasing, even in subsections of a system. In reality, one area can increase in information/decrease in entropy (earth) if another is offsetting it (the sun is losing energy).

Mark wrote:

Since Dembski accepts that “entropy = average information” (see above), he must conclude that average information associated with a closed system cannot spontaneously decrease; it can only increase or remain unchanged.

Oh, let’s see what Bill Dembski actually wrote on page 131:

I being the information measure defined in section 3.1

What was this measure of information, eh, the one Mark mangled earlier,

I(E) = -log2P(E)

:-)

An accepted definition of information, by the way, is:

that which reduces uncertainty

So if one is giving Bill’s work a charitable reading, they’ll know when Bill is using the term “information” as a shorthand for “information measure”. It is clear from the context of Bill’s discussion, both in his book and other papers, when he is speaking of “measures of information” versus information itself.

Let’s look at the definition on page 131 of Bill’s book:

H(a_1, …, a_n) = SUM p_i I(a_i)

this is really describing entropy in terms of a communication channel, not a closed system. Ahem, a communication channel is not a closed system, as I’ll later show. Heck, in the Shannon sense, a communication channel is an abstraction, of which physical systems are only approximations.

Further “entropy = average information per character” refers to the average information bearing capacity of each character in a communication, not the amount of CSI in a physical object. Thus Mark is equivocating information measure with CSI. Bad move, and a very egregious error.

To give an illustration, 500 fair coins, in principle, have a total channel entropy of 500 bits, using that formula of Dembski’s. 500 distinct coins can form a communication channel of 500 bits.

Similarly 5 fair coins, in principle, have a total channel entropy of 5 bits, and can form a communication channel of 5 bits.

So how is channel entropy (or, in some usage, channel capacity) increased? By increasing the length of the string or by creating more alphabetic symbols.

NOTE: a 1-character string with 32 possible equiprobable symbols (like {0, 1, 2, …, 31}) has the same channel capacity as a 5-coin string with 2 alphabetic symbols (like {0, 1}); the math works out the same.
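A quick check of that arithmetic, as a sketch of my own (assuming independent, equiprobable symbols, for which the entropy formula above reduces to the string length times log2 of the alphabet size; the function name channel_entropy_bits is just my label):

import math

def channel_entropy_bits(num_symbols, string_length=1):
    # Entropy in bits of a string of independent, equiprobable symbols:
    # string_length * log2(num_symbols).
    return string_length * math.log2(num_symbols)

print(channel_entropy_bits(32, 1))    # 5.0 bits: one character drawn from {0, 1, ..., 31}
print(channel_entropy_bits(2, 5))     # 5.0 bits: five fair coins
print(channel_entropy_bits(2, 500))   # 500.0 bits: five hundred fair coins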

Increasing channel entropy is not the same as increasing CSI, and vice versa. How could I increase CSI in 500 randomly flipped coins that I find on the floor? I could turn all the coins heads; now the 500 coins exhibit CSI. I’ve not changed the channel entropy, but I’ve increased CSI. How could I increase channel entropy without increasing CSI? Add more coins.

Inefficient modems have low channel entropy and super-efficient modems have high channel entropy. The ability to raise entropy in these situations is governed by the ability to change probability distributions. And in regard to a communications channel, one can intelligently affect the amount of channel entropy.

Mention of Channel Entropy in DNA

Mention of Channel Entropy in Visual Communication

Dembski notes:

this information-theoretic entropy measure from statistical mechanics is mathematically identical to the Maxwell-Boltzmann-Gibbs entropy from statistical mechanics

However, this mathematical identity does not give one license to equivocate the behavior of entropy in a closed thermodynamic system with the behavior of entropy in an open, abstractly defined communication channel. Neither does it give one license to equivocate the meaning of “information measure” with CSI, as Mark has done. Perhaps Dembski should ask Skeptic magazine for a retraction.

What I’m covering is, by the way, basic stuff for Electrical Engineers specializing in communications. The information engineers know, between Mark and me, who is telling it like it is. Oh, perhaps Mark might now reconsider calling Electrical Engineers “lickspittles”, “sycophants”, or “armor-bearers”…

Oh, by all means let’s talk about EE. How about we talk about IEEE. Specifically, the upcoming IT conferences. I believe you’ve ignored this comment of mine a few times. It’s obvious why.

Comment #43211

Posted by steve on August 15, 2005 11:33 PM

Sal, maybe you can explain something else. I’m looking at the website for the 2005 IEEE International Symposium on Information Theory. It will be held in Adelaide, Australia. This is the upcoming conference for IT scientists. Now, I was confused when I saw the list of plenary speakers:

* Richard Blahut (Shannon Lecturer)
* P. R. Kumar
* David MacKay
* Benjamin Schumacher
* Terry Speed

Isn’t that weird? No William Dembski. The Isaac Newton of Information Theory, not speaking at the big IT conference? That’s strange. Maybe he’s presenting a paper, though. Is Dembski presenting a paper at the conference? Like you say, Darwin should have known Information Theory to study evolution. Dembski is your big supposed Information Theory guru about evolution. So he should be there, right?

I know. He might have a scheduling conflict. You see, the other big thing going on in September for IT scientists is the workshop in New Zealand. Specifically, the IEEE ITSOC Information Theory Workshop 2005 on Coding and Complexity. Coding and complexity, that’s right up Dembski’s alley, if he’s such an expert at this stuff, isn’t that true? Take a look at the topics covered at the workshop: Algorithmic information theory; channel coding; coded modulation; complexity, information and entropy; complexity measures; convolutional coding; error-correcting codes; information theory and statistics; iterative decoding; LDPC codes; quantum information theory; quantum-theoretical aspects of coding; randomness and pseudo-randomness; relationships between codes and complexity; rate distortion theory; soft-decision decoding; source coding; source-channel coding; spreading sequences and CDMA; turbo codes. See that? “Algorithmic information Theory”—“information theory and statistics”—“relationships between codes and complexity”—that’s exactly what you think Dembski is such an expert at. So is he going to be teaching those sections of the workshop? Or will they be discussing any of Dembski’s results there? I mean, it can’t be true that international conferences on Information Theory would fail to discuss revolutionary new results in IT. So if they’re not talking about Dembski, why not?

“… abstractly defined communication channel. Neither does it give one license to equivocate the meaning of “information measure” with CSI, as Mark has done.”

er, you amaze me. don’t you see that the creation of an abstract definition that isn’t based on any realistic or supported basic assumptions is EXACTLY what Dembski has done?

Besides which, you can’t use your current missives to try and claim innocence of the reasons why the epithets listed are rather appropriately hurled in your direction. anybody can simply look back at your unending and mindless support of someone who has so often been proven wrong, has lied (well documented, and you know it), and rarely if ever actually addresses legitimate criticisms of his “work” to see where the epithets came from to begin with.

do you now deny your past behavior? I have not seen the epithets used loosely, even around PT. are you even able to look at yourself and see how odd your behavior is?

you still make the best poster boy i could imagine to describe the incomprehensible behavior exhibited by those who claim to be ID “supporters”. you constantly end up being the best argument against ID ever being viewed by real scientists as anything but crankish dogma used to mask the ulterior desires of the terminally delusional.

keep it up, i guess, tho I do pity you like i did JAD.

Oh, perhaps Mark might now reconsider calling Electrical Engineers “lickspittles”, “sycophants”, or “armor-bearers”…

You’re really lying about people lately, Salvador. Mark did not call EEs sycophants. He called you and Koons sycophants, because that’s what you are, Chester. He did not call EEs “armor-bearers”, he called you Dembski’s armor-bearer. First you tell the “impeccable” lie about Jason, now this? Sycophancy is tolerated here, but lying may get you banned.

Or they may not. You are driving home the point that IDers are either stupid or dishonest, or both.

steve, I think you misunderstood Salvador (as do we all, mostly due to his battles with his own communicative entropy). When he said “reconsider calling Electrical Engineers…,” he meant “think about broadening your epithets to include Electrical Engineers”–an attempt at a rejoinder, not a claim about past action.

Re: comment 43869 by BC. The statement “average information = entropy” is found in Shannon’s seminal paper that started information theory, and Dembski has taken it directly from Shannon. It is a commonly accepted definition that directly follows from Shannon’s formula for entropy, which is in fact equivalent to the Boltzmann-Gibbs equation except for a constant. Since the comment’s writer using the moniker BC seems to be a little uncertain regarding this point, perhaps he (she) would be interested to look up this where some of the related points are discussed.

Mr. Cordova,

I have noticed that other forms of creationism, such as Young Earth and Flat Earth, have testable hypotheses. Sure, they were proven wrong, but they were hypotheses nonetheless.

With Young Earth, the earth was between 6,000 & 10,000 years old. A little bit of radiometric dating, and “poof” that was thrown in the rubbish bin.

With Flat Earth, just some basics of geometry. All we had to do was remember what Pythagoras figured out 2,500 years ago. And if we didn’t remember that, Ptolemy’s 8-volume Geography clearly indicated a curved (round) earth. Working off those works, in 1492, Columbus proved, to most, that the Earth was round. And, if that wasn’t enough, we had Magellan’s expedition circumnavigate the globe in the early 1500s.

But why is it that Intelligent Design predicts nothing and formulates no testable hypothesis? The very basics of any scientific theory, as I have been taught many, many, many times, are that it has to predict something that is testable. And if it doesn’t, it’s not science, but philosophy.

So, why are you not spending your time working productively on elevating “Intelligent Design” from philosophy to science instead of trolling message boards? Your behavior is unproductive to the advancement of your theory and smacks of missionary work.

What does Intelligent Design predict and how do we test it? Can you answer that? Can you even point to some time in the near future that someone might actually do that very basic act of science?

Because until ID has a theory and a testable hypothesis, it is just a philosophy. And whether you dress that pig up in circular mathematical arguments or continue with the pernicious misrepresentation of the Theory and Fact of Evolution while trolling message boards or going to religious conferences, you’ve still got a pig.

Oops. Mea culpa. Having reviewed Dr. Perakh’s critique, I see that the positive/negative sign in the entropy=information statement is, indeed, important to the argument.

Apologies to BC and Salvador Cordova. I should either read before I write or stay out of math discussions altogether.

But I probably won’t. Sal, how’s that list of positive academic reviews of Dembski’s oeuvre coming along?

Posted by Russell on August 19, 2005 08:30 AM

Oops. Mea culpa. Having reviewed Dr. Perakh’s critique, I see that the positive/negative sign in the entropy=information statement is, indeed, important to the argument.

Obviously, someone doesn’t know the Feynman Rule. +200 Cool Points for anyone who knows what that means w/r/t negative signs.

Anyone notice how Salvador’s such a fucking beggar, that he’s always asking for “charitable” interpretations of his hero’s blather?

As it stands, Dembski has no credibility to speak on this topic whatever: his ignorance of biology is profound (almost as great as yours) and his inability to understand that his mathematics is irrelevant to his argument is glaringly obvious.

Wilderlifer, why do you let your feelings get hurt so easily? Dry those eyes and maybe you’ll be able to make better sense of what you’re reading.

Perakh, could you please respond to Sal’s critique?

I liked your book very much, and your on-line articles. But I think Sal has a point. Could you please explain your initial point, or at least clarify? Thank you.

Dear “Interested Reader”:

Given the very large number of comments to many threads on PT, I usually do not read all of them, the more so because many of them are off-topic. Among the comments in this thread I have not read are those by Salvador Cordova. From previous experience I knew there was a very slim chance that anything he said would justify the time spent reading his rants.

Since I let Cordova’s comments pass without reading, naturally I did not respond to them. In my view the usual lack of substance in Cordova’s comments speaks for itself anyway. Another reason for not engaging in answering Cordova’s comments is the expectation that, regardless of how well substantiated a rebuttal of his comments may be, he most probably will respond with more verbose rants, prolonging an unnecessary discussion, which seems to be his passion.

However, since you request that I clarify the matter, I looked up Cordova’s comments in the initial part of this thread. As expected, his comment wherein he claimed to have found an error in my post turned out to be preposterous. Quoting Dembski, I reproduced Dembski’s definition of information as I = -log(P), where the log is to base 2. In Cordova’s opinion I committed a grave mistake (either as a deliberate “sleight of hand” or as an inadvertent error) because in Dembski’s book the quantity I is referred to not as “information” but as an “amount of information,” which, asserts Cordova, is a substantial difference. In a subsequent comment, Cordova referred to his original comment in question, asserting that I “mangled” this matter.

Such comments make one shrug.

Just a few examples.

Look up the article “Information Theory” in the Van Nostrand’s Scientific Encyclopedia. It was written by George R. Cooper, a professor at Purdue University and a renowned expert in information theory. On page 1355 (fifth edition) Cooper gives the same expression I = -log(P). Nowhere does Cooper use the expression “amount of information.” When first introducing this quantity, Cooper refers to it as “informational contents.” Continuing, Cooper refers to that quantity as simply “information” (as in the expression “units of information”). Perhaps Cordova should repudiate Cooper for an improper use of terms — given Cordova’s amusing self-confidence, such a repudiation would be in line with his ridiculous “critique” of Elsberry-Shallit’s paper and of my essay that is referred to in this thread.

Look up the well-known standard textbook on information theory by Richard E. Blahut (“Principles and Practice of Information Theory,” Addison-Wesley, 1990 edition). On page 55 Blahut introduces the same formula I = -log(P) and refers to it as “amount of information.” However, just two lines further down the page he refers to the same quantity as simply “information.” These two terms are interchangeable without causing any confusion (except in Cordova’s mind?).

In the textbook on information theory by Robert M. Gray (“Entropy and Information Theory,” 1991 edition) the expression “amount of information” is not used, while the same quantity I is referred to as “self-information.” In many other papers and books, too numerous to be listed here, the same quantity is often referred to as “surprisal.” Hence, while the term “amount of information” is legitimate, it is not the only choice of a term for the quantity I. It can be referred to as simply “information” equally legitimately if it is interpreted in a quantitative sense. In my essay it has been stated directly in the text that the term “information” was indeed used in a quantitative sense, i.e. tantamount to “amount of information.” The “error” in using the term “information” instead of “amount of information” exists only in Cordova’s imagination and has perhaps been caused by Cordova’s overarching need to find errors in any critique of his master Dembski, at any cost, even where there are none.

On the other hand, just a couple of pages after introducing the “amount of information” I, Dembski refers to the same I as … “complexity”! But of course, according to Cordova, Dembski’s inconsistencies should always be approached “charitably” and Dembski, if we follow Cordova’s charitable approach, never commits errors.

Having read these two comments by Cordova, I did not feel I needed to read the rest of his rants. I have no intention to curtail in any way Cordova’s freedom to post here any comments of his choice — let him expose himself to readers. I have no intention, either, to engage in any further discussion of Cordova’s comments. If you, “Interested Reader,” insist on some additional clarification, please email me personally outside this thread, which has become overloaded with comments and with every passing week attracts less public attention anyway. Cheers, Mark Perakh

Mark wrote:

I did not feel I needed to read the rest of his rants. I have no intention to curtail in any way Cordova’s freedom to post here any comments of his choice

I respect you for that.

Mark wrote:

Look up the well-known standard textbook on information theory by Richard E. Blahut (“Principles and Practice of Information Theory,” Addison-Wesley, 1990 edition). On page 55 Blahut introduces the same formula I = -log(P) and refers to it as “amount of information.” However, just two lines further down the page he refers to the same quantity as simply “information.” These two terms are interchangeable without causing any confusion (except in Cordova’s mind?).

“information” can be “information” or the shorthand for the “amount of information”. Mark has only affirmed that Dembski is using a common shorthand.

The information theorists know that there is a difference between the amount of information in a file and the information itself in the file. If we take the assertion that “amount of information” is the same as “information” to its logical conclusion, then any two files of equal length have the same identical information. That is clearly wrong.

Most readers would not appreciate such subtleties; rather, Mark’s essay exploits the fact that readers may be unfamiliar with these points.

Now, if Mark really knew that Dembski could distinguish between the amount of information and “that which reduces uncertainty” (the definition of information), yet represented his work as confusing the two, then that was a willful choice to inaccurately portray what Dembski wrote. Dembski knows the difference between the two, so do I, and so does Mark. But not all the readers of your article in Skeptic magazine realize that.

Dembski uses the phrase “reduces the reference class of possibilities” to define conceptual and physical information.

That definition of those classes of information is not the same as the “amount of information” which is

I(E) = -log2(P(E))

For example, in a string of 10 coins, the information in them could be represented as

“H H H H H T T T T T”

and the amount of information is

I(E_1) = 10 bits

another string of ten coins could have the following information

“T T H H H T H T H T”

I(E_2) = 10 bits

I(E_1) = I(E_2) = 10 bits

does that mean

“H H H H H T T T T T” = “T T H H H T H T H T”

No way!
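For the record, the arithmetic behind those numbers, as a small sketch of my own (assuming fair, independent coins, so each specific 10-flip outcome has probability 2^-10; the function name is just my label):

import math

def amount_of_information_bits(sequence, symbol_probability=0.5):
    # I(E) = -log2 P(E) for one specific sequence of independent symbols.
    return -math.log2(symbol_probability ** len(sequence))

s1 = "H H H H H T T T T T".split()
s2 = "T T H H H T H T H T".split()
print(amount_of_information_bits(s1))  # 10.0 bits
print(amount_of_information_bits(s2))  # 10.0 bits
print(s1 == s2)                        # False: equal amounts, different information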

Had Mark provided the verbatim definition in his essay where Dembski uses the phrase “reduces the reference class of possibilities”, perhaps it would have been a more accurate portrayal of Dembski’s work. But he did not do that. Why is that? It would have clarified the fact that there is a difference between “information” in the sense of the actual information and “information” in the sense of the “amount of information”.

And by the way, Mark didn’t address the gaffe on channel entropy, and his inaccurate claim that “the term conceptual information seems to coincide with the meaning of that text.”

I point the readers to my rendering of Dembski’s work: Discussion of Perakh’s Article

I’m not saying they should agree with Dembski’s claims; I’m only pointing out you didn’t represent Dembski accurately or charitably. The readers can decide whether my portrayal of Dembski’s work is more accurate than Mark’s.

Salvador

Mark Perakh Wrote:

It can be referred to as simply “information” equally legitimately if it is interpreted in a quantitative sense. In my essay it has been stated directly in the text that the term “information” was indeed used in a quantitative sense, i.e. tantamount to “amount of information.”

A common phrase like “the average information in …” is clearly quantitative; it makes no sense to interpret “information” in that phrase as a pattern like “H H H H H T T T T”. In other words, there’s no (amount of) information in Sal’s posts.

BTW, “average information” gets 24,500 Google hits (a bit more than Dembski got for his alleged Schopenhauer quote).

For example, in a string of 10 coins, the information in them could be represented as

“H H H H H T T T T T”

and the amount of information is

I(E_1) = 10 bits

I suspect Sal doesn’t know about or doesn’t understand compression algorithms, e.g. as in zip files, to be claiming so naively that the two strings have the same amount of information. :-D
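One rough way to see that point with an off-the-shelf compressor, as a sketch only (the strings here are longer than the 10-coin examples because zlib’s fixed overhead would swamp the comparison on very short inputs):

import random
import zlib

random.seed(0)
ordered = ("H" * 500 + "T" * 500).encode()                              # highly patterned
scrambled = "".join(random.choice("HT") for _ in range(1000)).encode()  # coin-flip-like

# The patterned string compresses far more than the random-looking one of the
# same length, even though both use the same two-symbol alphabet.
print(len(zlib.compress(ordered)))    # small: a few dozen bytes at most
print(len(zlib.compress(scrambled)))  # much larger: roughly 1 bit per symbol plus overhead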

Salvador, I’ve been jumping around between here (PT), ARN, and UncommonDescent to read everything you have written in the last two weeks. I think I can almost wrap my brain around this idea, but I did not know the last question I needed to ask to complete the understanding. With what you wrote above you have given me that.

Your two strings of coins above:

“H H H H H T T T T T” and “T T H H H T H T H T”

have identical amounts of information present, but the information is different. I see that. Using your shoebox example, the first string could have been laid down by a human being in a row across the centerline of the box, and the second string is what happens when the box is shaken violently. What I want to know is: how do you quantify the actual information in the two strings, as opposed to just the amount of information? What do you do that distinguishes the two strings? Is this where specification comes in? And is this what Dembski’s big breakthrough is about? Can he actually distinguish between the two strings with his mathematics? There is a quote from some scientist about how until you can put a number on something you don’t really know anything about it, so I pretty much don’t think one has something unless it can be described with numbers. To me numbers take the subjectivity out of existence and put the objectivity into it.

Instead of using random coin flippings let’s use a real world example and take this jpg of a bacterial flagellum and some secretory systems: http://www.pandasthumb.org/archives[…]hea_flag.jpg

One of the proteins in that picture is PilQ*. The amino acid sequence for it is:

MLEESAVTRGKWMLAAAWAVVLVGARVHGAELNTLRGLDVSRTGSGAQVV50 VTGTRPPTFTVFRLSGPERLVVDLSSADATGIKGHHEGSGPVSGVVASQF100 SDQRASVGRVLLALDKASQYDVRADGNRVVISVDGTSQSVDAKRAETPAR150 TERMTASVEAKPHPVAAQAPAKVVKAESAAVPKAALPENVVAAEADEREV200 SNPAQHITAMSFADDTLSIRADGDIARYEVLELADPPRLAVDLFGVGLAT250 RAPRVKSGALRDVRVGAHADKVRLVLDVRGTMPAYRVDRANRGLEVVLGR300 AVARTWRRPLRPRAVVASVAEVEPLRQTPVKSDASPVVEVKDVRFEESSS350 GGRIVMKLSGTSGWKVDRPDPRSAVLTLDNARLPKKFERSLDTSALDTPV400 KMISAFSVPGAGGKVRLVVAADGAIEEKVSQSAGTLSWRLDVKGVKTEEV450 AVAQRTAGFTTEAPAYAAEGAPQQARYRGKRVSFEFKDIDIQNLLRVIAE500 ISKKNIVVADDVSGKVTIRLRNVPWDQALDLVLRTKALGKEEFGNIIRIA550 PLKTLEEEARLRQERKKSLQQQEDLMVNLLPVNYAVAADMAARVKDVLSE600 RGSVTVDQRTNVLIVKDVRSNTERARSLVRSLDTQTPQVLIESRIVEANT650 SFSRSLGVQWGGQARAGQATGNSTGLIFPNNLAVTGGVTGTGAGLPDNPN700 FAVNLPTGTGQGVGGAMGFTFGSAGGALQLNLRLSAAENEGSVKTISAPK750 VTTLDNNTARINQGVSIPFSQTSAQGVNTTFVEARLSLEVTPHITQDGSV800 LMSINASNNQPDPSSTGANGQPSIQRKEANTQVLVKDGDTTVIGGIYVRR850 GATQVNSVPFLSRIPVLGLLFKNNSETDTRQELLIFITPRILNRQTIAQT900 L901 (I presume that is standard FASTA format but I didn’t have time to go over the whole website to doublecheck that is what they were using)

The DNA sequence which codes for that amino acid sequence is:

ATG CTA GAA GAG AGC GCT GTG ACA CGC GGA AAA TGG ATG TTA GCA 15 GCT GCC TGG GCG GTT GTC CTC GTC GGA GCG CGA GTG CAC GGG GCA 30 GAA CTG AAC ACG CTT AGG GGC TTG GAC GTA AGT AGA ACC GGC TCA 45 GGT GCC CAA GTA GTT GTT ACT GGA ACC CGA CCG CCA ACA TTT ACG 60 GTA TTC AGA CTC TCG GGA CCC GAG AGG CTG GTG GTC GAC CTA TCT 75 AGC GCC GAT GCA ACA GGC ATA AAA GGC CAC CAT GAA GGG AGT GGT 90 CCT GTC TCC GGG GTG GTA GCG TCA CAA TTC TCC GAC CAA CGT GCT 105 AGT GTG GGG AGG GTG CTC CTT GCA CTA GAT AAA GCT AGT CAG TAC 120 GAT GTT AGG GCC GAC GGA AAC CGC GTA GTT ATA TCG GTC GAC GGC 135 ACG TCT CAG TCA GTG GAC GCG AAA AGA GCA GAG ACC CCT GCT CGA 150 ACA GAG AGA ATG ACT GCT AGC GTT GAG GCC AAG CCA CAC CCG GTC 165 GCT GCC CAA GCA CCA GCC AAA GTG GTA AAG GCG GAA AGC GCA GCG 180 GTC CCC AAG GCC GCA CTG CCC GAG AAT GTA GTC GCG GCA GAA GCG 195 GAT GAA CGG GAA GTA TCC AAT CCA GCA CAG CAT ATT ACA GCC ATG 210 AGT TTT GCG GAC GAT ACT CTA TCA ATA CGG GCT GAT GGT GAT ATC 225 GCC CGA TAT GAG GTA TTG GAA CTA GCG GAT CCC CCT AGG CTT GCG 240 GTA GAC TTG TTC GGG GTG GGA CTC GCA ACC CGT GCA CCC CGA GTC 255 AAG TCT GGT GCC TTA CGC GAC GTT CGC GTG GGC GCT CAC GCT GAC 270 AAG GTA AGG CTG GTG CTC GAC GTA CGA GGA ACA ATG CCG GCA TAC 285 AGA GTC GAC CGC GCA AAC CGT GGC CTA GAG GTT GTG TTA GGG AGA 300 GCC GTT GCT AGG ACC TGG AGA CGG CCA CTG CGG CCA AGG GCT GTC 315 GTT GCG AGC GTT GCC GAA GTC GAA CCC CTT CGT CAA ACG CCT GTG 330 AAA TCG GAT GCG TCA CCG GTA GTC GAG GTA AAA GAT GTC AGA TTC 345 GAG GAA AGT AGC TCC GGT GGG AGA ATC GTA ATG AAA CTC TCT GGC 360 ACG AGT GGA TGG AAA GTA GAC CGT CCA GAT CCC CGG TCG GCC GTT 375 CTC ACG TTG GAC AAC GCC CGA CTG CCG AAG AAA TTT GAA AGA AGT 390 CTG GAC ACC TCA GCC CTT GAT ACA CCA GTC AAG ATG ATC TCC GCT 405 TTT TCT GTG CCT GGC GCT GGG GGT AAG GTA CGA CTT GTT GTC GCG 420 GCT GAT GGG GCC ATA GAG GAA AAG GTG AGC CAA TCA GCC GGA ACT 435 TTG TCC TGG CGC CTA GAC GTC AAG GGC GTC AAA ACT GAG GAA GTT 450 GCT GTT GCG CAG CGT ACA GCG GGT TTT ACC ACG GAA GCA CCG GCG 465 TAT GCC GCT GAG GGG GCA CCC CAA CAG GCA AGA TAC CGC GGA AAA 480 CGC GTA AGC TTC GAA TTC AAG GAC ATC GAT ATT CAG AAT CTA TTA 495 AGG GTA ATT GCA GAG ATT TCG AAG AAA AAC ATA GTA GTG GCA GAC 510 GAT GTG AGC GGC AAA GTC ACC ATA AGG CTT CGG AAT GTT CCT TGG 525 GAC CAA GCG CTG GAT CTC GTG TTA CGA ACA AAG GCG CTA GGA AAA 540 GAA GAG TTC GGT AAC ATT ATC AGG ATA GCA CCA TTG AAA ACT CTG 555 GAA GAG GAA GCT AGG TTG CGT CAG GAA CGA AAG AAA AGT CTG CAG 570 CAA CAG GAA GAC CTT ATG GTG AAC TTA CTT CCC GTA AAT TAC GCG 585 GTA GCT GCG GAT ATG GCT GCG CGC GTC AAG GAC GTC CTG TCC GAG 600 CGG GGC AGC GTT ACC GTG GAT CAA AGA ACT AAC GTG TTA ATC GTT 615 AAA GAC GTA AGG TCC AAT ACT GAA CGA GCA CGT AGC CTA GTT AGA 630 TCT TTA GAC ACC CAG ACA CCT CAG GTG CTG ATA GAG TCG CGG ATT 645 GTG GAA GCT AAC ACC TCT TTT AGT CGC TCA CTA GGG GTA CAA TGG 660 GGG GGT CAA GCG AGG GCG GGA CAA GCA ACC GGC AAT AGC ACA GGC 675 CTT ATA TTT CCA AAC AAT TTG GCC GTT ACT GGC GGT GTC ACA GGA 690 ACA GGA GCC GGA CTA CCT GAT AAC CCA AAC TTC GCA GTT AAT TTA 705 CCC ACC GGG ACG GGC CAG GGT GTA GGA GGT GCT ATG GGG TTC ACC 720 TTT GGG AGT GCA GGG GGA GCA CTC CAG CTT AAC CTC CGA TTG TCG 735 GCA GCC GAA AAC GAG GGC TCC GTC AAG ACG ATA TCA GCC CCG AAA 750 GTA ACA ACT CTC GAT AAT AAC ACG GCC CGC ATC AAT CAA GGT GTC 765 TCG ATC CCG TTC AGC CAA ACT AGT GCC CAG GGA GTG AAT ACG ACA 780 TTC GTA GAG GCG AGA CTA TCT CTC GAG GTT ACG CCC CAC ATT ACG 795 CAA GAC GGT TCA GTC TTA ATG AGC ATT AAC GCA AGC AAC AAT CAG 810 CCA GAT CCG TCG AGT ACG GGA GCT AAT GGG CAA CCC TCT ATA CAA 825 AGG AAA GAA GCC AAC ACC CAG GTT CTC GTG 
AAA GAT GGC GAC ACA 840 ACT GTC ATA GGG GGT ATA TAC GTG CGC CGT GGC GCA ACC CAA GTA 855 AAC TCC GTC CCA TTC TTG AGT CGG ATT CCC GTA CTT GGA CTA CTG 870 TTT AAG AAC AAT TCA GAG ACA GAC ACA AGA CAG GAA CTG CTC ATT 885 TTC ATC ACT CCT CGA ATC CTA AAT AGA CAG ACG ATC GCG CAA ACC 900 CTT901

Can you show me what the math Dembski uses has to say about these strings? I have other questions but I think this is all I need to get started.

By the way, I have ordered NFL from my bookstore and will read it as soon as it arrives. Is it sufficient and stand-alone all by itself, or should I read any of his other books first or afterwards?

Sincerely, Paul

*I chose this protein because I was able to find it quickly. Sorry it is so long.

About this Entry

This page contains a single entry by Mark Perakh published on August 18, 2005 2:03 PM.
