Those pathetic pesky details again

Dembski has once again shown the scientific vacuity of Intelligent Design.

On Uncommon Descent, Dembski has a posting titled Just say NO to Darwinian just-so stories.

Dembski Wrote:

I guess that’s what happens when you assume that sequence similarity automatically means a common ancestry (of the gene). A more likely scenario is that both cells require a protein with the same function so they have a similar sequence by design.

Once again, an ID perspective seems much closer to reality than the Darwinian (Lamarckian?) just-so stories.

I am not going to argue whether or not the proposed hypothesis is accurate; what I am going to do is compare the scientific hypothesis with Dembski’s claim.

So let’s compare: the paper provides a tentative hypothesis based on scientific data. Dembski instead shows why ID is scientifically vacuous: ‘The designer wanted it so.’ How Dembski has established that such a scenario is more likely is beyond me. I guess the math is too complicated to share…

Dembski’s double standard has been well documented: on the one hand he expects science to provide sufficiently detailed pathways; on the other hand, ID does not require any such ‘pathetic level of detail’. Strangely enough, the ‘pathetic level of detail’ was Dembski’s own requirement.

Dembski Wrote:

As for your example, I’m not going to take the bait. You’re asking me to play a game: “Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.” ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.”

Source.

In the same thread, Dembski is educated by Deanne Taylor on the concept of scale free networks. Enjoy…

It must be comforting to ‘know’ that an Intelligent Designer (read: God) is taking care of His Creation. But to call this science… Bizarre… But understandable if you realize that Dembski’s strength may be in apologetics.

Dembski could have benefited from Darwin’s wise words:

Nothing can be more hopeless than to attempt to explain th[e] similarity of pattern in members of the same class, by utility or by the doctrine of final causes. The hopelessness of the attempt has been expressly admitted by Owen in his most interesting work on the ‘Nature of Limbs.’ On the ordinary view of the independent creation of each being, we can only say that so it is;–that it has so pleased the Creator to construct each animal and plant. [On the Origin of Species, first edition, 1859, Chapter XIII, p 435]

It is so easy to hide our ignorance under such expressions as the “plan of creation,” “unity of design,” &c., and to think that we give an explanation when we only restate a fact. [On the Origin of Species, Chapter XIV, p 482]

Btw, does anyone have an idea who TJ is? As in (TJ: it’s actually a fungus)?

74 Comments

Why does Dembski throw Lamarckian evolution into the mix? Does he think that is still an enemy of creationism? Or is he just worried that any form of evolution is evil? Perhaps he thinks the referenced article proposes a Lamarckian mechanism. Or perhaps he wants to make the point that “Intelligent Design” creationism trumps any rational attempt to explain the world, even attempts that have been shown wrong for over 100 years. Oh, I forgot, he wants to take us back over 200 years, even into the Middle Ages.

Never try to second guess what motivates Dembski. Although he seems to find much solace in apologetics. Somehow that has never surprised me.

H’mmm, from reading this:

Source.

Looks as though Billy Dembski is trying to get others to do his work for him again.

Maybe he is testing ideas for another book.

Dembski’s BEST line in that whole thread…

after Yersinia does a decent job of detailing exactly how evolutionary models predict and explain parts of the immune system, dembski says:

By your great mass of words and facts you’ve lost the train of the argument

ROFLMAO!

how on earth is anyone supposed to take him seriously when he says crap like that???

…It sounds so much like Homer Simpson’s

“Facts? pphhht! Facts are meaningless. You could use facts to prove anything that’s even remotely true!!”

Fascinating reference. Dembski’s follies seem to be never-ending.

so… ISCID is where Dembski, Nelson et al. hang out these days?

I can’t ever remember a thread in PT history where Mike Gene, Dembski, AND Paul Nelson all posted in the same thread!

hmm. in reading these “homer moments”, i find myself increasingly seeing the similarity and usefulness of these quotes when applied to IDiots.

Here, see if you can find some you like:

http://www.angelfire.com/home/pearl[…]quotes1.html

As originally posted by Dembski, via Sir_Toejam:

By your great mass of words and facts you’ve lost the train of the argument…

Ah, this must be the dreaded fallacy of supporting an argument by presenting evidence.

I understand Dembski and his ilk have shown a penchant for complaining about “literature bombing” in the past - probably another instance of the same courageous ID tactic rearing its head.

You have to understand that for an ID filter to succeed the less information the better. Who wants to confuse the issue with actual details?

It is said that if a lie is repeated often enough and loudly enough, people will come to believe it. That isn’t necessarily so.

A real distortion may never be believed fully by anyone, no matter how often or loudly it is proclaimed, but for a misrepresentation to be effective, it does not need to be believed in every detail. It is enough that it leaves behind an impression. People will think that if anyone bothers to promote such a lie, there must be a kernel of truth in it.

The same goes for exaggeration and false implications. Distort the truth and people will think it has some basis in fact. Take a truth and phrase it in such a way that it looks suspicious, or juxtapose it with an acknowledged “gap”, and the mind will be tempted to draw all sorts of ill-founded conclusions.

Is it possible that our species is ‘hardwired’ (a la Levi Strauss) to believe in anything which provides us ‘hope’ for immortality? The task of Enlightenment is daunting to be sure. Nevertheless, it must be our task.

Hi Pim

Since you post so many articles to Panda’s Thumb I was wondering who you are, what you do, etc.

Imagine my surprise that you, probably the most prolific author on Panda’s Thumb, aren’t listed as contributor:

http://www.pandasthumb.org/archives[…]_of_the.html

Why aren’t you on the list of contributors?

“Hi Pim

Since you post so many articles to Panda’s Thumb I was wondering who you are, what you do, etc.

Imagine my surprise that you, probably the most prolific author on Panda’s Thumb, aren’t listed as contributor:

http://www.pandasthumb.org/archives/2004/03/the_…

Why aren’t you on the list of contributors?”

Is it for “the enemies of God” op-ed you’re writing?

Dembski:

A more likely scenario is that both cells require a protein with the same function so they have a similar sequence by design.

This is one of the many cases in which I wonder if Dembski is really that ignorant of biology, or if he knows he’s lying. If both cells “require” identical proteins, the sequences encoding them could still vary a great deal, since there are multiple triplets that encode the same amino acid. For that matter, there are many amino acids that can be substituted while preserving the same conformation and often preserving the function as well.

So you certainly cannot conclude that the requirement of identical function implies that there must be sequence similarity. Identical proteins–let alone different ones with the same function–could differ in a base in nearly every encoding triplet, giving about 2/3 similarity, which is far less than what we observe.
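To put rough numbers on that, here is a minimal back-of-the-envelope sketch. It assumes the standard genetic code and weights the twenty amino acids equally, which is crude, but it is only meant to illustrate the ballpark:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code; amino acids listed with the first codon base varying
# slowest and the third fastest, bases in T, C, A, G order.
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")

CODON_TABLE = {}
for (b1, b2, b3), aa in zip(product(BASES, repeat=3), AMINO):
    CODON_TABLE.setdefault(aa, []).append(b1 + b2 + b3)
del CODON_TABLE["*"]  # ignore stop codons

def max_codon_mismatch(aa):
    """Largest number of differing bases between two synonymous codons."""
    codons = CODON_TABLE[aa]
    return max(sum(x != y for x, y in zip(c1, c2))
               for c1 in codons for c2 in codons)

# Crude average over the 20 amino acids, weighting each equally:
avg_mismatch = sum(max_codon_mismatch(aa) for aa in CODON_TABLE) / len(CODON_TABLE)
print(f"nucleotide identity could drop to roughly {1 - avg_mismatch / 3:.0%}")
# prints roughly 63%, i.e. about two thirds, for two genes encoding identical proteins
```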

Design by itself explains nothing. So how do you explain the sequence similarity except by common descent?

Well, I really hope Dembski wouldn’t say it’s a coincidence, because that would undermine most of his probabilistic arguments allegedly against evolution. One thing I actually agree with Dembski about is that if two sufficiently large informatic objects (e.g. strings) are very similar, then chance is not a compelling explanation for their similarity (this is commonly accepted statistical induction and can be formalized, for instance, with p-values).
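A minimal sketch of that formalization, with purely illustrative numbers (the one-in-four chance of a match per position is the simplifying assumption here):

```python
from math import comb

def chance_pvalue(n, k, p=0.25):
    """P(at least k matches out of n positions) under an independent-chance model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 60% identity over a 300-base stretch, if the sequences were unrelated:
print(chance_pvalue(300, 180))  # vanishingly small, on the order of 1e-37
```

With p-values like that, chance is dead as an explanation; the real contest is between common descent and design.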

I suppose you could hypothesize that the designer maintains a sequence library that contains the encodings of proteins of various functions as they’re found, so that even with the wide latitude for varying sequence while preserving function, we see sequences that are more similar than we expect based on functional similarity alone.

That hypothesis is pretty silly, but without it, you cannot explain sequence similarity with design, whereas you can explain it with common descent. That makes common descent a theory with far greater explanatory power. And as soon as you start adding hypotheses like the above, you can explain away any experimental data at all. In short, ID either explains nothing or explains everything. Thus, it is not science.

Sir_Toejam Wrote:

hmm. in reading these “homer moments”, i find myself increasingly seeing the similarity and usefulness of these quotes when applied to IDiots.

Here, see if you can find some you like:

http://www.angelfire.com/home/pearl[…]quotes1.html

This one seems rather appropriate:

Lord help me, I’m just not that bright.

Dembski wrote:

I guess that’s what happens when you assume that sequence similarity automatically means a common ancestry (of the gene). A more likely scenario is that both cells require a protein with the same function so they have a similar sequence by design.

haven’t heard that one before.

Claim CI141: Similarities in anatomy and DNA sequences simply reflect the fact that the organisms had the same designer.

Source: Sarfati, Jonathan, 2002. Refuting Evolution 2. Master Books, chap. 6. http://www.answersingenesis.org/Hom[…]chapter6.asp

Response:

1. Different forms also (it is claimed) come from the same designer, so similar forms are not evidence of a common designer. Evidence for a designer must begin by specifying (before the fact) what is expected from the designer. When do we expect similar forms, and when do we expect different forms? “Intelligent design” theory will not answer that. Evolution theory has made that prediction, and the pattern of similarities and differences that we observe accords with what evolution predicts.

2. There are similarities that cannot rationally be attributed to design. For example, an endogenous retroviral element (ERV) is a retrovirus (a parasite) that has become part of the genome. There are several kinds of ERVs, and they can insert themselves at random locations. Humans and chimps have thousands of such ERVs in common – the same type of ERV at the same location in the genome (D. M. Taylor 2003).

PaulC, thanks for your comment. Dembski made a ‘more likely’ argument and I pointed out that the calculations are missing. Your comments help to address the ‘more likely’ argument. The abstract mentions that 19% of the amino acids in the human enzyme were identical to the Neurospora one. So first there is the issue of sequence similarity: due to the degeneracy of the code, a particular sequence can contain significant variation and still encode the same amino acid sequence. Then there may be essential parts of a sequence which code for the folding and structural behavior of the protein and may encode the ‘function’. As I understand it, recent research has shown that even in cases with limited sequence similarity, three-dimensional structures can still be conserved. If evolution is all about function, then we may find instances where sequence similarity is minimal but there is still functional similarity. I wonder, if I am correct here, how much of convergence could actually be due to loss of genetic evidence while the actual function was maintained?

Let me rephrase this for those who have an opinion or an answer: a particular sequence is found in X, another sequence is found in Y, and both seem to serve a similar purpose (say, an anti-freeze protein). The sequences in X and Y share no statistical similarity, but do share functional similarity. How does one distinguish between convergence and divergence here? Could convergence in many instances be an example of information lost? Or am I babbling? A low-grade fever and many days of fighting the flu can do that to you :-)

Steve s, thanks for the reference. I should check in with Talk.Origins more often. The issue is that from an apologetics perspective the statement ‘a designer makes more sense’ can appear to be quite powerful but from a scientific perspective “a design explanation is more likely” requires some supporting data.

Dembski Wrote:

A more likely scenario is that both cells require a protein with the same function so they have a similar sequence by design.

Requiring the same function would explain why the sequence similarity, although not complete (19% if I read the abstract correctly), exists. Perhaps Dembski is using the term ‘design’ in a manner to indicate the possibility of both a natural as well as a supernatural designer. Since evolution is all about function, it should not come as a surprise that the “natural designer” of variation and selection would preserve function not sequence similarity.

How would one “explain” the functional similarity but not the sequence similarity from a design perspective? I think I may just have done this. But the conclusion is ‘natural designer’.

Remember that the step from design to designer is inductive and as Wesley Elsberry has long pointed out this step cannot exclude natural designers. This may be yet another case where natural designers seem to trump.

Wesley Wrote:

The apparent, but unstated, logic behind the move from design to agency can be given as follows:

1. There exists an attribute in common of some subset of objects known to be designed by an intelligent agent.
2. This attribute is never found in objects known not to be designed by an intelligent agent.
3. The attribute encapsulates the property of directed contingency or choice.
4. For all objects, if this attribute is found in an object, then we may conclude that the object was designed by an intelligent agent.

This is an inductive argument. Notice that by the second step, one must eliminate from consideration precisely those biological phenomena which Dembski wishes to categorize. In order to conclude intelligent agency for biological examples, the possibility that intelligent agency is not operative is excluded a priori. One large problem is that directed contingency or choice is not solely an attribute of events due to the intervention of an intelligent agent. The “actualization-exclusion-specification” triad mentioned above also fits natural selection rather precisely. One might thus conclude that Dembski’s argument establishes that natural selection can be recognized as an intelligent agent.

I could speculate as to why a supernatural designer would have preserved function but not sequence, but there is little to guide the argument here. One could equally well argue that the designer had forgotten his original recipe and had to derive one from scratch, or that it was designed by a different designer than the original. There is too little to constrain the imagination here.

I am not sure why I am not on the list of contributors; I honestly do not care. Compared to people on the list, I am a mere amateur interested in the issue of evolution and creationism from an early exposure to the Young Earth Creationist version of Christianity. Imagine my dismay that despite my physics background, I had accepted the claims about radiometric dating without double checking… Dalrymple to the rescue… Talkorigins to the rescue… Science to the rescue.

I am very interested in the topic of evolution from the perspective of evolvability, scale free networks, and got interested in the ID claims early on. Studied the claims, bought countless books, read countless articles and was in the end not impressed. Then the Wedge Document.

As a Christian I am struggling to reconcile my faith. As a scientist I am struggling to reconcile my science. Make sense? I consider ID mostly scientifically vacuous as it really makes no scientific contributions beyond the trivial and is inherently based on a gap argument. I consider ID to be religiously unnecessary and dangerous, as it unnecessarily exposes issues of faith to scientific (dis)proof and because it seems to make faith too easy. As a Christian I believe that I come to God on faith alone.

I read a lot about evolution, come across fascinating new research, try to understand how concepts like evolvability and scale free networks help explain why evolution has been so successful, and I like to share my findings and thoughts.

The last few days I have been home sick enough to spend most of my day reading and writing. Which reminds me that I should not neglect my family. But science can be so much fun. I love reading Carl Zimmer’s blog the Loom which brings biological issues to the layperson. I am a long time fan and avid reader of Wesley’s work and try to read all that I can on the issues, from both sides. Hope this helps?

Posted by PvM on February 12, 2006 02:06 PM

I am not sure why I am not on the list of contributors; I honestly do not care. Compared to people on the list, I am a mere amateur interested in the issue of evolution and creationism from an early exposure to the Young Earth Creationist version of Christianity. Imagine my dismay that despite my physics background, I had accepted the claims about radiometric dating without double checking…

I think you will probably find this interesting.

For more info on scale-free networks and evolvability see the works by:

What is evolvability

…evolution tunes the content and frequency of genetic variation to enhance its evolvability. Genetic evolution is not random or entirely blind. Genetic systems are like nervous systems and brains—they have been structured and organised by evolution to enhance their ability to discover effective adaptations.

1. Marc Toussaint (neutrality, evolvability, NFL theorems)
2. Peter Schuster (RNA, scale free networks, evolvability)
3. Wagner, Altenberg (evolvability)
4. Santa Fe Institute, especially Walter Fontana
5. Ruse, Ayala on teleology in nature

See for instance Evolvability by Kirschner and Gerhart for what evolvability is all about.

PaulC Wrote:

So you certainly cannot conclude that the requirement of identical function implies that there must be sequence similarity. Identical proteins—let alone different ones with the same function—could differ in a base in nearly every encoding triplet, giving about 2/3 similarity, which is far less than what we observe.

Design by itself explains nothing. So how do you explain the sequence similarity except by common descent?

Well, you could argue that one triplet is preferable to another on grounds other than what it codes for. For instance, maybe it transcribes slightly faster, or is slightly more resistant to damage from common mutagens?

To be actual science you’d have to hypothesize a certain advantage and then verify it experimentally, which of course Dembski would never bother to do. But as usual, ID has a way to “explain” this without having to do any work at all.

PvM:

I could speculate as to why a supernatural designer would have preserved function but not sequence but there is little that would help one guide the argument here?

Just so it’s clear, I was asking the opposite question: how does design explain the preservation of sequence that is not necessary to preserve function? I was not sure how much really is preserved, but a quick google search found an example of a mouse/human gene alignment. See for example: http://bmc.ub.uni-potsdam.de/1471-2[…]-2-12/F5.htm You can see that longer sequences are preserved than you would need to preserve the encoded protein.

If you assume that mice and humans share an ancestor, then you have a simple way of explaining all that similarity. If you try to infer backwards from function as Dembski suggests, then you have no way of explaining why elements with no connection to function agree anyway.

Finally, the question of preserved function with different sequence is equally valid, and it is explained by mutation.

The funny thing is that the lay person and creationist tends to fixate on the idea that evolution “equals” random mutation. I would argue (caveat: I’m not a biologist) that 99.9% (maybe add some 9s) of evolution is the predictable preservation of genes; the small amount of random variation is necessary, but it is actually the replication of pattern that makes evolution such a powerful mechanism. That DNA shows replication of patterns that do not affect function is strong evidence for common descent, and ID has no compelling explanation for it whatsoever.

Thanks to Stephen Elliott for the link about vision. Yes, another interest of mine is Pax6, homology of vision, evolution of the eye (Nilsson et al). Are you trying to get me into trouble with my wife? :-)

Well, you could argue that one triplet is preferable to another on grounds other than what it codes for. For instance, maybe it transcribes slightly faster, or is slightly more resistant to damage from common mutagens?

In fact, it seems well known that neutrality may mean similar coding but the potential for variation can be quite different. Evolvability and neutrality are intricately interwoven here. So yes, while the sequence may code for the same amino acids, one sequence may be ‘more evolvable’ than the other. Sounds weird eh… Just wait until I tell you that neutrality is selectable… and essential for evolvability…

Actually, I thought you would have been more interested in a working evolutionary biologist who had this to say.

The Origins of Life–and a Career

Briscoe credits her parents with instilling in her a deep love of reading. But, she says, it was religion–her catechism studies especially–that provoked her interest in science. “I was raised Catholic, so I became interested in ethics, philosophy, and origins of the universe, and that kind of spun into my current interests in natural sciences.” …

Not that the article on colour vision for butterflies is not interesting.

BTW. She seems to have been involved in more papers than the entire DI.

http://www.faculty.uci.edu/profile.[…]ulty_id=5288

and only graduated in 1998? Not bad. Thanks for the reference. Somehow I was thrown off by finding you referencing another interest of mine…

Cool paper on Opsins and gene duplication freely available.

Syvanen Wrote:

The hypothesis that Demski is attacking is very very weak to begin with. The authors are suggesting that a horizontal gene transfer event from fungi to the mammalian lineage occurred based on the fact that the Neurospora and human enzymes share 19% amino acid identity. This would not be unexpected assuming neutral divergence from the last common ancestor.

You are right, the hypothesis does not seem to have gained much interest, and the fact that it is mentioned in a biology textbook may be what caused the unnamed biologist to send an email to Dembski. As I said, I am not addressing the hypothesis but rather the claim that ID explains the observations “better”.

The hypothesis that Demski is attacking is very very weak to begin with. The authors are suggesting that a horizontal gene transfer event from fungi to the mammalian lineage occurred based on the fact that the Neurospora and human enzymes share 19% amino acid identity. This would not be unexpected assuming neutral divergence from the last common ancestor.

I believe the identity shared is a sequence of amino acids. There is also supposed to be a highly conserved sequence.

In any case, I do think that Dembski’s criticism of the hypothesis seems correct. But there are better “just-so stories” that can be thrown about as hypotheses, and may be testable. Like having a symbiotic relationship with the fungus, then a horizontal transfer into gonads or the germ cells. Or it may have been earlier, as most horizontal transfers in the human genome appear to have been, and some chordates developed parietal cells, while others did not (either by chance or because they didn’t need the parietal cells).

Well, really, who knows? We won’t always get answers, but we’ll never get answers about the unknown designer, and we must be content with what science can investigate. I guess the real issue is what PvM brought up in his posting, which is that we can actually do something with real scientific hypotheses, like reject Okabe’s. How are we to test Dembski’s “hypothesis”? Are we going to say, “God couldn’t have done it”? Or can we say that an unknown alien designer, of unknown abilities, could or could not have used the fungal genetic sequence to make vertebrate parietal cells? The whole problem with Dembski’s “hypothesis” is that it isn’t a hypothesis at all, since we can’t say that God, or even an unknown physical being, could not have just done this or that.

Conserved and identical sequences mean something in evolutionary theory, in other words. Problems can appear around conserved sequences, while indeed, “God used the same sequence in both organisms for similar purposes” does fit the evidence every bit as well as Dembski says. It’s just that every other conceivable outcome also fits the evidence to a T. Which means that we can’t do science, we can’t even have problems, and we can’t find out anything as to why (in the lesser sense of why) the genes are similar in fungi and in bacteria.

Dembski fails to explain divergence of genetic material and “design”, and he fails to explain similarity and identity where these are to be found. We can do something with genetic similarities occurring in highly divergent organisms, and this is because we put close similarity into a context of overall dissimilarity (relative to organisms with little divergence). We can follow the trail, IOW, and discover genetic sources for the genes which are more similar than expected. Thus we can do science.

As has been noted often previously, this is how we detect borrowing of material in human writings. Most IDists don’t have any problem with normal practices of determining sources of similar and identical material until one gets to “macroevolution”. It’s another reason why we know they’re pseudoscientists, in addition to the one where their “hypotheses” simply assume that everything is designed, thus everything is as it is because it is the proper (but apparently not best) design in the mind of the unknown (?) designer.

Glen D http://tinyurl.com/b8ykm

jeffw Wrote:

Methinks that assumes a Designer who’s got a deadline to meet and can’t afford to waste time bug-hunting after a find-and-replace on His genetic code…pretty theologically specific!

Neutral changes != Bug hunting.

Right, but that’s not what I meant. I meant that the software engineer’s “If it ain’t broke, don’t fix it” maxim is based on mortal fallibility–when you swap in new code you risk introducing new bugs along with it, so you should only do so when the payoff is a serious functional improvement. Doesn’t seem to me like your average omnipotent omniscient Unnamed Designer would really need to worry about that issue.

Is it possible that our species is ‘hardwired’ (a la Levi Strauss) to believe in anything which provides us ‘hope’ for immortality?

I think you mean Leo Strauss. But yes, ID is more useful in the study of jeans than genes.

No, he meant Claude Levi-Strauss.

Posted by whoever on February 13, 2006 05:30 AM

pim

It didn’t help me much. I found you listed as a contributing author on three papers in the journal of physical oceanography between 1989 and 1995. Nothing before or since.

What is it you’ve been doing the past 10 years?

I find it very odd the major contributor to Panda’s Thumb has no biographical data listed among the dozen other contributors who have.

It’s almost like your CV is being hidden for some reason.

I did however find your name in 7,300 places around the same time as those oceanographic papers.

http://groups.google.com/groups?hl=en&q=%22p

It seems your career highlights are all in usenet flame wars.

You seem to be the designated disposable attack dog for all these reputable scientists, Pimmy boy. They’re embarrassed to have anyone know your history and lack of accomplishment but need someone to say the nasty things propriety keeps them from saying for themselves.

Does that about sum up the situation?

LOL.

Who are you? Are you so embarrassed at yourself that you hide behind a net “handle”?

With what authority do you speak?

it must be acknowledged that the history of mathematics is full of incomplete proofs taken seriously, including significant claims given without backup.

Of course they are taken seriously, but that doesn’t mean that they are accepted credulously. The same goes for empirical claims, such as for cold fusion.

There is also the notion of “zero-knowledge” proof. Rather bizarrely, one can devise protocols that convince skeptics that you in fact do have a proof (to any probability they wish to test you to), yet yield absolutely no content about what is in the proof!

Wrong. “Zero-knowledge proofs” have nothing to do with you having a proof. Rather, you “prove” to someone that a certain statement is true, without revealing any information other than the veracity of the statement. But the so-called “proof” is not a mathematical proof, since there remains a non-zero probability that the statement is false.

So far as I know, no mathematician has taken up this practice.

The fact that “zero knowledge proofs” aren’t actually proofs might have something to do with that.

Popper's Ghost Wrote:
William E Emba Wrote:

There is also the notion of “zero-knowledge” proof. Rather bizarrely, one can devise protocols that convince skeptics that you in fact do have a proof (to any probability they wish to test you to), yet yield absolutely no content about what is in the proof!

Wrong. “Zero-knowledge proofs” have nothing to do with you having a proof.

Sure they do. Read what I wrote: you can convince skeptics, to any probability they insist on, that you do have a proof. The usual way is to publish the proof, but an alternative way is via zero-knowledge protocols.

As a simple example, I could convince skeptics that I have a proof that P=NP by putting up a webpage that takes all challenges and succeeds without exception over several years of hostile testing.

In general, any mathematical proof can be converted into a zero-knowledge protocol of this form, although it is usually less obvious than the P=NP example.

As a simple example, I could convince skeptics that I have a proof that P=NP by putting up a webpage that takes all challenges and succeeds without exception over several years of hostile testing.

Not to nitpick, but I could think of some alternatives that might seem reasonable to me since I’m very inclined to think P!=NP.

One is that nobody had sent you a hard instance of an NP-complete problem. It would strain credulity for this situation to go on indefinitely, but it is true that for many NP-hard problems, most instances chosen at random turn out to be solvable by polynomial-time algorithms. This is why it’s dangerous to make up your own encryption system based on what you believe to be an intractable problem. I think there are some NP-complete problems that don’t suffer from this, but I’m not sure how you prove it.

Another one is that you’re not using a Turing-equivalent computer to solve the problems. Scientifically speaking, you only need to be using a computer based on natural principles, and there’s no a priori reason to think that the universe is composed of discrete deterministic computing elements. The reason there’s so much interest in quantum computing now is that it appears to allow you to do a lot more computation at once than an embedding of a deterministic machine. While it is not as powerful as a non-deterministic Turing machine, that doesn’t rule out systems that are.

A long time ago, I used to think about this for fun and I can think of two models that give you more power than a deterministic TM, though I won’t speculate on the plausibility of the physics.

The first is simply a device to send information into the past. In that case, suppose you want to satisfy a 3-SAT instance of k variables. First, you read a proposed solution as a series of k bits from the future. Then you see if it satisfies your boolean proposition. If it does, send the same bit string back into the past. If it fails, send a different one (e.g. interpret it as an integer and add 1 mod 2^k). The only non-contradictory outcome is that you have received a satisfying solution from the future (I’m not sure what happens if there is no solution but you could allow a small probability of exit with presumed failure). I could imagine that if this were possible, the actual computation would involve finding an equilibrium of some sort, but it would not require time as we understand it.
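For concreteness, here is a toy sketch of the only step in that scheme an ordinary computer would actually perform, namely checking in polynomial time whether a proposed assignment satisfies a 3-SAT instance (the encoding is just an illustration):

```python
def satisfies(clauses, assignment):
    """True if every clause has at least one literal made true by the assignment.

    A clause is a tuple of nonzero integers: +i means variable i, -i means NOT i.
    The assignment maps each variable index to a bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR x3)
clauses = [(1, -2, 3), (-1, 2, 3)]
print(satisfies(clauses, {1: True, 2: True, 3: False}))  # True
```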

The second would assume that the “many worlds” model of quantum events is true and (implausibly) that you could communicate between worlds resulting from quantum events. In that case, you simply use quantum events to construct your NP problem solution. If the problem is solvable, then some series of quantum outcomes solves it. Then you just use your handy inter-world communicator to combine the outcomes. The best part about this is that you ought to be able to use this technique itself to find a functioning blueprint of such a communicator. Just collect a series of quantum outcomes and if the resulting bits look like the design of a machine, then build it and see if you can use it to communicate with other universes. :) (So why aren’t we doing it?)

You might counter that the above ideas are science-fiction silly. That’s true, but they can be formalized rigorously as computational models. I’m not sure where they fall on the silly scale as compared to the conjecture that P=NP. Both ideas may be provably false given our knowledge of physics, but our knowledge of physics does not rule out the possibility of constructing a computer more powerful than a Turing machine.

PaulC Wrote:

As a simple example, I could convince skeptics that I have a proof that P=NP by putting up a webpage that takes all challenges and succeeds without exception over several years of hostile testing.

Not to nitpick, but I could think of some alternatives that might seem reasonable to me since I’m very inclined to think P!=NP.

As I said: “any mathematical proof can be converted into a zero-knowledge protocol of this form”. In general, you can always embed your proof into a deduction graph, and then in a zero-knowledge manner convince the world that you know of an appropriate path in the graph. I chose P=NP as an example simply because there is an intuitive self-evident protocol.

Some of this could get rather bizarre. If I know of a counterexample to Goldbach’s conjecture, i.e., an even number greater than 2 that is not the sum of two primes, I could convince skeptics of this fact and never divulge its value. If I had a proof that there were exactly a certain number of counterexamples, I could convince skeptics that I provably knew the precise number of counterexamples, without telling them how many!
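As an aside, checking whether a particular even number is such a counterexample is the easy part; the zero-knowledge trick would lie in convincing skeptics that you hold one without revealing it. A purely illustrative sketch of the checking step:

```python
def is_prime(n):
    """Trial-division primality test; fine for illustration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_goldbach_counterexample(n):
    """True if the even number n > 2 cannot be written as the sum of two primes."""
    assert n > 2 and n % 2 == 0
    return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

print(is_goldbach_counterexample(100))  # False: 100 = 3 + 97, among others
```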

In general, you can always embed your proof into a deduction graph, and then in a zero-knowledge manner convince the world that you know of an appropriate path in the graph. I chose P=NP as an example simply because there is an intuitive self-evident protocol.

I accept the concept of zero-knowledge proofs as cryptographic protocols, but your thought experiment (roughly: send me intractable problems that I solve without telling you how; therefore you eventually conclude I have a polynomial time algorithm) is only a proof of P=NP if you make particular assumptions about what kind of computational model can be embedded in a physical system. My point was that there’s no a priori reason to assume that the most powerful computer we can realize in nature is a deterministic Turing machine.

There might, for instance, be some physical device as powerful as a non-deterministic Turing machine (one based on sending information back in time as I suggested). Thus, the fact that I could solve the NP-complete problems quickly might not mean I have a polynomial time algorithm for solving them. I might have a physical realization of a more powerful computational model.

This objection goes away if you restrict the protocol so that I at least agree on some particular computational hardware that I will use to solve the problems; then you can also measure the number of computational steps used to solve them. You’d also have to verify that this is the only means I’m using to solve the instances of the problem.

Actually I came up with a third objection: namely, I don’t have a polynomial time algorithm, but a superpolynomial time algorithm with a small exponent (e.g. 2^(n^(1/5))). To rule that out, you’d need to keep giving me bigger and bigger instances until you were satisfied that you had bounded the asymptotic complexity. But even then, you could always propose a superpolynomial complexity consistent with the experimental data.
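A quick numeric illustration of why that is hard to rule out empirically (the instance sizes are arbitrary, chosen only to show the trend):

```python
for n in (10**3, 10**6, 10**9, 10**12):
    superpoly = 2.0 ** (n ** 0.2)   # 2^(n^(1/5))
    cubic = float(n) ** 3
    print(f"n = {n:>14,}   2^(n^(1/5)) = {superpoly:10.3g}   n^3 = {cubic:10.3g}")
# The superpolynomial curve only overtakes n^3 somewhere around n ~ 1e10,
# far beyond anything a challenge website could realistically be tested on.
```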

Under the circumstances, I’d probably stay skeptical until a proof was published. There’s usually a lot of room for smoke and mirrors in these kinds of restricted challenge tests for new technologies.

Methinks that assumes a Designer who’s got a deadline to meet and can’t afford to waste time bug-hunting after a find-and-replace on His genetic code…pretty theologically specific!

Actually, that’s a pretty valid assumption: all those bipeds with free will running around in his universe with scissors and nukes wouldn’t leave him much time for tinkering.

PaulC Wrote:

In general, you can always embed your proof into a deduction graph, and then in a zero-knowledge manner convince the world that you know of an appropriate path in the graph. I chose P=NP as an example simply because there is an intuitive self-evident protocol.

I accept the concept of zero-knowledge proofs as cryptographic protocols, but your thought experiment (roughly: send me intractable problems that I solve without telling you how; therefore you eventually conclude I have a polynomial time algorithm) is only a proof of P=NP if you make particular assumptions about what kind of computational model can be embedded in a physical system.

Not at all. It’s a protocol for convincing you that I really do have a proof, while not revealing a single step of the proof. I mentioned the P=NP example only to give something quick and dirty and intuitive, nothing more.

My point was that there’s no a priori reason to assume that the most powerful computer we can realize in nature is a deterministic Turing machine.

Which is ultimately irrelevant: if those methods exist, then zero-knowledge protocols, and cryptography in general, are dead.

Actually I came up with a third objection: namely, I don’t have a polynomial time algorithm, but a superpolynomial time algorithm with a small exponent (e.g. 2^(n^(1/5))). To rule that out, you’d need to keep giving me bigger and bigger instances until you were satisfied that you had bounded the asymptotic complexity. But even then, you could always propose a superpolynomial complexity consistent with the experimental data.

Again, you are completely missing my point: it was nothing but a quick and dirty example. The actual rigorous protocol involves embedding a putative proof that P=NP into an appropriate deduction graph, and supplying answers to challenges based on that graph. If I only have a proof of P almost NP, I will blatantly lose under the P=NP protocol. To add insult to injury, I will not even be able to convince you that I came close.

I mentioned the P=NP example only to give something quick and dirty and intuitive, nothing more.

OK, I accept that. My point was that your initial example, as stated, would be a poor substitute for a mathematical proof, for a variety of reasons that I outlined.

An actual zero-knowledge proof using a rigorous protocol would, presumably, be a reasonable substitute for an actual proof up to a high probability (though my knowledge of zero-knowledge proofs is pretty limited). I wasn’t disputing that.

In short, I was not criticizing zero-knowledge proofs, but rather commenting on the “quick and dirty” nature of your example. It wasn’t obvious from your original comment that you were offering it as a rough analogy rather than an informal description of a process that you were claiming to be isomorphic, in a rigorous sense, to a zero-knowledge proof.

JEEEEEEBUS!!

how many times do the creationists fly this CANARD?!

this ‘argument’ has been so thoroughly refuted that it’s tragicomic to see it floated again.

one more time:

http://www.talkorigins.org/faqs/molgen/

About this Entry

This page contains a single entry by PvM published on February 11, 2006 11:55 PM.
