A little knowledge…


Over at Uncommon Descent, the poster PaV has a post entitled “Programmers Only Need Apply”. In it, they note, but fail to discuss, this paper: Xue W, et al., Senescence and tumour clearance is triggered by p53 restoration in murine liver carcinomas. Nature 2007; 445: 656–660. What gets the poster excited is not the finding that restoration of the protein p53 can stop tumor growth (which, amusingly, drives yet another nail in the coffin of Discovery Institute Fellow Jonathan Wells’s non-mutational model of cancer), but that the authors use the word “program” to describe the cellular senescence pathway activated by p53.

The use of the word “program” highlights that proponents of NDE have an even sterner task at hand: explaining how the logical loop of a “program” can be built up using NDE mechanisms. There is a ring of “irreducibility” to the idea of a “program”, since each part of a “program” is indispensable and likewise an integral part of the program’s intended output. Genetics is looking everyday to be more and more like an exercise in computer programming–just as IDists have predicted.

Uh, guys, the use of the word “program” is a convenient analogy; we use the term “program” to help us grasp the timing of activation of the cell-death and senescence pathways, but they aren’t human programs. An instructive example comes from John R. Searle:

Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.

Get that, DI folks? It’s a metaphor, not an actual computer program. The fact that PaV gets so exercised over the term “program” is a bit puzzling. The program metaphor is used extensively in biology, from developmental programs to programmed cell death (in which p53 also plays a key role). In fact, the term “programmed cell death” has been around since 1964, so one is forced to conclude that PaV doesn’t know very much about the biology of cell death (or possibly biology at all).

Undeterred, PaV goes on to claim that the p53 cell-senescence pathway must be irreducibly complex, because Xue et al. used the term “program” in their abstract. Would it be too much to ask PaV to actually look up the literature on cell senescence and cell death, rather than pontificate on the basis of an abstract?

A little reading would show that the p53 pathway cannot be IC: when you knock out p53, other systems take over. p53 is the major, but not the only, gatekeeper of cell senescence. Knocking it out certainly makes the system more fragile, but it doesn’t make the system fail completely, which is Behe’s criterion for irreducible complexity. Indeed, there is quite a large literature on the origin and evolution of the programmed cell death pathway.

So, the take-home message, folks: don’t build an elaborate scenario on the basis of a metaphor; learn a bit of the biology of the system instead.

PS. I was amused by this statement:

Behe and Snoke’s paper shows the huge improbability of placing two amino acids side-by-side via gene duplication and random mutation.

Actually, it shows that even in the complete absence of selection, binding sites such as the DPG binding site in haemoglobin can evolve in quite reasonable time frames, and the bacterial populations in a bucket of soil will do it much faster. Nice own goal there.

77 Comments

so one is forced to conclude that PaV doesn’t know very much about the biology of cell death (or possibly biology at all).

PaV definitely knows nothing about biology at all. There is not a single reasonable PaV post at UD. E.g., in a recent post he confused changes in inversion frequencies (the prevalence of inversions already present in a given Drosophila species) under different climatic conditions with the inversion rate (the frequency at which new inversions arise). However, this is the normal “quality” of biological knowledge displayed at UD. Just remember DaveScot’s 1n Jesus speculations and what he wrote about translocations. Recently, Dave had occasional flashes of insight, which he interpreted as Taxonomy, the Neutral Theory, the Molecular Clock and “Survival of the Fittest” exploding. If your IQ is north of 150 there is of course no need to verify such claims. Don’t they have a single biologist over there who could prevent these guys from posting such bullshit? Dembski either doesn’t care or his biological knowledge is no better. BTW, over at the overwhelmingevidence training camp, quizzlestick is collecting stupidity points by “proving” that ID articles have been published in peer-reviewed journals with this example. Thus, we are observing not only a lack of biological knowledge but also an absence of common sense.

Dave had visions again: This time he noticed Mendelian Genetics and the Genetic Code exploding. Soon there’ll be nothing left from common biological knowledge.

THE STUPID! IT BURNS!

I mean, geez, sparc, did you have to do that? I felt my eyeballs melting from the concentrated idiocy. Couldn’t they at least put a minute fraction of effort into actual science, instead of this concentrated nonsense? It makes the UFO fanatics seem logical.

And DaveScot, yeah, according to him every new discovery is the death of some aspect of biology. Pity none of it was predicted by any ID types, and it’s all just more “materialistic” genetic mechanisms that follow the central dogma. Sheesh!

There is a ring of “irreducibility” to the idea of a “program”, since each part of a “program” is indispensable and likewise an integral part of the program’s intended output. Genetics is looking everyday to be more and more like an exercise in computer programming—just as IDists have predicted.

How can a person who has access to a computer–the Internet, even–claim that “each part of a program is indispensable?”

Look, here’s a non-irreducibly complex program.

function y=addoneto(x)
    y=x+1
    y=y+0
end

Can you find the part which is not indispensable? Look carefully now.
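[The same point, as a runnable Python sketch added for illustration; the function names are hypothetical. Delete the dispensable statement and the program’s output is unchanged on every input, so the “program” is reducible.]

```python
def add_one_to(x):
    """The original 'program': contains a dispensable statement."""
    y = x + 1
    y = y + 0  # this part is NOT indispensable; it changes nothing
    return y

def add_one_to_reduced(x):
    """The same program with the dispensable part deleted."""
    return x + 1

# The reduced program produces identical output on every input,
# so not every part of a program is indispensable.
for x in range(-10, 11):
    assert add_one_to(x) == add_one_to_reduced(x)
```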

Look, here’s a non-irreducibly complex program.

function y=addoneto(x)
    y=x+1
    y=y+0
end

Ah, but any random mutation to that program can’t make it add one to x any better! It can only make it WORSE! You’ve just DISPROVED DARWINISM!

Sorry. I’ve just been reading the comments on the link sparc provided and I think I’ve lost half my upper brain function. If the original post was bad, those comments are… words fail me.

In British soap opera, a genre as ritualistic and stylised as Noh, there’s a popular set piece. A bloke and his girlfriend are in a pub, and another bloke starts giving him some lip. First bloke gets upset, and squares up to land a punch on his antagonist. “Leave it, Dave,” says his girl. “‘E’s not worth it”. There’s then either a bit of a barney followed by police, or glares, readjustment of jackets and a storming out.

It’s astonishing how often ID arguments make me think of this, these days.

R

That whole post can be summed up as “Heh! The evolutionists said ‘program’!”

…from that post:

“This is just very good programming.

Go God!”

(reposted from AtBC )

“Programmers Only Need Apply”

So, PaV, the scientists who made this discovery weren’t qualified to make this discovery, and they should have stepped out of the way for the likes of you? No thanks.

“Just as IDists have predicted”

Oh, did you predict this discovery about RNAi? Did you publish that prediction somewhere? Is it in your pathetic little journal? Not only has that ‘journal’ failed to publish its last 5 issues, but nobody can even tell me if there’s a new one coming out ever.

“Behe and Snoke’s paper shows the huge improbability of placing two amino acids side-by-side via gene duplication and random mutation.”

Yeah, as long as the earth is 10,000 years old and the size of a pickle barrel. If you use the real Earth, his numbers come out, shall we say, a bit differently:

Based on the math presented there [in Behe & Snoke], it appears that this sort of mutation combination could arise about 10^14 times a year, or something like 100 trillion times a year.

http://www.pandasthumb.org/archives[…]omment-53184 http://www.pandasthumb.org/archives[…]f_beh_1.html http://scienceblogs.com/dispatches/[…]ible_com.php

PaV continues the tard in the comments:

Rule #2: Every evolutionary biologist must take a class in computer programming.

Rule #3: Every IDer has to drive to their local community college and take Algebra 101, Biology 101, and Genetics 101. Utterly uneducated creationism is boring. Get some book learnin and bring a little meat to the dinner table.

(waits for someone to check the approval queue…)

Yes, but we know that computer programs can in fact be irreducibly complex. A couple of lifetimes ago I submitted a short job to the university computer center. Unburst from the one page of output that was mine was the entire 100 pages or so of the university’s Cobol program for printing payroll checks.

Naturally, I browsed through this to see what real Cobol looked like. There was one page of code, with comments at the top:

“This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works.”

The comment was signed and dated, as one might expect, by the maintenance programmer.

Come to think of it, the whole history of software engineering has been the result of realizing that all programs eventually become irreducibly complex through the normal evolutionary path we call the software lifecycle.

I just realized I have about a dozen journal articles I should write on this topic, viz. irreducible complexity evolving from unintelligent design. But I am not sure off the top of my head that I know which side of the PT argument I’m going to be on by the time I finish these masterpieces.

Steve S Wrote:

(waits for someone to check the approval queue…)

Can take a while as I snooze away on the other side of the world.

Re “This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works.”

Maybe the module defined something - a variable or a smaller function - that does get used somewhere?

Maybe the code generated for the module inserts space between two other things that don’t work without the spacing?

Maybe I’m getting a bit off topic with these speculations?

Bye now. Also exit return and logoff.

Henry

Overwhelming Evidence now has a wiki.

http://en.wikipedia.org/wiki/Overwh[…]ing_Evidence

Perhaps something about their references to non-journal articles as examples of journal articles needs to be included?

I think that a lot of these folks are very literal thinkers. (This is my own opinion, based on my own observations.) PaV took the word “program” literally, and is so literal-minded it seems he cannot even conceive of the idea of a metaphor. They also tend to think highly of technicalities and will hold them up as if they were real evidence, or even feel that a technicality will trump real evidence. It’s about winning the game more than trying to understand anything. At first I thought that literal thinking was always a sign of unintelligence, but now I think it’s more like a learning disability.

Henry J Wrote:

Re “This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works.”

Maybe the module defined something - a variable or a smaller function - that does get used somewhere?

Maybe the code generated for the module inserts space between two other things that don’t work without the spacing?

Maybe I’m getting a bit off topic with these speculations?

Bye now. Also exit return and logoff.

Henry

Quite likely there was a bug in another area of the program that overwrote the memory that the routine occupied (ie a buffer overrun). With it removed, it may have overwritten something important instead. Weirder problems have happened.

There is a ring of “irreducibility” to the idea of a “program”, since each part of a “program” is indispensable and likewise an integral part of the program’s intended output.

If every part of Windows XP is indispensable, then why weren’t they all in Windows 2000, or Windows NT, etc.?

It seems that the IDiots are morons, too.

That’s easy, PG. Because they share a common designer.

That’s easy, PG. Because they share a common designer.

Bill Gates is the Intelligent Designer?

well, that WOULD explain a lot.

damn buggy crap.

That’s easy, PG. Because they share a common designer.

And how exactly would that make their parts indispensable? Your response is a complete non sequitur, moron.

BTW, if I mistook Wheels for a creationist troll but he isn’t one, my bad, but his response is still absurd.

I was parodying the standard Creationist explanation for why there are many variations among similar living things, or why so much of the genome is common to all modern life, *ahem* imbecile. :)

Gee, PG, you should get that thing checked; it seems to go off at the slightest touch. I had a shotgun like that once. Damn near had a Cheney moment.

Yes, but we know that computer programs can in fact be irreducibly complex.

Your anecdote certainly doesn’t demonstrate it. All it shows is that (if the comment is even true, which it may not be) there is some code that can’t be removed without the program failing; that certainly doesn’t mean that no part of the code can be removed – for all we know, even that routine could be removed as long as one of its statements (probably a data declaration) is retained. And if some code is removed and the program fails to function as intended, so what? Unless it loops indefinitely, it still produces some output, even if that output is the empty string. And even if there did exist an “irreducibly complex” program somewhere, what of it?

Come to think of it, the whole history of software engineering has been the result of realizing that all programs eventually become irreducibly complex through the normal evolutionary path we call the software lifecycle.

Uh, no, it hasn’t. You seem to have confused inscrutability with irreducible complexity, but even then, there’s a lot more to the history of software engineering than making code comprehensible.

I was parodying the standard Creationist explanation for why there are many variations among similar living things, or why so much of the genome is common to all modern life, *ahem* imbecile.

Even as a parody it missed the point, cretin.

Gee PG you should get that thing checked it seems to go off at the slightest touch

Not really; “Wheels” is similar to the moniker of a creationist troll who has posted here, and his comment was incredibly dumb, even as parody, because there being a single designer wouldn’t be any reason at all why parts would be indispensable.

so one is forced to conclude that PaV doesn’t know very much about the biology of cell death (or possibly biology at all).

He certainly knows next to nothing about biology (he argued that frameshift mutations require 2 mutations… right next to one another). On the other hand, I believe he actually has a biology degree, which makes him UD’s resident academic in biological sciences. Ouch.

Alright, calm down every one and stop the name calling.

Popper's Ghost Wrote:

Even as a parody it missed the point, cretin.

Well, if you’ll take a breather for a moment from your leg-lifting insult campaign, I’ll explain why I said it: it’s the standard Creationist non-answer that misses the point it’s supposed to address. Whether the question is why there are similarities among different organisms in similar niches, dissimilarities among closely related organisms, the commonality of many genes among all organisms (somewhat like the commonalities among the various Windows releases), or even the fact that all living things are made of basically the same atoms, you can count on the “savvy” Creationist to simply ascribe it to the work of the same designer (must be an unimaginative one, for all that intelligence) and claim it as evidence that said designer is at work behind the scenes. It’s such a question-begging response that it can’t really address anything, but it’s used to cover any number of bases from biology to basic physics and chemistry. The point is that the answer given doesn’t address the question; it’s a stock response thrown around whenever it’s seen as convenient, because the people using it don’t understand the criticisms or the refutations.

On another level, though, there actually is a “common designer” of sorts for the various Windows releases, yet Windows often goes “non-functional” anyway due to one of its parts not working right, which, as we all know from Behe the Ever-Bloviating, is a sign of irreducible complexity! I understand your criticism of the idea that “a program,” as in “the entirety of any given program,” is irreducibly complex; I’m just making an off-the-cuff remark that ties into inept Creationist rhetoric and the subject of Windows.

And which Creationist troll used to post with “Wheels” in their handle? I used to post under this handle around here semi-regularly, but the atmosphere took a turn for the rank, so I’ve been scarce.

Anyone remember Avida, the artificial-life platform in which digital organisms evolved irreducible complexity? The research made the cover of Discover Magazine.

Avida seems to be a particular sore point for DaveScot.

Personally, I find the Humies awards handed out by John Koza in the GP world a great talking point with ID folk. Here are programs that have not only evolved, but evolved to the point of doing something better than any human, not just better than the programmer.

The original quote maintained that “finding dead code” is “computationally unsolvable.” I think what he really meant was “finding ALL dead code” is “highly impractical.”

No, he didn’t mean that. He meant that the problem of finding all dead code is formally, mathematically, provably, unsolvable – a program that finds all dead code is truly impossible.
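[The standard reduction, sketched here for illustration (this is an editor-added sketch, not from the thread; both function names are hypothetical): a total, always-correct dead-code detector would decide the halting problem, so none can exist.]

```python
def perfect_dead_code_detector(program_source, line_no):
    """Hypothetical oracle: True iff the statement at line_no can never
    execute, correct for *every* possible program.  Turing's result
    implies no such total, always-correct function can be written."""
    raise NotImplementedError("provably impossible in general")

def halts(mystery_source):
    # Append a marker statement that is reachable exactly when the
    # mystery program runs to completion.  Deciding whether that marker
    # is dead code would decide whether the mystery program halts --
    # which is undecidable.  Hence the oracle above cannot exist.
    wrapper = mystery_source + "\nmarker = True"
    marker_line = wrapper.count("\n") + 1
    return not perfect_dead_code_detector(wrapper, marker_line)
```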

Finding such in a particular, human-crafted set of programs may be comparatively easy.

Well, of course there are many human-crafted programs for which it is easy to determine whether they halt. But there are also human-crafted programs for which no one knows how to show whether they halt. For instance,

for( t = 4;; t += 2 ){
    found = 0;
    for( a = 2; a <= t - 2; a++ ){
        b = t - a;
        if( is_prime(a) && is_prime(b) ){
            found = 1;   /* t is the sum of two primes, as Goldbach predicts */
            break;
        }
    }
    if( !found ){
        printf("Goldbach's conjecture is false for %d\n", t);
        exit(0);
    }
}

Judging by the number of responses that suggest people were taking my post seriously,

A common response when someone screws something up is “I was just joking”.

In my experience, though, which is apparently far more limited than yours, a lot of “bugs” have to do with interpretation, rather than inherent problems with the code. In other words, the program is executing exactly as coded, but the outcome is not acceptable to the user.

The discussion was about bugs, not “bugs”. A bug is a failure of a program to perform according to its specification.

if (PIsNP()) {
    foo()
} else {
    bar()
}

So is bar() used? Here, PIsNP() is difficult to solve.

This is not a good example. That no one has proven whether P is NP does not imply that it would be difficult to evaluate PIsNP(), which can only be written once someone has managed to prove that P is or is not NP. And at that point, an implementation of PIsNP() could simply be return(true) or return(false). Even if PIsNP() encodes the proof itself, that proof won’t necessarily be difficult to evaluate.

But it’s only a difference in the mind; in the real world, there’s no distinction of any value between “too many to instantiate in practice” and “infinite”.

I wasn’t talking about “too many to instantiate in practice” in the context of nature. My whole point was that it’s *not* only a difference in mind. The standard ID “trick” is to make people assume that “more possibilities than you can imagine” is “infinite.” Nature has little problem trying out “more possibilities than you can imagine.” I thought this was clear in my response.

No, he didn’t mean that. He meant that the problem of finding all dead code is formally, mathematically, provably, unsolvable — a program that finds all dead code is truly impossible.

I think the key was that I read “finding dead code” (in the original) as “finding ANY dead code,” whereas the OP (and you) mean “finding ALL dead code.” Hence, my capitalization of “ALL” in my response.

The discussion was about bugs, not “bugs”. A bug is a failure of a program to perform according to its specification.

Wikipedia defines a software bug as “an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result).” I.e. “intended,” not “specified.”

This is not a good example. That no one has proven whether P is NP does not imply that it would be difficult to evaluate PIsNP(), which can only be written once someone has managed to prove that P is or is not NP. And at that point, an implementation of PIsNP() could simply be return(true) or return(false). Even if PIsNP() encodes the proof itself, that proof won’t necessarily be difficult to evaluate.

Dizzy caught this. I clarified later:

No, the idea here is that PIsNP() is such a hard problem that the compiler is not going to be able to figure it out without actually running the program.

But, notably, except in some trivial cases, a compiler will not be able to tell if the code will halt.

This is a common misunderstanding of the unsolvability of the halting problem. There are programs that can determine whether any of an infinite set of non-trivial programs halts.
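[A concrete sketch of that point, added for illustration using Python’s standard ast module (the coverage rules are deliberately conservative and of my own choosing): a checker can prove halting for the infinite family of programs built only from straight-line statements and for-loops over range(), returning None whenever it cannot decide. That doesn’t contradict undecidability, which only says no checker can decide *every* program.]

```python
import ast

def provably_halts(source):
    """Return True if the program provably halts, or None if this
    checker cannot decide.  It accepts only straight-line code plus
    (possibly nested) for-loops, with no while-loops, function or
    class definitions, imports, or calls other than range().  That is
    an infinite family of programs, every member of which terminates,
    so halting is decidable on this family -- without contradicting
    the undecidability of the general halting problem."""
    tree = ast.parse(source)
    banned = (ast.While, ast.FunctionDef, ast.AsyncFunctionDef,
              ast.ClassDef, ast.Import, ast.ImportFrom)
    for node in ast.walk(tree):
        if isinstance(node, banned):
            return None  # might not halt; refuse to decide
        if isinstance(node, ast.Call):
            f = node.func
            if not (isinstance(f, ast.Name) and f.id == "range"):
                return None  # unknown call; refuse to decide
    return True
```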

What is the misunderstanding?

…so the unsolvability of the halting problem has little practical significance.

I brought it up to illustrate that this is not plausible:

…since you can reduce logical operations down to the point where you can identify if the output will always be 1 or 0, it should be possible.

If by “logical operations” Dizzy meant terms like that one, then “reducing” them is not plausible in all cases. Even a really smart compiler is not going to be able to do this, since that would amount to solving the halting problem.

I.e. “intended,” not “specified.”

I have to agree with PG’s interpretation here. Otherwise, we are faced with the notion that an entirely error-free program can develop bugs simply because the job the program was written to perform has changed.

In fact, I think this makes a useful dividing line. A program that does what the programmer intended under all possible inputs has no bugs. If the programmer misunderstood the assignment, or if the program’s goals are changed externally, that’s not a bug.

Making a program do something different from what the programmers intended is regarded as changing the feature set. Making a program perform the programmers’ intentions without error is fixing bugs.

This distinction is critical. If my accounting program does a lousy job of word processing, this is NOT a bug in the accounting program!

Computer program maintenance — There are two aspects. One is so-called bug finding and fixing. The other is that the information environment of the program changes, so new functionality is required. A trivial example of the latter is changes in the law regarding required deductions from pay.

There are some who call the latter activity bug fixing in some settings, because in many distributed computing applications it is difficult to ascertain the difference.

A more professional term is ‘fault’. This is failure to meet the specification. The settings that I know about wherein this term is used are ones in which there are written specifications. In these settings, if the specifications change, the program requires so-called re-engineering.

A trivial example of the latter is changes in the law regarding required deductions from pay.

Or that here in the US we’ll be starting daylight savings time on March 11th this year, rather than in April …

Y’all ready? :)

Or that here in the US we’ll be starting daylight savings time on March 11th this year, rather than in April …

Y’all ready? :)

Get our time zones out of the hands of the politicians! ;)

(Well, somebody needs to say that - sorry if it’s off topic.)

Henry

PG Wrote:

A common response when someone screws something up is “I was just joking”.

The inescapable conclusion should thus be that PG is just joking.

What does “NDE” stand for? “Non-Divine Explanations”?

Re “What does “NDE” stand for? “Non-Divine Explanations”?”

It might be “non-directed evolution”, but I’m just guessing based on the context in which I’ve seen it used.

Henry

About this Entry

This page contains a single entry by Ian Musgrave published on February 10, 2007 8:50 PM.
