The deniers of science Part 1: Intelligent Design


On Uncommon Descent we learn why global warming deniers share so much with evolution deniers and why people should be wary of Intelligent Design:

GilDodgen wrote:

Computer simulations of global warming and Darwinian mechanisms in biology should not be trusted, because they can’t be subjected to empirical verification. In these two areas, computer simulations and models can degenerate into nothing more than digital just-so stories — in one category about the future, and in the other about the past. The programmer can produce whatever outcome he desires, by choosing initial assumptions and algorithms, and weighting various factors to produce a desired output.

First of all, GilDodgen explains why one should be critical of Intelligent Design when he ‘argues’ “Don’t Trust Computer Simulations And Models That Can’t Be Tested Against Reality”. In other words, the scientific vacuity of ID should be a major source of concern. But there is more, and I will address it in my second installment.

20 Comments

He ‘argues’ “Don’t Trust Computer Simulations…

… by typing it into a microprocessor designed almost entirely via simulations.

(The complexity of processors outstripped the ability of humans to design them directly about 20 years ago)

Dammit! There goes my irony meter again.

Someone get busy and whip up some new irony meters. The demand for them on this blog alone is desperate…

The programmer can produce whatever outcome he desires, by choosing initial assumptions and algorithms, and weighting various factors to produce a desired output.

And what about the ‘designer’?… The designer can produce any outcome he desires…

Speaking of scientific vacuity

The programmer can produce whatever outcome he desires, by choosing initial assumptions and algorithms, and weighting various factors to produce a desired output.

Which of course doesn’t mean they actually do so (in fact, such rigged models would be inherently useless, so why would anyone bother?).

But since GilDodgen is a self-proclaimed programming genius, why doesn’t he download one of the source-available GCMs and show the world how the programmers have cheated?

Skeptics about global warming and evolution don’t give a damn about climatology or biology. The climate change denialists are afraid that dealing with greenhouse gases will strengthen government, including the bugbear of bugbears, one-world government. The Creationists and ID types are afraid that natural selection is an argument against the existence of God. Were human-caused warming or natural evolution a garden-variety scientific issue like phase transitions in noble gases, they would immediately recognize where the weight of the evidence lies and that would be that. On the other hand, since neither variety of skepticism is really about the scientific issues involved, no amount of factual evidence will make any difference. Which is why evolution is still being debated 130 years after its validity became obvious, and why global warming will be denied by some of these guys long after Palm Beach disappears beneath the waves.

I’ve encountered the whole “you can’t trust computer simulations” argument before in a variety of forms. I remember one person actually saying, “you just made the computer do what you told it to do”. It can be a completely hollow argument, but it might seem superficially convincing to laypersons. For example, if I say that it’s possible to count from one to a million, the IDist might say, “No way! That’s impossible! A million is way bigger than one.” Because I don’t feel like taking the time or effort to actually vocalize each number from one to one million, I write a computer program that starts with one and increments by one until finally reaching one million. The IDists would, of course, throw out this proof because, you know, you can’t trust computer programs because they’re just doing what the programmer said. That’s essentially how I feel about IDists’ poor critiques of computer programs. (And, of course, if an IDist believed that a computer program actually disproved evolution, they’d be trumpeting the “thoroughness”, “accuracy”, and “infallibility” of that particular piece of software.)
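To make that counting example concrete, here is a minimal sketch of the sort of program the commenter describes; the function name and the one-million target are purely illustrative.

```python
# Minimal sketch of the counting "proof" described above: start at one and
# increment until the target is reached, then report the final value.
def count_to(target: int) -> int:
    n = 1
    while n < target:
        n += 1
    return n

print(count_to(1_000_000))  # prints 1000000
```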

The argument here is a bit stronger than that – “Computer simulations of global warming and Darwinian mechanisms in biology should not be trusted, because they can’t be subjected to empirical verification”. But that’s silly; simulation programs, like all programs, are tested against known results before being used to obtain unknown results. Of course, the program might work correctly on all the test data and still fail on live data, just as any scientific theory (which such a program is, in a way) may be consistent with known observation but still make erroneous predictions. But that’s life.
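As an illustration of “tested against known results”, here is a minimal sketch, assuming nothing about any real climate or evolution code: a toy decay simulator is checked against its exact analytic solution before anyone trusts it on cases without a known answer.

```python
import math

def simulate_decay(y0: float, k: float, t_end: float, dt: float) -> float:
    """Toy forward-Euler simulation of dy/dt = -k * y."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-k * y)
        t += dt
    return y

# Verification against a case with a known analytic answer: y(t) = y0 * exp(-k * t).
y_sim = simulate_decay(y0=1.0, k=0.5, t_end=2.0, dt=1e-4)
y_exact = math.exp(-0.5 * 2.0)
assert abs(y_sim - y_exact) < 1e-3, "simulator fails its known-result test"
print(y_sim, y_exact)
```

If the simulator can’t reproduce the cases we already know, it never gets used on the cases we don’t; that is the empirical check the quoted argument pretends doesn’t exist.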

To be sure, not all simulations are equally predictive; we’ve been doing econometric simulations since the advent of computers, and they “predict” past economic trends best when those trends have been used to calibrate the models. Yes, they’re getting better. Don’t bet the farm on what they predict.

Global climate is another difficult case. Most models I’ve seen assume some number of equilibria, based on both positive and negative climatic feedback effects, and interactions of one trend with another. Just how sensitive are these equilibria? This factor is both critical and currently impossible to quantify with any accuracy.
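To make the sensitivity point concrete, here is a zero-dimensional energy-balance sketch (a textbook toy, not any actual GCM); the emissivity values stand in for a feedback parameter and are illustrative only.

```python
# Zero-dimensional energy-balance sketch (illustrative only, not a GCM):
# absorbed solar (1 - albedo) * S / 4 balances outgoing emissivity * sigma * T^4.
# The effective emissivity here stands in for a greenhouse feedback parameter.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30

def equilibrium_temp(emissivity: float) -> float:
    absorbed = (1.0 - ALBEDO) * S / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

for eps in (0.60, 0.61, 0.62, 0.63):
    print(f"emissivity {eps:.2f} -> equilibrium T = {equilibrium_temp(eps):.1f} K")
```

Even in this toy, a shift of 0.01 in the assumed feedback moves the equilibrium by roughly a degree, which is why the sensitivity question matters so much.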

This isn’t to say the programmer is choosing a desired outcome - these models often produce quite unexpected outcomes. But unexpected isn’t the same thing as correct. So the usefulness of the simulation, as Popper’s Ghost observes, depends on how well it works with live data rather than test (or calibration) data. So long as the model predicts live data poorly, it needs more work. But conversely, when models nearly always get things right, there’s a good chance the process being modeled is well understood, even if some people don’t like the implications.

they “predict” past economic trends best when those trends have been used to calibrate the models.

Overtraining is a problem in all learning systems that happen to have ‘too many’ degrees of freedom. But as modelers learn and constrain the models and parameters more tightly, the training set should have less effect.

As you say, successful predictions show correctness, and in these cases even poor predictions are better than what ID offers.
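A minimal sketch of the overtraining point, using made-up data: a model with too many degrees of freedom fits its calibration period almost perfectly yet does badly on held-out “live” data, exactly the distinction drawn above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: a gentle linear trend plus noise, on a normalized time axis.
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * t + rng.normal(0.0, 0.3, size=t.size)

# Calibrate on the first 30 points, hold out the last 10 as "live" data.
t_cal, y_cal, t_live, y_live = t[:30], y[:30], t[30:], y[30:]

for degree in (1, 9):
    coeffs = np.polyfit(t_cal, y_cal, degree)
    rmse_cal = np.sqrt(np.mean((np.polyval(coeffs, t_cal) - y_cal) ** 2))
    rmse_live = np.sqrt(np.mean((np.polyval(coeffs, t_live) - y_live) ** 2))
    print(f"degree {degree}: calibration RMSE {rmse_cal:.2f}, live RMSE {rmse_live:.2f}")
```

The degree-9 fit hugs the calibration data more closely than the straight line does, and then falls apart on the points it never saw.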

In their argument against computer simulations, the creationist/IDers unwittingly project their basic dishonesty onto the programmers. They act as if the mere possibility of cheating (rigging the assumptions to produce predetermined results) demonstrates that cheating occurred, because that is, of course, what they would do.

If they could. I hate to rant, but since I work professionally with mathematical models, I hear this crap all the time, so I’ve got some handy dandy arguments I use to deal with this irritating pig-ignorant criticism.

My favorite is my James Randi “put up or shut up” challenge. Any time anyone claims some computer simulation or test is biased for conclusion X over conclusion Y, I just go Missouri on them. Show me. If it is so damned easy to produce computer simulations that support global warming merely by playing games with the inputs, then reverse the process and produce simulations that show global cooling. If it is so easy to make computer simulations that support evolution by cheating, then let’s see your simulations that refute it.

If they could, they would have done so already and would be shoving it in our faces every chance they got. Behe tried this, but of course even after rigging the assumptions against evolution (which he admitted), he still got a result that supported it. So much for the ease of producing the results you want.

And even if he had succeeded, the catch, of course, is that you have to publish your results and your assumptions so peers can scrutinize them and make sure there is nothing unreasonable or flat wrong in there. That’s where they crash and burn, because while it is trivially true that model results are affected by changes in inputs, it is absolutely not true that those results are infinitely malleable within the range of possible inputs. Some outputs you simply cannot produce without using inputs that are impossible, and those outputs are invalid. No output of an economic model that assumed long-term negative inflation would qualify, for instance, nor would a geological model that assumed that granite was as soft as a sponge, nor would an ecological model that assumed all animal reproduction rates suddenly tripled. Likewise for evolution and climate change models.
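A toy illustration of “affected by inputs, but not infinitely malleable”: sweep a simple population model over a plausible range of growth rates and the output stays inside a bounded envelope; outputs far outside it would require impossible inputs. The model and numbers here are invented for the example.

```python
def final_population(r: float, k: float = 1000.0, p0: float = 100.0, years: int = 20) -> float:
    """Toy discrete logistic growth: p grows at rate r toward carrying capacity k."""
    p = p0
    for _ in range(years):
        p += r * p * (1.0 - p / k)
    return p

# Sweep the growth rate across a made-up but plausible range of inputs.
rates = [0.1 + 0.05 * i for i in range(9)]          # 0.10 to 0.50
finals = [final_population(r) for r in rates]
print(f"final population ranges from {min(finals):.0f} to {max(finals):.0f}")
# No input in this range produces, say, a tripling of the carrying capacity;
# such an output would require an impossible growth rate.
```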

So put up or shut up guys. Let’s see your models that refute the established models, and let’s see your assumptions. Otherwise, your criticisms of evolution and global warming models are merely unintelligently designed hot air, as, well, they always have been.

As a PR strategy, I’ve never understood the “just-so” story angle of the IDers. Even something as weak as “other life exists => evolution” is less of a just-so story than ID is. No matter what they demonstrate is true about evolution, they cannot possibly obtain even a draw in this category, making it a guaranteed loser.

But then, I guess since they seem to lose all the important battles, it shouldn’t surprise us that they use arguments guaranteeing that result.

I work with scientists who use models to investigate aquatic ecology. They routinely make a distinction between models that have been gamed to produce realistic results and models that begin from first principles and generate results that check out against field work later. Those who dream of electric fish can apparently tell the difference between the use and abuse of simulation. The point is, you can certainly cheat with models, but that doesn’t imply that people who are using models are cheating or that computer simulations are intrinsically bogus. Indeed, in the case of climatology, I don’t know how anybody could get very far without using what used to be called “analysis by synthesis.”

And of course models of evolution such as genetic algorithms can come up with unique solutions to problems not even imagined by the programmers, solutions which aren’t generally optimal, merely solutions that work. Exactly as evolutionary theory predicts. All without any “front loading” whatsoever.
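For readers who haven’t seen one, here is a minimal genetic-algorithm sketch (a toy, not any published model): mutation plus selection finds a bit string that solves a small subset-sum problem without the programmer specifying, or even knowing in advance, which subset will be chosen; the answer merely works, it isn’t provably optimal.

```python
import random

random.seed(1)

# Toy subset-sum problem: find a subset of these weights summing close to TARGET.
WEIGHTS = [random.randint(1, 50) for _ in range(20)]
TARGET = 173

def fitness(genome):
    """Higher is better: negative distance between the subset sum and the target."""
    return -abs(sum(w for w, bit in zip(WEIGHTS, genome) if bit) - TARGET)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

# Plain mutation-plus-truncation-selection loop; no crossover, nothing fancy.
population = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(60)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
chosen = [w for w, bit in zip(WEIGHTS, best) if bit]
print("subset found:", chosen, "sum:", sum(chosen), "target:", TARGET)
```

Nothing in the code tells it which weights to pick; selection on the fitness score does the work, which is the whole point.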

I was shocked driving down yesterday to hear on the local Seattle news station (1000 on the dial) what sounded at first like a right-wing commercial. Gradually I realized it was a station-approved commentary by one of their employees. The guy was claiming that a Seattle professor had been fired from his job because he had pointed out that the snowpack was only 15% reduced in the Cascades, instead of the regularly touted 50%. The commentator also lambasted the University of Washington for splitting the difference and now saying the reduction was 30%; the commentator said this was politics, not science.

I don’t know about the specifics of this case. If it happened as described, that would be bad. But I have a feeling that the commentator was playing fast and loose with the data, and probably had never looked at the professor’s primary paper. I can also see how a school, in disseminating information to the public, might wish to take the middle ground of two studies as a quick and dirty way of explaining how much the snowpack has declined.

What got me at the end was this commentator’s final statement. Of course, a decreased snowpack is another indication of global warming. Since the commentator didn’t like that, and wished to support the scientist who found the snowpack was reduced by only 15%, the commentator stated, “After what happened to this professor, is it any wonder that many other scientists don’t come forward with what they know about Global Warming? They are in fear of losing their jobs too.”

I was just aghast that this would be allowed on the radio, on a news station. Sure, you have more liberty in a commentary, but there are bounds to liberty. Your opinions should be based in fact, for one. You also don’t have the right to yell “Fire” in a crowded theatre, or to try to convince those in the theatre that there is no fire when it is blazing right behind the door.

It started with an op-ed in the Seattle Times last month by Seattle mayor Greg Nickels. In it, he cited the 50 percent decline in Cascade snowpack since 1950. A shocking wake-up call to water-users, skiers and utilities. But what if it’s not true?

According to Dennis Hartmann, chair of the atmospheric sciences department at the University of Washington:

* That number is based on the period from about 1950 to 1995, which is a period during which there was a very large decline which was mostly caused by natural variations; in other words, natural weather cycles melted the snow, not human-caused global warming.

* Furthermore, since 1995 the average snowpack has increased in the Oregon and Washington Cascades; so he concludes the overall decline is more like 30 percent; but when you adjust for nature’s normal ups and downs – that number is even smaller.

“A reasonable statement about the part that we think is attributable to the warming associated with global warming is probably more like 15 percent,” says Hartmann.

“The 50 percent was a misunderstanding that arose back in 2004,” says Philip Mote, Washington’s state climatologist. “I tried very hard to squelch it. And to explain that’s not correct.”

Instead, Mote says, it took on a life of its own. Until last month. That’s when Mote’s deputy in the state climatologist office – Mark Albright – started a campaign to debunk the myth. Earlier this week Mote fired Albright as associate state climatologist – an unpaid, but prestigious, position.

Source: Austin Jenkins, “Scientists Say Cascade Snowpack Has Not Declined 50% Afterall,” KUOW.org, March 15, 2007.

Snowpack statement

On February 24, 2007, The Oregonian reported on a debate between researchers at the University of Washington on recent trends in 20th century snowpack in the Washington Cascades. The issue originated with the publication of an op-ed written by the Mayor of Seattle on February 7 stating that “The average snowpack in the Cascades has declined 50 percent since 1950…”. In question was the 50% statistic for the Cascades and the implication that the reported decline was due entirely to anthropogenic (human-caused) climate change; the 50% figure had appeared, erroneously, in the June 2004 report “Scientific Consensus Statement on the Likely Impacts of Climate Change on the Pacific Northwest” (Oregon State University). Mark Albright, of UW Atmospheric Sciences, noted that at the most complete snow courses (a small subset of the total) for the Cascades, the last 10 years were only a little below the long-term average.

To help resolve questions over the statement, a group of University of Washington climate and weather scientists met to review the different statistical approaches used to examine trends in spring snowpack, also referred to as snow water equivalent or SWE. Professor Dennis Hartmann, the Chairman of the Department of Atmospheric Sciences, was asked to prepare a summary statement on the issue. The statement reiterated many of the Climate Impacts Group’s (CIG) research findings on trends in SWE and added additional important insights into recent trends.

In summary:

* 20th century snowpack trends. CIG research shows that Pacific Northwest SWE has declined since monitoring became widespread in the late 1940s, with 30-60% losses at many *individual* monitoring sites in the Cascades (Mote 2003, Mote et al. 2005). When looking at the period 1950-1997, the overall observed decline in April 1 SWE for the Cascades is -29% (Mote et al. 2005). Relative losses are greatest in lower and mid-elevations where mid-winter temperatures are warmer; higher elevation sites where average mid-winter temperatures are still well below freezing (even with 20th century warming) don’t show any declines in SWE. See the plots yourself.

  An examination of SWE trends for more recent years (e.g., beginning about 1975 or later) appears to show a small increase in SWE for the Cascades, consistent with Albright’s finding about the average of the last 10 years, though SWE in the last five to seven years has been at least 20% below the long-term mean. This leveling of the trend appears to be associated with increased precipitation in the late 1990s, especially the near-record wet winter of 1998-99, and appears to have temporarily offset the persistent declines produced at low elevations by warming. Trends over intervals as short as 30 years are rarely significant, given the shorter time frame and the higher precipitation and snowpack variability experienced in the PNW since the mid-1970s. In other words, the apparent leveling of trends appears to be the result of large natural variability in precipitation masking the declines driven by temperature.

* Data availability. Data availability is a limiting factor in long-term SWE trends analysis. Prior to the mid-1940s, there were very few snowpack monitoring sites with continuous data sets and those that had continuous data sets tended to be located at high altitudes known to be less sensitive to warming trends. These factors make it difficult to assess SWE trends before the 1940s with high statistical certainty, and results are not consistent with more complete analyses for later periods because of the high elevation bias in the available data. By mid-century, the availability of data and distribution of snowpack monitoring stations is much improved, allowing for a more robust analysis of SWE trends. “A substantial collection of snow course data records with a reasonably representative and stable distribution with altitude exists since about 1945,” notes Prof. Hartmann.

* The role of natural variability. Natural variability has played - and will continue to play - an important role in determining year-to-year and decade-to-decade variability in SWE. Mote 2006 found that natural variability as represented by the North Pacific Index (NPI) explains about 50% of the trends in Pacific Northwest SWE since mid-century (and less from earlier starting points). The remaining portion of the trend “clearly includes the influence of the monotonic warming observed throughout the West, which is largely unrelated to Pacific climate variability and may well represent human influence on climate” (p.6219). Natural variability has also played a role in 20th century Pacific Northwest temperature trends, explaining perhaps one-third of November-March warming in the region since 1920 (Mote et al. 2005, p.47).

  As noted above, natural variability will continue to be a factor in 21st century snowpack accumulation. The Pacific Northwest will have good snowpack years in the coming decades as well as poor snowpack years even as the long-term temperature trends continue. This natural variability can hide long-term trends over short periods of time. Additionally, the potential for increases in precipitation as a result of climate change may make it difficult to see distinct trends in the near-term.

* Future impacts. The warming projected for the 21st century is expected to have a significant negative impact on snowpack, particularly mid-elevation snowpack, even if increased precipitation from natural variability and/or climate change is enough to “hold off” the impacts of warmer temperatures on snowpack in the near term. Changes in 21st century precipitation are less certain than temperature, however. Given that approximately 50% of the snowpack in the Cascades sits below 4200 feet, where spring snowpack is very sensitive to small increases in average temperature, preparing for climate change impacts is critical.

For more information on Pacific Northwest snowpack trends, please contact Philip Mote.

And here is the text from KUOW 1000:

MOTE: “The fifty percent was a misunderstanding that arose back in 2004 and I tried very hard to squelch it. And to explain that’s not correct.”

INSTEAD MOTE SAYS IT TOOK ON A LIFE OF ITS OWN. UNTIL LAST MONTH. THAT’S WHEN MOTE’S DEPUTY IN THE STATE CLIMATOLOGIST OFFICE — MARK ALBRIGHT — STARTED A CAMPAIGN TO DEBUNK THE MYTH. THE PROBLEM IS MOTE DIDN’T LIKE ALBRIGHT’S APPROACH. EARLIER THIS WEEK MOTE FIRED ALBRIGHT AS ASSOCIATE STATE CLIMATOLOGIST — AN UNPAID, BUT PRESTIGIOUS POSITION.

MOTE: “I asked him initially to clear with me any communications on contentious subjects where the science was still under discussion and he declined to clear that with me.”

ALBRIGHT IS TRAVELING THIS WEEK AND DID NOT RETURN PHONE CALLS FOR THIS STORY. BUT HIS DISMISSAL HAS ONCE AGAIN DIVIDED NORTHWEST CLIMATE CHANGE SCIENTISTS. DESPITE THIS INTERNAL FIGHT, THE SCIENTISTS ALL SEEM TO AGREE GLOBAL WARMING IS REAL AND THE NORTHWEST NEEDS TO TAKE STEPS TO PROTECT ITS SNOWPACK. I’M AUSTIN JENKINS REPORTING.

Fred Pearce, With Speed and Violence, Beacon.

“Well-documented and terrifying review of the scientific evidence supporting claims that Earth teeters on the edge of climatic precipice…” — Kirkus, starred review.

I see this as an attempt to create not only doubt but more gaps that they can then fill in. Keep questioning with possibles, maybes, and false accusations, and you have a nice wedge to insert more ID.

Of course this is fundamentally dishonest, but I honestly think the UD crowd don’t care about it and don’t think about it. There’s no moral imagination here.

Try running various orders of polynomials. A 5th-order fit says the line is going up, the 6th down, the 7th, 8th, and 9th up, and the 11th down. The cause is the pile-up of round-off errors that eats like a cancer across the matrix. Running the numbers as exact strings, often to millions of decimal places, is impossible unless you have a QUANTUM COMPUTER. The D-Wave Orion, perhaps? Simulating on a computer is often pointless.
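There is a kernel of truth buried in that comment, though no quantum computer is required: a high-order polynomial fit in the raw monomial basis really is numerically ill-conditioned, which is why the fitted “trend” can flop around with the order. A minimal sketch (illustrative years, no real data) shows the condition number of the design matrix exploding with degree, and how simply rescaling the time axis tames it.

```python
import numpy as np

# Condition number of the raw monomial (Vandermonde) design matrix for an
# equally spaced "time" axis, at increasing polynomial order. Ill-conditioning
# grows rapidly with degree, which is why naive high-order trend fits are
# unstable; centering and scaling the axis (or using an orthogonal basis)
# fixes it -- no exotic hardware required.
t = np.arange(1990, 2020, dtype=float)            # unscaled years: worst case
t_scaled = (t - t.mean()) / (t.max() - t.min())   # centered and scaled

for degree in (3, 5, 7, 9):
    raw = np.linalg.cond(np.vander(t, degree + 1))
    scaled = np.linalg.cond(np.vander(t_scaled, degree + 1))
    print(f"degree {degree}: cond(raw) = {raw:.1e}, cond(scaled) = {scaled:.1e}")
```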
