Intelligent Design explained: Part 2, random search


Random Search and No Free Lunch

In his book “No Free Lunch”, Dembski argues that, based upon the No Free Lunch Theorems, finding an optimal solution via “random search” is virtually impossible because no evolutionary algorithm is superior to random search. And while various authors have shown the many problems with Dembski’s arguments, I intend to focus on a relatively small but devastating aspect of the No Free Lunch Theorems.

First I will explain what the No Free Lunch Theorems are all about; then I will show how Dembski uses them; and finally I will show that, perhaps counterintuitively, the theorems imply that random search is actually quite effective.

No Free Lunch (NFL) Theorems

The No Free Lunch Theorems, after which Dembski named his book, are based on a set of papers by Wolpert and Macready which basically state that:

“[…] all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions.” (Wolpert and Macready, 1995)

Dembski and the NFL

Dembski argued, based on the No Free Lunch Theorems, that evolutionary algorithms could not perform better than a random search.

Dembski Wrote:

It’s against this backdrop of displacement that I treat the No Free Lunch theorems. These theorems say that when averaged across all fitness functions of a given class (each fitness function being an item of information that constrains an otherwise unconstrained search), no evolutionary algorithm is superior to blind or random search.

and

Dembski Wrote:

In general, arbitrary, unconstrained, maximal classes of fitness functions each seem to have a No Free Lunch theorem for which evolutionary algorithms cannot, on average, outperform blind search.

Source: Dembski, “Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr”

While Dembski’s treatment of the “No Free Lunch” theorems was, according to mathematician David Wolpert, mostly written in Jello, it is still interesting to pursue some of Dembski’s claims. As I will show, not only do the No Free Lunch theorems fail to support Dembski’s thesis, they in fact show that such optimization is child’s play.

The question becomes: is it really that hard to find a good solution using random search under the assumptions of the No Free Lunch Theorems?

The answer may surprise many: not really.

Finding the optimum value may be hard, but finding a solution that is almost as good is actually reasonably simple. And since evolution does not necessarily search for the best solution, it quickly becomes clear that Dembski’s “No Free Lunch” may have little relevance to evolutionary theory.

In 2002, on the ISCID boards, Erik provided the following calculations:

Erik Wrote:

Ironically, even if we grant that the prior over the set of all cost functions is uniform, the NFL theorem does not say that optimization is very difficult. It actually says that, when the prior is uniform, optimization is child’s play! I mean that almost literally. Almost any strategy no matter how elaborate or crude will do. If the prior over the set of cost functions is uniform, then so is the prior over the set of cost values. That means that if we sample a point in the search space we are equally likely to get a low cost value as a high cost value. Suppose that there are Y possible cost values. Then the probability a sampled point will have one of the L lowest cost values is just

r = L / Y,

regardless of which strategy that was used to decide which point to sample. The probability s that at least one of N different sampled points will have a cost value among the L best is given by

s = 1 - (1 - r)^N,

again independently of the strategy used. Is that good or bad performance? The number of points required to achieve a given performance and confidence level is

N = ln(1 - s) / ln(1 - r) ~ - ln(1 - s) / r.

After sampling 298 points the probability that at least one of them is among the best 1% is 0.95. After 916 sampled points the same probability is 0.9999. If instead we want a point among the best 0.1% we need to sample 2994 points to find one with probability 0.95, or approximately 9206 points to find one with probability 0.9999. That kind of performance may not be satisfactory when the optimization must be done very fast in real-time under critical conditions, but it is good for most purposes. Certainly our universe would seem to be able to spare the time necessary to sample 9206 points.
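Erik’s numbers are easy to check. Here is a minimal Python sketch (my own illustration, not part of Erik’s comment) that reproduces them from N = ln(1 - s) / ln(1 - r):

    import math

    def samples_needed(s, r):
        # N = ln(1 - s) / ln(1 - r), rounded to the nearest integer as in Erik's figures
        return round(math.log(1 - s) / math.log(1 - r))

    print(samples_needed(0.95, 0.01))     # 298:  best 1%, 95% confidence
    print(samples_needed(0.9999, 0.01))   # 916:  best 1%, 99.99% confidence
    print(samples_needed(0.95, 0.001))    # 2994: best 0.1%, 95% confidence
    print(samples_needed(0.9999, 0.001))  # 9206: best 0.1%, 99.99% confidence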

This is why Thomas English wrote:

“The maligned uniform distribution is actually benign. The probability of finding one of the better points with n evaluations does not depend on the size of the domain [7]. For instance, 916 evaluations uncover with 99.99% certainty a point that is better than 99% of the domain. What is remarkable about NFL and the uniform is not just that simple enumeration of points is optimal, but that it is highly effective.” (see below for a reference)

Source: English T. (1999) “Some Information Theoretic Results On Evolutionary Optimization”, Proceedings of the 1999 Congress on Evolutionary Computation: CEC99, pp. 788-795

The inference is never better than its assumption of a uniform prior, however. In most non-trivial optimization problems, good points in the search space are not as frequent as bad points, meaning that the corresponding cost functions are not drawn uniformly from the set of all possible cost functions.

As Erik pointed out, Tom English had shown as early as 1996 how relatively simple optimization really is:

Tom English Wrote:

The obvious interpretation of “no free lunch” is that no optimizer is faster, in general, than any other. This misses some very important aspects of the result, however. One might conclude that all of the optimizers are slow, because none is faster than enumeration. And one might also conclude that the unavoidable slowness derives from the perverse difficulty of the uniform distribution of test functions. Both of these conclusions would be wrong. If the distribution of functions is uniform, the optimizer’s best-so-far value is the maximum of n realizations of a uniform random variable. The probability that all n values are in the lower q fraction of the codomain is p = q^n. Exploring n = log_q p points makes the probability p that all values are in the lower q fraction. Table 1 shows n for several values of q and p. It is astonishing that in 99.99% of trials a value better than 99.999% of those in the codomain is obtained with fewer than one million evaluations. This is an average over all functions, of course. It bears mention that one of them has only the worst codomain value in its range, and another has only the best codomain value in its range.

Thomas M. English, “Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch,” Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming, L. J. Fogel, P. J. Angeline, and T. Bäck, eds., pp. 163-169. Cambridge, Mass.: MIT Press, 1996.

Fraction q    p = 0.01    p = 0.001    p = 0.0001
0.99               458          687           916
0.999             4603         6904          9206
0.9999           46049        69074         92099
0.99999         460515       690772        921029
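Each entry is just n = ln(p) / ln(q) rounded; a quick Python check (again mine, under the same uniform-prior assumption) regenerates the whole table:

    import math

    # n = ln(p) / ln(q): evaluations needed so that, with probability 1 - p,
    # the best-so-far value beats the fraction q of the codomain.
    for q in (0.99, 0.999, 0.9999, 0.99999):
        print(q, [round(math.log(p) / math.log(q)) for p in (0.01, 0.001, 0.0001)])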

Tom English repeated these facts in a posting to Panda’s Thumb:

In 1996 I showed that NFL is a symptom of conservation of information in search. Repeating a quote of Dembski above:

Dembski Wrote:

The upshot of these theorems is that evolutionary algorithms, far from being universal problem solvers, are in fact quite limited problem solvers that depend crucially on additional information not inherent in the algorithms before they are able to solve any interesting problems. This additional information needs to be carefully specified and fine-tuned, and such specification and fine-tuning is always thoroughly teleological.

Under the theorems’ assumption of a uniform distribution of problems, an uninformed optimizer is optimal. To be 99.99% sure of getting a solution better than 99.999% of all candidate solutions, it suffices to draw a uniform sample of just 921,029 solutions. Optimization is a benign problem with rare instances that are hard. Dembski increases the incidence of difficult instances by stipulating “interesting problems.” At that point it is no longer clear which NFL theorems he believes apply. Incidentally, an optimizer cannot tune itself to the problem instance while solving it, but its parameters can be tuned to the problem distribution from run to run. It is possible to automate adaptation of an optimizer to the problem distribution without teleology.

Source: Tom English Pandasthumb Comment

I cannot emphasize strongly enough how wrong Dembski is in his comments on random search, as these almost trivial calculations reveal. While Dembski is correct that finding the optimal solution may be extremely hard, finding a solution arbitrarily close to the optimum is actually quite straightforward.

It should not come as a surprise that the “No Free Lunch Theorems” have more unfortunate surprises in store for Intelligent Design. More on that later…


Comments

The Anti-Science League really does seem to miss the point that evolution doesn’t “seek the optimal solution”, or anything so anthropomorphically deliberate; it is a *result* of the range of the available options at the time. “Best” is constantly in flux. Hence the title of this website.

Dembski:

The upshot of these theorems is that evolutionary algorithms, far from being universal problem solvers,

I must have missed the memo where somebody claimed that evolutionary algorithms are universal problem solvers.

Actually, it’s easy to write a “universal” problem solver if you’re not in a hurry to see it get an answer (e.g. dove-tailing: just enumerate all possible solutions starting from the smallest ones first). But usually we are interested in an optimization method that comes up with good results in a feasible time frame. Evolutionary algorithms are generally applicable to domains in which we do not have a deep knowledge of structure that would lead to a better optimization (unlike, say, bipartite matching, network flow, or linear programming, for which there are domain-specific algorithms). However, evolutionary algorithms are not “universal”. Evolution itself is not capable of optimizing all objective functions. Arguably, it does not optimize anything, though it does a fine job of satisficing fitness criteria in natural ecologies.
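To make the dove-tailing idea concrete, here is a minimal Python sketch (my own illustration, with a hypothetical binary alphabet) of such a “universal” but slow enumerator:

    from itertools import count, product

    def dovetail(alphabet="01"):
        # Enumerate all finite strings, shortest first: "", "0", "1", "00", "01", ...
        # Every candidate solution is reached after finitely many steps,
        # which is what makes the enumeration "universal" (and hopelessly slow).
        for n in count(0):
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)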

Dembski’s take on NFL is that it somehow proves that evolutionary algorithms are a waste of time. But I fail to see how he distinguishes between these and other very general methods–e.g. hill-climbing and fixed-point methods–that are also not “universal” but often give useful results when one applies them to an appropriate kind of objective function.

I will discuss both the concept of evolvability (why evolution works so well) and the concept of displacement in later postings. Little steps… Otherwise there may be too much information to deal with.

Evolvability, or as I see it the co-evolution of mechanisms of variation, helps understand how and why evolution can ‘learn from the past’ and how it can be successful.

Displacement is a whole can of worms and I believe that Dembski’s approach fails to explain why/how natural intelligence can circumvent this problem, if it even is a real problem. Since evolution is not a global optimizer, I find his displacement problem of limited interest.

PvM Wrote:

While Dembski’s treatment of the “No Free Lunch” theorems was, according to mathematician David Wolpert, mostly written in Jello,

Wolpert stated that the mathematical part of Dembski’s treatment of NFL was hopelessly mistaken. “Jello” referred to the wooziness of Dembski’s philosophy.

Poof! I say! Poof!

Pim Wrote:

Displacement is a whole can of worms and I believe that Dembski’s approach fails to explain why/how natural intelligence can circumvent this problem, if it even is a real problem.

That’s why Dembski sees all intelligence as ultimately supernatural. But that doesn’t solve the problem either.

His thinking is: (1) X is impossible, or virtually impossible, according to established science. (2) Therefore, X is designed.

Even if he could establish his premise, which he can’t, his conclusion is a non sequitur. According to his logic, if we were to discover a particle that moves faster than the speed of light, we should just label it “designed” instead of adjusting our understanding of physics.

Very nice, succinct post.

I really liked Wolpert’s jello article, and thought that it was good enough to blow the whole “Evolution-eats-a-free-lunch” thing out of the water. Basically, it points out that

1) Dembski knows what he wants to prove: evolution is insufficient to produce complexity,

2) Knows how he wants to prove it: show that evolution purports to be an algorithm which performs better over all problem spaces.

3) Does not in fact prove it.

In the world of formal logic and math, there’s no such thing as a partially proved theorem.

The No Free Lunch theorem seems essentially pointless.

First, if you limit the domain in any way (say, to planets with water, or to carbon-based life), the theorem is no longer applicable. Try the Wikipedia entry:

Although mathematically sound, the fact that NFLT cannot be applied to arbitrary subsets of cost functions limits its applicability in practice

Second, if I understand it correctly, the idea is that over all theoretical math problems any two search algorithms are equally effective. So in principle, if you can create a search algorithm that is inherently less efficient, the theorem would be disproved.

Maybe this is an oversimplification but doesn’t this disprove the NFLT?

Search A: (0, -1, +1, -2, +2, -3, +3, …)
    if LastResult is nothing then return 0
    elseif LastResult >= 0 return -(LastResult + 1)
    else return -LastResult

Search B: (0, 0, 0, -1, 0, +1, 0, …)
    IterationCount = IterationCount + 1
    if IsEven(IterationCount) return Search A
    else return 0

or how about this:

Search C: (0, 0, 0, 0, 0, 0, 0, …)
    return 0
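For concreteness, here is a runnable Python transcription of Alann’s three strategies (my own; the generators reproduce his pseudocode):

    def search_a():
        # Enumerates 0, -1, +1, -2, +2, ...: visits every integer exactly once.
        last = None
        while True:
            if last is None:
                last = 0
            elif last >= 0:
                last = -(last + 1)
            else:
                last = -last
            yield last

    def search_b():
        # Interleaves Search A with wasted re-visits of 0: 0, 0, 0, -1, 0, +1, 0, ...
        inner, iteration_count = search_a(), 0
        while True:
            iteration_count += 1
            yield next(inner) if iteration_count % 2 == 0 else 0

    def search_c():
        # Proposes 0 forever: never examines any other point.
        while True:
            yield 0

As the replies below note, B and C revisit points, which already places them outside the class of searches the NFL theorems actually compare.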

While A, B, and C may be equal for the subset of mathematical problems which result in zero, for anything else B will automatically take twice as many iterations, while C will never find the answer.

Could someone tell me if I am missing something here?

Could someone tell me if I am missing something here?

You would have to apply Search A and Search B over all possible sequences and average their effectiveness. If either A or B performed better, we’d have a problem.

While Dembski is correct that finding the optimal solution may be extremely hard, finding a solution which is arbitrarily close to the solution is actually quite straightforward.

Yes, but see, Dembski knows that *he* is the perfect man. Consequently, evolution, if it exists, has had to produce the optimal solution (i.e. him). Since that is impossible, evolution cannot have happened.

Never ignore sublime egotism as the basis for creationism drivel.

Hope that helps,

Grey Wolf

Poof! I say! Poof!

“rule #1:

No Pooftas!”

ya know, sometimes I think Dembski made up this stuff just to piss off his old alma mater.

One does wonder how on earth he could have presented a defendable thesis, if his NFL drivel represents the way his mind actually works.

Alann Wrote:

Second if I understand it correctly the idea is that over all theoretical math problems any two search algorithms are equally effective.

My understanding is that any two non-repeating search algorithms are equally effective, so this would not apply to your algorithms B and C. Hope this helps.

Alann Wrote:

While A,B,C may be equal for the subset of mathematical problems which result in zero, for anything else B will automatically take twice as many iterations, while C will never find the answer.

Could someone tell me if I am missing something here?

Alann, my understanding is that the ‘no free lunch’ theorems only consider algorithms which (1) never visit the same point twice, and (2) eventually visit every single point in the space. In other words insofar as the ‘no free lunch’ theorems are concerned, your algorithms B and C do not exist.

Evolution is survival of the adequate.

If there is some “structure” on the problem space, the iterated nature of evolutionary algorithms (putting the next generation’s samples in the “better” region) can give a very fast convergence rate.

His thinking is: (1) X is impossible, or virtually impossible, according to established science. (2) Therefore, X is designed.

Indeed. “God of the Gaps”.

That, of course, is precisely why his “filter” consists of three steps in a specified order: (1) rule out chance, (2) rule out law, (3) therefore design.

Why doesn’t Dembski follow the sequence (1) rule out design, (2) rule out chance, (3) therefore law, or some other sequence? Because, of all the possible sequences to his “filter”, he has chosen one that absolutely positively negates any need whatsoever for design “theory” to present anything, offer anything, test anything, or demonstrate anything. His whole argument boils down to nothing more than “if evolution can’t explain it, then godiddit”. Or, “God of the Gaps”.

I suspect that is not a coincidence.

Notice how Dembski hurries to wash his hands from providing any substance whatsoever to his beloved theory?

This from the ‘Isaac Newton of information theory’.

(snicker)

sez Dembski:

finding an optimal solution via “random search” is virtually impossible because no evolutionary algorithm is superior…

This very premise reveals a gross lack of understanding of evolution on Dembski’s part.

[Name an ‘optimal’ organism, Mr. Bill!]

Evolution does not optimize. At best, it results in gene pools becoming better adapted to local conditions, subject to available genetic variability combined with phenotypic, genotypic, and developmental constraints.

Hmmm… 1) Only guided search results in optimal solutions.

2) Evolution doesn’t result in optimal solutions.

3) Therefore, evolution is not a ‘guided search’.

“Oh dear,” says the Designer, and disappears in a puff of logic.

Wow… I already knew that Dembski’s NFL drivel was bad… but this thread seems to have torn it a few new ones.

I continue to be held speechless by both the intellectual vacuity and the amoralism (lying, deceptive use of fallacies, etc.) of today’s religious apologists.

If there’s a God, how come his followers behave as if there isn’t?

Adam Ierymenko Wrote:

Wow… I already knew that Dembski’s NFL drivel was bad… but this thread seems to have torn it a few new ones.

I continue to be held speechless by both the intellectual vacuity and the amoralism (lying, deceptive use of fallacies, etc.) of today’s religious apologists.

If there’s a God, how come his followers behave as if there isn’t?

If cows had gods, …

The IDists are not followers of any ethic god, but of the DESIGNER. It’s frequently claimed that the purpose of the Wedge Strategy is to introduce a theocrazy (mis-spelling intentional), but really it is a technocrazy (mis-spelling intentional).

How often do you find anything about ethics in ID articles? Very rarely! It’s mostly engineers claiming that the Designer designs like an engineer.

What if the domain is infinite and the cost function goes to infinity somewhere? If I’m understanding correctly, that sort of situation wouldn’t allow you to apply this result in any meaningful sense.

Am I horribly missing the point here?


Corkscrew:

What if the domain is infinite and the cost function goes to infinity somewhere? If I’m understanding correctly, that sort of situation wouldn’t allow you to apply this result in any meaningful sense.

Since the result applies to averaging over all fitness functions, I don’t see how to handle it in this context. For a function to have an infinite limit somewhere as you describe, I think it has to satisfy at least some conditions of continuity. “Most” fitness functions are not continuous. So I can see how to average over fitness functions that map the domain to a finite interval, but if there’s a way to include functions with infinite limits, then my math skills are completely inadequate. Maybe someone else can help. It is also true that the functions with limits are going to have measure 0 in the space of all possible functions to real values, for instance, so maybe we can just ignore them in the average (but I will not go out on a limb and say we can).

Finally, I think the comments about random search still apply. If I have a fitness function, whether or not it has some range values that are much higher than the others (outliers) or even has some infinite limits in places, it is still relatively easy to find values in the 99%tile, 99.9%tile etc. just by random sampling. My result may turn out to be very small compared to the actual maximum, but it will still be much higher than most other values of the fitness function.

Correction: when I wrote “but it will still be much higher than most other values of the fitness function” I should have said “but it will still have a high probability of being greater than or equal to most other values of the fitness function.” There’s no guarantee that it will be “much higher.”

Actually, many optimization problems look a lot like this. Almost all randomly chosen solutions have an objective function that could just as well be set at 0 (alternatively, they violate constraints and are infeasible). You can easily find the 99.9%tile value, but it will be 0. The useful solutions are a vanishingly small percentage of the sample space, so you will need some other kind of optimization to find them. It still might not be hard; for instance if the objective function is convex, some kind of hill-climbing will take you to the solution once you find a feasible starting point.
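A toy Python illustration of that situation (my own, with a hypothetical objective): random sampling cheerfully returns the 99.9th-percentile value, which is still 0, while a hill-climb from a feasible starting point reaches the real optimum.

    import random

    def objective(x):
        # Nonzero only within about 1e-6 of x = 0.5; zero almost everywhere else.
        return max(0.0, 1.0 - abs(x - 0.5) * 1e6)

    # Random sampling: the 99.9th-percentile sample is almost surely 0.
    samples = sorted(objective(random.uniform(0, 1)) for _ in range(10000))
    print(samples[int(0.999 * len(samples))])  # almost always 0.0

    # Hill-climbing from a feasible point walks straight up to the optimum.
    x, step = 0.5 + 4e-7, 1e-7
    while True:
        moves = [y for y in (x - step, x + step) if objective(y) > objective(x)]
        if not moves:
            break
        x = moves[0]
    print(round(objective(x), 3))  # close to 1.0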

Let me see if I get the story so far.

English notes that the averaging complaint should be dropped. I don’t understand why it doesn’t apply, since usually evolution should see a gradient or else be momentarily happy or slowly drift.

Anyway, he notes that those complementary random instances don’t fit in the world, as Alann says, and that he has in fact proved it, but that “it remains possible that the “real-world” distribution approximates an NFL distribution closely”.

*But* he also notes that “Under the theorems’ assumption of a uniform distribution of problems, an uninformed optimizer is optimal” and sufficiently fast for a population. So No Free Lunch for Dembski.

Furthermore, if NFL assumes “(1) never visit the same point twice, and (2) eventually visit every single point in the space” it doesn’t necessarily apply to evolution. Populations are related (duh!) and don’t cover the whole space they are evolving through.

And since it assumes optimality it doesn’t apply to evolution anyway. Furthermore it doesn’t apply since genes and species coevolve. NFL also seems to assume conservation of information. (Perhaps that is why coevolution dropkicks it, as PvM hints.) Since we have an inflow of information from the nonclosed environment on the properties of newly realised features and from changes in the environment affecting these properties, it also breaks NFL assumptions on the fitness landscape.

I’m eager to see the rest of this story!

“What if the domain is infinite and the cost function goes to infinity somewhere?”

What does that mean? If you look at fitness instead I guess you have 0-100 % of the population reproducing with on average n children. How is fitness defined, do you need to invoke cost, and how do you map between?

Adam Ierymenko Wrote:

If there’s a God, how come his followers behave as if there isn’t?

I get the impression that the ID people are far more concerned about the perceived consequences of whether or not people believe in God than they are about the actual truth of the matter.

I think that they believe in God in much the same way some older children believe in Santa Claus. In the latter case, the child has pretty much figured out what’s really going on but still chooses to feign belief a little longer because it’s fun. In the case of ID, they firmly believe, regarding themselves, that disbelief in God will consign them to unmitigated existential angst and a loss of meaning or purpose to life and, regarding others, that disbelief in God will inevitably lead to cultural decline and moral chaos.

But they still have enough of a lurking intellectual conscience to crave some balance between their desire to believe in God and the values of reason and science via some kind of rationalization. They also often have scientistic leanings, since modern science has been so successful. The resulting inner tension, combined with their own repressed nihilism, makes fideism, or else just belief based on non-scientific grounds, seem psychologically unacceptable. So to them the truth isn’t really what counts. It’s just about having a (pseudo-)scientific justification for their faith and convincing everyone else of its truth. To them, that end justifies any means.

For a function to have an infinite limit somewhere as you describe, I think it has to satisfy at least some conditions of continuity.

Good point. I’d note, however, that there are other functions that would mess things up - for example, if for any n there exists an infinite number of elements of the set of genotypes/phenotypes such that the cost function evaluates to greater than n. This wouldn’t necessarily need to be continuous. Consider, for example, the markings on a ruler, where the length is more or less proportional to int(log10(x)). That sure as hell ain’t continuous, but still has an infinite number of elements of cost > any given value.

Finally, I think the comments about random search still apply. If I have a fitness function, whether or not it has some range values that are much higher than the others (outliers) or even has some infinite limits in places, it is still relatively easy to find values in the 99%tile, 99.9%tile etc. just by random sampling.

Gah, you’re right, I need to read up on my measure theory. My problem was I was having trouble seeing how percentages could be usefully defined on certain sets.

TL Wrote:

Furthermore, if NFL assumes “(1) never visit the same point twice, and (2) eventually visit every single point in the space” it doesn’t necessarily apply to evolution.

And the second point, although incidental, puts the nail in my objection’s coffin - obviously you can’t visit every single point of an infinite space.

In another thread PvM wrote:

What a little critical thinking and access to google can do…

Apparently, Pim, you didn’t heed your own advice in this post (or your earlier one on this subject for that matter). Besides the fact that you remove your quotes from Dembski from their full context in order to create your pretext, you don’t even cite the correct source. Instead of using a brief summary statement from the response to Orr, why didn’t you quote directly from the book No Free Lunch? I presume you’ve read it, since you participated in the ISCID online study/discussion group with Dembski a couple years back. Also, you completely ignore Dembski’s further elaborations and development of his concepts in this article and this one as well as this one, all of which are relevant to your project of trying to dismantle Dembski.

Further, you’re careful to footnote the source for the Dembski quote from the Orr paper, but you don’t bother to reference where we can find the original ISCID discussion that contains the elaborate quote from Erik. Given how out of context so much else of your ramblings seems to be, I have to wonder if this oversight was because you didn’t want anyone to see the rest of the ISCID discussion.

If I didn’t know better, I’d say you were attempting to carefully construct an elaborate straw man, complete with select-o-quotes and other misrepresentations. You’ll have to do better than that.

Donald,

Which of the two Dembski quotes in PvM’s post do you consider to be misleading, absent their full context?

As for the aleph-null and aleph-one: it was proven that the continuum hypothesis (essentially whether the cardinality of real numbers is aleph-one or higher) is undecidable in standard set theory, so whether you want to accept it or not, you won’t hit any contradictions.

A computable (real) number is one that can be generated by a Turing machine (or a Markov algorithm, Post system, lambda calculus, etc.). Since there are only countably many Turing machines, there are only countably many such numbers, and whence “most” real numbers are uncomputable.

All right, I may have it. The “size” of a set is not absolute but is relative to another set, and the relation between the two is established by a function that correlates the members of one with members of the other. So any set can be larger than, smaller than, or equal to any other set depending on the function used. Right?

If so, is it the case that the odd numbers are smaller than, larger than, or equal to the natural numbers depending on the function used? That doesn’t sound quite right.

Gray Wrote:

Allright, I may have it. The “size” of a set is not absolute but is relative to another set, and the relation between the two is established by a function that correlates the members of one with members of the other. So any set can be larger, smaller or equal to any other set depending on the function used. Right?

Er…no. :) Whether one set is larger, smaller or equal to another depends on whether functions between the sets and with certain properties exist; but it is not dependent on exactly what those functions are or which one you’re talking about at the moment.

By analogy, a number is square if (say) there exists an integer whose square is that number. If there was no such integer, it wouldn’t be square; but you don’t have to reference that integer to say it’s square. You don’t have to say “25 is square relative to 5 or -5, but not relative to 7;” 25 is a square number period.

The sets {1,2,3} and {a,b,c} are equal in size because there exists a one-to-one function from each set into the other. For instance, the function f, with f(1)=a, f(2)=b, f(3)=c; and the function g, with g(a)=1, g(b)=2, g(c)=3. These aren’t the only functions which do the job; you could replace f with F, F(1)=b, F(2)=c, F(3)=a. There are also lots of functions which don’t do the job: for instance, h with h(1)=a, h(2)=a, h(3)=a. But the mere existence of f and g is all you need to say the sets are equal in size.

By contrast, take the sets {1,2,3} and N={natural numbers}:

There’s still a one-to-one function from {1,2,3} into N–for instance, f with f(1)=10, f(2)=300, f(3)=1. That proves that the size of {1,2,3} is less than or equal to the size of N.

On the other hand, there is no one-to-one function from N into {1,2,3}. (This is easy to see. Take any function h from N into {1,2,3}. h(1), h(2), h(3) and h(4) cannot all have different values; therefore h is not one-to-one). That proves that the size of N is not less than or equal to the size of {1,2,3}.

Put them together, and they prove that the size of N must be greater than the size of {1,2,3}.

O.K. So N (natural numbers) and O (odd numbers) are equal in size because there exists at least one function that uniquely correlates all the members of N with all the members of O.

Assuming that I’ve got that right, my difficulty is as follows. For sets with finite numbers of members, if two sets are of equal size, and all of the members of the first set are correlated to members of the second set by a one-to-one function (any one-to-one function), the second set cannot have members that are not correlated to members of the first set. But for sets with infinite numbers of members, that’s not the case.

It’s possible to correlate all of the members of O with members in N, but leave some (an infinite number) of the members of N with no correlate in O. It’s also possible to correlate all of the members of N with members in O, but leave some (an infinite number) of the members of O with no correlate in N. It follows, paradoxically, that it’s possible for a one-to-one function between sets of equal size to leave “extras,” as it were.
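A concrete pair of maps (my own illustration) shows both situations at once in Python: f(n) = 2n + 1 correlates all of N with all of O, while g(n) = 4n + 1 is also one-to-one from N into O yet misses 3, 7, 11, and so on.

    # f is a bijection from N onto the odd numbers O: nothing in O is missed.
    f = lambda n: 2 * n + 1
    # g is one-to-one from N into O, yet its image skips 3, 7, 11, ...
    g = lambda n: 4 * n + 1

    print([f(n) for n in range(10)])  # 1, 3, 5, ..., 19: every odd number in turn
    print([g(n) for n in range(10)])  # 1, 5, 9, ..., 37: one-to-one, with "extras" left over in O

Equality of size only requires that some one-to-one correspondence exists, so the leftovers under g contradict nothing.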

Re “It follows, paradoxically, that it’s possible for a one-to-one function between sets of equal size to leave “extras,” as it were.”

That’s essentially a rephrasing of the definition of infinite set - it can have the same size as some proper subset of itself.

Ergo, don’t expect infinite sets to “act like” finite sets; they don’t.

Henry

Henry J Wrote:

That’s essentially a rephrasing of the definition of infinite set – it can have the same size as some proper subset of itself.

Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words. What they are used for in this context has nothing to do with the word’s normal meaning. And while normal usage may be refined for technical applications, the core of the normal meaning cannot be abandoned, much less contradicted. It would be better if mathematicians coined a word (perhaps they already have and we haven’t been using it). At any rate, talk of “equality” or “sameness” of size should be abandoned.


Stevaroni: OK, I just blew out my irony meter on that one. For those who haven’t looked, the page is the research intelligent design.org wiki homepage. At least in my browser, it comes up as…

Sorry, that was my mistake, one slash too many at the end of the URL.

Click here and you will still find a vacuum filled with emptiness

Coin Wrote:

Basically, if you ever find yourself treating “infinity” like a number, just assume you’re doing something wrong. In equations, infinity is something that should only be dealt with in the limit (and you have to follow the rules that limits bring with them). Infinity is not a number.

Infinity is not a number, except when it is.

In modern mathematics, the word “number” is used as is convenient, so arguing about it is pointless. Infinity can be dealt with directly, or as code for certain kinds of limits. There are numerous situations where treating infinity like a finite number is encouraged, even rigorous. Abraham Robinson’s nonstandard analysis is the most famous instance of this.

For millennia, philosophers owned the concept of infinity, and talked about it ad infinitum, but since Cantor, it’s been discovered that most of what they said was arrant nonsense, or at best of very limited application. Not all philosophers have noticed, however. This is like philosophers claiming relativity is wrong because of Kant. Or denying stellar astrophysics because of Comte.

Pim snarls:

I encourage you to document

1. That I quote out of context 2. That I create Strawmen 3. That I selectively quote 4. That I misrepresent

Or be known for spreading unsupported assertions.

Fair enough eh Donald. Surprise us with some independent thought. If you need to ask Dembski for guidance, realize his poor performance in this area.

What is it going to be… yes or … Or do you need to sleep on it for a while.

I already did 1-4. I referenced two additional articles that elaborate on the subject. You can do your own homework. Trying to make it appear as if a summary paragraph out of one article, one that pre-dates the two papers that provide far more detail, as well as ignoring NFL, represents the whole of the argument is simply misleading. You don’t even reference any of these in your reference list. Instead you only reference critiques. Nor do you reference any of the responses that Dembski himself has made to some of his critics. In other words, you’re being very careful to select only what makes the case that you want to make, and completely ignore anything that might call that case into question. This comes directly from the “How to Build A Straw Man in 5 Easy Steps” manual. You see, Pim, in order to refute Dembski’s arguments, you have to consider the actual arguments in total…not the bits and pieces that suit your purpose, which is all you’re doing here.

Hey Donald, why won’t you answer the simple questions I keep asking you?

What, again, did you say the scientific theory of ID is? How, again, did you say this scientific theory of ID explains these problems? What, again, did you say the designer did? What mechanisms, again, did you say it used to do whatever the heck you think it did? Where, again, did you say we can see the designer using these mechanisms to do … well . . anything?

Or is “POOF!! God — uh, I mean, The Unknown Intelligent Designer — dunnit!!!!” the extent of your, uh, scientific theory of ID .… ?

How does “evolution can’t explain X, Y, or Z, therefore goddidit” differ from plain old ordinary run-of-the-mill “god of the gaps”?

Here’s *another* question for you to not answer, Donald: Suppose in ten years, we DO come up with a specific mutation-by-mutation explanation for how X, Y, or Z appeared. What then? Does that mean (1) the designer USED to produce those things, but stopped all of a sudden when we came up with another mechanism? or (2) the designer was using that mechanism the entire time, or (3) there never was any designer there to begin with?

Which is it, Donald? 1, 2 or 3?

Oh, and if ID isn’t about religion, Donald, then why do you spend so much time bitching and moaning about “philosophical materialism”?

(sound of crickets chirping)

You are a liar, Donald. A bare, bald-faced, deceptive, deceitful, deliberate liar, with malice aforethought. Still.

Donald, you should have asked Pim for his source if you doubted his quote. Instead you called him a liar. You owe Pim an apology.

in order to refute Dembski’s arguments

Dembski himself has already given up on his, uh, arguments, Donald. He and his fellow IDers have already declared that there isn’t any scientific theory of ID, probably won’t ever be any, and that he is returning to his “first love”, apologetics. Because, after all, ID isn’t about religion. No sirree Bob.

Didn’t you get that memo, Donald?

I also don’t think that there is really a theory of intelligent design at the present time to propose as a comparable alternative to the Darwinian theory, which is, whatever errors it might contain, a fully worked out scheme. There is no intelligent design theory that’s comparable. Working out a positive theory is the job of the scientific people that we have affiliated with the movement. Some of them are quite convinced that it’s doable, but that’s for them to prove…No product is ready for competition in the educational world.

–Philip Johnson, 2006

*Go get the reference yourself, DonaldM. Nobody’s obliged to do your research for you.

Donald M Wrote:
PvM Wrote:

I encourage you to document

1. That I quote out of context 2. That I create Strawmen 3. That I selectively quote 4. That I misrepresent

Or be known for spreading unsupported assertions.

Fair enough eh Donald. Surprise us with some independent thought. If you need to ask Dembski for guidance, realize his poor performance in this area.

What is it going to be… yes or … Or do you need to sleep on it for a while.

I already did 1-4. I referenced two additional articles that elaborate on the subject. You can do your own homework.

In other words, Donald did not really do 1-4; he referenced two additional articles that elaborate on the topic and expects me to make his arguments to support 1-4 for him. It should be clear by now that Donald M is once again making unsupported accusations. Has he not learned from the past?

Donald M Wrote:

Trying to make it appear as if a summary paragraph out of one article, one that pre-dates the two papers that provide far more detail, as well as ignoring NFL, represents the whole of the argument is simply misleading.

In order for my argument to be misleading, you have to show that it fails to address the two papers that followed my references. In addition, if Donald believes that my references were so inadequate, why has Dembski not publicly withdrawn them as insufficient?

Donald M Wrote:

You don’t even reference any of these in your reference list. Instead you only reference critiques. Nor do you reference any of the responses that Dembski himself has made to some of his critics. In other words, you’re being very careful to select only what makes the case that you want to make, and completely ignore anything that might call that case into question.

Perhaps Donald M can show us what Dembski has contributed that address my arguments? That Dembski has ‘responded’ to his critics is a far cry from Dembski admitting his errors and or Dembski actually addressing his critics main objections.

Donald M Wrote:

This comes directly from the “How to Build A Straw Man in 5 Easy Steps” manual. You see, Pim, in order to refute Dembski’s arguments, you have to consider the actual arguments in total…not the bits and pieces that suit your purpose, which is all you’re doing here.

That is what you claim and yet you have failed to provide any detailed examples. Come on Donald, do some research and show that by not addressing these two papers I somehow misrepresented Dembski’s argument and/or created a strawman.

You have again made several unsupported accusations… Why is it so hard to defend them? Let alone present them with sufficient supporting data? What arguments did I miss Donald? Show us that you are familiar with Dembski’s own arguments. If you were, I doubt you would be making these accusations.

I understand that Donald may be going through the typical steps of recovery, and denial and anger are high on the list. After all, to find out that ID is without clothes must come as quite a shock to the faithful.

Donald M’s references help further show how Dembski seems to be unfamiliar with the effectiveness of random search under NFL…

Dembski Wrote:

The upshot of the No Free Lunch theorems is that averaged over all fitness functions, evolutionary computation does no better than blind search (see Dembski 2002, ch 4 as well as Dembski 2005 for an overview). But this raises a question: How does evolutionary computation obtain its power since, clearly, it is capable of doing better than blind search?

And yet blind search itself performs quite well, as I and others have shown. Dembski clearly ‘does not get it’.

Dembski does end up with an excellent question:

The question therefore remains: Insofar as evolutionary computation does better than blind search, what is its source of power? That’s a topic for another paper.

The answer is relatively simple: ‘evolvability’. The details are even more interesting, as they show how neutrality itself is a selectable trait and how self-evolution requires neutrality (Toussaint). The idea that evolution can ‘adapt its variation’ based on fitness helps answer Dembski’s question.

What is Dembski’s answer? I mean scientifically relevant answer… Poof just does not do it. His articles on ‘conservation of information’ and ‘searching large spaces’ fail to show much relevance to evolutionary processes. But perhaps Donald can make his case based on these three papers?

Well Donald? Can you?

Re “Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words.”

Well, the technical term is “cardinality of the set”, or “cardinal number”, which for finite sets is essentially the same thing as the size of the set, so I don’t generally bother distinguishing the terms. I don’t know if mathematicians limit the use of the word “size” to finite sizes or not.

Henry


Gray Wrote:

Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words.

Well, you can complain all you like. No one will care.

What they are used for in this context has nothing to do with the word’s normal meaning.

This is rank nonsense. It obviously has something to do with the word’s normal meaning.

And while normal usage may be refined for technical applications, the core of the normal meaning cannot be abandoned, much less contradicted.

The core of the normal meaning has been retained. Tell me, how would you define two disjoint sets, say points on one line versus points on a parallel line, to be the “same size”, except by setting up a correspondence in the first place?

It would be better if mathematicians coined a word (perhaps they already have and we havnt been using it).

The word “equipollent” is out there, but is very rarely used. There’s a reason: the simpler terminology does just fine.

At any rate, talk of “equality” or “sameness” of size should be abandoned.

Why? Because you’re stupid?

Sorry for this

About this Entry

This page contains a single entry by PvM published on June 21, 2006 7:43 PM.
