Antievolution Objections to Evolutionary Computation


Back in 1999, I started a draft of an article about objections to evolutionary computation. It looks like now would be a good time to remind people that these arguments against evolutionary computation have long been addressed.

Creationist Objections

1. Natural selection is an inadequate theory of adaptation because it does not provide the basic rules for successful computer simulation.[1]

Natural selection turned out to be perfectly adequate as a source of basic rules for simulation. The field of evolutionary computation dates back to the late 1950s or early 1960s. When Marcel-Paul Schutzenberger assured the attendees of the mid-1960s Wistar conference on mathematical challenges to the Neo-Darwinian synthesis that all efforts to model natural selection on computers just “jam” the machines, another attendee spoke up to say that he had successfully run such simulations and was, in fact, impressed with how quickly they worked.

2. Natural selection is disproved because attempted computer simulation has always failed.[2]

Certain early attempts at EC did fail, but in the particular case cited above, the person was attempting a variant of genetic programming, which remains a very difficult field. Since then, successful simulations have been accomplished with genetic algorithms, artificial life, and even genetic programming. It is doubtful that Marcel-Paul Schutzenberger, the originator of this criticism, would have been willing to go so far as to say that natural selection is supported because simulation is successful.

3. Simulation of natural selection on computers will be found to be no different than random search in efficiency.

This one is commonly encountered in online discussions as a variant of the popular claim that “natural selection is just the same thing as blind chance”. We can show that various problems solved by GAs are solved much more efficiently than by random search. [Note that humans deploying evolutionary computation are not doing so to explore all cost functions of a problem; most applications have a specific cost function of interest.]
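This claim is easy to test empirically. Here is a minimal sketch (not from the original article; all names and parameters are my own) comparing a bare-bones genetic algorithm against pure random search on the toy “OneMax” problem, maximizing the number of 1-bits in a string, with both methods given the same budget of generated candidates:

```python
# Sketch only: a minimal GA vs. pure random search on "OneMax"
# (maximize the count of 1-bits). All names and parameters are invented.
import random

random.seed(42)
N = 64        # bits per candidate string
EVALS = 4000  # budget: number of candidate strings each method may generate

def fitness(bits):
    return sum(bits)

def random_search():
    # Generate EVALS random strings; keep the best score seen.
    best = 0
    for _ in range(EVALS):
        best = max(best, fitness([random.randint(0, 1) for _ in range(N)]))
    return best

def genetic_algorithm(pop_size=40, mut_rate=0.02):
    # Truncation selection + one-point crossover + bit-flip mutation.
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(EVALS // pop_size):      # same budget of generated strings
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            cut = random.randrange(N)       # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mut_rate else g
                             for g in child])
        pop = children
    return max(fitness(s) for s in pop)

print("random search best:", random_search())
print("GA best:           ", genetic_algorithm())
```

Under the same budget, the GA reliably ends far closer to the maximum of 64 one-bits than random search does, which typically tops out well short of it.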

4. Natural selection might be capable of being simulated on computers, and the simulations may demonstrate good capacity for solving some problems in optimization, but the optimization problems are not as complex as those in actual biology.

This objection typically appears once the first three above have been disposed of. Computer simulation, once held to be either a potential indicator of merit or an actual falsifier of natural selection, is then treated as essentially irrelevant to natural selection. It is certainly true that computer simulations are less complex than biological problems, but the claim at issue is not that EC captures all the nuances of biology, but rather that EC gives a demonstration of the adaptive capabilities of natural selection as an algorithm.

5. Natural selection simulated on computer produces solutions which are informed by the intelligence that went into the operating system, system software, and evolutionary computation software.

If we take a limited form of evolutionary computation for simplicity’s sake and analyze it, I think that we will come out ahead. Genetic algorithms, as presented by John Holland in 1975, work on a population of fixed-length bit strings. The bit-string representation is generic. The operations which the genetic algorithm performs involve the manipulation of these bit strings, with feedback from an evaluation function.

What are the manipulations on the bit-strings? The GA can copy bit-strings with mutation (change in state of a bit), crossover (production of a new bit-string using parts of two existing bit strings in the population), and a variety of other “genetic” operators. The GA selects bit-strings for reproduction based upon results returned from an evaluation function which is applied against each bit string in the population.

The purpose of the evaluation function is to provide a metric by which the bit-strings can be ranked. The critical point to be grasped is that neither the operations of the GA nor those of the evaluation function need information about the pattern of the end solution. The GA’s operations are completely generic; a variety of GA shell tools is available for use, including plug-ins for MS Excel spreadsheets. Since the same GA tool may be used for job-shop scheduling in one instance and oilfield pipeline layout in another, the objection that the intelligence of the GA programmer informed the specific designs that result from its application quickly appears ludicrous. A programmer might conceivably code a generic GA shell that also happened to somehow infuse just the right information to optimize PCB drilling movements, but to insist that the same programmer managed to infuse specific domain knowledge for each and every application to which his tool is put stretches credulity.
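The genericity argument can be made concrete. In the following sketch (my own illustration; every name and parameter is invented), a single GA engine containing no problem-specific code is applied to two unrelated problems simply by handing it different evaluation functions:

```python
# Sketch only: one generic GA engine, reused on two unrelated problems just
# by swapping the evaluation function. All names and parameters are invented.
import random

random.seed(1)

def evolve(fitness, n_bits=32, pop_size=30, generations=60, mut_rate=0.03):
    # Nothing in this engine refers to any particular problem;
    # `fitness` is an opaque black box returning a scalar.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]      # truncation selection
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            cut = random.randrange(n_bits)  # one-point crossover
            nxt.append([1 - g if random.random() < mut_rate else g
                        for g in (a[:cut] + b[cut:])])
        pop = nxt
    return max(pop, key=fitness)

# Problem 1: maximize the number of 1-bits.
ones = evolve(lambda s: sum(s))

# Problem 2: maximize the number of adjacent bits that differ.
def alternation(s):
    return sum(s[i] != s[i + 1] for i in range(len(s) - 1))

alt = evolve(alternation)

print(sum(ones), alternation(alt))
```

The engine code is identical in both runs; only the black-box evaluator changes, which is the point at issue.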

Now, let’s eliminate the evaluation function as a source of domain-specific information. Obviously the evaluation function does give information to the GA, but that information is not a direction for adaptive change for each bit-string evaluated; it is merely a report of how well each bit-string performed when evaluated. The result passed back to the GA does not give the GA insights like “Toggle bit 9 and swap 20-23 with 49-52”. It merely passes back a scalar number, which, when compared to other scalar numbers, forms a ranking of the bit strings. The evaluation function can require very little in the way of domain-specific knowledge. For the PCB drilling application mentioned above, the evaluation function can very simply be instantiated as “return the closed path length of the route represented by the input bit-string”, which says nothing at all about what the path looks like, and works for any set of hole coordinates. Because the evaluation function can be generic over cases, again we have the argument that domain-specific information is unavailable here, on the same grounds as for the GA operations. While we might be able to conceive of an evaluation function that somehow encapsulated information about a particular solution, for problems like the PCB routing one mentioned it is highly unreasonable to credit that information about all possible PCB route configurations has somehow been instilled into the code.
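For the PCB drilling case, such an evaluation function is only a few lines. This sketch (my own rendering; names are illustrative) returns nothing but a scalar path length and works unchanged for any set of hole coordinates:

```python
# Sketch of a "closed path length" evaluator (illustrative names): it returns
# only a scalar and says nothing about what a good route looks like, so it
# works unchanged for any set of hole coordinates.
import math

def route_length(order, holes):
    """Length of visiting `holes` in `order` and returning to the start."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = holes[order[i]]
        x2, y2 = holes[order[(i + 1) % len(order)]]  # wrap: closed path
        total += math.hypot(x2 - x1, y2 - y1)
    return total

holes = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(route_length([0, 1, 2, 3], holes))  # perimeter route
print(route_length([0, 2, 1, 3], holes))  # self-crossing route, longer
```

Nothing in the function describes any particular route; it only scores whatever route it is handed.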

What’s left? Merely the information content of the initial bit strings in the GA population. Since this is often, if not always, done by filling the bit-strings based upon random numbers, any non-trivial bit representation is highly unlikely to correspond to a final solution state.

The information or designs said to be produced by a GA are the contents of the bit-strings at the end of the GA run. It can be confirmed that the final bit-string content differs from the initial bit-string content. It can be demonstrated that the evaluation of the initial bit-strings indicates poorer function than that of the final bit-strings. Those who object to evolutionary computation by asserting that intelligence has somehow been infused into the result must answer two questions: if intelligence intervenes to shape or produce the final bit-string, *how* does it accomplish that, and *where* did the domain-specific knowledge come from? I’ve already sealed off infusion via the GA, the evaluation function, and the initial bit-strings for “how”. The “where” question poses an extremely serious difficulty for proponents of this assertion, since if the information needed to solve all the problems which a GA can solve were present on every machine upon which a GA can be run, the information capacity of each machine is demonstrably smaller than the information content of all those possible solutions. It is problematic where the information could be stored, and even if it could be stored somehow, there remains the problem of *why* computer designers and programmers, who would be shown by this to be very nearly omniscient, would choose to put all that information into their systems when the vast majority of it is very likely never to be used.

I’ll note that it is entirely possible to point to or construct evolutionary computation examples whose evaluation functions incorporate a known final solution state. I only know of such simulations done for pedagogy. Dawkins’ “weasel” program from “The Blind Watchmaker” is a fine example of this. However, the mere existence of that simulation is not sufficient to show that all evolutionary computation does so. Any example without a “distant ideal target” demonstrates that GA operation is not dependent on having such a property.
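For contrast, here is a minimal sketch in the spirit of Dawkins’ “weasel” (my own version, with invented parameters). Note that the target string sits right inside the evaluation function, which is precisely the property that applied evaluation functions lack:

```python
# My own minimal "weasel"-style sketch (invented parameters): the evaluation
# function literally contains the final answer, which is what makes it a
# pedagogical "distant ideal target" rather than an applied evaluator.
import random

random.seed(7)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # The target sits right here inside the evaluator.
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(parent) < len(TARGET):
    kids = ["".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                    for c in parent) for _ in range(100)]
    parent = max(kids + [parent], key=score)  # keep parent: never regress
print(parent)
```

Deleting the `TARGET` constant from an applied evaluator like a path-length function is impossible, because there is no such constant; deleting it here breaks the program, which is the whole difference.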

6. The generation of a natural language sentence via means of evolutionary computation is either difficult or impossible.

I think that instead of being either difficult or impossible, the correct classification is that such an application would be time-consuming to build. I’ll lay out the approach I would take if I had the time and inclination to do so. First, I would not use fixed-length bit strings, so the underlying computational approach would not quite match the definition of a GA, although most of the same code would likely be useful. Second, the initialization of the evaluation function would involve scanning a large source of text in the language of choice, building a symbol sequence frequency table. (A possible or likely objection here is that this gives information about the language to be generated. However, this procedure gives far less information than is provided to developing humans, who in the absence of examples of language use do not generate grammatically correct sentences, either.) Third, the evaluation function would return a probability value for a bit-string based on the likelihood that the bit-string could be drawn from the distribution represented by the symbol sequence frequency table, with extra points for the final symbol being a period and the initial symbol being a capital letter. The GA would finish when a bit-string achieved a threshold evaluation value. The likely result would be the production of nonsensical, but often grammatically correct or near-correct, sentences. I say this on the basis of experience in coding ‘travesty’ generators and information entropy analysis applications. The use of evolutionary computation in this regard would be no huge stretch.
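A toy version of that approach can be sketched in a few lines (my own illustration, using character bigram counts from a tiny sample text rather than a large corpus; all names and parameters are invented):

```python
# Toy sketch of the scheme described above (all names/parameters invented):
# score candidate strings by how often their character bigrams occur in a
# small sample text, then evolve strings toward higher scores.
import random
from collections import Counter

random.seed(3)
CORPUS = ("the cat sat on the mat. the dog sat on the log. "
          "the cat and the dog sat on the mat.")
ALPHABET = "abcdefghijklmnopqrstuvwxyz ."
bigrams = Counter(CORPUS[i:i + 2] for i in range(len(CORPUS) - 1))

def score(s):
    # Sum of corpus counts for each bigram; unseen bigrams add nothing.
    return sum(bigrams[s[i:i + 2]] for i in range(len(s) - 1))

parent = "".join(random.choice(ALPHABET) for _ in range(16))
for _ in range(400):
    kids = ["".join(c if random.random() > 0.1 else random.choice(ALPHABET)
                    for c in parent) for _ in range(60)]
    parent = max(kids + [parent], key=score)  # hill-climb, never regress
print(repr(parent), score(parent))
```

With a real corpus and a longer run, the high-scoring strings would show the travesty-generator character described: locally plausible symbol sequences without overall sense.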

7. There is an internal contradiction between natural selection and a genetic algorithm, whereby the first step in a GA is to “create” a population of candidate solutions, but no such step exists in natural selection.

In the case of natural selection operating or the case of a GA being run, there has to be some initial state. We aren’t concerned with *how* that initial state came to be; our analysis concerns what happens when natural selection operates, or the GA operates. Let’s look at another simulation, one in which the rise and fall of barometric pressure is simulated according to provided atmospheric conditions. In this unexceptionable case of a simulation of barometric pressure, we don’t claim that there is an internal contradiction because the simulation does not include the creation of an atmosphere from a vacuum.

36 Comments

And don’t forget all the variations of the Monte Carlo techniques, which go back to at least John von Neumann and perhaps earlier if I am remembering my history correctly.

I have used these techniques in my research and have often had the eerie feeling that the methods come up with more clever solutions than we can imagine. This is especially true when exploring a very complicated potential function that forms the landscape over which the solutions are developed. Not unlike evolution occurring on a landscape that includes natural selection and a “potential function” made up of all the composite characteristics of molecules clustered together in an environment that varies with time.

I know very little about information theory, but it seems pretty easy to verify if the evaluation function sneaked in information.

Just check the information content of the solution vs. the information content of the code of the evaluation function.

Someone correct me if I’m wrong.

Interested parties might look at “Genetic Algorithms in Search, Optimization & Machine Learning” by David E. Goldberg, Addison-Wesley, 1989. This is a pretty accessible introduction to GA’s. Chapter 2 discusses the effectiveness of GA’s over a wide range of problem types as a search through “schema space”.

I’m not competent to speculate on the question of whether or not GA’s truly tell us something about biological evolution. However, in response to point 5 in your post, my understanding of GA’s is that the programmer decides which attribute(s) of the solutions generated are of interest and how to rank them via a fitness function. Using only that fitness function to discriminate “good” from “bad” solutions (or “better” from “worse”), the GA is then able to execute an efficient search through the schema space - a space much larger than the space of putative solutions that can be encoded as fixed length strings - for optimal or near optimal solutions as measured by the fitness function.

For example, all the programmer has to bring to the algorithm is the idea that transportation costs in a large network should be minimized even though he has no idea what a minimum cost solution would look like.

GuyeFaux, the essential point is about the content of the solution, not the quantity of information. Unless the evaluation function is very short indeed (as is the case for the Travelling Salesman Problem and similar problems), the quantity of information in a “winning” solution is likely to be smaller than that in the evaluation function. But the two are information about different things. The evaluation function can be thought of as the description of constraints relevant to the problem such that any proposed solution can be ranked against another. The solution, though, is just the configuration which best (in the search made by the GA) meets those constraints.

Unless the evaluation function actually provides or includes the solution configuration in some form, though, the antievolutionist criticism in question fails. And thus it fails almost all of the time, since only a handful of GA simulations run for pedagogy utilize an evaluation function with a “distant ideal target” inside.

Gotcha.

How do you then prove that the evaluation function is/isn’t some sort of a fixed target? Practically, how do you show, just by looking at the evaluation functions, that Dave Thomas’s evaluation function is not targeted whereas something like Dawkins’s Weasel and Cordova’s summer is?

I have an intuition about it, but is there a formal distinction?

GuyeFaux wrote

How do you then prove that the evaluation function is/isn’t some sort of a fixed target? Practically, how do you show, just by looking at the evaluation functions, that Dave Thomas’s evaluation function is not targeted whereas something like Dawkins’s Weasel and Cordova’s summer is?

If you can wait a week or two, I’ll be posting a treatise on software engineering tools and tricks for determining just that. It’ll be titled “Genetic Algorithms for Uncommonly Dense Software Engineers,” subtitled “See if your algorithm has a fixed target!”

Probably not till the end of next week, as Monday I’ll be posting the results of the Design Challenge.

Dave

but rather that EC gives a demonstration of the adaptive capabilities of natural selection as an algorithm.

Just a minor quibble - if I’m not mistaken, EC is a broad term encompassing a wide range of techniques and processes not necessarily restricted to “natural” selection and traditional GA-style algorithms, including self-organizing systems, swarm intelligences, and the like. Tough to do anything interesting without some form of selection, though. It’s there wherever there’s an if-statement. ID’ers would call you a “designer” when you code one, even though you may have no idea what it will ultimately “design”.

Dave Thomas: This is quite good, but it seems to me you have yet to address the “intelligent design” which went into the operating system and system software. While you are at it, the same arguments will also work for the “intelligent design” which went into the hardware as well.

If you can wait a week or two…

I eagerly await. I think it’s important to show that so-called front-loading doesn’t happen via the fitness function, except in academic cases.

[of the fitness function] It merely passes back a scalar number, which when compared to other scalar numbers, forms a ranking of the bit strings.

Or even less. Some just compare two solutions and say which is better. This can be useful in cases where no universal ranking is possible, or is hard (like a competitive game).

David W. Benson, I’ve already taken up your concern. See #5.

jeffw, yes, EC is a broad term. Within my responses, I do mention more than just GAs, as in #1 where I include artificial life and genetic programming as example fields. But the GA model itself can be used as a sufficient counter to almost all the antievolutionist objections to evolutionary computation, and has the advantage of simplicity.

Wesley — My point is a small one. You address EC, specifically GA, thoroughly. But the operating system and system software mentioned in ‘objection 5’ never reappears in your reply.

Oops. I need to give credit where it is due. Wesley Elsberry, this is quite good. (Dave Thomas was another thread. It was quite good as well.)

David B.:

The operating system, “system software” (difference?), computer hardware, etc. do not really contribute anything to solving the problem beyond providing a deterministic environment in which the algorithm can be executed quickly and accurately.

GuyeFaux wrote

I eagerly await. I think it’s important to show that so-called front-loading doesn’t happen via the fitness function, except in academic cases.

I should mention that the upcoming tutorial is mainly intended for software engineers working with (coding, compiling, etc.) genetic algorithms.

As far as proving theoretically whether any given algorithm’s evaluation function is some sort of a fixed target, or not, those have to be handled on an individual basis.

Dave

David B. Benson, if it helps, feel free to imagine that the following sentence is inserted at an appropriate place: “Likewise, the operating system and system software is of finite information capacity, but may be used to support the running of an indeterminate number of instances of EC applications solving different problems, and thus there is no hope for the contention that the specific information of each solution might somehow be passed on via that conduit.”

GuyeFaux Wrote:

I think it’s important to show that so-called front-loading doesn’t happen via the fitness function, except in academic cases.

http://www-cse.uta.edu/~cook/ai1/le[…]tsp/TSP.html

If the IDiots can extract the “front-loaded” solution for the NP-complete Travelling Salesperson Problem from that GA then they’d be justifiably famous.

How do you then prove that the evaluation function is/isn’t some sort of a fixed target? Practically, how do you show, just by looking at the evaluation functions, that Dave Thomas’s evaluation function is not targeted whereas something like Dawkin’s Weasel and Cordova’s summer is?

The shape of the fitness space is defined by the parameters describing it, so technically every EA problem encodes its solution – just not in an explicit and/or obvious way. Whether the evaluation function encodes the solution is much easier to determine.

Wesley wrote

The question which those who object to evolutionary computation via the assertion that intelligence has somehow been infused into the result must answer is that if intelligence intervenes to shape or produce the final bit-string, *how* does it accomplish that, and *where* did the domain-specific knowledge come from? I’ve already sealed off infusion via the GA, the evaluation function, and the initial bit-strings for “how”. The “where” question poses an extremely serious difficulty for proponents of this assertion, since if the information needed to solve all the problems which a GA can solve is present on every machine which a GA can be run upon, the information capacity of each machine is demonstrably smaller than the information content of all those possible solutions.

I’ve asked that question in a number of discussions with ID creationists about Avida: Precisely where in the code (freely available) is the “intelligence” infused? Only once have I had an answer, and that ID creationist (himself a programmer) pointed to the mutation/recombination/selection code. That is, he pointed precisely to the code that modeled the unintelligent mechanisms of biological evolution. What a telling answer!

IanC wrote

Or even less. Some just compare two solutions and say which is better. This can be useful in cases where no universal ranking is possible, or is hard (like a competitive game).

Which is called “tournament” selection. In our work we’ve moved exclusively to a variety of tournament selection rather than generational rankings to determine mating pairs. Our twist is to require random pairs to ‘compete’ (on their current fitness) to enter a recombination pairing.
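The mechanism is easy to sketch (my own illustration, not the commenters’ actual code): the comparison only ever says which of two candidates is better, with no global ranking, yet repeated tournaments still bias selection toward fitter individuals.

```python
# Sketch of binary tournament selection (illustrative names): the comparison
# function only says which of two competitors is better, yet the winners of
# many tournaments are biased toward higher fitness.
import random

random.seed(5)

def tournament_pick(pop, better):
    a, b = random.sample(pop, 2)   # two random competitors
    return a if better(a, b) else b

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
better = lambda a, b: sum(a) > sum(b)   # pairwise comparison, no global rank

winners = [tournament_pick(pop, better) for _ in range(1000)]
avg_winner = sum(sum(w) for w in winners) / len(winners)
avg_pop = sum(sum(s) for s in pop) / len(pop)
print(avg_pop, avg_winner)   # winners average higher fitness than the pool
```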

GuyeFaux wrote

I think it’s important to show that so-called front-loading doesn’t happen via the fitness function, except in academic cases.

My company uses GAs in an applied context. If we could front-load solutions we wouldn’t waste our time with a computationally intensive GA. We’d just write down the solution and use it. But we can’t do that, because we have not the faintest idea what solutions should look like. (“Solutions” is plural because we work with dynamic environments in which fitness landscapes change over time.)

Regarding the fitness landscapes our GAs evolve on, we don’t know global optima, we don’t know local optima, and we know little about what the topographies of the fitness landscapes look like (they’re in anywhere from 25 to 60 dimensions) except that they have features (e.g., locally correlated gradients) that enable a GA to work substantially more efficiently than random search. And we know that last bit only because the GAs do work: they produce viable solutions. If they didn’t, we’d be out of business tomorrow – or actually, years ago.

What we can specify and measure is the results of the behavior of the evolving replicators that make one replicator better than another in a given selective environment. And we can do that only because we know what we want the results of their behavior to be – it’s the real-world situations we want them to be able to operate successfully in. The environment itself is piped into the GA from the outside world, so we have no control over it. All we can say is that, given that external environment, this replicator is getting better results right now than that one and hence is more likely to enter recombination.

RBH

The shape of the fitness space is defined by the parameters describing it, so technically every EA problem encodes its solution — just not in an explicit and/or obvious way.

No, I disagree. An evaluation function absent a “distant ideal target” at best tells you what properties a solution will have, not what the solution actually is. It’s like saying that the sentence,

A vehicle based on ejection of mass that can carry three men to the moon and back.

encodes the plans for a Saturn V rocket plus Apollo command, service, and lunar modules. There’s no “encoding” there. The information of the evaluation function and the information of the solution are about different things, as I mentioned earlier.

No, I disagree. An evaluation function absent a “distant ideal target” at best tells you what properties a solution will have, not what the solution actually is. It’s like saying that the sentence,

A vehicle based on ejection of mass that can carry three men to the moon and back.

encodes the plans for a Saturn V rocket plus Apollo command, service, and lunar modules. There’s no “encoding” there. The information of the evaluation function and the information of the solution are about different things, as I mentioned earlier.

No, no, no – you’re making a similar error to the one I just corrected. Perhaps I’m not expressing my thought clearly enough to be understood. I’m not saying that the evaluation function encodes any particular solution, but the problem space necessarily encodes the solution. In your spacecraft example, the problem is defined by the laws of physics and the particulars of the specific situation involved, namely the properties of the Earth, the Moon, and the general Solar system.

To use another example: an expanse of hills and valleys determines how a ball pushed from a particular spot with a particular velocity will move. A selection principle that evaluates some paths as being ‘better’ than others won’t necessarily specify or encode any particular starting location/velocity combination as being inherently better, but whatever criteria are used to define it, the hills and valleys necessarily encode the solution.

It’s the difference between the problem space and the evaluation function that IDists intentionally confuse.

No, no, no — you’re making a similar error to the one I just corrected. Perhaps I’m not expressing my thought clearly enough to be understood. I’m not saying that the evaluation function encodes any particular solution, but the problem space necessarily encodes the solution. In your spacecraft example, the problem is defined by the laws of physics and the particulars of the specific situation involved, namely the properties of the Earth, the Moon, and the general Solar system.

That a solution exists within a problem space isn’t an “encoding”. At least thinking that the evaluation function might have the solution embedded in it made some sort of sense.

the essential point is about the content of the solution, not the quantity of information. Unless the evaluation function is very short indeed (as is the case for the Travelling Salesman Problem and similar problems), the quantity of information in a “winning” solution is likely to be smaller than that in the evaluation function. But the two are information about different things. The evaluation function can be thought of as the description of constraints relevant to the problem such that any proposed solution can be ranked against another. The solution, though, is just the configuration which best (in the search made by the GA) meets those constraints.

And the Traveling Salesman solver could easily be designed to loop, generating an unlimited number of variations of sets of points and their solutions. So even if some information is somehow “encoded” in the program, the looping Traveling Salesman will ultimately generate more information. So the only way to rescue the “information comes only from intelligence” doctrine is to insist that the solution of a Traveling Salesman problem contains no information at all!

Wesley Elsberry: Yes, that provides the needed paragraph.

That a solution exists within a problem space isn’t an “encoding”.

True, but it may ultimately be a question of degree. It becomes less and less true as the size of the solution space shrinks toward the size of the solution (i.e., adding more and more constraints). When they are equal, the solution is an encoding of the problem space. Perhaps the task of ID, then, is to add enough constraints to a problem (or change the description of it enough) to show that its solution is encoded or “front-loaded”.

That a solution exists within a problem space isn’t an “encoding”.

Ah, but if the nature of the problem space highlights particular solutions (for example, local minima in a topological space) the space can be said to encode the solutions – assuming of course that we choose evaluation functions that value those highlighted properties. It would be easy to presume that the lowest paths are what would be selected for when viewing such a topography. When presented with a hammer, everything looks like a nail, after all.

In any evolutionary algorithm, the solutions that arise are at local minima/maxima of the combination of the selection pressures and the environmental conditions. IDists choose to spread the false idea that when humans set up the pressures and conditions that this somehow involves their directly encoding the solutions into the process, which is nonsense. The solutions are indirectly encoded at best, and it makes no difference if the factors are chosen by an intelligence or selected randomly.

“It becomes less and less true as the size of the solution space shrinks toward the size of the solution (i.e. adding more and more constraints). When they are equal, the solution is an encoding of the problem space.”

Interesting observation, but isn’t this assuming that you can come sufficiently close and that the solution space is locally constrained in all dimensions by the evolving solution?

Also, the problem space is continually changing. RBH notes that for the environment. There is also coevolution, AFAIK. For example, between species: prey may become good at avoidance, fleeing, hiding, or defenses, or any combination thereof, which deconstrains the solution space, and the interactions change continually.

So if some solutions may become sufficiently constrained close up, it may not be the general situation. I’m not sure if this will impress creos, though.

In any evolutionary algorithm, the solutions that arise are at local minima/maxima of the combination of the selection pressures and the environmental conditions.

It’s important to keep hammering that point. In the evolutionary algorithm the “solution space” is vast, namely “live long enough to reproduce”.

There’s hardly any specificity there at all, consequently, there are a host of locally optimal solutions that all work well.

It’s not *quite* that simple. Organisms need to reproduce enough that the species continues, which is somewhat harder than merely “living long enough to reproduce”.

It’s not *quite* that simple. Organisms need to reproduce enough that the species continues, which is somewhat harder than merely “living long enough to reproduce”.

True enough, I shall amend myself. Ahem…

“live long enough to reproduce effectively.”

Still, not much in there about developing flagella-based outboard motors.

How about inboard motors? ;)

Henry

Interesting observation, but isn’t this assuming that you can come sufficiently close and that the solution space is locally constrained in all dimensions by the evolving solution?

Oh, absolutely. I was just making a “theoretical” observation. And as you and others have pointed out, the real-world solution space is highly mutable, and is itself subject to evolution and natural selection.

However, there are software tools of various kinds out there that build programs and systems by specifications or “constraints”. As the constraints get progressively more detailed and numerous, the distinction between specifications and code begins to blur.

Holy Star Trek convention, Batman, this thread just blew up my nerdometer.

Okay, kidding aside, what you guys are doing here is comparable to arguing with an 8-year-old child about the existence of Santa Claus by using quantum mechanics to demonstrate that reindeer can’t fly.

Let’s see if I can tone down the argument a bit so people without a PhD in computer science can have a clue what you’re talking about. (In case you care, I’m a computer nerd with 20+ years of experience, though I’m not into AI, so excuse my inevitable misuse of the terminology.)

First of all, intelligence is simply the capacity to solve a problem. Creationists love to think that there’s something magical about intelligence that can only occur inside a human mind (or god’s mind) but that’s BS with a capital BS. Technically speaking, the ID claim that “design requires intelligence” is actually correct, but only because it is a tautology; that is, any process that can produce design should be qualified as “intelligent” regardless of how it produced the design.

The most basic form of intelligence is composed of two mechanisms: (1) a mechanism that alters the behavior of the “intelligence” within the set of possible behaviors that would solve the problem, and (2) a mechanism that can tell the difference between a correct answer and an incorrect one. For instance, this program:

10 x=random()
20 if x!=cos(x) then goto 10
30 print x

This program represents the most basic form of intelligence. It has the capacity to solve a problem, namely, it will print a value of x such that x=cos(x). And this program will indeed solve the problem. It might take it a long time, but it will solve it. The difference between this “intelligence” and what happens inside your mind is a matter of degree, not category. What the mind does is to add mechanisms to complement the two I mentioned. For instance, you can add a “memory” so the program can know when it has already tried a value so it doesn’t have to check it again. You can add a more sophisticated test so the program can tell which values are closer to the solution than others, etc., etc. That would certainly make the program much more efficient, but the program as it stands is already “intelligent.” It isn’t particularly bright, but it’s intelligent nonetheless.
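The BASIC snippet above can be fleshed out in Python (my sketch, not the commenter's; a tolerance stands in for the exact-equality test, since a floating-point x will essentially never exactly equal cos(x)). The second function adds the "more sophisticated test" the comment mentions, keeping the best guess seen so far:

```python
import math
import random

def blind_search(tolerance=1e-3):
    # Guess-and-test, like the BASIC program: propose a random x and
    # accept it only when it is (nearly) a fixed point of cos.
    while True:
        x = random.random()
        if abs(x - math.cos(x)) < tolerance:
            return x

def search_with_memory(trials=10000):
    # Same generate-and-test loop, but remember the best candidate so
    # far instead of throwing every near-miss away.
    best = random.random()
    for _ in range(trials):
        x = random.random()
        if abs(x - math.cos(x)) < abs(best - math.cos(best)):
            best = x
    return best
```

Both home in on the fixed point of cosine (about 0.739), the first by pure luck, the second by accumulating its luck.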

In case you missed it, mutation and natural selection would qualify as the two required mechanisms for an intelligence. Mutation causes changes in behavior, and natural selection serves as a filter that gets rid of the wrong answers. So there you have it, there IS an intelligence guiding evolution. Like in the case of my program, it isn’t a particularly bright intelligence (it takes it millions of years to get anywhere), but it is indeed an intelligence.
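The mutation-plus-selection pairing described above can be sketched as a tiny hill climber (my illustration, not from the thread), again using distance from the fixed point of cos as the "fitness" test:

```python
import math
import random

def evolve_fixed_point(generations=5000, step=0.1):
    # Blind variation plus selective retention: mutation proposes a
    # variant, and "selection" keeps it only if it is at least as close
    # to satisfying x = cos(x) as the current value.
    x = random.random()                        # arbitrary starting point
    for _ in range(generations):
        mutant = x + random.gauss(0, step)     # random mutation
        if abs(mutant - math.cos(mutant)) <= abs(x - math.cos(x)):
            x = mutant                         # fitter variant survives
    return x
```

Neither mechanism "knows" the answer, yet together they converge far faster than the pure guess-and-test loop, which is exactly the point being made about efficiency.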

And in case anyone wants to complain about it, the only real difference between a completely random behavior (for the first mechanism) and a guided behavior is efficiency. In fact, any intelligence requires a random component (i.e., creativity), otherwise its behavior would simply be an infinite loop producing the exact same sequence of possible answers over and over and over.

In fact, any intelligence requires a random component (i.e., creativity), otherwise its behavior would simply be an infinite loop producing the exact same sequence of possible answers over and over and over.

Algorithms don’t have truly random components, so it must not be the case that a random component is necessary. A typical pseudo-random number generator produces a repeating sequence of values, and thus the same sequence of possible answers over and over and over – but since the set of values generated by a good pseudo-random number generator is very large, it can yield a very large sequence of possible answers. But an infinite sequence such as 1, 2, 3, …, as produced by a simple unending loop (e.g., for(i = 1;;i++) in C), can yield an infinite sequence of possible answers, and thus plenty of “creativity”. Whether a pseudo-random sequence or a linear sequence produces results more quickly depends on the search algorithm.
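The point about pseudo-random versus plain linear sequences can be made concrete (my sketch; the function names are mine): both kinds of candidate streams can feed the same generate-and-test loop, and both find x ≈ cos(x) without any "true" randomness.

```python
import math
import random

def first_hit(candidates, tolerance=1e-3):
    # Generic generate-and-test: return the first candidate that is
    # (nearly) a fixed point of cos, or None if the stream runs out.
    for x in candidates:
        if abs(x - math.cos(x)) < tolerance:
            return x

def random_stream(n=100000):
    # Pseudo-random candidates from a deterministic PRNG.
    for _ in range(n):
        yield random.random()

def linear_sweep(n=100000):
    # Plain counting, like for(i = 1;;i++): 0, 1/n, 2/n, ...
    for i in range(n):
        yield i / n

hit_r = first_hit(random_stream())
hit_l = first_hit(linear_sweep())
```

Which stream hits first depends on the problem and the ordering, which is the concluding point: neither has a monopoly on "creativity".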

Popper's ghost Wrote:

Algorithms don’t have truly random components, so it must not be the case that a random component is necessary.

My mistake. Creativity requires randomness, intelligence doesn’t, although it can benefit from it in cases when the solution cannot be directly derived from a process. In such cases, the good old “trial and error” (i.e., pick a random option, then test it) is the simplest way to implement an intelligence, though hardly the most efficient one.

Actually, algorithms could have ‘truly random’ components. However, pseudo-random number generators mimic random number sequences so well that few, if any, bother to equip their computer with a device to sense a random process such as radioactive decay.
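In Python the two kinds of randomness discussed above sit side by side in the standard library: random.Random is a deterministic PRNG (reseeding it replays the exact same sequence), while random.SystemRandom draws from the operating system's entropy source (e.g. /dev/urandom, which can mix in physical noise):

```python
import random

# A seeded PRNG is fully deterministic: the same seed yields the
# same "random" sequence every time.
prng = random.Random(42)
seq_a = [prng.random() for _ in range(3)]
prng.seed(42)                      # reseed: replay the sequence
seq_b = [prng.random() for _ in range(3)]
assert seq_a == seq_b

# SystemRandom uses OS-provided entropy; it cannot be seeded and
# its output is not reproducible.
sysrand = random.SystemRandom()
value = sysrand.random()
```

For search algorithms like the ones discussed in this thread, the deterministic PRNG is almost always sufficient, which is why few bother with hardware entropy sources.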

About this Entry

This page contains a single entry by Wesley R. Elsberry published on August 18, 2006 1:37 PM.
