So, apparently, do some robots. Well, simulated robots, anyway.
Specifically, scientists in Lausanne, Switzerland, built some robots that were designed to seek small disks, which we will call “food.” See this article and this podcast in Science magazine. The robots had wheels, a camera, and something that passed for a nervous system. The scientists devised a computer simulation of the robots, so that they could randomly vary the strengths of different connections in the nervous system. They allowed the simulated robots to compete for food and allowed those with the most successful mutations to compete in the next generation; I am sure you know the drill. Once in a while, the researchers programmed some real robots to match their simulations and found that these real robots behaved as they “should” (don’t let any philosophers of science hear me say that).
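The evolutionary loop described above can be sketched in a few lines. This is a minimal illustration of the general technique, not the researchers' actual code: every name and parameter here (genome size, population size, the stand-in fitness function) is my own invention, and real "food collected" would come from running the robot simulation.

```python
import random

random.seed(0)

# Hypothetical parameters -- the experiment's actual values are not given in the post.
GENOME_SIZE = 8     # number of neural connection strengths per robot
POP_SIZE = 20
GENERATIONS = 30
MUTATION_STD = 0.1

def fitness(genome):
    # Stand-in for "food collected": reward connection strengths near a fixed
    # target pattern. In the real experiment, fitness came from the simulation.
    target = [0.5] * GENOME_SIZE
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    # Randomly perturb connection strengths, as the researchers did in simulation.
    return [g + random.gauss(0, MUTATION_STD) for g in genome]

def evolve():
    # Start with random nervous systems.
    population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank by food gathered; the most successful survive and reproduce
        # (with mutation) into the next generation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Because the fittest individuals are carried over unchanged, the best fitness never decreases; after a few dozen generations the population converges toward the target pattern.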
Somewhere along the way, the researchers allowed the robots to share food; altruism promptly evolved, and robots shared food with other robots to whom they were “related.” The result is taken as support for the theory of kin selection.
Martin Nowak, of all people, dismisses the result as a mere computer simulation. What interested me, though, and apparently also Science’s online news editor, David Grimm, is that the robots apparently made choices and behaved precisely as if they had free will.
I have elsewhere likened free will to a chess-playing computer, but this result is much more interesting. A chess-playing computer, after all, has been programmed to make decisions and only appears to have free will. It has been designed, if you will. But the robots have not been designed to make decisions. Rather, they evolved. Still, if the simulations are realistic, the robots’ decisions are based on physics and chemistry – just like yours and mine.