Trolley problem, again

I ran across two articles today on the trolley problem as it applies to driverless (or self-driving) cars: one in Science by Joshua Greene and one in the LA Times by Karen Kaplan. Both are based on this article by Jean-François Bonnefon and colleagues in today’s issue of Science. We discussed the trolley problem briefly here at PT last October. More precisely, we discussed an extended trolley problem wherein you are in a driverless car and the choices are to kill five people, kill one person, or kill yourself.

The current research also concerns driverless cars. Not surprisingly, the researchers found support for driverless cars choosing to kill one person rather than five, but they also found that such support withered when the respondent was the one to be sacrificed. Their result is in fact completely consistent with the research of April Bleske-Rechek, which I outlined in my talk on the evolution of morality. Professor Bleske-Rechek found that people’s willingness to sacrifice one person in favor of five decreased as, for example, the one person’s relatedness to them increased.

Professor Bonnefon and his colleagues employed a survey, as Professor Bleske-Rechek and her colleagues did, and found that people’s enthusiasm for a “utilitarian” car – a car that will sacrifice the driver in favor of a larger number of pedestrians – decreased as the driver became more closely related to the respondent. Professor Greene asks whether driverless cars should indeed be programmed to be utilitarian in that sense; or programmed to behave in some other way, say, to save the driver; or simply be programmed to avoid a crash, come what may. He notes,

Manufacturers of utilitarian cars will be criticized for their willingness to kill their own passengers. Manufacturers of cars that privilege their own passengers will be criticized for devaluing the lives of others and their willingness to cause additional deaths.

Professor Bonnefon and colleagues similarly conclude,

Although people tend to agree that everyone would be better off if AVs [autonomous vehicles] were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs. Accordingly, if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so. … [M]ost people seem to disapprove of a regulation that would enforce utilitarian AVs. Second–and a more serious problem–our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether.

This question – whether to design utilitarian cars or to let the chips fall where they may – is precisely the trolley problem, which, as I showed in my talk, is very real and not simply a philosophical exercise.
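
To see why this is a design decision rather than just a thought experiment, here is a minimal sketch, in Python, of the three programming options Professor Greene lists: a utilitarian policy, a passenger-protective policy, and a policy that simply tries to avoid any crash. Everything in it (the Maneuver class, the policy functions, and the numbers) is hypothetical and invented for illustration; it is not drawn from any actual autonomous-vehicle planner.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Maneuver:
    """A hypothetical candidate action in an unavoidable-crash scenario.

    Fields and numbers are invented for illustration; a real planner
    would work with far richer state than this.
    """
    name: str
    expected_pedestrian_deaths: float
    expected_passenger_deaths: float
    crash_probability: float  # chance the maneuver ends in any collision

    @property
    def expected_total_deaths(self) -> float:
        return self.expected_pedestrian_deaths + self.expected_passenger_deaths


def utilitarian_policy(options: List[Maneuver]) -> Maneuver:
    """Minimize total expected casualties, passengers included."""
    return min(options, key=lambda m: m.expected_total_deaths)


def self_protective_policy(options: List[Maneuver]) -> Maneuver:
    """Protect the passengers first; only then minimize other deaths."""
    return min(options, key=lambda m: (m.expected_passenger_deaths,
                                       m.expected_pedestrian_deaths))


def crash_avoidance_policy(options: List[Maneuver]) -> Maneuver:
    """Ignore the casualty arithmetic; minimize the chance of any
    collision, come what may."""
    return min(options, key=lambda m: m.crash_probability)


if __name__ == "__main__":
    # Made-up numbers: braking in lane probably hits the group ahead;
    # swerving to the shoulder is a coin flip on whether anyone is there
    # (but a group if someone is); the barrier kills the passenger for sure.
    options = [
        Maneuver("brake hard in lane", 2.0, 0.0, 0.9),
        Maneuver("swerve toward the shoulder", 2.5, 0.0, 0.5),
        Maneuver("swerve into the barrier", 0.0, 1.0, 1.0),
    ]
    for policy in (utilitarian_policy, self_protective_policy,
                   crash_avoidance_policy):
        print(f"{policy.__name__}: {policy(options).name}")
```

On this made-up scenario the three policies pick three different maneuvers, which is exactly the point: someone has to decide which of these functions ships in the car.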