Global Survey Reveals Who We'd Prefer to Sacrifice on the Bumper of a Self-driving Car

by Steph Willems

In 2014, as publications and automakers began making greater noise about autonomous vehicles, researchers at MIT’s Media Lab put some questions to the public. The institute’s Moral Machine experiment offered up a series of scenarios in which a self-driving car that has lost its brakes must hit one of two targets, then asked respondents which of the two they’d prefer to see the car hit.

Four years later, the results are in. If our future vehicles are to drive themselves, they’ll need to have moral choices programmed into their AI-controlled accident avoidance systems. And now we know exactly who the public would like to see fall under the wheels of these cars.

However, there’s a problem: agreement on who to sacrifice differs greatly from country to country.


Published in the journal Nature, the results of the online questionnaire say a lot about the mindset in different countries, though there’s still agreement among nations on certain moral basics.

MIT’s Moral Machine experiment riffed on the classic “Trolley Problem,” a moral exercise in which people are asked to put themselves in the shoes of a bystander witnessing a runaway trolley careening towards five persons lying on (or tied to) the tracks. A switch is nearby, which the bystander could pull to send the trolley down a second set of tracks, straight towards a single prone person. You’d be signing the death warrant of one human, but saving five lives in the process.

What do you do, Jack?

In the updated scenario (nine scenarios, to be exact), respondents in 130 countries were forced to make a moral choice about who or what to sacrifice. If it came down to a choice between hitting an animal or a human, respondents vastly preferred that the car swerve around the wayward human and squish the animal instead. Easy stuff.

The same goes, in general, for sparing the young over the elderly, and for sparing more pedestrians at the expense of fewer pedestrians. Of all objects to be avoided at all costs, a stroller ranked highest, followed by a girl, a boy, and a pregnant woman. Saving pedestrians is slightly more popular than prioritizing the lives of passengers.

The globe apparently couldn’t come to a consensus over whether they’d spare a large woman over a thin one, though we collectively seem to value the lives of large men slightly less than thin, angular, sexy ones. The homeless get a bum rap in these results, as do criminals (unfortunately, your car isn’t likely to know just which pedestrian is a serial killer or rapist). Interestingly, respondents were more likely to spare the life of a dog over the life of a criminal. Cats were ranked least important, overall.
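To make the idea of "programming in" such preferences concrete, the survey's global ordering could, in principle, be encoded as a simple weight table a planner consults when every available trajectory hits something. The snippet below is a purely hypothetical sketch of that idea; the names and weight values are invented here and loosely mirror the article's ordering (strollers most spared, cats least). No real autonomous vehicle is known to work this way.

```python
# Hypothetical sketch: encode the survey's preference ordering as weights,
# then pick the trajectory whose victims carry the lowest total "sparing
# priority." Weights are invented for illustration only.
SPARING_PRIORITY = {
    "stroller": 10, "girl": 9, "boy": 8, "pregnant_woman": 7,
    "man": 5, "large_man": 4, "elderly": 3,
    "dog": 2, "criminal": 1.5, "cat": 1,
}

def choose_trajectory(options):
    """options maps a trajectory name to the list of targets it would hit.
    Returns the trajectory that sacrifices the least total priority."""
    cost = lambda targets: sum(SPARING_PRIORITY[t] for t in targets)
    return min(options, key=lambda name: cost(options[name]))

# The car has lost its brakes: swerve into the dog, or plow into the stroller?
print(choose_trajectory({"swerve": ["dog"], "straight": ["stroller"]}))
# -> swerve
```

Even this toy version shows why the country-level disagreements below matter: swap in a different weight table and the same planner makes different life-and-death choices.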

Of course, these results are all tabulated from numerous countries. Break the responses down into individual countries, and religious and cultural norms enter the fray.

In Asian countries like Japan, China, Taiwan, and South Korea, respondents placed far less emphasis on saving the young over the elderly. Taiwan and China were nearly tied as the countries most likely to spare the elderly. Scandinavians were slightly predisposed to this response, too. Western European (France, UK) and North American respondents were far more likely to single out the old as a sacrificial lamb.

Similar aberrations were seen when dealing with numbers, i.e., killing fewer pedestrians versus more of them. Respondents from countries that are more collectivist in nature, like those in Asia, placed less emphasis on saving more lives over fewer. Japan led the way in that regard, followed by Taiwan, China, and South Korea (in descending order). The Netherlands hit the median, so to speak. Among the “save more people” crowd, France placed the most emphasis on prioritizing a higher number of saved lives, followed closely by Israel, the UK, Canada, and the United States.

“The results showed that participants from individualistic cultures … placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual,” wrote MIT Technology Review.

Cultural groupings seem to disappear when it comes to passengers vs. pedestrians. By a far greater margin than any other country, China placed greater emphasis on sparing the lives of passengers over those of pedestrians, though Estonia, France, Taiwan, and the U.S. mildly fall on the passenger side as well. Israel and Canada were essentially neutral, with neither side prioritized. More so than any other country, Japan prioritized saving pedestrians over passengers. Western European and Scandinavian countries, as well as Singapore and South Korea, fell on the “pedestrians over passengers” side.

The authors of the paper don’t want their results to decide which people an AI-controlled vehicle should run down in a given country; rather, their aim is to inform lawmakers and companies of how the public might react to choices made by a programmed driverless car. Above all else, the MIT researchers want companies to start thinking about ethics and AI.

“More people have started becoming aware that AI could have different ethical consequences on different groups of people,” said author Edmond Awad. “The fact that we see people engaged with this—I think that that’s something promising.”

[Source: MIT Technology Review]


Comments
  • TimK TimK on Oct 27, 2018

    Now the real question: what software engineer, division manager, or CEO is going to affix their signature approving a system that can autonomously kill people? Who is going to give that authority to a $5 CPU chip?

    • Stuki Stuki on Oct 27, 2018

      Technically, it gets even more nebulous: the chip is unlikely to be able to explain why it did what it did. No one explicitly programmed it to do so, because the environment it operates in, unlike that of traditional software, is far too rich to be spanned by closed-end approaches that can be backtracked and explained after the fact. That's the I part of AI.

      The human brain, in addition to working as an in-the-now decision-making engine, is at least as focused on explaining, and even post-hoc rationalizing, why it did what it did. It evolved to be just as much a social creature as a control circuit for a set of limbs, after all. Furthermore, all humans are wired at least somewhat the same in that regard, even if there are cultural differences, so explanations tend to be fairly universally shared. Hence, a driver can reasonably be "tried" and "judged" by a reasonably coherent group of "peers," after the fact, for mishaps which may have happened.

      An AI of any complexity, by contrast, is just a black box of various self-propagated and reinforced weights given to who-knows-what, for who-knows-what reasons related to the environment, and the AI's experiences within it, in which the AI has been trained and rewarded for performance. Very high-level priorities, like the ones in this questionnaire, can be hard-coded, for sure. But those kinds of stark choices are not what the AI will be faced with in a complex real world. Hence, large parts of the "reasoning" behind the decisions that look to have increased the probability of an undesirable outcome will remain opaque to investigating humans. "The Bot just looks like it decided to run you over..." It's like trying to figure out exactly which mosquito in Tokyo is to blame for the Florida hurricane its wing flapping "caused."

      This is a very fundamental reason why it's not "good enough" that AIs can be "demonstrated" to be "safer" drivers than humans in large-population historical "studies." Accidents will always be with us; any complex environment will have them. Hence, a mechanism to, at a minimum, somewhat attempt to explain them when they happen, even if not assigning blame, is an integral part of the broader traffic picture. Having things out there that may decide to do who-knows-what that kills you, for what appears to be no reason whatsoever, at any given time, just isn't going to be acceptable to people. So the AIs need to be much, much safer than human drivers before they are acceptable. Which is a big issue, as humans already cause so few accidents that it complicates training and testing AIs in the real world.

      Which leads to the only really realistic approach: make their environment more predictable. Which is another way of saying: segregate them from open-ended, unconstrained interaction with unpredictable humans. Do that and, like trains and planes, you can make the machines perform feats of speed and efficiency that are simply not possible in "humanspace." Don't do it, and all you end up with are "accidents" and unresolvable conflict.

  • Detroit-Iron Detroit-Iron on Oct 27, 2018

    I can't see the actual framing of the questions behind the paywall, but from what I read I don't understand the numbers question. Assuming all other things are equal, why would you hit more people rather than fewer? I mean, I would, but I'm trying for the high score.
