October 26, 2018

In 2014, as publications and automakers began making greater noise about autonomous vehicles, researchers at MIT’s Media Lab posed some questions to the public. The institute’s Moral Machine experiment offered up a series of scenarios in which a self-driving car that has lost its brakes must hit one of two targets, then asked respondents which of the two they’d prefer to see the car hit.

Four years later, the results are in. If our future vehicles are to drive themselves, they’ll need to have moral choices programmed into their AI-controlled accident avoidance systems. And now we know exactly who the public would like to see fall under the wheels of these cars.

However, there’s a problem: agreement on whom to sacrifice differs greatly from country to country.

Published in the journal Nature, the results of the online questionnaire say a lot about the mindset in different countries, though there’s still agreement among nations on certain moral basics.

MIT’s Moral Machine experiment riffed on the classic “Trolley Problem,” a moral exercise in which people are asked to put themselves in the shoes of a bystander witnessing a runaway trolley careening towards five people lying on (or tied to) the tracks. A switch is nearby, which the bystander could pull to send the trolley down a second set of tracks, straight towards a single prone person. Pull it, and you’d be signing the death warrant of one human, but saving five lives in the process.

What do you do, Jack?

[Image: TRI Platform 3.0 autonomous Lexus]

In the updated scenario (nine scenarios, to be exact), respondents in 130 countries were forced to make a moral choice about who or what to sacrifice. If it came down to a choice between hitting an animal or a human, respondents vastly preferred the car swerve out of the way of the human, squishing the animal. Easy stuff.

The same goes, in general, for sparing the young over the elderly, and for sparing more pedestrians at the expense of fewer pedestrians. Of all objects to be avoided at all costs, a stroller ranked highest, followed by a girl, a boy, and a pregnant woman. Saving pedestrians is slightly more popular than prioritizing the lives of passengers.

The globe apparently couldn’t come to a consensus on whether to spare a large woman over a thin one, though we collectively seem to value the lives of large men slightly less than thin, angular, sexy ones. The homeless get a bum rap in these results, as do criminals (unfortunately, your car isn’t likely to know which pedestrian is a serial killer or rapist). Interestingly, respondents were more likely to spare the life of a dog than that of a criminal. Cats were ranked least important overall.

[Image: GM Cruise self-driving testing]

Of course, these results are tabulated across numerous countries. Break the responses down by country, and religious and cultural norms enter the fray.

In Asian countries like Japan, China, Taiwan, and South Korea, respondents placed far less emphasis on saving the young over the elderly. Taiwan and China were nearly tied as the countries most likely to spare the elderly. Scandinavians were slightly predisposed to this response, too. Western European (France, UK) and North American respondents were far more likely to single out the old as the sacrificial lamb.

Similar variations appeared when dealing with numbers, i.e., killing fewer pedestrians vs. killing more of them. Respondents from countries that are more collectivist in nature, like those in Asia, placed less emphasis on saving more lives over fewer. Japan led the way in that regard, followed by Taiwan, China, and South Korea (in descending order). The Netherlands hit the median, so to speak. Among the “save more people” crowd, France placed the most emphasis on prioritizing a higher number of saved lives, followed closely by Israel, the UK, Canada, and the United States.

“The results showed that participants from individualistic cultures … placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual,” wrote MIT Technology Review.

Cultural groupings seem to disappear when it comes to passengers vs. pedestrians. By a far greater margin than any other country, China placed greater emphasis on sparing the lives of passengers over those of pedestrians, though Estonia, France, Taiwan, and the U.S. mildly fall on the passenger side as well. Israel and Canada were essentially neutral, with neither side prioritized. More than any other country, Japan prioritized saving pedestrians over passengers. Western European and Scandinavian countries, as well as Singapore and South Korea, also fell on the “pedestrians over passengers” side.

The authors of the paper don’t want their results to decide which people an AI-controlled vehicle should run down in a given country; rather, their aim is to inform lawmakers and companies of how the public might react to choices made by a programmed driverless car. Above all else, the MIT researchers want companies to start thinking about ethics and AI.

“More people have started becoming aware that AI could have different ethical consequences on different groups of people,” said author Edmond Awad. “The fact that we see people engaged with this—I think that that’s something promising.”

[Source: MIT Technology Review]


30 Comments on “Global Survey Reveals Who We’d Prefer to Sacrifice on the Bumper of a Self-driving Car...”

  • avatar

    Was ex-wife a category?

  • avatar

I’m pretty sure I never had to answer for this in any of my driver’s training (the closest I got was on a motorcycle, riding at an instructor and swerving to the side they pointed at the last minute).

    For that matter, who’s getting into all these accidents where there’s no choice but to hit one of two targets?

    • 0 avatar

      This was my reaction too. In real life, things happen fast, people would scatter, and there would be no time to make an “ethical” decision.

I suppose this research was an attempt to identify our values and reveal cultural differences, but I don’t see how it can help an AI. Absent government regulation, the automaker will program the AI to protect the occupants, as the occupants are the ones paying the bills.

  • avatar

When I took this questionnaire, I was surprised to see it try to label my responses, so I took the time to write in clarifications to my judgement. I didn’t care about weight, gender, or criminal history; only the laws of the road. If pedestrians were walking against a “don’t walk” sign, the car’s occupants took priority, and if the pedestrians had right of way, the car would be sacrificed. If a collision was completely unavoidable, then the fewer deaths the better.
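    For what it’s worth, the priority order this commenter describes can be sketched as a tiny decision function. This is purely a hypothetical illustration: the function and parameter names are invented here, and nothing like it comes from the survey itself.

```python
def choose(pedestrians_have_right_of_way: bool,
           deaths_if_hit_pedestrians: int,
           deaths_if_sacrifice_car: int) -> str:
    """Hypothetical sketch of the commenter's stated rules (not from the survey)."""
    # Rule 1: pedestrians crossing against the signal -> occupants take priority.
    if not pedestrians_have_right_of_way:
        return "hit_pedestrians"
    # Rules 2-3: pedestrians have right of way, so the car is sacrificed,
    # unless doing so would actually cost more lives than hitting them.
    if deaths_if_sacrifice_car <= deaths_if_hit_pedestrians:
        return "sacrifice_car"
    return "hit_pedestrians"
```

    Note the ordering choice: legality is checked first, and the body count only breaks ties once someone dies either way, which is one plausible reading of the commenter’s rules.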

  • avatar

    Nobody scores my navigator!

  • avatar

    Illinois Nazis.

  • avatar

    In the real world, a driverless car is going to attempt to stop as soon as possible. It is probably going to go in a straight line because weaving is likely to make the vehicle unstable and make things worse.

    • 0 avatar

Yup. To the point where braking erratically for a cat ends up “causing” (or, more correctly, greatly increasing the probability of) the schoolbus worth of children behind it dying prematurely. The “blame” for which the AV makers’ lobbyists will have made sure always rests firmly on the school bus driver. After all, he doesn’t have as many lobbyists.

      • 0 avatar

        With few exceptions, the driver who is rear-ended is not at fault.

        You are supposed to leave enough room to stop so that you don’t crash into what is in front of you. We were supposed to learn this prior to getting our licenses.

        • 0 avatar

The incredible frequency with which AVs get into accidents relative to driver-operated vehicles has blown enormous holes in the belief that holding following drivers at fault for all such accidents amounts to anything more than judicial expediency.

        • 0 avatar

Driver’s license requirements were shaped by an environment of relatively predictable human drivers. Robots that brake hard out of the blue, for no reason discernible to other drivers, on crowded but fast-moving freeways would require a school bus to follow half a mile behind for the driver to keep his (unseatbelted, no less) kids half safe by rigid application of that rule. Which would just entice others to fill the space in front of him, going around him and causing accidents in the process. Brake-checking schoolbuses into massive accidents probably wouldn’t be tolerated if humans did it, either….

  • avatar

    “Saving pedestrians is slightly more popular than prioritizing the lives of passengers.”

    BS. If my self driving car doesn’t prioritize my life (and the lives of my family), I’m not buying it.

    • 0 avatar

If you, as a driver, run over a pedestrian instead of attempting to swerve out of the way, even at some increased risk to yourself, you’re already jailbound. Thank goodness. I’m assuming you’re not unusually well connected, of course, as Party membership these days does come with “run over people for free” cards.

      • 0 avatar

“Some increased risk”? That wasn’t how I read the question. If it’s my death or the death of a pedestrian, the pedestrian loses. No jail involved.

  • avatar

    I, for one, believe that self-driving cars ought to be programmed to run over cats even when there isn’t another collision to avoid.

  • avatar

    In all these scenarios the survey takers have all the time in the world to think about their responses and then derive their “most moral” reaction. In reality, when shit is about to go down you don’t have time to visualize the scene and think about income brackets, weight, gender, animal type, or pedestrian counts. Most drivers in these scenarios would freeze and whatever is directly in front is going to get hit. While I imagine AI won’t have the human freak-out and might have time to react, the physics that caused the loss of control in the first place are still in play, and no self-driving magic is going to make that go away.

  • avatar

    Interesting scenarios, especially the ones about killing fat people vs fit people. Thank God they didn’t include questions about race.

  • avatar

    Now the real question: what software engineer, division manager, or CEO is going to affix their signature approving a system that can autonomously kill people? Who is going to give that authority to a $5 CPU chip?

    • 0 avatar

Technically, it gets even more nebulous: the chip is unlikely to be able to explain why it did what it did. No one explicitly programmed it to do so, because the environment it operates in, unlike that of traditional software, is far too rich to be spanned by closed-ended approaches that can be backtracked and explained after the fact. That’s the “I” part of AI.

      The human brain, in addition to working as an in-the-now decision making engine, is at least as focused on explaining, and even post-hoc rationalizing, why it did what it did. It evolved to be just as much a social creature as a control circuit for a set of limbs, after all. Furthermore, all humans are wired at least somewhat the same in that regard, even if there are cultural differences. So explanations tend to be fairly universally shared. Hence, a driver can reasonably be “tried” and “judged” by a reasonably coherent group of “peers,” after the fact, for mishaps which may have happened.

An AI of any complexity is just a black box of self-propagated and reinforced weights given to who-knows-what, for who-knows-what reasons related to the environment, and the AI’s experiences within it, in which the AI has been trained and rewarded for performance. Very high-level priorities, like the ones in this questionnaire, can be hard-coded, for sure. But those kinds of stark choices are not what the AI will face in a complex real world. Hence, large parts of the “reasoning” behind decisions that look to have increased the probability of an undesirable outcome will remain opaque to investigating humans. “The bot just looks like it decided to run you over…..” It’s like trying to figure out exactly which mosquito in Tokyo is to blame for the Florida hurricane its wing-flapping “caused.”

This is a very fundamental reason why it’s not “good enough” that AIs can be “demonstrated” to be “safer” drivers than humans in large-population historical “studies.” Accidents will always be with us; any complex environment will have them. Hence, a mechanism to at least somewhat attempt to explain them when they happen, even if not assigning blame, is an integral part of the broader traffic picture. Having things out there that may decide to do who-knows-what that kills you, for what appears to be no reason whatsoever, at any given time, just isn’t going to be acceptable to people.

So the AIs need to be much, much safer than human drivers before they are acceptable. Which is a big issue, as humans already cause so few accidents that it complicates training and testing AIs in the real world….. Which leads to the only really realistic approach: make their environment more predictable. Which is another way of saying: segregate them from open-ended, unconstrained interaction with unpredictable humans. Do that and, like trains and planes, you can make the machines perform feats of speed and efficiency that are simply not possible in “humanspace.” Don’t do it, and all you end up with are “accidents” and unresolvable conflict.

  • avatar

I can’t see the actual framing of the questions behind the paywall, but from what I read I don’t understand the numbers question. Assuming all other things are equal, why would you hit more people rather than fewer? I mean, I would, but I’m trying for the high score.
