Your (Future) Car’s Moral Compass
Picture a driverless car cruising down the street. Suddenly, three pedestrians run in front of it. The brakes fail, and the car is about to hit and kill all three. The only way out is for the car to swerve across the other lane and into a barrier. But that would kill the passenger it's carrying. What should the self-driving car do?
Would you change your answer if you knew that the three pedestrians are a male doctor, a female doctor, and their dog, while the passenger is a female athlete? Does it matter if the three pedestrians are jaywalking?
Millions of similar scenarios were generated by an experimental website my fellow researchers and I created and named “Moral Machine.”
After the website received substantial media attention, more than four million people from 233 countries and territories visited it between June 2016 and December 2017. They rated scenarios like the one described above, which were inspired by the famous philosophical conundrum known as the trolley problem. Though all of the scenarios are unlikely in real life, what we learned from visitors' appraisals of them could help inform the regulation and programming of autonomous vehicles (AVs), and it may also have implications for machine ethics more generally. The main question we wanted to answer: How does the public think autonomous vehicles should resolve moral trade-offs? And could we use their responses to build a new kind of moral compass?
But before we dive into that, it's important to understand how driverless cars make moral decisions in the real world. This might seem like a problem for the future, but cars are already making such decisions. For example, say a car is programmed to drive in the middle of a lane. Sometimes the car may "decide" to drive closer to the right or left side of the lane, a response to programming meant to optimize for various objectives. These could include maximizing passenger comfort or minimizing liability.
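To make this concrete, here is a minimal, purely illustrative sketch of how such a trade-off might be expressed in code. Nothing here reflects any real vehicle's software: the function name, the candidate offsets, the cost terms, and the weights are all hypothetical assumptions chosen only to show the idea of picking a lane position by minimizing a weighted sum of competing objectives.

```python
# Hypothetical sketch (not any real AV's code): choosing a lateral
# position in a lane by weighing competing objectives.

def choose_lane_offset(candidates, obstacle_side, weights):
    """Return the lateral offset (meters from lane center, negative = left)
    that minimizes a weighted sum of illustrative costs."""
    def cost(offset):
        # Comfort cost: passengers prefer the lane center.
        comfort = abs(offset)
        # Risk cost: grows quadratically as we drift toward the side
        # with an obstacle (obstacle_side: +1 = right, -1 = left, 0 = none).
        risk = max(0.0, 1.0 + obstacle_side * offset) ** 2
        return weights["comfort"] * comfort + weights["risk"] * risk
    return min(candidates, key=cost)

offsets = [-0.75, -0.5, -0.25, 0.0, 0.25, 0.5]
# With an obstacle on the right, the cheapest compromise shifts left:
print(choose_lane_offset(offsets, +1, {"comfort": 1.0, "risk": 1.0}))  # -0.5
```

Changing the weights changes the "decision": a heavier comfort weight keeps the car centered, while a heavier risk weight pushes it further from the obstacle. The moral questions the Moral Machine probes arise when the quantities being traded off are not comfort and clearance but human lives.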
Read more at Behavioral Scientist.
Edmond Awad is a postdoctoral associate in Iyad Rahwan's Scalable Cooperation group at MIT Media Lab.
This post is an excerpt from the Behavioral Scientist and is shared with permission.