Would you buy a car that might be programmed to kill you?

    Uber employees test the self-driving Ford Fusion hybrid cars in Pittsburgh. (AP Photo/Jared Wickerham)

    It seemed like a major moment in human history: a fleet of driverless Uber cars hitting the streets of Pittsburgh last week, making their way through busy intersections and rush hour traffic.

    Reporter Liz Reid of public radio station WESA was one of the chosen few to go on a ride-along, and she was excited. “They lined up 14 cars outside the Uber offices, and they had these futuristic-looking sensory devices mounted on the roof along with a bunch of cameras,” she said. Two Uber employees sat up front, ready to take over should anything go wrong. As Reid entered the car, an iPad greeted her by name. She confirmed her destination, and tapped a button that said “Let’s ride.”

    It seemed that the future had arrived.

    However, after just a few short moments on the road, Reid noticed something. “We were coming up on a car that was parallel parking, and though there was room to change lanes and go around it, our car stayed put and just waited.”

    Reid’s futuristic ride was basically driving like your overcautious, never-break-the-speed-limit Aunt Berta.

    Other reporters had similar experiences: driverless cars waiting endlessly at four-way intersections while other cars rolled past stop signs.

    Of course, as Reid was quick to acknowledge, safety is paramount with this new technology. Most deadly car crashes are caused by human error, so if these vehicles can do better than we do, they have the potential to save lives.

    Safe, but for whom?

    Federal regulators just released a 15-point safety checklist for driverless cars. For example, the cars have to be able to switch safely from autopilot to human driver. They have to be able to avoid sudden problems such as a falling tree. They have to obey the speed limits across state lines.

    And there is a line item on ethical questions, asking carmakers to be transparent about how these vehicles will respond in a no-win situation in which somebody is going to die. But who?

    Philosophers and ethicists call this “The Trolley Problem.” It’s a thought experiment from the 1960s that has many different versions.

    Basically, it goes like this: A runaway trolley is speeding along, and its brakes are not working. It’s unstoppable. The trolley is about to kill five people who are stuck on the track. But there is another track the trolley could be diverted onto, where it would kill one worker. You control a lever that you can pull to move the trolley to that second track. Would you pull the lever, killing one worker to save five lives, or do nothing and let five die?

    ‘The Trolley Problem’ 2.0

    MIT professor Iyad Rahwan and his team have created “trolley problem” scenarios for self-driving cars on his website, Moral Machine, to get people thinking about these issues. Rahwan says, broadly speaking, two schools of philosophical thought guide the response. One is utilitarianism. “Which means, I should take the action that should produce the best outcome. In this case, the best outcome is to kill fewer people, so I should pull the lever.”

    On the other side is duty-based, or deontological, ethics. “We should not do something that is fundamentally wrong. So the action itself of killing a person through performing an act is intrinsically wrong, regardless of the outcome, and therefore those people will just receive their fate, and a human being is wrong to take an action that will alter that.”
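
    To make the contrast concrete, here is a minimal, purely illustrative sketch of how those two schools of thought could be expressed as decision rules for a no-win crash scenario. Everything in it — the option names, the death counts, the `requires_active_swerve` flag — is a hypothetical assumption made up for illustration; it does not describe how Uber, Rahwan, or any real self-driving system actually makes decisions.

```python
# Purely illustrative sketch of the two ethical frameworks described above.
# The scenario, names, and numbers are hypothetical; no real self-driving
# system is known to be programmed this way.

from dataclasses import dataclass


@dataclass
class Option:
    """One possible action the car could take in a no-win situation."""
    name: str
    deaths: int                    # expected number of people killed
    requires_active_swerve: bool   # must the car act affirmatively to cause these deaths?


def utilitarian_choice(options):
    """Pick the action with the best outcome: the fewest expected deaths."""
    return min(options, key=lambda o: o.deaths)


def deontological_choice(options):
    """Refuse to actively cause harm: prefer options that do not require an
    affirmative act of killing, even if the passive outcome is worse."""
    passive = [o for o in options if not o.requires_active_swerve]
    return passive[0] if passive else min(options, key=lambda o: o.deaths)


if __name__ == "__main__":
    scenario = [
        Option("stay on course", deaths=5, requires_active_swerve=False),
        Option("swerve into barrier", deaths=1, requires_active_swerve=True),
    ]
    print("Utilitarian car chooses:  ", utilitarian_choice(scenario).name)
    print("Deontological car chooses:", deontological_choice(scenario).name)
```

    Run as written, the utilitarian rule swerves into the barrier (one death instead of five), while the duty-based rule refuses the affirmative act and stays on course.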

    Rahwan says humans have to discuss these questions and come to a consensus before putting driverless vehicles on our streets in big numbers.

    “It is possible for a scenario to happen in which a driverless car may run over people because it fails to stop, and it’s also conceivable that the car may take a different course of action in that situation.”

    Of course, human drivers don’t carefully consider ethics in life-and-death traffic situations. They freeze. They freak out. Their self-preservation instincts kick in.

    “Now the difference here is that, for the first time in the history of automotives, we are able to make that decision prior to that moment, so you have a moral imperative to make the right choice,” Rahwan said.

    Forced to make terrible choices

    When you go to the website Moral Machine, you are presented with different side-by-side scenarios, drawn as simple images reminiscent of a DMV driver’s manual. A car is about to get into an accident, and you have to choose: Should it be programmed to run into a barrier and kill the people in the car (which would include you), or stay on course and kill several pedestrians? Every situation is slightly different. The number of passengers in the car varies. The pedestrians change in age and gender. Sometimes they are doctors; other times, bank robbers. The decision-making gets increasingly hairy. At the end, you see your choices compared with those of other users.
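
    As a rough illustration of what one of those survey items looks like as data, here is a small hypothetical sketch: a scenario with two outcomes, a recorded answer, and a comparison against an invented aggregate. The field names and numbers are assumptions made up for this example; they are not taken from the real Moral Machine site or its data.

```python
# Illustrative toy model of a Moral Machine-style survey item.
# All labels and tallies below are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Outcome:
    """Who dies if the car takes this action."""
    description: str
    victims: list   # e.g. ["passenger (you)"] -- hypothetical labels


@dataclass
class Scenario:
    swerve: Outcome   # car hits the barrier, killing its passengers
    stay: Outcome     # car continues straight, killing the pedestrians


def record_choice(chose_swerve: bool, tally: dict) -> None:
    """Tally one respondent's answer so it can be compared with others."""
    key = "swerve" if chose_swerve else "stay"
    tally[key] = tally.get(key, 0) + 1


if __name__ == "__main__":
    scenario = Scenario(
        swerve=Outcome("hit barrier", ["passenger (you)"]),
        stay=Outcome("continue straight", ["two pedestrians"]),
    )
    tally = {"swerve": 41, "stay": 9}   # invented aggregate of earlier respondents
    record_choice(chose_swerve=True, tally=tally)
    total = sum(tally.values())
    print(f"You chose to swerve; {tally['swerve'] / total:.0%} of respondents agreed.")
```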

    “We wanted people to confront the difficulty of making those types of moral decisions, and to appreciate the complexity. If society doesn’t want these cars to discriminate based on gender and age and so on, then society needs a way to articulate these expectations, and we need to make sure they are implemented,” said Rahwan.

    The car in theory, versus the car I might buy

    Of course this brings up a very tricky problem. Who wants to own a machine that might be programmed to kill them, in certain, very unlikely, situations?

    “People by and large wanted cars to be utilitarian, so they wanted the cars to minimize total harm regardless of whether it was sacrificing pedestrians that the car would swerve towards, or whether the car would sacrifice the passenger in the car,” said Rahwan.

    But here comes the catch. “When we asked people whether they would purchase such cars, they all said ‘absolutely not.'”

    Rahwan says society has to come to an agreement on how these cars should function.

    “If everybody generally wants cars that reduce total harm, I need a way to know that if I buy into that, and I purchase a car that may sacrifice me, I have to be sure that everybody is doing that.”

    Rahwan says this won’t be possible without regulations.

    He likened this situation to the social contract described by English philosopher Thomas Hobbes in the 17th century. In simple terms, Hobbes describes government as an agreement between the people and their ruler, whereby they give up certain rights in order to live more protected lives. The government agrees to enforce the rule of law, and the people need checks and balances on this power.

    “People are now giving powers to algorithms,” said Rahwan. “And this has consequences for everybody, and we need to make sure we have checks and balances on these powers. We don’t really know how to do this with machines yet.”
