Would you buy a car that’s programmed to potentially kill you?
No way, right? But as driverless cars begin to hit the road, how would you want them to behave in no-win situations, where somebody will die no matter what the car does? Should they be programmed to kill the fewest people possible, even if that includes you, the vehicle's owner?
It’s not a pretty question. We debate it with an artificial intelligence ethics expert, using a thought experiment called “The Trolley Problem.”
Also on the show: We stare into strangers' eyes for four minutes to see if we feel closer to them afterward. We talk to a researcher who says he has found a pesky mosquito's Achilles' heel. And we get a sense of what tinnitus sounds like as music.