Transcript: A.I. in the Driver’s Seat (A.I. Nation)

MALCOLM BURNLEY, HOST: [faintly] Hey! Good! Ani? 

ANI MAJUMDAR: Yeah! 

MB: Yeah, what’s in here? What all is in here? [fades under narration]

MB: Ani Majumdar is my tour guide through the Gas Dynamics Lab at Princeton University. He’s a robotics researcher who’s using cutting edge artificial intelligence to train unmanned aerial crafts — or drones.

AM: We work on controlling really agile robotic systems, so that’s the demo we’re going to do here for you today. Specifically trying to make guarantees, mathematical guarantees, on safety and performance of these complicated robotics systems.

MB: The building is old and clanky-looking, not necessarily what you’d imagine for a sophisticated drone laboratory. There are some computer monitors doing things I’ve never seen before — flashing charts and diagrams that, to me, looked like hieroglyphics.

But the lab is also full of stuff that looked like it’d been plucked from a regular old supplies closet: box fans, tennis balls, safety goggles. A whole bunch of cardboard cylinders.

AM: For me that’s one of the cool things about robotics, the space for both of those things, elegance, mathematical elegance, complexity in terms of algorithms and programs, things like that, but there’s also space for hacking if you’d like. Doing things with your hands.

MB: Ani took me into a long, nondescript room with high ceilings and fluorescent lights. The blinds are closed, which I’m told is because drones crash more often in the sun. It’s also why I’m wearing goggles.

AM: For our safety, yep! [laughs] 

MB: In the middle of the lab is an obstacle course with netting all around it. Sushant Veer, a postdoc who works with Ani at Princeton, is rearranging a bunch of columns inside, which are about 6 feet tall and made of cardboard. 

AM: He is just setting the columns up in a new configuration. 

MB:  Then he pushes a button on the computer. 

[buzzing drone sound] 

MB: The drone takes off like a helicopter, floating straight up in the air, then zooming forward at a high speed. It shoots through a narrow gap in between the columns and glides down to the ground at the other end.

[drone buzzing stops] 

MB: Mission accomplished. 

AM: I think a lot of people when they think about drones in the future, they have this scary image of thousands of these things flying over our heads, crashing into each other, falling out of the sky. So our research aims at making sure that that kind of thing doesn’t happen.

MB: Ani and Sushant design and hone algorithms that allow these drones to move through the air on their own. Essentially, they teach the drone various flight paths. And then, confronted with an unfamiliar set of obstacles, the drone can process whatever visual information is in front of it and choose a path of motion that avoids a collision.

Thanks to neural networks, that process happens over the course of a split-second. 
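
To make that concrete, here is a minimal Python sketch of one way primitive-based planning can work: a small library of pre-learned flight paths gets scored against the obstacles the drone perceives, and the safest one is flown. Everything in it is an illustrative assumption rather than the Princeton lab’s actual system, and a hand-coded obstacle list stands in for the neural network’s split-second perception step.

```python
import numpy as np

# A minimal sketch of primitive-based planning. The primitive library, geometry,
# and scoring rule are illustrative assumptions, not the Princeton lab's code.

# Hypothetical library of pre-learned motion primitives: each one is a sequence
# of (x, y) waypoints the drone already knows how to fly.
PRIMITIVES = {
    "straight": np.column_stack([np.linspace(0, 5, 20), np.zeros(20)]),
    "veer_left": np.column_stack([np.linspace(0, 5, 20), np.linspace(0, 1.5, 20)]),
    "veer_right": np.column_stack([np.linspace(0, 5, 20), np.linspace(0, -1.5, 20)]),
}

def clearance(path, obstacles):
    """Smallest distance from any waypoint on the path to any obstacle."""
    dists = np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=-1)
    return dists.min()

def choose_primitive(obstacles, safety_margin=0.5):
    """Pick the pre-learned flight path that stays farthest from the obstacles.

    In the real system, a neural network maps camera input to a choice like
    this in a split second; here, known obstacle positions stand in for the
    perception step.
    """
    scores = {name: clearance(path, obstacles) for name, path in PRIMITIVES.items()}
    best = max(scores, key=scores.get)
    if scores[best] < safety_margin:
        return None, scores  # no safe option: hover or abort rather than fly on
    return best, scores

if __name__ == "__main__":
    # Two cardboard columns roughly in front of the drone, positions in meters.
    columns = np.array([[2.5, 0.2], [3.5, -0.4]])
    choice, scores = choose_primitive(columns)
    print(choice, scores)
```

A real system would also need the kind of formal safety analysis Ani describes, which this sketch does not attempt.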

SUSHANT VEER: So it’s like around a 93, 94 percent success rate. So you don’t get to see collisions very often. 

MB: 93 or 94 percent accurate? I was pretty blown away. 

[drone buzzing]

MB: I showed up expecting to see at least some drone crashes, ready to record some crazy sound to play you. But the drone… never crashed. And the reason is that the AI they’re using is fast improving.  

And once they get good enough to operate in the real world, you can imagine some of the applications for this particular AI.

AM: Having a drone that goes and maybe inspects the structure of a bridge or some other large scale infrastructure, looking for mechanical deficiencies or things like that. And, of course, ideally potentially even doing some repairs on its own, so switching a light bulb is a really nice application if you have a lightbulb way up there on a bridge or on some building that you want to change. 

[music]

MB: So, in the near future, the answer to the age-old question of how many people it takes to screw in a lightbulb? Well the answer’s going to be … none. 

AM: Yep, exactly. Zero people. [laughs] 

[music continues] 

MB: From WHYY in Philadelphia and Princeton University, this is A.I. Nation, a podcast exploring how humans are handing over more and more control to machines, and what that means for us. I’m Malcolm Burnley. 

ED FELTEN, HOST: I’m Ed Felten. I’m a computer scientist at Princeton University, and I’ve studied AI for a long time. 

MB: He also used to work in the White House. 

EF: I also used to work in the White House. 

MB: As AI gets better, it’s becoming a bigger part of our lives. And while I’m not too worried about a lightbulb-changing robot, what about other uses for this kind of automation?

Engineers are actively working to put AI in our cars — and some people have put it in weapons. They’re doing that even as we continue to grapple with that question of how to make sure AI is helping us, not hurting us.

EF: People aren’t great drivers. We get tired. We get angry. We pay attention to the radio. We fiddle with our phones. We do all kinds of things we shouldn’t be doing. Machines could do much better. They don’t get tired. They pay attention all the time. And they can learn from experience in a way that we can’t. So the hope is that we can make driving and riding and being a pedestrian so much safer than it is now, if we can just get machines up that learning curve. 

[music] 

ANDREW HAWKINS: All self-driving cars that are on the road today are strictly in sort of development mode and it’s a pretty slim chance that you’ll get to ride in one until they start sort of opening them up on a wider scale. 

MB: Andrew Hawkins is a senior transportation reporter for The Verge. Driverless cars have always been a part of his beat. 

AH: You know I’ve been in a Waymo car I want to say three times now. I’ve ridden with Uber twice. I’ve ridden with a couple of smaller companies a couple times. So yeah I want to say six or seven times, total. 

MB: Each of those times, the driverless car did… have somebody in the driver’s seat: a safety driver, making sure everything was going smoothly. Except for a ride he took with Waymo just before the start of the pandemic.  

AH IN THE VERGE VIDEO: It’s asking me to start the ride so let’s do that. 

AUTOMATED CAR VOICE IN THE VERGE VIDEO: Heading to Baby Kay’s Cajun Kitchen. Please make sure your seatbelt is fastened…  

MB: Waymo is an offshoot of Google. They’ve been tinkering with self-driving cars for more than a decade. They have a small fleet on the road in Phoenix that they’re testing as a ride-hailing service with about a thousand people. 

When you picture a self-driving car, you might imagine something pretty futuristic. 

AH: We’ve seen a lot of concepts and renderings of what autonomous vehicles are supposed to look like. This stretches back to like the 1950s. You open up the door to a car and inside there’s no steering wheel. Maybe there’s like a living room setting or like, more of like a lounge-type setting. And that’s meant to really sort of challenge you sort of like what the futuristic determination of what an autonomous vehicle is supposed to be. This is not that. This is a regular Chrysler Pacifica minivan that’s just been retrofitted, basically, to drive itself. 

MB: That said, these cars still do look a little different than everything else on the road. On top, they have a sensor that looks a bit like a siren. It’s called a LIDAR sensor.  

AH: That’s a laser sensor that sends out thousands of laser points in a 360-degree perimeter around the vehicle and then sends that information back into the car’s computer.

MB: The car takes in all of that information and runs it through a computer in the trunk. 

AH: It runs through, you know, sort of all of the deep learning and machine learning AI code that is uploaded onto those computers, that then can say, okay, this is a person pushing a stroller, this is a bicyclist. This is another vehicle on the road. This is construction. And it uses that sort of instantaneously by going through a vast library of images and other videos that it has that it uses to make sort of those determinations. 

MB: The car is also constantly trying to predict what any objects it identifies might do. It’s always asking itself: Will that pedestrian step into the street? Will that car move into my lane? 
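
As a rough illustration of that classify-then-predict loop, here is a short Python sketch. The object labels, the constant-velocity forecast, and the two-second horizon are simplifying assumptions made up for this example; Waymo’s actual models are far more sophisticated and are not public.

```python
from dataclasses import dataclass

# A toy version of the classify-then-predict loop. The labels, the
# constant-velocity forecast, and the thresholds are illustrative assumptions.

@dataclass
class TrackedObject:
    label: str       # e.g. "pedestrian", "cyclist", "vehicle"
    position: tuple  # (x, y) in meters, relative to the car
    velocity: tuple  # (vx, vy) in meters per second

def predict_position(obj: TrackedObject, seconds_ahead: float) -> tuple:
    """Naive constant-velocity forecast: where will this object be soon?"""
    x, y = obj.position
    vx, vy = obj.velocity
    return (x + vx * seconds_ahead, y + vy * seconds_ahead)

def might_enter_lane(obj: TrackedObject, lane_half_width: float = 1.8) -> bool:
    """Flag objects whose predicted position crosses into our lane within 2 s."""
    _, future_y = predict_position(obj, seconds_ahead=2.0)
    return abs(future_y) < lane_half_width

if __name__ == "__main__":
    scene = [
        TrackedObject("pedestrian", position=(12.0, 4.0), velocity=(0.0, -1.5)),
        TrackedObject("vehicle", position=(20.0, 3.5), velocity=(8.0, 0.0)),
    ]
    for obj in scene:
        print(obj.label, "may enter our lane:", might_enter_lane(obj))
```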

Andrew wasn’t nervous to get into Waymo’s driverless car, but it still took some getting used to. 

AH: It was a little bit weird, I will say, when the car pulls up and there’s literally nobody inside of it. 

AH IN THE VERGE VIDEO: Alright, so we’re pulling out of this parking lot right now. And onto a public road. Oh there goes another Waymo car! [fades under narration]

MB: The car makes a right hand turn and changes lanes smoothly — all with the steering wheel moving kind of mysteriously by itself. 

AH IN THE VERGE VIDEO: I like the notice on the steering wheel too says, “Do not touch steering wheel or pedals. The vehicle will pull over,” as sort of a warning to anybody that might try to mess with the driverless car. 

MB: And then… the car keeps doing exactly what it’s supposed to: going the speed limit, coming to a complete stop at stop signs. Waymo has made its cars very conservative drivers. 

AH: And eventually you’re just sort of bored almost, like you’re in a normal Uber ride or something! Like you’re expecting something beyond that initial surrealness of getting in the vehicle with nobody in the front seat. 

MB: The car did make one mistake. 

AH IN THE VERGE VIDEO:  Interesting. So it looks like we’re making a little bit of a routing correction. We’re going back the way we came …  

MB: Though even calling it a mistake might be going too far. Later, the engineers at Waymo said the car decided it couldn’t get over into the lane fast enough, so it took a detour. Which is something you or I might do on the road, too.

As the car was pulling into the parking lot to drop Andrew off, it came to an abrupt stop. 

AH: The seatbelt caught me. And I looked out the front window, windshield and there was nothing there. And then all of a sudden, like five pigeons took flight. 

AH IN THE VERGE VIDEO: Oh, it stopped for the pigeons! You love to see it! You absolutely love to see them stopping for the pigeons. 

AH: It just like kind of warmed my heart a little bit. But at the same time it was like, oh, you know, you can very easily see a, you know, callous human driver just barreling through the flock of pigeons without any second thought. But, you know, this benevolent AI decided to spare the life of the lowly pigeon. 

MB: There might have been a lot of reasons the car stopped for the pigeons, including the fact that it didn’t know they were pigeons and was just being careful of something unknown. Andrew knew that. 

But it’s still sort of nice to think about: a more gentle AI that brakes for pigeons, that doesn’t cut anybody off on the road, that doesn’t text and drive.

And Ed says that hopeful vision of driverless cars has shaped how our country handles them now, in their developmental stage. 

EF: Regulation of self-driving cars has been pretty light. And let me tell a personal story here. When I worked in government, I actually worked on issues around regulation of self driving cars. And we would meet with the head of the government agency, the National Highway Traffic Safety Administration, or NHTSA, that’s responsible for safety regulation of cars. And the first time you met with this guy, he would always say, “What I’m trying to get out of this meeting and every meeting is very simple: Every year, 35,000 Americans die on the highways, and I want to make that number smaller. So talk to me about how I can make that number smaller.” Right? And so to the regulators, what they see is, and I think correctly, is a future in which cars are much safer than they are now because future machine drivers are just much safer than people. 

MB: To develop driverless cars, you have to eventually test them on everyday roads. That can be risky — What if the car makes a mistake? But you also have to weigh that risk with that huge number: 35,000 deaths each year. 

The hope is that light regulations let engineers work faster — to more quickly get to that place where we have fewer accidents. 

Still, the development of this technology is going a lot slower than we anticipated. The way tech companies were describing their progress, a lot of us kind of expected driverless cars were right around the corner. 

AH: There was an expectation, especially back in like 2015, 2016 that this technology was a lot closer. Just look at Elon Musk and Tesla. He said that there was going to be a million robo-taxis on the road by the end of 2020. Didn’t happen! It is nowhere close to happening. 

MB: He chalks this up partially to something called the Gartner Hype Cycle — the theory that for almost all technology, you have a peak of inflated expectations. You can imagine this on a line graph, a line going up and up. 

AH: That’s the peak of inflated expectations, and it then drops down into what’s called the trough of disillusionment. And then you have sort of the slope of enlightenment and then it plateaus into the plateau of productivity.

[quiet music]

MB: In 2015, 2016, 2017, we were in that peak of inflated expectations. But then, in 2018… 

CBS EVENING NEWS CLIP: Uber today put the brakes on all road testing of its self driving cars after a deadly accident. 

MB: …Andrew says we hit our trough of disillusionment.

CBS EVENING NEWS CLIP: A pedestrian in Tempe, Arizona was killed last night by a self-driving Uber taxi. It is believed to be the first fatality caused by an autonomous vehicle. 

MB: There was a safety driver in the car at the time, but she was watching an episode of The Voice instead of watching the road. Prosecutors charged her with criminal negligence. 

The pedestrian was walking her bike across the street, not at a crosswalk. Uber’s system wasn’t set up to expect people jaywalking, and it had trouble figuring out what was going on. It ID’ed her as a bicycle less than two seconds before the crash, when it was already too late to brake. Uber apologized to her family and reached a quick legal settlement with them.

Overall, everybody’s expectations for driverless cars plummeted. The hope is that we’re now entering that plateau of productivity, where we make more steady, measured progress. 

Driverless cars seem pretty simple when you make them in a lab. 

[music]

AH: Anyone can basically, you know, slap some cameras and radar and a LIDAR sensor on to a vehicle, upload some machine learning code and, you know, sort of jerry-rig the electronics in the vehicle and get it to drive in a basic loop-style route. That’s not very challenging. Maybe on a closed course or, you know, on a college campus perhaps.

MB: But it gets way harder when you put that car on a real road and ask it to drive among humans, like humans. 

Like Ed said before, AI is kind of like an alien intelligence. The way it handles information is hugely different from the way we do. There are some ways in which it’s superior to us, and other times it utterly fails at things we’d think of as common sense. 

For example, when you’re driving a car, you’re looking around for possible problems you could encounter: somebody running into the street, construction, things like that. 

EF: AI systems, they do that, but they also tend to operate when they’re driving by having an incredibly detailed map of these surroundings that includes every single parking meter, every single tree, every single bush, every single mailbox, in the whole city. You and I could never memorize the whole city or area that we live in. We have to scan and sort of figure that out as we go. But because a computer can have a huge amount of data, a self-driving car can have an incredibly detailed map that includes all of this stuff, and rely on that map in a way that we would not. 

And what that means is if there’s a new parking meter that gets put in, the self-driving car might freak out and say, “Oh no, unknown object! I need to slam on the brakes!” Whereas you and I would look at it and say, “Oh, it’s a parking meter.” We might not even notice that it was new. 
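
Here is a toy Python sketch of the behavior Ed describes: the car matches each detection against its pre-built map, and anything that fails to match is treated as an unknown obstacle. The map contents, the matching tolerance, and the responses are all invented for illustration and are not any company’s actual logic.

```python
import math

# A toy illustration of the map-reliance problem: the car compares each
# detection against its pre-built map, and anything it cannot match is
# treated as unknown. The map contents and threshold are invented.

PRIOR_MAP = [
    {"kind": "parking_meter", "x": 10.2, "y": 3.1},
    {"kind": "mailbox", "x": 14.7, "y": 3.0},
]

def match_to_map(detection, prior_map, tolerance_m=0.5):
    """Return a mapped object within tolerance of the detection, if there is one."""
    for item in prior_map:
        if math.hypot(detection["x"] - item["x"], detection["y"] - item["y"]) <= tolerance_m:
            return item
    return None

def react(detection, prior_map):
    known = match_to_map(detection, prior_map)
    if known is None:
        # A newly installed parking meter ends up here: it isn't in the map,
        # so the planner treats it as an unknown obstacle and may brake hard.
        return "unknown object: slow down"
    return f"known {known['kind']}: proceed"

if __name__ == "__main__":
    new_meter = {"x": 12.0, "y": 3.1}   # installed after the map was built
    print(react(new_meter, PRIOR_MAP))  # -> unknown object: slow down
```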

MB: The fundamental challenge in autonomous vehicles right now is to help them think more like us, to correctly respond to an unexpected pedestrian and not a parking meter. Maybe even to brake for a pigeon.

And the way they’re doing that is by running lots of tests, letting the AI learn through trial and error, and giving the engineers more information on where they need to make tweaks.

But there’s an extent to which we don’t know what this looks like until we’re there. And Andrew says we could be in for a rough transition period. There will be a time when the road has both autonomous vehicles and human drivers, which might be confusing. 

[music]

AH: How are you going to react when you get to an intersection? Traditionally you’ll maybe wave somebody through if you want someone to go ahead of you, or you’ll make eye contact with another driver. When there’s nobody in that vehicle, you’re not going to be able to have that same interaction. So it’s going to lead to a lot of confusion and confusion can often breed accidents and crashes. And that leads us down a whole sort of dark path that a lot of the AV companies don’t really like to talk about in their marketing materials because they like to talk about eliminating crashes and reducing fatalities. 

MB: But when a driverless car does crash, understanding AI’s mistakes can be really difficult. 

EF: Issue number one is because the system may be relying on different kinds of signals or information than we do, it might be really hard for us to kind of quote unquote empathize with it or understand what the world looked like to it, so that we can think about its decision in a way we would think about a person’s decision. 

If there was a human driver who was involved in an accident, the police would come or someone would come later and say, “What happened? What did you see? What were you thinking?” But with the machine you can’t necessarily even ask it the question. And so it requires an investigation, kind of like an investigation that happens after a plane crash. 

If an airplane crashes, the FAA goes and they look inside the black box that records all kinds of information about what happened, and they listen to the chatter among the pilots, and they try to figure out what must have happened. And that’s kind of what happens with a self-driving car accident.

MB: But, the thing is, the investigators can’t listen to pilot chatter. Implementing deep learning means asking AI to think for itself. It means we get a conclusion, but we can’t always understand how the computer got there. 

EF: The algorithms that they used are derived based on enormous bodies of data, and very complex analyses of those large volumes of data. And it’s just really hard for people to understand, to wrap their head around, what led the system to being set up in such a way that it behaved the way it did.  

MB: That is a supremely unsatisfying response to somebody who just lost a family member in a car accident. We want to be able to wind events back to a choice a person made, to hold them responsible. But who is that person? 

Andrew has a friend, Alex Roy, who works for the self-driving car company Argo AI. He has an interesting take on this that Andrew told us about.

[music]

AH: Alex likes to say that it’s not a self-driving car if you as a passenger, or even someone who’s sitting in the driver’s seat, have any liability for any of the mistakes that are made. Only if the company that makes the vehicle accepts 100 percent of that liability can you be assured that what you’re in is really a self-driving car.

MB: The nature of a self-driving car is that a person cannot be responsible for an accident. That pretty much puts autonomous car companies in the hot seat.

AH: And I think we’re going to start to see which companies take responsibility for the mistakes of their vehicles and which don’t. And that’s going to be sort of the dividing line. 

MB: In order for driverless cars to become a normal part of life, people have to buy into them. They’ll need proof it’s safe to get inside. 

AH: You know, am I going to drive my own car to work today or should I summon the robot car to come take me? Well, I saw a study on the news the other day that shows that, you know, all the robot cars that are on the road today are operating much more safely. And there’s less, there’s less crashes and less fatalities. So maybe I’ll choose that robot car. That’s going to be sort of the individual choices that go into what will be sort of the impetus to these vehicles potentially taking over human driving on the road.

MB: We’ll have to be comfortable with a machine making choices that could possibly kill somebody. We’ll have to know that a machine’s choices will be better than a person’s choices — and maybe be able to offer some kind of empathy for its mistakes. We’ll have to be comfortable handing over control to AI.

How long until we get to that day? Andrew was hesitant to make a prediction.  

AH: Eventually and someday is what I’ll say for autonomous vehicles on the road.

MB: [laughs] Let’s hope we’re not talking about it like flying cars. 

AH: Oh man, don’t get me started.

[both laugh]  

[music]

EF: I think in the coming years you’re going to see them easing in. I think you’ll have something more like a taxi or a car sharing kind of system. And the reason is, if you think about it, the way we use cars right now is super inefficient. My family has a car and it sits idle literally 99 percent of the time. And if the car can drive itself around, it can be shared between a lot of people. And the most efficient way to use that is with some kind of taxi service. And so we’d use transportation as a service instead of as a thing we own. That seems like where things are headed. 

MB: When we come back: You think it’s hard to teach a car to drive itself? Well how about a weapon that decides to kill? I’m Malcolm Burnley. 

EF: And I’m Ed Felten.

MB: This is A.I. Nation. 

[music fades]

[music]

MB: Welcome back to A.I. Nation. I’m Malcolm Burnley. 

EF: And I’m Ed Felten. 

MB: In 2017, Laura Nolan got pulled into a conference room with some of her senior colleagues at Google. Laura’s a software engineer. She works on systems like cloud storage. And her colleague told her she needed to make some really major changes to one of her programs. 

LAURA NOLAN: And I kinda said, why? This is going to be big. It’s going to be expensive. It’s going to take a while. And he said, well, it’s this Maven. It’s this drone thing. 

MB: Laura’s colleague told her expensive wasn’t a problem. Because this was for Project Maven, a contract with the U.S. Department of Defense. 

The DoD had run into a problem. They had drones positioned in the Middle East, capturing video. But going through all that video, and actually turning it into usable information, was tedious work. 

LN: They had people sitting at desks, looking at wide-area motion imagery from drones, and then when they would see a person, when they would see a vehicle, when they would see other things of interest, they would take note of the time and the place.

They got to a point where they literally couldn’t hire enough people to analyze all the video that they wanted because they had so much surveillance. So, enter machine learning. Technology to the rescue.

MB: The DoD wanted a computer — called “Maven” — to be able to do a lot of this cataloguing work and help them identify targets. 
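
In principle, automating that cataloguing task looks something like the sketch below: run a detector over timestamped video frames and log each detection, rather than having an analyst take notes by hand. The detector here is an empty stub and the CSV format is an assumption; nothing about Maven’s actual models or pipeline is public.

```python
import csv

# A generic sketch of automating the cataloguing work: run a detector over
# timestamped video frames and log what it finds, instead of taking notes by
# hand. The detector is an empty stub; Maven's actual models are not public.

def detect_objects(frame):
    """Stand-in for a trained detector. A real model would return labels and
    confidences for people, vehicles, and so on; this stub finds nothing."""
    return []  # e.g. [{"label": "vehicle", "confidence": 0.91}]

def catalogue(frames, out_path="detections.csv"):
    """One CSV row per detection, with a timestamp: the analyst's notebook,
    filled in automatically."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_s", "label", "confidence"])
        for timestamp, frame in frames:
            for det in detect_objects(frame):
                writer.writerow([timestamp, det["label"], det["confidence"]])

if __name__ == "__main__":
    # Frames would normally be decoded from drone video; placeholders here.
    catalogue([(0.0, None), (1.0, None)])
```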

They hired Google to make it, which meant Laura needed to help reconfigure some cloud storage to make it work. 

LN: I thought it was completely crazy because I think at that time Google had fostered an image of, you know, being about organizing information and being not evil. 

MB: (At the time, there was literally a line in Google’s code of conduct that said “Don’t be evil.”) 

LN: I’m certainly not saying that all military activity is evil. For sure, there are times when force is required. But Google had very much cultivated this image of not being a military kind of organization. And then suddenly it was doing this. And it was just very strange. I honestly thought that there must have been some mistake. But, you know, I absolutely had concerns from the start.

MB: And these were big, ethical concerns. Laura worried that Project Maven would lead to greater surveillance, which could have a big impact on civilians as well as people the military was interested in. 

LN: I don’t think that people’s privacy goes away just because they happen to live in a country that may harbor some terrorists. I think we have to think about the proportionality of removing the privacy of many, many people for many years.

MB: But she was also worried about where technology like this could lead. 

[music]

LN: Drone surveillance is the first step in drone strikes. It really was this military project that was feeding this kill chain and would lead to people being killed. 

MB: She says she didn’t want to be a part of a kill chain. And she didn’t want to help develop a technology that could be the first step in a dangerous autonomous weapon — one that could identify targets, and then kill them, all on its own.

Laura raised objections with her supervisors. Not long after, word spread within the company. She and over 3,000 Google employees signed a petition saying they didn’t want Google to be, quote, in the business of war. 

In response, Google executives said they would put out a set of AI ethical principles that would fix the problem. But, when those principles came out, Laura thought they fell pretty short. 

LN: They do say that Google won’t build weapons, but Maven is not a weapons system. Maven is a surveillance system that would feed into targeting systems. And it’s the surveillance and the analysis that’s the hard part, not the weapon itself, right?  

MB: That was the final straw for Laura. She was one of 12 Google engineers who quit over Project Maven.

LN: Yeah my last day, it was in early summer in 2018. It was weirdly anticlimactic, right? You sort of end your day and you walk downstairs and you hand back your badge and your laptop. And I shed some tears as I walked out of the building. It was a big part of my life and a big part of my professional development. And I was sad to leave, but I kind of felt like I couldn’t stay, you know? 

MB: Google finished out its Project Maven contract, but, after all the blowback, they didn’t renew with the DoD to keep working on it. 

Laura started to work with the Campaign to Stop Killer Robots, an advocacy group that’s pushing for a ban against all autonomous weapons, worldwide. 

A different group, the Future of Life Institute, which is also against these types of weapons, made this video I want to tell you about.

It’s called Slaughterbots. They’ve used it in presentations they’ve given, including one they gave to the United Nations. I found it on YouTube, with the caption, “If this isn’t what you want, please take action.” 

It starts with a CEO — a handsome, tall, middle-aged guy — giving a presentation on a big stage. A tiny drone zooms out from the wings over towards him…  

[zoom sounds] 

SLAUGHTERBOTS CEO: Your kids probably have one of these right? Not quite. 

MB: … and lands in his palm. 

[zooming stops, audience laughs] 

SLAUGHTERBOTS CEO: Just like any mobile device these days, it has cameras and sensors. And just like your phones and social media apps, it does facial recognition. Inside here is three grams of shaped explosive. This is how it works. 

MB: The tiny drone speeds over towards a mannequin on stage left, scans its face, and blows a hole in its forehead. 

[higher pitched zoom, loud pop, louder applause]

SLAUGHTERBOTS CEO: Did you see that? A 25 million dollar order now buys this… 

[humming sound of many drones]

SLAUGHTERBOTS CEO: … enough to kill half a city. The bad half. Trained as a team, they can penetrate buildings, cars, trains, evade people, bullets, pretty much any countermeasure. They cannot be stopped.

[applause]

MB: And… things devolve from there. 

SLAUGHTERBOTS NEWS ANCHOR: The nation is still recovering from yesterday’s incident, which officials are describing as some kind of automated attack, which killed 11 US senators at the Capitol building. 

SLAUGHTERBOTS WITNESS: They flew in from everywhere but attacked just one side of the aisle! It was chaos! People were screaming!

MB: This is obviously speculative fiction, a story put together by an organization trying to make a point. I sent Ed the video, and I was expecting him to kind of laugh it off as too extreme. But that is not what Ed said.  

[music]

EF: Well, it seems, unfortunately, that it probably will be possible in the future, if not now. That is, that kind of technical capability is something that we should be worried about.

MB: In fact, there are a few places around the world where autonomous weapons already exist. 

Laura Nolan told me about a weapon Israel has called the IAI Harpy, a so-called “loitering munition.” It flies around a specific area and looks for radar signals it doesn’t recognize. And when it finds something… 

LN: It does a kamikaze strike on it. So it will actually fly into the target. 

MB: There’s also a Turkish military company that makes a weaponized drone called the KARGU. The KARGU is way bigger than Slaughterbots, but it does also have facial recognition capabilities. 

LN: So that sort of immediately suggests that there is an intention to use these things as kind of people-hunter drones. 

MB: Terrifying. Ed, I’m curious — what kind of conversations was the Obama administration having about autonomous weapons? 

EF: When I arrived on the White House staff, there was already a pretty well established policy that the Department of Defense had about issues around autonomous weapons.

This was an issue that was understood as being a really important one, to make sure that decisions about the use of force were always made by people, and that we weren’t going to delegate the decision to shoot at a particular person or particular thing to a machine. So that’s a pretty straightforward line to draw in principle. But what it actually means in practice can be pretty complicated. 

So the discussions that I was involved in, in the government, were broadly around the idea that AI and machine learning made possible types of weapons that were much more autonomous, much more dangerous in some ways. And the question was, What should the United States government be doing about that? And one part of that is What should we be willing to do ourselves? And the other part of it is how to deal with the risk that terrorists or other adversaries might have very advanced autonomous weapons. 

MB: Mhm. How can we defend ourselves against that? 

EF: That’s right. It’s in some sense, the easy part of the problem, what should we do? Because we can draw a clear moral line and then work hard to stick to it. But if there are terrible things that adversaries are willing to do, or terrorists are willing to do, that we are not willing to do, we might worry about our ability to defend ourselves in a world where our adversaries have certain kinds of weapons and we don’t. 

MB: The thing is, even if our adversaries are plowing ahead with autonomous weapons, the tech isn’t really there yet. 

Think about the conversation we had earlier, about some of the difficulties with self-driving cars. Laura Nolan says the same problems come up with autonomous weapons… but in a way that’s more complicated, and has even bigger consequences. 

LN: So if we look at a self-driving car, you want to get from A to B without crashing into anything. It’s a relatively well defined problem, right? The problem an autonomous weapon has is much more difficult. It has to get from A to B, and it has to sort of loiter around and decide if what it sees is a viable target and attack it. This is a much less defined problem.

MB: Engineers still haven’t mastered driverless cars — which are meant for a much more predictable environment. Roads have rules and signs, and most people are trying to behave safely. That’s not true in war. 

EF: In military conflict, all kinds of surprising and terrible things happen. You have an adversary who is trying to be deceptive, trying to get systems to behave in the wrong way. You have some bad actors who are willing to try to get your systems to target civilians as a sort of human shield strategy. 

MB: It’s also really hard to test autonomous weapons to make sure they work the way you intend. The military can stage a fake, experimental war — but it’s not entirely realistic.

And again, we run into this problem of accountability, where a machine is making a choice we may not be able to understand. And this choice is supposed to lead to somebody dying. 

LN: If you’re driving a car and you have an accident, somebody is liable for that. And that costs a lot of money. So manufacturers, they have a huge incentive to make their cars safe, right? In a war context, things are quite different. It is not easy for victims of war to get that same redress, particularly if they’re dead.

MB: How do you create an autonomous weapon with the guarantee that it won’t kill the wrong person by accident? 

LN: I just don’t see how it can be done in any reasonable way. 

MB: Autonomous weapons have another dangerous advantage: speed. 

[music]

EF: There is and has been for a long time in, you know, military thinking, the idea that it’s a big advantage if you can decide and act faster than your rival or adversary can. And there’s a certain speed that humans go and machines can go a lot faster than people. And so my real concern is that there’s this kind of imperative that our systems need to be able to operate as quickly as our adversary’s systems can operate, and that the only way to be able to operate that quickly is to take humans out of the loop. That dynamic generates a powerful push toward delegating more of the decisions to machines.

MB: Laura and the Campaign to Stop Killer Robots would like to see autonomous weapons banned entirely — an international treaty that explicitly forbids them. 

For now, we’re covered by other treaties, like the Geneva Conventions. Though, recently, the National Security Commission on AI released a report. It was over 700 pages, and it warned that quote, “AI will not stay in the domain of superpowers or the realm of science fiction,” end quote. The Commission recommended that President Biden not sign a treaty banning autonomous weapons. They said this was because they didn’t think Russia or China would abide by a treaty — which would leave us unprepared. 

I’m still left with a question — a question probably driven by too many sci-fi movies. We just heard about huge advances in machine learning, and also that we currently have the technology for murderous robots with built in facial recognition software. Should I actually be worried about a robot apocalypse here? 

EF: Not right away. I’m less worried about an all out robot apocalypse. I’m more worried that we’re just going to lose control of all the complicated things that we’re building. And that we’ll be living in a world that’s hard to understand, and hard to control. 

[music]

MB: On our next episode, we’ll take a look at AI and pandemics. Could developments in AI make pandemics in the future… not that big a deal? And how intimate do we have to be with AI to make that happen? 

A.I. Nation is produced by me, Malcolm Burnley, and Alex Stern. My co-host is Ed Felten. We have editing help from Katie Colaneri and John Sheehan. This podcast is a production of WHYY and Princeton University. 

[music fades] 
