Transcript: Echo Chambers (A.I. Nation)
[low music]
MALCOLM BURNLEY, HOST: Jonathan Tamari’s first day back on Capitol Hill felt kinda special.
JONATHAN TAMARI: It could sound kind of corny, but there’s definitely this, this reverence that you get when you walk into the building.
MB: Jonathan is a veteran political correspondent for the Philadelphia Inquirer and had spent a lot of time in the Capitol building. But thanks to the pandemic, he hadn’t been there for most of 2020.
JT: Probably the first time since maybe early summer that I was back was January 6th. And that was the day that Congress was going to certify the presidential election results.
MB: Yep. And if you’ve ever watched the State of the Union, the press area is in a balcony right above where the president speaks. That’s where Jonathan was.
JT: We each got a small red tag with a number on it that assigned our seat for the day in order to keep us socially distanced. I got number six.
MB: Speaker Nancy Pelosi gaveled in the session of Congress.
[gavel sound]
JT: But at the same time, I’m monitoring Twitter to see what else is happening around the Capitol, to see what the president is saying at his rally.
DONALD TRUMP: We will stop the steal! And after this, we’re gonna walk down, and I’ll be there with you. We’re gonna walk down, we’re gonna walk down to the Capitol!
[cheers, music changes]
JT: Then I started getting texts from my wife saying who’s watching this on television, saying, you know, are you seeing this?
[indistinct yelling, voices chanting “Stop the steal!”]
MB: We all know what happened next.
[voices and sounds of the mob continue under narration with music]
MB: During the vote counting, rioters overran Capitol police, broke down the doors, and breached the Capitol building.
JT: At some point a security officer comes to the lectern and says, People have breached the Capitol. They’ve deployed tear gas in the Capitol.
[banging sound]
VOICE FROM THE CAPITOL RIOT: Knock knock! We’re here!
JT: And then they tell the lawmakers to reach under their chairs and pull out escape hoods that go over your head and have a fan in front of them. And we were told to start moving in. And I grabbed my laptop. I had my escape hood in one hand, my phone in the other.
[voices chanting, mob sounds continue]
JT: And we come out of the House chamber and I see four or five rioters laying face down on the floor surrounded by police. And there’s just a line of officers with their handguns drawn, urging us down the stairs, down the stairs, into the basement, and there’s all these interconnected tunnels. And it’s us. It’s lawmakers. And eventually, you know, we just followed the police directions to a secure area full of reporters, staffers and members of Congress.
[music ends]
MB: Hours later, authorities secured the building, and order was restored. Congress certified the election results around 3:30 in the morning.
[low music]
MB: Around them, the Capitol was still filled with broken glass and debris — a reminder of the politically fueled anger that had just ripped through the building. Jonathan says the riot reinforced why he does what he does.
JT: My hope has always been that if we provide good information for people, that they can then make good, sound decisions about what should happen with the government.
MB: But it also reminded him of how many people don’t share that ideal anymore.
JT: This entire insurrection was fueled by lies, was fueled by a belief that is just untrue, that the election was somehow stolen or had so many irregularities that it couldn’t be trusted.
MB: Since episode one, we’ve talked about the ways AI is shockingly good at spreading mis- and disinformation. And we see this all the time on social media, where algorithms amplify content and make it go viral — whether it’s true or not.
Is social media serving up content that pushes us to extremes?
How much can we really blame AI for what happened on January 6th?
Maybe more than you’d think.
[music]
MB: From WHYY in Philadelphia and Princeton University, this is A.I. Nation, a podcast exploring how humans are handing over more and more control to machines, and what that means for us. I’m Malcolm Burnley.
ED FELTEN, HOST: And I’m Ed Felten.
MB: My co-host Ed is a computer science professor at Princeton, and an AI expert. Plus, he used to work at the White House.
EF: Yep, I advised President Obama on policy around technology and machine learning.
[music changes]
MB: Social media is a great example of how prevalent AI is in our lives, in ways we might not even realize. Aimlessly scrolling through Twitter while you’re waiting in line? Waking up and checking your notifications on Instagram? That’s you, interacting with artificial intelligence.
EF: We might think of social media as talking to our friends, or reading what people we know or famous people say. But really that experience is shaped in a profound way by the AI systems that decide what we’re going to see.
MB: TikTok, Twitter, Instagram — they all use AI. But we’re going to focus on Facebook. It’s still the most used social media platform by adults, by a lot. About 180 million people in the US are on Facebook. Globally, that number is 2.8 billion.
Which means Facebook, and their AI-powered algorithms, have a lot of influence.
EF: And those algorithms are using all the data they have about you — that means things like which things you read, which things you click on, what you write, which images you post. All of that goes into some model which is deciding which of your friends’ posts and which ads to show you, and in what order and when. Even things like what font they use and how big the print is, what color the border is and how many pixels wide the border is. They test everything and optimize it, using algorithms to try to serve their business end.
MB [TO ED]: And what is their business end exactly?
EF: Usually their business end involves keeping you on the site and getting you to interact with the site as much as they can.
And so that can really lead to people being shown sort of more extreme and polarizing content because something that riles you up, something that gets you angry or gets you enthusiastic, that’s something you might feel you need to respond to.
And someone else who has different, say, political or ideological views than you, it will show them content that will get them riled up in the same way. And the result is you’re both sort of in your own angry echo chambers. And that can really lead to polarization. And people worry a lot about that. I worry a lot about that.
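What Ed is describing boils down to a ranking loop: predict how likely you are to click on, comment on, or share each candidate post, then put the highest-scoring posts at the top of your feed. Here is a minimal sketch of that idea; every field name, weight, and number is invented for illustration and is not Facebook’s actual system.

```python
# Minimal sketch of engagement-driven feed ranking.
# Field names, weights, and numbers are invented for illustration;
# this is not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    p_click: float    # model's predicted chance the user clicks
    p_comment: float  # predicted chance the user comments
    p_share: float    # predicted chance the user shares

def engagement_score(post: Post) -> float:
    # Weighted sum of predicted interactions; comments and shares
    # count for more because they keep people on the site longer.
    return 1.0 * post.p_click + 4.0 * post.p_comment + 8.0 * post.p_share

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Show the posts the model expects you to engage with most.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("friend", "new baby photos", p_click=0.30, p_comment=0.05, p_share=0.01),
    Post("aunt", "outrage-bait political meme", p_click=0.25, p_comment=0.20, p_share=0.15),
])
print([p.text for p in feed])  # the polarizing post ranks first on predicted engagement
```

Nothing in a score like this asks whether a post is true; content that provokes a reaction simply wins.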
MB: Yeah. And is that where fundamentally, this kind of attention economy, if you will, of Facebook and other social media, wanting you to stay on the site, is that really where you see the proliferation of fake news?
EF: I think fake news or low quality information, it basically spreads, it’s prevalent because it works. And it’s this kind of marriage between the economics of page views and clicks and online advertising and the algorithms that are trying to optimize a site to drive that economics that builds this powerful ecosystem where content that’s extreme or content that’s low quality or unreliable can really thrive.
MB: You might be on Facebook to see a picture of your friend’s new baby. But the stuff that gets pushed to the top of your feed might be something your girlfriend’s aunt shared about President Trump. What she posted may or may not be true.
And that same experience is happening 2 billion times over.
EF: And I think not only the companies, but really everyone didn’t appreciate how much of a mess social media spaces could easily become, and how hard it would be to maintain any kind of civility there.
MB: One person I talked to for this episode (Surya Mattu — we’ll hear from him later!) compared Facebook’s algorithm to a recipe: the instructions the company feeds into its system to get the final product you see every day.
Facebook is always tinkering with that recipe. Sometimes that’s to improve engagement and keep you logged on. Sometimes it’s to try and catch hate speech, fake news, or something else that goes against their terms of service. And sometimes it’s in response to public backlash.
[music]
NINA JANKOWICZ: After the 2016 election and then the subsequent Cambridge Analytica scandal, Facebook decided to really put an emphasis on friends and family in people’s Facebook experience.
MB: That’s Nina Jankowicz. She studies disinformation at the Wilson Center, a think tank run by the Smithsonian. She also wrote a book called “How to Lose the Information War.”
She says the 2016 election put Facebook in the hot seat. Russian agents and bots tried to influence the outcome with fake profiles and fake news. Later, we found out a political consulting firm called Cambridge Analytica had illegally harvested the data of tens of millions of users. They then used that data to target them with political ads.
Suddenly, everyone was way more aware of all of the information Facebook had about them. We realized an idea Mark Zuckerberg had in college had morphed into a technology that could have a major impact on elections, and on the country.
In response, Facebook made what it calls a pivot to privacy.
NJ: They wanted people to have a more private experience. Rather than the digital public square. It was more akin to the digital living room. And Groups were a big part of that.
MB: Before this pivot, Facebook Groups weren’t as much of a thing. They existed, but mostly what you saw on your feed were individual posts made by people, or news organizations, or companies. But Facebook decided it wanted to prioritize the community experience.
NJ: So it started recommending people more Groups based on their likes on Facebook, based on their interests and based on their previous engagement. So the types of posts and content that they had interacted with before. And then Groups started being recommended based on the Groups that you were already in.
MB: Nina says Groups in particular are really vulnerable to misinformation, and disinformation, and that Facebook’s group recommendations can pull people down radical rabbit holes.
NJ: So I tell this story a lot. But when we’re getting into the pandemic, I encountered an alternative health Group which on the side panel, on the recommendations panel of this Group, there was a QAnon Group, a white supremacist Group, a false flag conspiracy theorist Group, and then another one about 5G conspiracies, I think. All right there, just one step removed from something that was seemingly innocuous about alternative health remedies.
MB: Of course, just because Facebook recommends you join a QAnon Group doesn’t necessarily mean you’ll become a conspiracy theorist. But you might be more likely to believe something posted in a Group.
NJ: People’s guards are down a little bit. So they’re more likely to trust content that’s being shared in this community that they feel is a safe space for them, that they feel in some way is already vetted by their friends, by their family, or by people that they trust. And they’re less likely to do their due diligence in that way. They’re not going to really scrutinize that information to the same degree that they would if they just encountered it somewhere else on the Internet.
MB: And then, in Groups, the algorithm keeps doing what it’s been doing: surfacing content that it thinks you’ll engage with — stuff that might be political, or make you mad, or both. It might be a meme about Ted Cruz, or a video like Plandemic, a COVID conspiracy theory documentary that spread like wildfire before Facebook could pull it down.
NJ: In general, the thing that unites it is that highly emotional content that really appeals to people’s grievances, real grievances in their lives. So it’s not going to set off fake news alarm bells, if you will. It’s going to be something that rings true to their personal experience.
MB: Groups have also caused something Nina calls the conspiracy convergence, where people already vulnerable to mis- or disinformation are repeatedly exposed to new conspiracy theories. Thanks to Facebook’s Group recommendation algorithm, one Group can act as a gateway to more.
It can just… become the water you’re swimming in.
[music]
And Facebook knew about this. According to reporting by the Wall Street Journal, the company started studying the polarizing impact Groups could have in 2016. In an internal presentation, a researcher told the company that Facebook was hosting a lot of extremist groups, and that the platform was helping them grow. 64 percent of joins to these Groups were thanks to Facebook’s own recommendations.
There have been battles within the company about how to handle this, and what its role is when it comes to moderating political discourse.
We reached out to Facebook about this. We asked about user polarization, and the steps they’ve taken to push back against misinformation. They sent us some statements that pointed to the billions of fake accounts they’ve taken down, and the thousands of people they’ve hired to work on these issues and moderate the site.
It’s worth noting that this is not a small issue to tackle. Those billions of users are posting content faster than Facebook workers and computers can possibly moderate it.
EF: Just maintaining a relatively civil and non-threatening, non-terrifying environment for everyone in a big social space is already pretty hard to do. The big social media companies do a lot, and they have a lot of people working and a lot of algorithms working to try to keep the most obnoxious and heinous content off of them in a way that isn’t overly broad and doesn’t keep legitimate conversations about difficult things from happening.
MB: The moderation process involves nuance that can be complicated.
EF: It’s difficult to draw the line sometimes between, say, a depiction of genocide that glorifies the genocide versus a depiction of genocide that is appropriate and is condemning it and raising awareness about a wrong that needs to be righted. The line between real satire and something that is a serious threat can be difficult to draw.
MB: When Facebook CEO Mark Zuckerberg talks about this publicly, he references that line. He talks a lot about free speech.
MARK ZUCKERBERG: Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put other people in real danger. This raises the question of exactly what counts as dangerous speech online. [fades under narration]
MB: This is from a talk he gave to students at Georgetown University in 2019.
MZ: Increasingly, we’re seeing people across the spectrum try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are now so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves. I personally believe that this is more dangerous for democracy over the long term than almost any speech.
MB: While Facebook has been debating this internally, and while Mark Zuckerberg has been giving speeches and fielding questions, the company has continued to push and promote Groups.
In 2019, Facebook had a big advertising campaign centered around them, including an ad that aired during the Super Bowl. While I was talking to Nina and Jonathan, one of the commercials kept popping into my head.
It opens in a lecture room, with a professor writing on a blackboard.
FACEBOOK AD: Sound power is defined as… [fades under]
MB: Then we see the students, with their headphones on or looking at their phones. Equations, information — What a bore, am I right? One student surreptitiously blows on a kazoo.
[kazoo sound]
MB: Another, rows behind her, plays a kazoo back.
[kazoo sound]
MB: The two make eye contact, smile, and start playing together while the professor drones on in the background. They skip out of the lecture hall together and burst out of the building.
[building kazoo music runs underneath]
MB: As they jam through campus, more and more kazoo players join them. The crowd ends up at a house packed to the gills with everybody partying. Their energy is so contagious, the house cracks at the seams! The whole thing comes down.
[crash, silence]
MB: And the kazoo players find themselves in a concert hall.
FACEBOOK AD: Hold up? Y’all kazoo? Well get on up here! [music continues]
MB: The screen reads: “There’s a Facebook Group for everyone.”
[music]
MB: This is a pretty benign example of how Groups work. But what about when your common interest isn’t kazoos but… overthrowing an election?
FACEBOOK AD: Hold up?
[off-key kazoo]
MB: When we come back:
SURYA MATTU: Facebook has always said that their motto was, you know, connecting people to each other. They didn’t really account for the fact that not everyone wants to be connected to each other or shouldn’t be connected to each other.
MB: I’m Malcolm Burnley.
EF: And I’m Ed Felten.
MB: This is A.I. Nation.
[music fades]
[music]
MALCOLM BURNLEY, HOST: Welcome back to A.I. Nation. I’m Malcolm Burnley.
ED FELTEN, HOST: And I’m Ed Felten.
MB: To get a better sense of what was happening on Facebook around the time of the Capitol riots, I reached out to Surya Mattu.
Surya is an investigative journalist. He used to be an engineer at a healthcare tech company, where he’d always tested the technology he built to guard against unintended consequences.
SURYA MATTU: And what really interested me in social media was that this was just not true at all. There was no kind of check on the unintended consequences of these algorithms.
MB: Surya has reported on social media for Gizmodo and ProPublica, and now he’s at The Markup.
Specifically, he wants to understand why Facebook shows you exactly what it shows you. This is different from person to person, based on the data Facebook has collected about you. While we know the basics about engagement driving the algorithm, we don’t know a lot of details. Surya calls these voids of information social media black boxes.
[music fades]
SM: Honestly there’s two levels of black boxes. One is the complexity of the algorithms themselves and how hard they are to step backwards through. I think even the companies that make these algorithms wouldn’t be able to tell you exactly why those things are being shown. And this is because there’s so many different input signals that go into those choices that it’s hard to reverse engineer them.
MB: Facebook’s algorithms use machine learning. We’ve talked about this before; it involves asking a computer to learn for itself, either from examples or by trial and error. You give it information and ask it to complete a task, and you don’t always know how it does what it does.
Facebook’s algorithms prioritize things like engagement — more likes, comments, shares — but Facebook doesn’t always know the particulars of how that engagement happens.
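“Learning from examples” here roughly means fitting a model on logged behavior: the features are things the platform knows about a post and a user, and the label is whether the user engaged with it. Below is a toy version of that training step, with made-up features and data; real systems use vastly more signals and far more complex models, which is part of what makes them hard to explain.

```python
# Toy supervised model of "will this user engage with this post?"
# The features and the data are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [is_political, from_close_friend, has_image, count_of_outrage_words]
X = [
    [1, 0, 1, 3],
    [0, 1, 1, 0],
    [1, 0, 0, 5],
    [0, 0, 1, 0],
    [1, 1, 0, 2],
    [0, 1, 0, 0],
]
# Label: did the user like, comment, or share? (taken from logged history)
y = [1, 1, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a new, unseen post by its predicted engagement probability.
new_post = [[1, 0, 1, 4]]  # political, not from a close friend, has an image, outrage-heavy
print(model.predict_proba(new_post)[0][1])
```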
Surya’s other black box is a little less technical.
SM: The other is that, as outsiders, we don’t know what decisions these companies are making or what choices these companies are making in what goes into these algorithms.
MB: A lot of how Facebook works is proprietary, and the access they’ve given journalists is limited. So to understand what’s going on on Facebook, Surya needs to find another way.
SM: The secret sauce behind all of it is essentially crowdsourcing, which is actually quite a traditional journalism technique. It was called shoe leather reporting back in the day, where you would knock on two or three doors and get the anecdotes. The way I do it is I build technology, like build tools that I can give to people to download on their devices. And then they collect the data on their devices from these platforms and they share them with us.
MB: Surya’s latest tool is called The Citizen Browser. He made a custom application that 2,000 people volunteered to download. Those 2,000 people are paid, and they’ve given consent for Surya and other journalists from The Markup to use the app to look at parts of their newsfeeds. There’s a lot they can’t see, for privacy reasons. But The Citizen Browser gives them a window into what Facebook is recommending to people, and what kinds of news articles come up.
The 2,000 people also shared demographic information, including who they voted for in the 2020 election, so Surya and his team can understand how that might factor into what Facebook has shown them.
SM: This is the first time this data has been collected from the outside. And it’s not easily available.
MB: They started the project in late November, so they didn’t capture the election. But they were able to see what unfolded on Facebook after, and some of what users could see on January 6th, the day of the attack on the Capitol.
SM: And what we basically found was how Biden and Trump voters were exposed to radically different coverage of the riots on Facebook, right? In essence both groups were being told the story they wanted to hear.
MB: Trump voters were more likely to see stories from partisan news organizations like Breitbart and The Daily Wire. The coverage typically favored Trump, and painted him as a victim of the situation. Biden voters were more likely to see stories from traditional news sources, like CNN or NPR, that had less spin.
SM: The thing that was most interesting in that for us was to notice how few common links there were between what Biden voters were seeing and Trump voters were seeing. So it really did feel like a reflection of the polarization taking place on social media platforms.
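Conceptually, the comparison is simple once the panel data exists: collect the news sources each group of panelists was shown and measure how much the two sets overlap. A rough sketch, with invented link sets standing in for the real Citizen Browser data:

```python
# Rough sketch of comparing what two panels were shown.
# These source sets are invented; Citizen Browser collects the real ones.
biden_panel = {"cnn.com", "npr.org", "nytimes.com", "apnews.com"}
trump_panel = {"breitbart.com", "dailywire.com", "foxnews.com", "cnn.com"}

shared = biden_panel & trump_panel
overlap = len(shared) / len(biden_panel | trump_panel)
print(f"{len(shared)} common source(s); overlap = {overlap:.0%}")
```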
MB: Surya and his team also saw something interesting when it came to Group recommendations.
So in August of 2020, Mark Zuckerberg and Twitter’s CEO, Jack Dorsey, testified before Congress about how content on their sites gets moderated.
Senator Gary Peters brought up the Wall Street Journal reporting we talked about before: that Facebook’s own researcher said Group recommendations were leading to more people joining extremist groups. He asked if any of the measures Facebook had taken since 2016 had made a difference.
GARY PETERS: My question is have you seen a reduction in your platform’s facilitation of extremist group recruitment since those policies were changed?
MARK ZUCKERBERG: We’ve taken a number of steps here, including disqualifying groups from being included in our recommendation system at all, if they routinely are being used to share misinformation, or if they have content violations, or a number of other criteria.
MB: Zuckerberg told Congress they were no longer recommending political groups.
SM: But through our data collection platform, Citizen Browser, we were actually able to find examples of that still happening.
[music]
MB: After the election, The Markup found Facebook was recommending groups with names like “Donald Trump is Still our President.”
MB: Did you ask Facebook why they still were recommending political groups after they said that they would stop doing that?
SM: Yeah. So they basically just said that they have stopped and they will look into it, and they will remove these Groups, because essentially what’s probably happening is that they think they’ve stopped.
MB: Surya suspected that Facebook had trouble identifying which Groups counted as political and which didn’t. This was something he and his colleagues at The Markup ran into when they did this reporting.
SM: There’s no easy way to tell a computer what is a political Group, because that thing keeps evolving. And you can’t do it just based on keywords. Even as we were doing this story, it became clear that these are fundamentally difficult things for computers to be able to do.
MB: Remember how GPT-3, the natural language processor, didn’t really understand the language it produced? Those shortcomings would complicate a task like identifying political Groups.
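To see why, consider the naive approach Surya is ruling out: flag a Group as political if its name contains words from a fixed list. A small sketch (the keyword list and Group names here are made up) shows how quickly that breaks in both directions:

```python
# Naive keyword check for "political" Groups, the kind of rule Surya
# says can't keep up. Keywords and Group names are hypothetical.
POLITICAL_KEYWORDS = {"election", "trump", "biden", "vote", "ballot"}

def looks_political(group_name: str) -> bool:
    words = group_name.lower().split()
    return any(word in POLITICAL_KEYWORDS for word in words)

print(looks_political("Philly Kazoo Players"))           # False: correct
print(looks_political("Vote for Philly's Best Pizza"))   # True: false positive
print(looks_political("Patriots for 1776 Restoration"))  # False: a political Group slips through
```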
After The Markup’s reporting, Senator Ed Markey sent a letter to Facebook, asking why the company had broken the promise it made to Congress. Facebook confirmed Surya’s theory, calling it “a technical error with designating and filtering.”
Surya and his colleagues kept watching their Citizen Browser. Suddenly, recommendations for political groups plummeted.
[music]
MB: I want to get back to our original question, after we heard Jonathan Tamari describe the attack on the Capitol: How much of this is AI’s fault? Did artificial intelligence play a role in polarizing our nation so much that we ended up with an insurrection?
I asked Nina Jankowicz, the disinformation expert.
MB: Do you think that Facebook and Facebook groups, you know, deserve some of the blame for what happened at the Capitol?
NJ: So, without data, without hard data, which I don’t have access to, I would hesitate to directly link them. But I can tell you what we do know.
After the election was called for Biden, within 24 hours, there was a group called Stop the Steal that had over 300,000 members. It then splintered off after Facebook banned that group into several smaller groups — in fact, over 60 smaller groups, that we know of. And outside of those groups, conservative commentators, members of the Trump inner circle, President Trump himself had tens of millions of followers that were being targeted with those messages over and over.
MB: Mis- and disinformation about election results spread both on and off of Facebook. And so did plans for the riot. You could find calls to violence in some Facebook groups weeks before January 6th.
NJ: There was reporting by Jane Lytvynenko of BuzzFeed News that showed over 60 Stop the Steal groups were still active the day after the insurrection at the Capitol and had tens of thousands of members. And there was plenty of planning going on in those groups and directions for how to get where, who to meet up with, carpools, things like that.
MB: Of course, people are responsible for their own actions. But —
NJ: Social media provided kind of the information superhighway through which not only people received this information, but the people who were most vulnerable to it were targeted with it, thanks to the targeting tools that Facebook offers.
So absolutely, Facebook primed this message, and primed people to go out and show up IRL, as we say, in real life, and commit acts of violence.
MB: So, Ed, do you buy that argument in general? Do you think it’s fair to blame Facebook and their AI for the Capitol riots?
EF: I do up to a point. But the big question really is, or the remaining question is, is this a place where a bunch of already radicalized people found each other, or did this space actually play a role in radicalizing them? Did it turn people who otherwise would not have been involved in violent extremism into violent extremists? We know that seeing extreme political content does affect how people think. There is a whole community of social scientists who are studying these issues, because they are urgent, and trying to figure out what we know, which parts of our intuition about these spaces are right, and which parts are not. There’s a lot that’s new here and we’re still catching up to understand what the full impact is.
MB: But it seems like at the very least, it’s something that Facebook should be aware of going forward.
EF: Absolutely. Because even if the only impact of this technology is to give people with extreme violent views a place to meet and organize themselves, and we know that that has happened, even if that’s the only effect, it’s still something that I think companies have a responsibility to take action against.
And of course, when you have the bottom line, pulling them in the direction of always wanting more people on the service, always wanting people to do more on it, that creates an additional tension.
[music]
MB: So what do we do with this?
SM: My motivation comes from trying to turn this problem, like, from an existential crisis into a regulatory one, right? Like this problem of what is happening on social media has kind of reached the scale it has because there isn’t any check on these systems.
MB: The Markup caught one of Facebook’s errors. But they’re just one team of journalists. What Surya would like to see are regulations, at the very least around transparency, that would let people look at how Facebook works without building a whole browser extension.
SM: If there was just some basic ways in which you could kind of collect this data, and by us, I don’t just mean like me or like journalists like us, but it could be a variety of different groups. It could be like advocacy groups working for particularly protected classes of people. It could be government groups. It could be a variety of different groups who should be able to interrogate how these systems are actually influencing the communities that they care about.
MB: The call to regulate Facebook is a pretty common one; even Facebook is asking for some kind of regulation, to help guide them in their choices. But Ed points out that asking the government to get involved can be tricky.
EF: The other issue that comes up if you’re talking about a government agency is freedom of speech and the First Amendment, right? Social media sites are venues for speech, and the companies in deciding what information is out there, who can have an account, and what they can say, have a big influence over which speech can happen, and get a big audience, and which speech we hear.
But as soon as you have government saying you need to take this down, or you need to allow this up on your site, now you have government dictating controls on speech in a way that might be legally problematic, but in any case, should make us nervous.
MB: Nina has studied how other countries have handled regulating Facebook, and she says even with the best intentions, it often ends up in censorship.
[music]
Take Germany. If any content on Facebook violates German law, it has to be removed really quickly — sometimes within 24 hours. If Facebook, or any other social media company, doesn’t take it down in time, they get hit with a pretty hefty fine.
NJ: And what this has led to is that Facebook and other platforms have started over-removing content. So if anything has a whiff of being potentially in contravention of the law, it gets taken down, and there usually isn’t any sort of appeal process. And that means that just thousands and thousands of posts are taken down so that they can avoid these fines. So we have to be really careful. And that’s why I think the most important thing we can do is to create some mandated transparency and oversight of these platforms.
MB: Instead of removing posts, Nina thinks it’s better to add context. That’s something Facebook, Twitter, and Instagram have been testing lately. For example, posts about COVID-19 or vaccines on Instagram get a little tag that says, for more information, go to the CDC’s website.
NJ: Adding a little bit of context has slowed the spread of misinformation and has discouraged people from retweeting, sharing content that they haven’t read or interacted with or that has been labeled as false.
EF: There are some approaches that are better than others, but all of these approaches rely fundamentally on algorithms and AI. And the reason for that is that whether you are filtering content, whether you’re adding context to user comments and so on, you still have to be prepared to do that in real time for every user comment on the site. And there’s so much activity, so many users posting so many things, that you can’t do that manually because there just aren’t enough people to look at everything. You’re going to have to have algorithms trying to figure out how to protect people from the effects of other algorithms.
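As a concrete example of the context-label approach Nina describes, applied algorithmically rather than by hand: match each new post against a set of topic patterns and attach a pointer to authoritative information instead of removing the post. The patterns and label wording below are illustrative assumptions, not any platform’s actual rules.

```python
# Sketch of automated context labeling: instead of removing a post,
# attach a pointer to authoritative information. Topic patterns and
# label wording are invented for illustration.
import re

CONTEXT_LABELS = {
    re.compile(r"\b(covid|coronavirus|vaccine)\b", re.IGNORECASE):
        "For more information, visit the CDC's website.",
    re.compile(r"\b(election|ballot|voter fraud)\b", re.IGNORECASE):
        "See your state election office for official results.",
}

def add_context(post_text: str) -> str:
    # Runs on every post as it is published; it has to be automatic,
    # because no human team could label billions of posts by hand.
    for pattern, label in CONTEXT_LABELS.items():
        if pattern.search(post_text):
            return f"{post_text}\n[Context: {label}]"
    return post_text

print(add_context("New study says the vaccine changes your DNA"))
```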
MB: [laughs] And what about this idea of transparency requirements, either asking the companies to disclose more about their algorithms, their AI, or even just to disclose more about, I guess, what they’re doing to combat this?
EF: Regulation can require disclosure of certain kinds of information. It might require disclosure to the public. It might require disclosure to a regulator. And if the regulator has the right kind of expertise and the right kind of tools, then that can be a pretty effective way to go. But even just disclosure to the public can be pretty useful because there are all kinds of experts, there are journalists who will dig into that stuff, and they break important stories and they help to embarrass or shame or convince the parties to behave in a better way.
[music]
MB: We’ve talked a lot, Ed, over the course of this whole series about AI’s growing influence in so many facets of life. I started off with this very dark sci-fi view of what AI is. And I think I’ve kind of come around to a more positive light. And strangely, that’s probably because of my misanthropic outlook about humans. It seems like maybe AI can do things better than us.
EF: Well, in a lot of cases, that is the alternative, right? You can do things the way we have done them. We know about the pros and cons of that pretty well. Or we can delegate some of this to machines and maybe machines are better at it, or better at parts of it. Or maybe a human plus machine team is the right way to approach some issue. We need to be open minded.
MB: We’ve talked a lot about positives and negatives with AI. And is this a situation where we can just take the positives with us and forget the negatives, as long as we engineer it right?
EF: No, I don’t think we can eliminate the negatives. It’s like any other important decision in life. You have to decide how you can get the most positives with the fewest negatives. And I think with an optimistic view of human nature, which I ultimately have, the idea is that we will get smarter, we’ll get better at this, and over time we’ll learn how to deal with the downsides of this tech, and we’ll learn how to accentuate the positive aspects of it, right? So that a century from now, AI will be built into everyday life and people will have benefits that we might not even be able to imagine now. It might be a bumpy road to get there, but I think in the long run, the positives do outweigh the negatives.
MB: So the arc of the AI universe does bend towards something optimistic in the end.
EF: For me, yes, I think we can bend it in that direction, but it is up to us. It’s not going to happen automatically.
MB: What about everyday people? What can they do to take back some control over how AI is integrated into their lives?
EF: Well, the first thing I think is to look around you and see the places where AI is involved and ask yourself, do I want this? In a lot of cases, you’ll probably say yes. Do I want automated navigation to help me find my way to places? Probably, yes. But maybe in some other cases not.
The other thing is to just be mindful that this tech is often a mirror that reflects us back to ourselves. And so what we do and say on social media, how we interact with these systems, that steers how they interact with us back. And so if we act like we would like people to act, then the hope is that that positive view of ourselves will get reflected back to us. So, you know, be mindful that these systems will adapt to what you do. They are learning from you all the time. And so think about what you’re teaching them.
MB: And maybe the biggest lesson I’ve learned in this process is that I need to be aware right now because we’ve shown AI is all around us and it’s already happening seemingly everywhere. When we look back 100, 200 years from now, how are we going to see this moment?
EF: I think we’re going to see this as a real turning point.
[music]
EF: I would put it up there with things like the printing press or even the invention of writing. The printing press helped lead to the birth of modern science because you could have scientific journals. It led to mass printing of Bibles, which led to the Protestant Reformation and so on down the road. And that helped to shape a lot of the world that we live in now. There are so many changes that came about, but they took centuries to happen. And I think when we look back on this transition, we’re going to say this was a major transition in how human culture works and is organized, and that it had a lot of profound effects that, looking back from that future, maybe seem inevitable. But to us today, we probably don’t see them at all.
MB: Well, I feel like we spent a lot of energy trying to figure out what the first two decades of this millennium would be called, with the aughts or the teens. So maybe we’ll just have to settle with the AI age.
EF: Or the beginning of the AI age.
MB: [laughs] Yes. Ed, it’s really been a great time talking with you throughout the podcast.
EF: Thanks, Malcolm. It’s been a lot of fun.
MB: A.I. Nation is produced by me, Malcolm Burnley, and Alex Stern. My co-host is Ed Felten. We have editing help from Katie Colaneri and John Sheehan. This podcast is a production of WHYY and Princeton University.