What you need to know about fake video, audio, and the 2020 election

"Deepfakes" are digitally altered images that make incidents appear real when they are not. Such altered files could have broad implications for politics. (Marcus Marritt for NPR)

Security experts have warned about the prospect of a new era of high-quality fake video or audio, which some commentators worry could have deeply corrosive effects on U.S. democracy.

Here’s what you need to know.

What are “deepfakes”?

That’s the nickname given to computer-created artificial videos or other digital material in which images are combined to create new footage that depicts events that never actually happened. The term originates from the online message board Reddit.

One early use of the fake videos was in amateur-created pornography, in which the faces of famous Hollywood actresses were digitally placed onto the bodies of other performers to make it appear as though the stars themselves were performing.

Members of Congress and security specialists have warned that the quality of these software-created fakes is improving significantly. One concern is that they could be used as part of an influence campaign and fool significant numbers of people, especially at a critical time in an election cycle, into believing something had happened that really hadn’t.

A related phenomenon might also bring a big change to American politics: The profusion of fake material also means politicians could try to deny things that really did happen.

How difficult is it to create fake media?

Experts say specialized software has made it simpler to manipulate video, much as editing programs such as Photoshop made it simpler to manipulate still images. And specialized software isn’t even necessary for what have been dubbed “shallow fakes” or “cheap fakes.”

In the spring of 2019, a doctored video appeared of House Speaker Nancy Pelosi that had been adapted from a real one. The manipulated video was slowed so that Pelosi appeared to be disoriented and slurring her words.

Can fakes be detected?

Yes, although usually not until some time after they’ve appeared. Observers noticed that the doctored Pelosi video was a copy of a real video, and news organizations played the two side by side.

Researchers also say they are working on new ways to speed up systems aimed at establishing when video or audio has been manipulated. But it’s been called a “cat and mouse” game in which detection may seldom keep exact pace with fabrication.

One important question at any given moment in the elections context is how much time it takes for detection to catch up with circulation.

Why is the timing so important?

Imagine it’s the night before a big debate, or Election Day itself.

Suddenly a video is everywhere that appears to show a candidate saying something outrageous or engaging in some kind of inappropriate conduct. If the veracity of that material remains unclear for the next 12 or 24 hours or more, that could affect voters’ attitudes.

Or imagine the mirror image of this scenario: Suppose the clip appears and what it depicts is real — but the candidate involved denies what it contains and says it’s a fake, citing all the discussion about fabricated media.

Other evidence might emerge proving that the activity in the video or audio was real, but what if all the facts weren’t sorted until hours or days later?

Is this problem only about fake video or audio?

No. Some Americans already have been taken in by things they read online in simple text. And in 2016, people were contacted by Russian government influence specialists posing as American campaign volunteers. Those specialists organized real-life political rallies and took part in online conversations, all while posing as voters and activists.

Thinking bigger

Specialists also warn about an even larger-scale disruption than a video or audio clip.

David Doermann of the State University of New York at Buffalo told the House intelligence committee in June that when he worked for the Pentagon’s research agency, he worried about the creation of a whole false storyline.

“One thing that kept me up at night was the concern that someday our adversaries would be able to create entire events with minimal effort,” Doermann said. “These events might include images of scenes from different angles, video content that appears from different devices and text that is delivered through various mediums providing an overwhelming amount of evidence that an event has occurred — and this could lead to social unrest or retaliation before it gets countered.”

For example, imagine “reports” about a terrorist attack in a sensitive area at a sensitive time, complete with different “perspectives” and what appear to be real posts about it in real time on Twitter or another social network.

An adversary might want to try to make Americans act, or not act, to serve its own ends.

What is the government doing?

Law enforcement and national security officials say they’re working seriously to defend the election against interference, but many of the countermeasures against fake media can be applied only after it appears.

At least one state has considered legislation that would outlaw distributing election-oriented fake videos.

Experts have suggested that social networks must be held accountable for hosting fake materials in order to create incentives for them to police themselves.

Others told Congress that Americans must learn more about fakes and the peril they can pose — but as Danielle Citron of Boston University law school told the House intelligence committee, no single key will unlock the problem.

“I feel like our panel is going to be in heated agreement that there is no silver bullet,” she said. “We need a combination of law, markets and really societal resilience to get through this.”

Copyright 2019 NPR. To see more, visit https://www.npr.org.
