The cost and hidden silver lining of COVID-19 misinformation

Because of the pandemic, researchers are learning more about how falsehoods spread on social media — and hopefully how to stop them.

Conspiracy theories have flooded social media since the pandemic struck — giving researchers a chance to learn more about why and how misinformation spreads. (Konstantin Savusia/Big Stock Photo)

This story is from The Pulse, a weekly health and science podcast.

Since the coronavirus shutdowns began, social media has become more important than ever. It’s a lifeline to our old lives — a way to stay connected with loved ones, to hear the latest news, and sometimes to try to forget what’s happening altogether.

But there’s a downside to all this. Case in point: “Plandemic,” a documentary-style conspiracy video that recently went viral.

In case you missed it, “Plandemic” features discredited scientist Judy Mikovits making unsubstantiated, and often bizarre, claims about the ongoing pandemic — including that COVID-19 was “manipulated” in a lab, that National Institute of Allergy and Infectious Diseases director Anthony Fauci had profited from some kind of cover-up, and that wearing masks is actually making people sicker.

Facebook and YouTube scrambled to take the video down, but over just a few days it managed to rack up millions of views and tens of thousands of shares.

As it turns out, “Plandemic” is just the tip of the iceberg when it comes to a problem that’s spreading even faster than the coronavirus — something U.N. Secretary-General António Guterres recently called an “infodemic.”

“This is a time for science and solidarity,” Guterres said in a video message. “Yet the global mis-infodemic is spreading. Harmful health advice and snake-oil solutions are proliferating; falsehoods are filling the airwaves; wild conspiracy theories are infecting the internet. Hatred is going viral, stigmatizing and vilifying people and groups.”

You’ve probably heard a few of these theories — for instance, that the coronavirus was created by Bill Gates, or is being spread by 5G radio waves, or can be cured by drinking bleach. In that sense, there’s a very good chance that social media helped shape how the pandemic has unfolded, and not in a good way.

But the opposite is also true: The coronavirus has helped researchers learn a lot about how social media work as vectors for misinformation. And it’s even started to push real change — for example, Twitter’s latest move to start fact-checking false claims about COVID-19 (including ones that come from the president).

Here’s what researchers have discovered so far.

Why the sudden surge in misinformation?

Fake news is nothing new, but the recent tsunami of misinformation surrounding COVID-19 is arguably unprecedented in its scope and persistence. What is it about the coronavirus that seems to have set off this giant worldwide game of Telephone?

According to Kate Starbird, a professor at the University of Washington and co-founder of the Center for an Informed Public, it’s not as unusual as you might think.

“Rumors are actually a typical part of a crisis event,” Starbird said. “It’s natural human behavior.”

That’s because humans crave information in the wake of crises, Starbird said — information that could be crucial to their survival, such as which services have been affected, which roads are blocked, and where they can go for help.

“And so under those conditions, what we as humans do is we try to resolve that uncertainty and that anxiety,” she said.

The way we do it is by talking to one another.

“We try to find that information and come up with explanations,” Starbird said. “And those explanations, we talk about it as collective sense-making.”

Sense-making on a global scale

Those explanations can be right, but they can also be wrong. When they’re wrong, the result is rumoring.

Historically, sense-making has happened on a local level — but thanks to social media, our collective hunt for information about COVID-19 has turned into a worldwide conversation.

“It’s truly global,” said Kathleen Carley, a professor of computer science at Carnegie Mellon University who also runs the Center for Informed Democracy and Social Cybersecurity. “So that means people around the globe are spreading disinformation and it will get picked up by people in other countries.”

Usually, disaster-related rumors start dying away as more questions are answered. But that hasn’t always been the case with the coronavirus, thanks to ongoing uncertainty about how it works, where it came from, how to treat it, and what governments are doing about it. As the rumor mill churns, these germs of misinformation have continued to spread as fast as — if not faster than — a real virus.

The problem is, we can’t exactly social distance on social media.

“It’s hitting at this moment where our information systems are already sort of characterized by persistent, pervasive misinformation, disinformation, and the strategic manipulation of these online spaces,” Starbird said.

And even though we’re all facing the same threat, we’re not all coming at it from the same perspective. There can be miscommunications from one language to another, and even intentional deception between groups that don’t have each other’s best interests at heart.

“From all parts of the world, people can be exploiting other people right now,” Starbird said. “So it’s just really this kind of perfect storm.”

Who’s spreading misinformation — and why?

According to researcher Kathleen Carley, misinformation is coming from all around the world.

“It’s everybody,” she said. “Everywhere and everybody.”

Some rumors have been traced back to conspiracy theorists, some to so-called “black news” sites. Even the Chinese government, along with a few of our own politicians, has pushed some. But in general, Carley said, the origins of rumors can be tough to find.

“So two things that are extremely hard in social media,” Carley said: “One is attribution, that is, knowing exactly who started it; and the second one is knowing motivation.”

Motivation is important for making a crucial distinction, she said, between what counts as misinformation and what counts as disinformation.

Both refer to inaccurate information — the difference is that disinformation is intentionally spread by people who know it isn’t true, while misinformation is spread essentially by accident.

Let’s say, for instance, your mom sends you a message to get cash out of the ATM because a friend of a friend’s accountant says there’s going to be a run on banks — that’s misinformation. But someone else sharing that same information with the intention of creating financial havoc in the United States counts as disinformation.

Sometimes, it can be almost impossible to figure out which one you’re looking at, Carley said, but researchers have been able to find clues by tracking the propagation patterns of rumors.

It was while doing so that Carley and her team made a startling discovery.

“A lot of the misinformation that is being spread is being spread by bots,” she said, adding that the bots hail from all over the world. “They are usually not the originators of the messages, but they are the ones who promote it and make sure that it stays live.”

In recently released research, Carley and her colleagues analyzed 200 million tweets about the coronavirus and found that more than half of them likely came from bots. They also found that 41 of the 50 most influential retweeters were bots.

What all of this points to, Carley said, is that a number of rumors are actually orchestrated campaigns of disinformation.

How much of this bad information is making its way to our screens depends on a number of individualized factors, Kate Starbird said.

“The dynamics of how we organize our lives online — both how we choose our friend relationships, and how we seek out information that aligns with our biases,” she said. “And now, we can find all sorts of different kinds of information that aligns with those biases.”

Recommendation algorithms work to reinforce those biases by sending us more of what we like to keep us coming back, Starbird said.

“There’s something about our information environment right now, which is highlighting old vulnerabilities and perhaps taking advantage of them in new ways,” she said.

The cost of rumoring

The results can be dangerous — as, for instance, with the people who have been trying to set 5G cell towers on fire after hearing claims that the towers spread the coronavirus. Carley said there are longer-term risks, too.

“First, it reduces your trust in institutions,” she said. “So it reduces your trust in the medical community and governments and so on. And when you get that reduction in trust for those entities, chaos can come out of it.”

One striking example: the protests against social distancing at the Michigan State Capitol, which experts worried would worsen the spread of the virus.

There has also been concern that some of the fake cures promoted on social media could increase the spread of the virus.

“So it can cost money, it can cost lives, and it reduces trust in your institutions,” Carley said.

One of the biggest concerns, experts said, is that misinformation and disinformation could affect the November presidential election. In fact, Carley and her colleagues found that many of the bots spreading false information about COVID-19 are also sowing political division among the American public.

But there might be a kind of silver lining to the current “infodemic” — it’s given researchers more data than they could have dreamed of to help them zero in on misinformation and possibly stop it in its tracks.

Concerns over the immediate harm COVID-19 misinformation could do to human health have also pushed social media giants such as Twitter and YouTube to pledge crackdowns on misinformation — a pledge that’s expanding to cover political misinformation as well.

Kate Starbird hopes it can be a wake-up call for regular people, too.

“Because this event is really about human health and we’re all kind of vulnerable, I think that there’s an opportunity here for people to see, like, ‘Oh yeah, we are all vulnerable,’” she said. “This is a huge system where we’re all sharing information together. And so we can become stronger collectively, and make for a healthier information space, by all just being a little bit more aware that we’re all vulnerable.”
