Nearly 30 percent of anti-Semitic online attacks are bots

Stars of David memorialize Jewish congregants killed at a synagogue in Pittsburgh on Saturday. (Kyodo News via Getty Images)

Anti-Semitism and hate crimes have surged in the U.S. over the last couple of years, and almost 30 percent of accounts repeatedly tweeting against Jews on Twitter appear to be bots, according to a recently released study from the Anti-Defamation League.

Researchers at the ADL analyzed 7.5 million Twitter messages between Aug. 31 and Sept. 17. Billionaire George Soros, who is Jewish, was a leading subject of anti-Semitic tweets, according to the researchers.

The study reports that while human users still account for the majority of derogatory Twitter traffic in the lead-up to the mid-term elections, “political bots—which explicitly focus on political communication online—are playing a significant role in artificially amplifying derogatory content over Twitter about Jewish people.”

The individuals behind the automated bots remain a mystery. “The facelessness of the people who are targeting the Jewish population is a major problem,” said Sam Woolley, an author of the study and director of the Digital Intelligence Lab at the Institute for the Future. “It’s something that has to be addressed, because it makes it impossible to either stop or report.”

The ADL study was released a day before a mass shooting at a Pittsburgh synagogue killed 11 people.

Katie Joseff, an author of the study and research manager at the Digital Intelligence Lab, says the ease of creating bot accounts means anyone could be behind them.

“It wouldn’t be at all out of the realm of question for Nazis or anyone on the alt-right to be able to use bot accounts,” Joseff said. “They are very accessible, and people who just have normal social media followings, or even high schoolers, know how to buy fake accounts.”

Woolley says more work needs to go into finding who may be behind automated accounts tweeting anti-Semitic slurs. “We’ve known in the past that Russia has used the façade of things like white nationalism, or blue lives matter or black lives matter, or other groups, in order to infiltrate groups,” Woolley said. “So one of the things that the companies and intelligence officials…need to do is figure out whether or not the hate speech that is generated by these seeming white nationalist accounts, is actually being generated by foreign entities.”

Twitter’s Head of Site Integrity, Yoel Roth, released a statement on the reported surge in anti-Semitic attacks via the social media platform:

“Since 2016, we’ve been learning and refining our approach to new threats and challenges. We’ve expanded our policies, built our internal tooling, and tightened our enforcement against coordinated platform manipulation, including bot networks — regardless of the origin.

We’ve also shared all the content connected to potential state-backed operations on Twitter with researchers to help inform the public and to promote independent analysis. Our goal is to try and stay one step ahead in the face of new challenges going forward. Protecting the public conversation is our core mission.”

Woolley and Joseff say that to combat disinformation and abuse by bots, tech companies should label bots as automated accounts and improve their chain of command, so that when a user reports extreme trolling or online threats, the companies can respond quickly to protect the user.

But the push for increased scale and revenue discourages social media platforms from combatting automated bots, Woolley says.

“Companies are incentivized to actually continually grow their platforms, and to continually grow their user base. In fact their stocks are quite intimately tied to the amount of growth that the site experiences or the amount of shrinkage of the amount of monthly users,” Woolley says. “This is a real reason why we can’t rely upon companies alone to mitigate the usage of things like bots….We have to have some kind of regulation from government. We have to have civil society action. We have to have individual user-based action.”

Automated accounts act as a force multiplier, allowing users who disseminate hate speech online to be “small but mighty.” But Woolley and Joseff say harassment by real people remains a tougher problem for technology companies to fight. “That’s a case of free speech,” Joseff says.

According to the ADL, 3 million Twitter users posted or re-posted at least 4.2 million anti-Semitic tweets in English over a 12-month period ending Jan. 28. For many Jewish people, the study reports, “especially those in the public eye, social media platforms have become inhospitable for both general communication and as forums for discussing public life.”

The study’s authors interviewed several Jewish Americans about their experiences with online abuse, including a reporter at a major U.S. news outlet:

“Each time one of his stories got traction online—or featured details on topics such as white nationalism, Donald Trump, or libertarianism—he was sent photoshopped images of his face in a gas chamber or was threatened with the public release of his address and contact details.”

The ADL study says, “It is time for technology companies to design their products for democracy. This does not just mean a facile attempt to protect or prioritize free speech. It means protecting minority ethnic and religious groups,” which the study says are disproportionately targets of online abuse.

“Social media companies cannot escape responsibility by claiming not to be ‘arbiters of truth’,” the study says, concluding it is time for companies “to inject ethics and—more strongly—human rights, into the heart of product design.”

Copyright 2018 NPR. To see more, visit http://www.npr.org/.
