A beheading video was on YouTube for hours, raising questions about why it wasn’t taken down sooner

Signage is affixed at the YouTube Space offices in Los Angeles, Oct. 21, 2015. (AP Photo/Danny Moloshok, File)

A graphic video from a Pennsylvania man accused of beheading his father that circulated for hours on YouTube has put a spotlight yet again on gaps in social media companies’ ability to prevent horrific postings from spreading across the web.

Police said Wednesday that they charged Justin Mohn, 32, with first-degree murder and abusing a corpse after he beheaded his father, Michael, in their Bucks County home and publicized it in a 14-minute YouTube video that anyone, anywhere could see.

News of the incident — which drew comparisons to the beheading videos posted online by the Islamic State militants at the height of their prominence nearly a decade ago — came as the CEOs of Meta, TikTok and other social media companies were testifying in front of federal lawmakers frustrated by what they see as a lack of progress on child safety online. YouTube, which is owned by Google, did not attend the hearing despite its status as one of the most popular platforms among teens.

The disturbing video from Pennsylvania follows other horrific clips that have been broadcast on social media in recent years, including domestic mass shootings livestreamed from Louisville, Kentucky; Memphis, Tennessee; and Buffalo, New York — as well as attacks filmed abroad in Christchurch, New Zealand, and the German city of Halle.

Middletown Township Police Capt. Pete Feeney said the video in Pennsylvania was posted at about 10 p.m. Tuesday and remained online for about five hours — a lag that raises questions about whether social media platforms are delivering on moderation practices that may be needed more than ever amid wars in Gaza and Ukraine, and an extremely contentious presidential election in the U.S.

“It’s another example of the blatant failure of these companies to protect us,” said Alix Fraser, director of the Council for Responsible Social Media at the nonprofit advocacy organization Issue One. “We can’t trust them to grade their own homework.”

A spokesperson for YouTube said the company removed the video, deleted Mohn’s channel and was tracking and removing any re-uploads that might pop up. The video-sharing site says it uses a combination of artificial intelligence and human moderators to monitor its platform, but it did not respond to questions about how the video was caught or why it wasn’t removed sooner.

Major social media companies moderate content with the help of powerful automated systems, which can often catch prohibited content before a human can. But that technology can sometimes fall short when a video is violent and graphic in a way that is new or unusual, as it was in this case, said Brian Fishman, co-founder of the trust and safety technology startup Cinder.

That’s when human moderators are “really, really critical,” he said. “AI is improving, but it’s not there yet.”

Roughly 40 minutes after midnight Eastern time on Wednesday, the Global Internet Forum to Counter Terrorism, a group set up by tech companies to prevent these types of videos from spreading online, said it alerted its members about the video. GIFCT allows the platform with the original footage to submit a “hash” — a digital fingerprint corresponding to a video — and notifies nearly two dozen other member companies so they can restrict it from their platforms.
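The hash-sharing workflow the forum describes can be illustrated with a minimal sketch. The code below is hypothetical and greatly simplified: it uses a cryptographic digest purely to show the submit-and-match flow, whereas real content-matching systems rely on perceptual hashes that can survive re-encoding and cropping, and GIFCT's actual formats and APIs are not described in this article.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Return a hex digest standing in for a video 'hash'.

    Illustrative only: a cryptographic digest matches exact copies,
    while production systems use perceptual hashes that also match
    re-encoded or slightly altered versions of the same footage.
    """
    return hashlib.sha256(video_bytes).hexdigest()

# The platform that found the original footage submits its hash
# to a shared list available to member companies.
shared_hash_list: set[str] = set()
original = b"...bytes of the original video..."
shared_hash_list.add(fingerprint(original))

# Other member platforms check new uploads against that list
# and can block matches before they spread.
def should_block(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in shared_hash_list

print(should_block(original))         # an exact re-upload matches
print(should_block(b"other footage")) # unrelated content does not
```

Because only the fingerprint is shared, member companies can restrict known footage without the originating platform ever distributing the video itself.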

But by Wednesday morning, the video had already spread to X, where a graphic clip of Mohn holding his father’s head remained on the platform for at least seven hours and received 20,000 views. The company, formerly known as Twitter, did not respond to a request for comment.

Experts in radicalization say that social media and the internet have lowered the barrier to entry for people to explore extremist groups and ideologies, allowing any person who may be predisposed to violence to find a community that reinforces those ideas.

In the video posted after the killing, Mohn described his father as a 20-year federal employee, espoused a variety of conspiracy theories and ranted against the government.

Most social platforms have policies to remove violent and extremist content. But they can’t catch everything, and the emergence of many newer, less closely moderated sites has allowed more hateful ideas to fester unchecked, said Michael Jensen, senior researcher at the University of Maryland-based Consortium for the Study of Terrorism and Responses to Terrorism, or START.

Despite the obstacles, social media companies need to be more vigilant about regulating violent content, said Jacob Ware, a research fellow at the Council on Foreign Relations.

“The reality is that social media has become a front line in extremism and terrorism,” Ware said. “That’s going to require more serious and committed efforts to push back.”

Nora Benavidez, senior counsel at the media advocacy group Free Press, said the tech reforms she would like to see include more transparency about which kinds of employees are affected by layoffs, and more investment in trust and safety workers.

Google, which owns YouTube, this month laid off hundreds of employees working on its hardware, voice assistance and engineering teams. Last year, the company said it cut 12,000 workers “across Alphabet, product areas, functions, levels and regions,” without offering additional detail.

___

AP journalists Beatrice Dupuy and Mike Balsamo in New York, and Mike Catalini in Levittown, Pennsylvania, contributed to this report.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.
