AI and sexual harassment

The majority of the victims of non-consensual sexual imagery are women.

A case out of the Radnor School District, where a student allegedly used AI to generate nude images of classmates, is raising urgent questions about sexual harassment in the age of artificial intelligence.

The school community there is actively grappling with how to proceed in a world where guardrails haven’t kept pace with technological advances. This tension was on full display in January when Elon Musk’s Grok chatbot created and shared nearly 2 million deepfake sexualized images of women in what critics called an “industrial-scale abuse of women and girls.”

On this episode of Studio 2, we’ll take a closer look at the growing issue of nonconsensual AI-generated sexual imagery and what it means for victims. 

If a deepfake nude portraying you appears online, what legal protections do you actually have? What responsibility does big tech have in preventing this kind of abuse? What can parents do to keep their children safe? And what legislative solutions should lawmakers pursue to meet the moment?

Guests:

  • Amanda Levendowski, law professor and founding director of the Intellectual Property and Information Policy Clinic at Georgetown University
  • Samantha Cole, journalist and co-founder of 404 Media
