New York (CNN) —

“All we have to have is just a human form to be a victim.” That’s how lawyer Carrie Goldberg describes the risk of deepfake porn in the age of artificial intelligence.

While revenge porn — or the nonconsensual sharing of sexual images — has been around for nearly as long as the internet, the proliferation of AI tools means that anyone can be targeted by this form of harassment, even if they’ve never taken or sent a nude photo. Artificial intelligence tools can now superimpose a person’s face onto a nude body, or manipulate existing photos to make it look as if a person is not wearing clothes.


In the past year, targets of AI-generated, nonconsensual pornographic images have included prominent women such as Rep. Alexandria Ocasio-Cortez, as well as ordinary people.

For someone discovering that they, or their child, have been made the subject of deepfake porn, the experience is typically scary and overwhelming, said Goldberg, who runs the New York-based firm C.A. Goldberg Law representing victims of sex crimes and online harassment. “Especially if they’re young and they don’t know how to cope and the internet is this big, huge, nebulous place,” she said.

But there are steps that targets of this form of harassment can take to protect themselves and places to turn for help, Goldberg told me in an interview on CNN’s new tech podcast, Terms of Service with Clare Duffy.

The show aims to demystify the new and emerging technologies that listeners encounter in their daily lives. (You can listen to the full conversation with Goldberg there.)

Goldberg said that for people targeted by AI-generated sexual images, the first step — however counterintuitive — should be to screenshot them.

“The knee-jerk reaction is to get this off the internet as soon as possible,” Goldberg said. “But if you want to be able to have the option of reporting it criminally, you need the evidence.”

Next, they can seek out the removal-request forms that major platforms provide for explicit images. Some nonprofit organizations can also help facilitate the removal of such images across multiple platforms at once, although not all sites cooperate with the groups.

A bipartisan group of senators in August called on nearly a dozen tech firms, including X and Discord, to join those programs.

The fight to address nonconsensual explicit images and deepfakes has received rare bipartisan support. A group of people who had been affected by AI-generated porn testified at a hearing on Capitol Hill, where Republican Sen. Ted Cruz introduced a bill — supported by Democratic Sen. Amy Klobuchar and others — that would make it a crime to publish such images and require social media platforms to remove them upon notice from victims.

But, for now, victims are left to navigate a patchwork of state laws. In some places, there are no criminal laws preventing the creation or sharing of explicit deepfakes of adults. (AI-generated sexual images of children typically fall under child sexual abuse material laws.)

“My proactive advice is really to the would-be offenders which is just, like, don’t be a total scum of the earth and try to steal a person’s image and use it for humiliation,” Goldberg said. “There’s not much that victims can do to prevent this … We can never be fully safe in a digital society, but it’s kind of up to one another to not be total a**holes.”