
California governor signs bills to protect children from AI deepfake nudes

California Gov. Gavin Newsom signed two proposals to crack down on harmful sexual imagery of children created by artificial intelligence tools

By Trân Nguyễn
Sunday 29 September 2024 20:40 EDT


California Gov. Gavin Newsom signed a pair of proposals Sunday aiming to help shield minors from the increasingly prevalent misuse of artificial intelligence tools to generate harmful sexual imagery of children.

The measures are part of California's concerted efforts to ramp up regulations around the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.

Earlier this month, Newsom also signed off on some of the toughest laws in the country to tackle election deepfakes, though those laws are being challenged in court. California is widely seen as a potential leader in regulating the AI industry in the U.S.

The new laws, which received overwhelming bipartisan support, close a legal loophole around AI-generated imagery of child sexual abuse and make it clear child pornography is illegal even if it's AI-generated.

Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials are depicting a real person, supporters said. Under the new laws, such an offense would qualify as a felony.

“Child sexual abuse material must be illegal to create, possess, and distribute in California, whether the images are AI generated or of actual children," Democratic Assemblymember Marc Berman, who authored one of the bills, said in a statement. “AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again.”

Newsom earlier this month also signed two other bills to strengthen laws on revenge porn with the goal of protecting more women, teenage girls and others from sexual exploitation and harassment enabled by AI tools. It is now illegal under state law for an adult to create or share AI-generated sexually explicit deepfakes of a person without their consent. Social media platforms are also required to allow users to report such materials for removal.

But some of the laws don't go far enough, said Los Angeles County District Attorney George Gascón, whose office sponsored some of the proposals. Gascón said new penalties for sharing AI-generated revenge porn should have included those under 18, too. The measure was narrowed by state lawmakers last month to only apply to adults.

“There has to be consequences, you don't get a free pass because you're under 18,” Gascón said in a recent interview.

The laws come after San Francisco brought a first-in-the-nation lawsuit against more than a dozen websites that use AI tools with a promise to “undress any photo” uploaded to the website within seconds.

The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more accessible and easier to use. Researchers have been sounding the alarm these past two years on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters.

In March, a school district in Beverly Hills expelled five middle school students for creating and sharing fake nudes of their classmates.

The issue has prompted swift bipartisan action in nearly 30 states to help address the proliferation of AI-generated sexually abusive materials. Some of those laws protect victims of all ages, while others outlaw only materials depicting minors.

Newsom has touted California as an early adopter as well as regulator of AI technology, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.
