What state laws protect kids against AI-generated deepfakes?

(NewsNation) — Incidents of explicit, AI-generated deepfakes of children have rapidly cropped up in recent years, prompting a push in several states to pass laws protecting against them. 

Lawmakers in over a dozen states have passed a flurry of legislation to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of kids. 

Deepfakes are video, photo or audio recordings that appear to be real but have been manipulated with artificial intelligence. A deepfake can depict someone appearing to say or do something that they, in fact, never said or did.

Most of these laws target sexually explicit or pornographic images and videos, with some expanding existing nonconsensual intimate image laws, according to the National Conference of State Legislatures. 

States with laws protecting children against deepfakes 

Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by The National Center for Missing and Exploited Children.

Fourteen states have laws in effect that include specific references to children in order to protect them against deepfakes and other AI-generated content, according to an analysis from MultiState Associates shared with NewsNation.

These include Utah, Idaho, Georgia, Oklahoma and Tennessee.

Another five states have laws that will take effect by the beginning of 2025.

In September, California closed a legal loophole around AI-generated imagery of child sexual abuse and made it clear child pornography is illegal even if it’s AI-generated.

The previous law did not allow district attorneys to go after people who possessed or distributed AI-generated child sexual abuse images if prosecutors could not prove the materials depicted a real person. Under the new law, such an offense qualifies as a felony.

South Dakota updated its laws against child sexual abuse images in July to include those created by artificial intelligence. The law includes mandatory minimum prison sentences of one, five and 10 years for first-time offenses of possession, distribution and manufacturing, respectively.

There are no federal laws currently addressing nonconsensual deepfake pornography, but there is proposed legislation to address the issue for adults.

The Disrupt Explicit Forged Images and Non-Consensual Edits Act, or Defiance Act, would allow victims of deepfake pornography to sue as long as they could prove the deepfakes had been made without their consent.

The Take It Down Act would require platforms to remove both revenge porn and nonconsensual deepfake porn.

However, Justice Department officials say they already have the tools under federal law to go after offenders for such imagery.

A federal law signed in 2003 bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” The Justice Department has used that law to charge cartoon imagery of child sexual abuse and notes there’s no requirement “that the minor depicted actually exist.”

Will deepfake laws work to protect children? 

While laws are an important tool for criminal prosecutions, they will likely not curb the behavior, especially when it is other students who are creating the deepfakes, said Justin Patchin, a criminal justice professor at the University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center. 

“Teens are not deterred by the threat of formal punishment. They’re deterred more by informal punishment, like what their friends would think, what their parents might do or how their teacher might feel about them,” he said.  

He adds that laws are “a necessary, but not a sufficient response,” to nonconsensual explicit deepfakes.

While technology is outpacing legislation and likely will continue to, many argue that laws are necessary to help law enforcement and prosecutors go after perpetrators. 

“We’ve got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who leads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you fundamentally are wrong. And it’s only a matter of time before somebody holds you accountable.”

“These laws exist. They will be used. We have the will. We have the resources,” Grocki also said.

After the California legislation broadened the law's reach, Ventura County District Attorney Erik Nasarenko said it cleared the way for his office to prosecute eight cases involving AI-generated content between last December and mid-September. 

Patchin said it is more important to focus on education and awareness of the dangers of deepfakes, both at schools and by parents.

The Associated Press contributed to this story.