US prosecutors see a growing threat from AI-generated child sex abuse images


  • Defendants are accused of using AI to create explicit images of children
  • AI-generated images can undermine efforts to help victims
  • Cases can raise new legal questions
  • Advocacy groups have secured pledges from AI companies to guard against abusive images

WASHINGTON, Oct 17 (Reuters) – U.S. federal prosecutors are stepping up their pursuit of suspects who use artificial intelligence to manipulate or create images of child sex abuse, as law enforcement agencies fear the technology could unleash a flood of illegal material.

The U.S. Department of Justice filed two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.

“There’s more to come,” said James Silver, chief of the Justice Department’s Computer Crime and Intellectual Property Section, predicting that similar cases will follow.

“What we’re concerned about is the normalization of this,” Silver said in an interview. “AI makes it easier to generate these types of images, and the more that are out there, the more normalized this becomes. That’s something we really want to stymie and get in front of.”

The rise of generative AI has raised concerns at the Justice Department that the rapidly advancing technology will be used to launch cyberattacks, increase the sophistication of cryptocurrency scammers and undermine election security.

Child abuse cases represent some of the first attempts by prosecutors to apply existing U.S. laws to alleged crimes involving AI, and even successful convictions could face appeals as courts weigh how the new technology may alter the legal landscape around child exploitation.

Prosecutors and child safety advocates say generative AI systems can allow offenders to morph and sexualize ordinary photos of children, and they warn that a proliferation of AI-produced material will make it harder for law enforcement to identify and locate real victims of abuse.

The National Center for Missing and Exploited Children, a nonprofit that collects tips on online child exploitation, receives an average of about 450 reports related to generative AI each month, said Yiota Souras, the group’s chief legal officer.

That’s a fraction of the average of three million monthly reports of overall online child exploitation the group received last year.

UNTESTED GROUND

Cases involving AI-generated images of sexual abuse are likely to break new legal ground, especially when no identifiable child is depicted.

Silver said prosecutors can charge obscenity crimes in such cases if child pornography laws don’t apply.

Prosecutors in May charged Steven Anderegg, a Wisconsin software engineer, with crimes including transferring obscene material. Anderegg is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct, and of sharing some of those images with a 15-year-old boy, according to court documents.

Anderegg has pleaded not guilty and is seeking to dismiss the charges, saying they violate his rights under the U.S. Constitution, court documents show.

He has been released pending trial. His lawyer could not be reached for comment.

Stability AI, the maker of Stable Diffusion, said the case involved a version of the AI model released before the company took over development of Stable Diffusion. The company said it has made investments to prevent “the misuse of AI to produce harmful content.”

Federal prosecutors have also charged a U.S. Army soldier with child pornography offenses, in part for allegedly using AI chatbots to morph innocent photos of children he knew into violent sexual abuse imagery, court documents show.

The defendant, Seth Herrera, pleaded not guilty and has been ordered detained pending trial. Herrera’s attorney did not respond to a request for comment.

Legal experts say that while sexually explicit images of real children are covered by child pornography laws, the landscape around obscenity and purely AI-generated images is less clear.

The U.S. Supreme Court in 2002 struck down as unconstitutional a federal law that criminalized any image, including computer-generated images, appearing to depict minors engaged in sexual activity.

“These prosecutions will be difficult if the government relies solely on moral repulsion to prevail,” said Jane Bambauer, a law professor at the University of Florida who studies AI and its impact on privacy and law enforcement.

Federal prosecutors in recent years have secured convictions against defendants who had sexually explicit images of children that also qualified as obscene under the law.

Advocates are also focused on preventing AI systems from generating offensive material.

Two nonprofits, Thorn and All Tech Is Human, in April secured commitments from some of the biggest players in AI, including Alphabet’s Google (GOOGL.O), Amazon.com (AMZN.O), Facebook and Instagram parent Meta Platforms (META.O), OpenAI and Stability AI, to avoid training their models on child abuse images and to monitor their platforms to prevent the creation and spread of such material.

“I don’t want to portray this as a future problem, because it isn’t. It’s happening now,” said Rebecca Portnoff, director of data science at Thorn.

“As for whether it’s a future problem that will spiral completely out of control, I’m still hopeful that we can act within this window of opportunity to prevent that.”


Reporting by Andrew Goudsward; Editing by Scott Malone and Bill Berkrot

Our Standards: Thomson Reuters Trust Principles.
