For many children visiting Disney World in Orlando, Florida, it was the trip of a lifetime. For the man who filmed them on a GoPro, it was something more nefarious: an opportunity to create child exploitation imagery.

The man, Justin Culmo, who was arrested in mid-2023, admitted to creating thousands of illegal images of children taken at the amusement park and at least one middle school, using a version of the AI model Stable Diffusion, according to federal agents who presented the case to a group of law enforcement officials in Australia earlier this month. Forbes obtained details of the presentation from a source close to the investigation.

Culmo has been indicted for a range of child exploitation crimes in Florida, including allegations he abused his two daughters, secretly filmed minors and distributed child sexual abuse material (CSAM) on the dark web. He has not been charged with AI CSAM production, which is a crime under U.S. law. At the time of publication, his lawyers had not responded to requests for comment. He entered a not guilty plea last year. A jury trial has been set for October.

“This is not just a gross violation of privacy, it’s a targeted attack on the safety of children in our communities,” said Jim Cole, a former Department of Homeland Security agent who tracked the defendant’s online activities during his 25 years as a child exploitation investigator. “This case starkly highlights the ruthless exploitation that AI can enable when wielded by someone with the intent to harm.”

The alleged criminal activity is perhaps the grimmest example yet of AI image manipulation, one that may have victimized many Disney World visitors. Disney, however, said it hadn’t been contacted by law enforcement about the alleged activities at its park. The U.S. Attorney’s Office for the Middle District of Florida declined to comment further on the case. The DHS, which led the investigation into Culmo, didn’t respond to requests for comment.

Cole told Forbes that global law enforcement agencies have been after Culmo since 2012, explaining he was “one of about 20 high priority targets” among global child exploitation detectives for more than a decade.

Using facial recognition, detectives pursuing Culmo were able to identify one of his victims and trace manipulated images of the child back to him. When they arrested him, they found more child abuse images on his devices; Culmo admitted to creating them, including images of his daughters, the complaint states.

The case is one of a growing number in which AI is used to transform photos of real children into realistic images of abuse. In August, the DOJ unsealed charges against army soldier Seth Herrera, accusing him of using generative AI tools to produce sexualized images of children. Earlier this year, Forbes reported that Wisconsin resident Steven Anderegg had been accused of using Stable Diffusion to produce CSAM from images of children solicited over Instagram. In July, the U.K.-based nonprofit Internet Watch Foundation (IWF) said it had detected over 3,500 AI CSAM images online this year.

Cole said that Stable Diffusion 1.5 has been the generative AI tool most commonly used by pedophiles, largely because it can be run on their own computers without storing illegal images on the servers of Stability AI or other AI providers, where they might be detected. “There are no built-in safeguards. It’s why offenders use it almost exclusively,” said Cole, now a founding partner at Onemi-Global Solutions, a consultancy assisting tech companies and nonprofit organizations with child protection.

In 2023, Stanford researchers found that an early version of Stable Diffusion had been trained, in part, on illicit images of minors. Stability AI told Forbes earlier this year it was not responsible for Stable Diffusion 1.5, which was originally released by AI tool developer Runway, and that it had invested in features to prevent misuse in more recent models since it acquired control of them. Runway hadn’t responded to requests for comment at the time of publication.

With Stable Diffusion 1.5 out in the wild, there’s little to be done to prevent its misuse. Stanford Internet Observatory’s chief technologist David Thiel told Forbes that its original developers should have better vetted their training data for explicit imagery. “There is nothing that Stability can do about this, other than not repeating the same mistakes,” he said.

As for how the government will go about prosecuting AI CSAM creators, a current federal child exploitation investigator, who was not authorized to speak on record, suggested that in cases where AI had been used to sexualize images of real children, charges would likely be equivalent to those in standard CSAM cases.

Illegal images entirely generated by AI might be charged under American obscenity law. “Basically, in those cases, it’s treated as if they’re very realistic drawings,” the investigator said. Animated child pornography has long been prosecutable in the U.S., and the Justice Department’s recent comments on charging Herrera indicate it plans to take a hard line on all illicit AI-created material. “Criminals considering the use of AI to perpetuate their crimes should stop and think twice — because the Department of Justice is prosecuting AI-enabled criminal conduct to the fullest extent of the law and will seek increased sentences wherever warranted,” said Deputy Attorney General Lisa Monaco.
