What you need to know about the ongoing fight to prevent AI-generated child porn

Deepfake and artificial intelligence-generated pornography have dominated headlines involving everyone from Taylor Swift to middle school students in a small town in Alabama. However, the dark underbelly of this technological menace extends even further.

An October 2023 report from the UK-based watchdog Internet Watch Foundation (IWF) exposes how artificial intelligence is being used to create child sexual abuse material (AI CSAM).

“We’re not talking about the harm it might do,” Dan Sexton, the watchdog group’s chief technology officer, told the Associated Press in October. “This is happening right now and it needs to be addressed right now.”

IWF’s report details how AI-generated child sexual abuse imagery has become a growing problem for law enforcement working to prosecute the people producing and distributing these images. As AI has advanced and become more accessible, so has the proliferation of deepfake images and AI porn. A deepfake is a manipulated video or other digital representation produced by sophisticated machine-learning techniques that yield seemingly realistic, but fabricated, images and sounds.

Most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM. The most convincing images are visually indistinguishable from real CSAM, even for trained analysts, IWF officials said in the report. 

While the criminal sexual images may not depict real events, these AI-generated images can still harm children, IWF said.

“Amid all the focus on realism, photorealism, and hyperrealism, and complex debates about legality – simply stated – this technology allows perpetrators to generate dozens, even hundreds of child sexual abuse images at the click of a button,” the IWF report said.

As countries and families grapple with this dark reality, worried advocates say it’s imperative to understand the bipartisan and global efforts underway to protect children and survivors from the pervasive threat of AI-generated exploitation through fake porn. Here’s what you need to know:

Is AI child porn ethical?

IWF says AI child porn is never ethical, despite arguments to the contrary in forum threads debating what is and isn’t child pornography. Some commenters in online legal forums have argued that AI-generated child porn is more ethical than child sexual abuse captured on camera because the act depicted in an AI-created image didn’t actually happen.

The legality of AI child porn was the subject of a May post in the Reddit forum r/legaladviceofftopic, where users can ask off-topic legal questions, including ones that are not safe for work (NSFW). One user questioned whether images created by combining images of multiple children would be legal.

“What if the AI takes images of two different children and combines them in a way that makes it look like a real 3rd child? Or images of 1000 different children?” u/aiaor asked.

Despite such questions, which get at the issue of creating fake images that resemble a real person, IWF is firm in its stance on all AI CSAM: It’s never ethical.

“It’s concerning to read some of the perpetrator discussions in forums where there appears to be excitement over the advancement of this technology. What’s more concerning for me, is the idea that this type of child sexual abuse content is, in some way, ethical. It is not,” IWF chief executive Susie Hargreaves said in the report.

IWF’s October report found AI CSAM has increased the potential for the re-victimization of known child sexual abuse victims, as well as for the victimization of famous children and children known to perpetrators. The IWF has found many examples of AI-generated images featuring known victims and famous children.

The report didn’t name specific famous children whose faked sexual abuse images were analyzed, but IWF said AI CSAM of famous children is especially troubling because there are likely more images of and data about famous children available for the AI to use in creating new images.

“The IWF has for a long time seen many examples of ‘shallowfake’ and deepfake images featuring these well-known individuals, now the IWF is seeing entirely AI-generated images produced using fine-tuned models for these individuals,” the report said.

Republicans and Democrats are united on this

In the US and in countries around the world, officials are signing laws that address AI-generated sexual images. Major AI developers have also issued statements about their technology being used to create sexual scenarios or perform sexual services.

U.S. lawmakers from both sides of the political aisle are looking to hold perpetrators accountable through legislation that would criminalize the production of these fake images and allow victims to sue for damages. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act was introduced by Senate Majority Whip Dick Durbin (D-IL), joined by Sens. Lindsey Graham (R-SC), Amy Klobuchar (D-MN), and Josh Hawley (R-MO).

“We greatly appreciate Senator Durbin and Senator Graham for working with us to introduce the DEFIANCE Act to address and prevent non-consensual deepfake pornography. Currently, there are no laws addressing deepfake pornography. Victims are unable to get justice and the problem is increasing due to a lack of consequences,” said Omny Miranda Martone, founder and CEO of the Sexual Violence Prevention Association (SVPA), which worked with lawmakers to draft the DEFIANCE Act.

The bill builds on a provision in the Violence Against Women Act Reauthorization Act of 2022, which added a similar right of action for real, non-faked explicit images.

The DEFIANCE Act would add a civil right of action for intimate “digital forgeries” depicting an identifiable person without their consent. The provision would let victims collect financial damages from anyone who “knowingly produced or possessed” the image with the intent to spread it, the bill reads.

“There are some issues that are just not partisan issues and this would be a perfect example,” Indiana state Rep. Sharon Negele (R) told Bloomberg last week. Negele filed a bill in the Indiana legislature that would make it a crime to create or share intimate images generated using artificial intelligence without the victim’s knowledge.

Her bill is one of more than a dozen state-level bills addressing AI CSAM in the 2024 legislative session. More than half of all US states have existing laws addressing CSAM. Federal laws criminalizing the content are also on the books and carry penalties of up to 20 years in prison.

This is an issue around the world

Countries around the world are taking legal action on AI child porn.

In September, a South Korean court sentenced a man to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images, according to South Korean court records.

Australia has already outlawed AI child porn and has created new regulations aimed at making the companies behind products like Google, Firefox and DuckDuckGo more vigilant in their efforts to prevent the spread of CSAM.

“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Julie Inman Grant, Australia’s eSafety Commissioner, told Reuters in September. “When the biggest players in the industry announced they would integrate generative AI into their search functions we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go.”

Before AI, there were legal questions about cartoons and non-realistic pornographic images

Before AI porn was in the headlines, there were cases involving cartoon images of child sexual abuse. One comic book collector, Christopher Handley, was sentenced in Iowa in 2010 after he pleaded guilty to charges of possessing “obscene visual representations of the sexual abuse of children.”

The 40-year-old was charged under the 2003 PROTECT Act, which outlaws cartoons, drawings, sculptures or paintings that depict minors engaging in sexually explicit conduct and lack “serious literary, artistic, political, or scientific value.”

Handley was the first person in the nation to be convicted under that law for possessing cartoon art.

What’s next?

Both federal and state legislation will make its way through the vetting process this legislative session, increasing the number and scope of laws on the books that address AI porn.

In September, each state’s top justice department official, the attorney general, signed a letter to Congress asking it to examine the growing risk of AI technology and to create legislation that explicitly bans AI CSAM.

“We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act,” the attorneys general said in the letter.