AI ‘learns’ to generate objectionable images from child sexual abuse material – study


AI-based image generators are “learning” from explicit photos of children, researchers said.

This is stated in a new report by the Stanford Internet Observatory, according to the Associated Press.

The training dataset behind popular image generators contains thousands of images of child sexual abuse.

These images have made it easier for AI systems to generate realistic explicit imagery of fictional children and to “undress” real minors in photos.

The researchers urged companies to address this flaw in the technology.


After numerous AI-powered tools appeared on the Internet, human rights advocates and ordinary users pointed to one major flaw: some of these tools generate objectionable images of children or describe scenes of sexual abuse when prompted.

Until recently, it was believed that AI produced such imagery simply by combining what it learned from adult pornography with harmless photos of children. It turns out, however, that AI also “learns” directly from child sexual abuse material.

The Stanford Internet Observatory found more than 3,200 images suspected of depicting child sexual abuse in the giant LAION database, which is used to train leading AI image generators such as Stable Diffusion.

A Stanford University monitoring group worked with the Canadian Centre for Child Protection and other charities to identify the illegal material and reported the original photo links to the police. Approximately one thousand of the images found have already been externally verified.

The flagged images represent only a fraction of LAION’s database of roughly 5.8 billion images. However, the Stanford group believes they may be contributing to the generation of harmful output by AI tools.

According to David Thiel, chief technologist at the Stanford Internet Observatory, the problem is not easy to solve, in part because many generative AI projects are “rushed to market” and made widely available due to fierce competition in the field.

“Taking the entire Internet and turning it into a dataset to train models on is something that should be limited to an exploratory operation. It’s not something that should be in the public domain without a lot more scrutiny,” Thiel said.

LAION responded to the findings and told the AP that it is temporarily taking down its datasets. The organization said it has a “zero-tolerance policy for illegal content” and is removing the data so that it does not pose a risk.

As a reminder, attackers in Spain previously used AI to generate photos of naked underage girls.

Also in Britain, a scheme to sell AI-generated content depicting abuse of children was exposed.

Read also: “I’m 18” is no longer enough – Britain will use AI to verify the age of porn viewers
