Yer fond of me dogfood, ain’t ye? I seen it - yer fond of me dogfood! Say it! Say it. Say it!
Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
The model hasn’t necessarily been trained on CSAM; rather, you can create things called LoRAs, which influence the image output of a model so that it’s better at producing very specific content it may have struggled with before. For example, I recently downloaded some that help Stable Diffusion create better images of battleships from Warhammer 40k. My guess is that criminals are creating their own versions for CSAM etc.
I mean, have you encountered these Real Gamers™ before? They go out of their way to be enraged at this stuff. It’s all deliberate.
ONE CANNOT CRITICISE A HERO OF THE PROLETARIAT, COMRADE