Originally published at: https://boingboing.net/2024/04/23/uk-court-bans-convicted-sex-offender-from-using-generative-ai.html
…
My understanding of child porn laws, at least in the USA, is that hand-drawn images are treated similarly to actual photographic images. Therefore, I’m at a loss as to why AI-generated images aren’t also treated the same, with similar penalties.
Someone in possession of thousands of actual CP images would be in jail for a long time.
Huh, well that taught me. Due to the way it’s pronounced (both pronunciations, sway or soo), I could have sworn it was spelled “psuedo” as in “psuedonym” not “pseudo” as in “pseudonym”, but I am apparently wrong.
Every day is a learning day!
And this story’s eerily reminiscent of Charlie Stross’ 2011 novel “Rule 34.”
The difference is children weren’t harmed making these images.
It is a weird area, though. Disturbing as it is to think about people enjoying these images, does access to fake images increase or reduce harm to children? I personally have no idea.
Current research doesn’t have any idea either. There’s a massive, ongoing debate among folks in the field of treating offenders about whether AI or animated images could be used as harm reduction or whether it fuels the underlying paraphilic patterns. In the end, the legal and the political aspects of the question are so deep that it’s difficult to even ask these questions without being villainized, let alone get a decent research project started. I’ve been on both sides of the debate myself and I’m still not sure where I land other than the acknowledgment that we need more knowledge than we have.
Yeah, this is very much one of those areas where the logic of the situation (e.g., there are no actual children involved in this) is very much in conflict with the emotional feel surrounding it.
If there are actual children involved in any way, I have absolutely no issue with (reasonable) punishment for the offender. A drawing? Yeah, I find it distasteful to say the least, but I’m far from convinced the nebulous chain of “what if” that could be drawn between ink on paper and a real kid is a concern.
An AI image…? The more realistic a depiction, the more I do see it as a problem, but I still have strong concerns that’s my emotional reaction overriding reason. I can see fair arguments that such images make it much harder for a brain to differentiate real from imaginary, and they undoubtedly make it harder to discern real images that definitely are harmful to real children.
The idea is that images are a “gateway drug” and will induce the viewer to engage in the depicted acts.
I think it’s worth asking whether, in their reckless aggregation of all publicly available data to train their generative AI models, companies like Google and Meta haven’t included CSAM. It’s worth remembering that companies like Meta use AI to auto-moderate Facebook and Instagram (which is why I got in trouble, with no recourse to human oversight, for posting a video of my four-year-old being silly because she wasn’t wearing a shirt. Something I, a normal adult, did not immediately see as sexual, because she’s a god damn toddler who doesn’t like wearing clothes all the time. I also have very stringent privacy settings and don’t have a huge social circle).
Ahem. Anyway, I ask because, if they have used this data, then the images generated are potentially drawing from material that was created by abusing children. And, just like those images, any consumption furthers the abuse of those victims. I don’t know. It’s a very complicated, very sensitive subject.
One of the very good questions for which we don’t have even a remotely scientific answer, at least when it comes to this topic.
See also: video games make people violent…