YouTube pushes children's videos to pedophiles through content recommendation engine

I'm totally on board with vigorously prosecuting actual kiddie porn. But I suspect that there are no more pedophiles per capita, and no more actual molestation per capita, than there were twenty, or fifty, or a hundred years ago. Back in the day it was kept quiet, while today it's the subject of scare headlines on 24/7/365 cable news. And we as a people scare pretty easy.

I was referring to the (apparently not satirical) suggestion that “any decent society should ban child actors”.

If you can stand the shock, check out this vile pedo bait that was made available to millions of Americans back in Great Grand-dad's day ----

I suspect there is considerably less than there was 100 years ago. There is less violent crime in general, and recently we have started going after abusers, even those in respected positions. But we don't want to live by 1900 standards for clean drinking water, maternal mortality, or life expectancy, so why would we want to live by them for child abuse?

We can debate the science on whether an individual going on YouTube to find images of children is causing any real harm. But YouTube shaping people's behaviour to consume more images of children is really, really fucked up.

Yes, in the sense that having access to a cornucopia of such images feeds this aberrant behaviour to the point of escalation. I once listened to an interview with a paedophile, one that tried to explore the paraphilia in a relatively non-judgmental manner. As I recall, the interview subject talked about the voyeuristic aspect of watching children who were not aware they were "on display for him" being part of the transgressive thrill, one which eventually led him to seek out actual child pornography once he became aware of its existence. That he never actually molested anyone directly didn't lessen his self-loathing and his wish that he didn't have this attraction, nor did it change the fact that child pornography victimises children.

YouTube records a lot of user activity in the name of tracking engagement: interests, time spent watching videos, repeat viewings, pauses, time of day watched, etc. If someone is watching videos of other people's kids to the degree that the algorithm is pushing it as a top interest, YouTube really should also flag the activity for further investigation, first on the basis of more granular activity data and then by sampling the content of the videos being viewed.

Instead, it’s “we see you really like watching videos of strangers’ non-professional kids in the bathtub and the swimming pool and such and spend most of your time here doing so, so here are some more!”
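
To make the tracking side of that concrete, here's a purely illustrative Python sketch of what those per-user signals might look like. The record shape, the field names and the top_interests() helper are my own inventions for the sake of the example, not anything YouTube has documented.

```python
# Purely illustrative: one possible shape for the per-user engagement
# signals described above. Field names are invented; YouTube's actual
# internal schema is not public.
from dataclasses import dataclass, field


@dataclass
class EngagementRecord:
    video_id: str
    watch_seconds: float        # time spent watching this video
    repeat_views: int           # how many times it was re-watched
    pause_count: int            # pauses during playback
    hour_of_day: int            # local hour the viewing happened (0-23)
    inferred_topics: list[str] = field(default_factory=list)


def top_interests(records: list[EngagementRecord], n: int = 3) -> list[str]:
    """Rank inferred topics by total watch time -- the kind of aggregation
    that would surface 'home videos of strangers' kids' as a top interest."""
    totals: dict[str, float] = {}
    for rec in records:
        for topic in rec.inferred_topics:
            totals[topic] = totals.get(topic, 0.0) + rec.watch_seconds
    return sorted(totals, key=totals.get, reverse=True)[:n]
```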

Of course not. But that doesn't change the fact that people post such public videos, sometimes thoughtlessly and sometimes deliberately because they're funny in the same way cute animal videos are.

Which one it is comes down to a matter of convenience for the corporate entity. YouTube and Facebook and Twitter describe themselves as common carriers when they want to escape responsibility for their users’ offensive speech and as publishers when they want to avoid anti-trust or utilities regulation. Activity in recent days by the government suggests that they’re eventually going to be classified as common carriers.

I think it's important to keep clear the difference between child pornography (which depicts children as the subject of sexual activity) and non-pornographic images of children which have the incidental effect of titillating pedophiles. I remember instances in the pre-digital camera age when dropping off a roll of family beach photos could result in the photo lab calling the police; that was outrageous.

Could you elaborate a little? What sort of investigation, and reported to whom? Could this be used by some YouTube tech to SWAT someone with whom s/he had a grudge?

It is important, and I did keep the difference clear. I pointed out that, based on the interview, access to the latter could escalate into seeking out the former.

I agree. This isn’t about keeping an eye on the people posting beach videos of the kids, though, but about watching out for people who take an unusually keen interest in watching such videos.

Investigation by YouTube, based on the data they collect. It could be a deeper algorithmic probe at first, resulting in an alteration of the recommendation algo for that user and perhaps followed by scrutiny by a properly trained human moderator.

Let's say that the algorithm detects a user watching a lot of low-traffic, home-movie-style videos of small kids posted by many different users, to the point that it sees this as a top-3 interest and recommends more of them. I think you'll agree that's not really normal behaviour for most people over age 13.

Taking that as a given, another algorithm might kick in to check other patterns: are many of the videos being re-watched again and again in a short period of time? Are some of them being paused in the middle with some frequency? Are the videos being watched late at night? Are the videos being uploaded by random, unconnected people? What kinds of comments are being posted under these videos? Are the comments from the same group of users? Basically, it would take other data YouTube collects and programmatically check whether the unusual behaviour is even more unusual.

At that point, maybe the main recommendation algorithm would just be told “stop recommending these kinds of videos to this user”, which would address the problem described in the article. Or maybe a human moderator would be required to make a final decision before throwing the switch.
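
For illustration only, here's one way that second pass and the final decision might be wired together, reusing the hypothetical EngagementRecord from my earlier sketch. The thresholds and the is_low_traffic_kid_video() classifier are invented placeholders, not real YouTube internals, and the sketch takes the simpler of the two options above: suppress the recommendations automatically, and only pull in a moderator when the deeper check trips.

```python
# Again purely illustrative, reusing the hypothetical EngagementRecord above.
# Thresholds and the is_low_traffic_kid_video() classifier are invented.


def looks_even_more_unusual(records, is_low_traffic_kid_video):
    """Check whether the already-unusual viewing shows the extra signals
    described above: heavy re-watching, frequent mid-video pausing and
    late-night viewing of these low-traffic videos."""
    flagged = [r for r in records if is_low_traffic_kid_video(r.video_id)]
    if not flagged:
        return False

    rewatch_rate = sum(r.repeat_views > 2 for r in flagged) / len(flagged)
    pause_rate = sum(r.pause_count > 3 for r in flagged) / len(flagged)
    late_night_rate = sum(r.hour_of_day >= 23 or r.hour_of_day < 5
                          for r in flagged) / len(flagged)

    # Arbitrary rule of thumb for the sketch: two or more strong signals.
    strong = [rewatch_rate > 0.5, pause_rate > 0.5, late_night_rate > 0.5]
    return sum(strong) >= 2


def handle_flagged_user(records, is_low_traffic_kid_video,
                        suppress_recommendations, escalate_to_moderator):
    """Stop recommending this class of video to the user either way, and
    involve a trained human moderator only if the deeper check also trips."""
    suppress_recommendations("low-traffic videos of young children")
    if looks_even_more_unusual(records, is_low_traffic_kid_video):
        escalate_to_moderator()
```

Either way the user simply stops getting those recommendations, which is the fix the article is asking for; the moderator step is only there for the cases that look like more than a recommendation glitch.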

The point is to cut off the flow of wank material to a paedophile instead of giving him more, more, more – as is the case now. And if it’s a false positive and he’s not a paedophile then he can probably still live without having YouTube recommend videos of random kids running through sprinklers.

Of course. Any system can be abused by a bad actor. It would take a lot of effort and expertise to build a false trail and hide the tampering, though, so while it's possible it's unlikely. Preventing those outlier cases is where training, hiring, a good reporting structure and good algo design come in.

I'm fine with YouTube adjusting its software so that looking at a video of young children doesn't serve up a buffet table of lots of other videos of young children. That seems a reasonable thing to do.

But having YouTube detecting and monitoring (and reporting?) patterns of use of what is clearly legal material in which no kids are harmed honestly gives me the heebie jeebies. It amounts to a private company doing its own investigations of pre-crime. And being falsely accused of this particular crime is among the very worst things that can happen to someone in today’s world.

I’m fine with that, too, but they have to monitor some additional patterns to make the adjustment, even if it’s just noting that low-traffic videos featuring young children should be excluded from a recommendation list.

Their algorithms already do a lot of detecting and monitoring in service of the engagement business model. It would be nice if they used some of that expertise in the service of societal good, like not serving up a constant stream of stroke material to paedophiles (and, while they’re at it, not serving up a constant stream of advocacy of fascism and bigotry to anyone).

No-one here, including Xeni, is advocating reporting activity to the authorities and Google doesn’t seem inclined to do that without a warrant either, so your concern there is beside the point.

This topic was automatically closed after 5 days. New replies are no longer allowed.