# Elsagate: a subreddit that's sleuthing out the weird videos of YouTube Kids

Originally published at: https://boingboing.net/2017/12/01/pattern-recognition.html


Here’s a video from April 2016 talking about an early version of this super-creepy algorithm-exploiting kids video trend.

I’m sure Hubertus Bigend will get Cayce Pollard right on this.


Kathy, I was thinking exactly the same thing!!

I’m also skeptical of the “big bad AI” theory, and think it’s sweatshop employees cutting and pasting stuff for ad revenue, and the creators have as little understanding about “appropriateness” as a guy who spams dick pics at any woman he finds on the internet.


Thank you. Anyone who would be capable of creating a novel AI with the ability to mass-produce this kind of content would be able to make a LOT of money elsewhere.

It’s like saying “Well, there have been a lot of muggings lately. Obviously it’s a rogue AI developed by a mastermind creating convincing human-like cyborgs to steal people’s wallets”.

Just… No.


Personally, posting some of this stuff on kids' channels is a level of baseness I can't even understand.


What really surprises me isn’t that people are producing inappropriate click-bait for kids; that was bound to happen sooner or later, in one incarnation or another, because kids are huge revenue producers for these channels.

No, what surprises me is that YT was caught with their pants down on this issue. Even with only a few people manually reviewing kids’ videos for appropriateness, many of these had a huge number of views! It should have been caught, FFS! And you can’t just sit back with young kids’ content like you can with some of the older kids’ or adult content (e.g. the reactionary MRA channels, the white supremacists targeting teens, etc.), because the backlash will be gigantic - and it was!

Seriously, I understand that they use algorithms for detecting a lot of types of inappropriate content, but you HAVE to use a lot of humans to train those algos, and because SEO-type people are always adapting their content to defeat your algos, you have to keep training them with human input, rinse, repeat ad infinitum.

So how did high-view-count videos aimed at kids slip through the cracks for so long?!




Well this is pretty much required now…

If they are, it’s their own fault - they’re one of the richest companies in the entire damned world, there’s no excuse for not having enough trainers to tag & flag high-view-count videos (and a random selection of new videos) to stay ahead of the SEO scumbags.


I prefer to think it’s a bizarre side-effect of Alphabet pointing Alpha Go at Youtube as a training source. “I unnerstan teh huminz. I make video 2. Is goode video.”

I have no reason to believe that other than it would be awesomely weird.


Even the “legitimate” videos on YouTube are terrible for kids. It’s all advertising posing as play with no TV-style regulation. Just endless hours of commercials. https://m.youtube.com/watch?v=cgT4MBj2sEE


It’s a trait of the nu economy. Facebook, Valve, Google, Twitter… all companies that refuse to hire actual human beings to keep their algorithms in check. So you get stupid shit happening like coming out videos being flagged for inappropriate content while kids getting tied up gets a free pass.


It’s weird how so many of these videos were deleted after Buzzfeed asked about them! https://www.buzzfeed.com/charliewarzel/youtube-is-addressing-its-massive-child-exploitation-problem?utm_term=.ejvvOG4g2#.tv7502rMO


This! So much this! There is nothing protecting children on the internet and sadly, many/most parents just set their kids free into the environment without any supervision. We really need to start teaching parents how to handle this and helping them to understand the impact it has on their children.

I let my kid watch YouTube Kids with supervision and ended up taking it away. Through experimentation, I’ve decided that we have to approach this the same way we approach letting our kids loose in the real world: we start with controlled environments and slowly add more freedom.

I made a nice walled garden for my kid. He’s 3.5, so right now we use kidzone on my phone. He can watch PBS Kids videos or use any apps I’ve loaded into his profile. On his Fire tablet we use FreeTime to limit him to content we’ve selected. I don’t give him access to Prime videos because they’ve started bringing in a ton of the YouTube crap and he ends up back on that. Instead we loaded VLC and our own digital files of shows and movies that are appropriate. He does use Amazon Prime on my husband’s phone, but only when he is literally sitting next to us, so we are completely supervising his content.


That perhaps understates it. The tech sector is really, really keen on automating absolutely everything possible right now. If you make your money with a never-ending firehose of media, having to pay people to personally vet that media is a huge money sink that gets bigger the more successful you are; they want to eliminate it entirely and replace it with algorithms, and it’s quicker to put semi-functional algorithms into production and wait for them to break than it is to test them exhaustively before deployment. It’s exactly the same thing that happened with the Facebook fake-news thing last year.


These videos - this being a fairly typical example of the dodgier kind - are hardly made by algorithms, as the live-action footage shows.

But they show the exact same themes and the exact same kind of spammy behaviour.
Here’s something I wrote to a mailing list about these videos after reading Bridle’s article and watching too many more of these videos than I would ever wish for:

These videos’ modus operandi is that a parent hands a toddler a tablet
or a cell phone and finds them a YouTube video. If YouTube is on
autoplay, a “similar” video is played after that, and so forth. So, if
many parents hand their toddlers an Internet device, videos that are
“similar” to other videos that parents would find for their toddlers
will receive many views. Like, if the video comes up when searching for
“nursery rhymes learning video”. If you haven’t experienced this before,
I suggest you try to enter that search and see what comes up. Click on
one of the videos and look at the suggestions.

Now click on one of those and look at the suggestions. It seems clear
that a toddler left alone with a tablet is more or less bound to be
sucked in by a feedback loop of eerily similar videos, meticulously
crafted to imitate thousands of other videos that happen to have
received astronomical numbers of views, often in the millions.

This video:

is typical in many ways - it’s simple, strange, eerily creepy, and
algorithmically situated to come up among the “related videos” when a
2- to 3-year-old is binge-watching YouTube. It has 15 MILLION views.

“Wrong heads” and “learn colors” are two of the most popular tropes.
Right now, there are three million(!) such videos on YouTube. And this video
alone has more than six million views.

Another trope, which we could call “finger family learning video”, yields
7 million hits on YouTube. The top search result has 82 MILLION views.

So how did 7 MILLION videos of one kind, three MILLION of another, and
MILLIONS more of other tropes (e.g. “bad baby learns colors”, with
about 8 million) all come into existence in the last few years?

What the actual fuck is going on here? Who makes them, and how does it work?

Of course, this has been under some discussion since James Bridle’s
article (linked above), among other things under the topic #elsagate:

While many of the videos already discussed are sickeningly repetitive,
with titles obviously designed to attract algorithmic matches rather
than human eyes (“Wrong Dress Frozen Elsa Sofia Talking Angela Hulk
Finger Family Learn Colors For Kids”), others are decidedly
inappropriate and not a little creepy, including urination, defecation,
pregnancy and strong sexual allusions, all in material very obviously
targeted at toddlers without adult supervision.

And that is part of Bridle’s and other people’s angle: Can binge-watching these videos - surprisingly popular, and apparently numbering in the hundreds of millions - be harmful for children? What effect can this algorithmic tour de force have on toddlers left to themselves hour after hour?

However, I also have another suggestion - that this video industry and its objective, being watched by toddlers, display a very strange feedback loop that tells us something about toddlers’ minds. In a way, this video industry is a kind of AI investigating the ways of thinking that appeal to very small children: what they find funny, what can preoccupy them … looking at colors, listening to recognizable nursery rhymes, seeing favorite cartoon characters do funny stuff, watching unboxings of fancy toys, on the innocent side - but also tricking people, not being able to go to the toilet, peeing in inappropriate places, etc.

So in a way, this video artist/toddler feedback loop is an AI investigation of the psychology of very small children, where the AI is implemented not just by computers, but by thousands (apparently) of video production houses doing both cartoons and live action, all trying to find the exact sweet spot where the toddlers just want to keep watching - by pressing the very buttons that appeal to children that way. As a feedback loop always changes what it reproduces, this also appears to be an example of what the Brazilian psychologist Fabiane M. Borges (disclaimer: a very good friend of mine) calls “hacking the unconscious” - the videos are hacking the unconscious, to use the psychoanalytical term, of millions of very small children, thus displaying what goes on in them - but also changing them and being changed by them at the same time, all mediated by YouTube’s algorithms (though possibly not for much longer; YouTube appears to be cracking down, and this phenomenon may soon be history).

This may be only a much more blatant and in-your-face version of the feedback loop we have seen between adults and television for many years - but if so, I find it quite eye-opening. If anyone has the stomach for a further analysis (I’d generally recommend people not spend too much time on these videos, if they value their peace of mind), I’m open to hearing other takes.


From certain indicators in the first video, it appears the production is sourced in some Southeast or East Asian country.
Disregarding the actors themselves, I usually look for clues like electrical socket patterns, appliances and wall decor, even building construction. There are also two juicy clues when the camera leaves the immediate area of the house: the drone shot of the surrounding neighborhood, and the beach shot showing a busy mixed-use shipping channel on the horizon and the lush, green, hilly landscape in the distance.
In this particular video, you have very stylish architectural design but, at the same time, very cheap portable appliances and easy decor (notice the “fashion” wall art that appears to be lifted straight out of a catalog mailer).
Having never been to Asia, I hesitate to hazard a guess as to which country or territory this could be. Hong Kong?