YouTube demonetizing videos where LGBTQ keywords are said

Originally published at: https://boingboing.net/2019/10/02/youtube-demonetizing-videos-wh.html

10 Likes

So it seems that one of my favourite channels has had all of its videos demonetized?
It seems to me that for some people and organizations demonetization isn’t a big deal, because they get money from elsewhere, but in other cases this could be a real problem.

2 Likes

Why, it’s almost like we shouldn’t let an opaque corporation decide what sort of speech is or isn’t welcome on a major platform for modern discourse using secret processes which are subject to change without notice!

18 Likes

They probably use more Q-codes than YouTube’s bot is comfortable with. :thinking:

7 Likes

The denial is ridiculous. How hard is it to say “there’s obviously a problem here, and we’ll do our best to fix it ASAP”? I’d have a lot more respect for “slow AIs” in general if they let their human servants know that they’re not infallible.

14 Likes

Platforms in 2010: Here are some fun ways to share your world!
Platforms in 2019: Scream into a box - maybe other people hear you and/or maybe we pay you for it.

14 Likes

The words “Minnesota” and “Missouri” are also on the demonetize list. Not sure why.

13 Likes

Probably because they’re the states that sound the most like Millennials. /s

4 Likes

Ehh, I hate to defend YouTube, but the framing of this article is really dishonest. The algorithm can’t tell context; it can only pick up on specific keywords. So how is it supposed to know the difference between a non-hate video using the word “queer” and a hate video using the word “queer”? You can train the algorithm all you want, but there are still going to be cases where homophobes and transphobes can bypass it just by dancing around the language. Remember, the goal here is to avoid advertisers being associated with hate content. How do you programmatically detect that without taking a “scorched earth” approach toward language? The list of demonetized words covers plenty of other innocent words as well, simply because they’re often used in hateful content.

The algorithm just reflects how our language is used. Blame the bigots for using “gay” as a slur, not the software for detecting that.
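As a minimal illustration of that context problem (the wordlist, function, and titles below are invented, not YouTube’s actual system), a bare keyword filter flags a supportive video and a hateful one identically:

```python
# Invented sketch: a bare keyword filter has no notion of context.
# Neither this wordlist nor the matching logic is YouTube's real system.
DEMONETIZE_KEYWORDS = {"queer", "gay", "transgender"}  # assumed example list

def flags_for_demonetization(title: str) -> bool:
    """Flag a video if any keyword appears, regardless of intent."""
    return any(word in DEMONETIZE_KEYWORDS for word in title.lower().split())

# A supportive video and a hateful one look identical to the filter:
print(flags_for_demonetization("My coming out story as a queer artist"))  # True
print(flags_for_demonetization("Why queer people are ruining TV"))        # True
```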

The answer to that is not to ask an algorithm to preemptively react to the word “queer” without any human involvement.

28 Likes

You’re vastly underestimating how hard it is to scale any human moderation system to the amount of content that is uploaded to YouTube. People like to complain about YouTube, but they forget just how many hours of video are uploaded to the site every minute. What you’re asking is an impossible task, especially considering YouTube already loses Google money.
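For a rough sense of that scale, here’s a back-of-envelope sketch using the widely cited figure of about 500 hours of video uploaded per minute as of 2019; the reviewer throughput is an assumption:

```python
# Back-of-envelope maths, assuming the widely cited figure of roughly
# 500 hours of video uploaded to YouTube every minute (circa 2019).
hours_uploaded_per_day = 500 * 60 * 24          # 720,000 hours/day

# Assume one moderator can screen 8 hours of footage per working day.
moderators_needed = hours_uploaded_per_day / 8  # 90,000 people
print(f"{moderators_needed:,.0f} moderators, just to watch everything once")
```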

So maybe they should reprogram the algorithm so that other things trigger it? It’s not like legit LGBT videos can’t/don’t/won’t use these words, and I refuse to believe that whoever set up the algorithm was not aware of that.

15 Likes

The list of words is not hand-programmed; again, that doesn’t scale. If I allow the word “gay” but ban the phrase “gay conversion therapy”, then the fundamentalist Christian homophobes will just rebrand to “gay relief therapy”. You HAVE to use machine learning for this problem because it’s too easy to bypass censorship. Just look at neo-Nazis using dog whistles such as putting parentheses around the names of Jewish people. Manually defining trigger-word lists does not work.

1 Like
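A sketch of that bypass problem: any hand-maintained phrase list (the list and function here are hypothetical) is defeated by trivial rewording:

```python
# Hypothetical hand-curated blocklist with exact-phrase matching.
BLOCKED_PHRASES = ["gay conversion therapy"]

def is_blocked(description: str) -> bool:
    text = description.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(is_blocked("Join our gay conversion therapy retreat"))  # True
print(is_blocked("Join our gay relief therapy retreat"))      # False -- rebranded, bypassed
```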

A lot of Nazis use the word Trump in videos, but I bet that’s not a word that’s going to automatically trigger a video’s features being shut down.

Advertisers were leaving YouTube because they were being associated with dodgy videos, not because they wanted to be associated with dodgy videos.

14 Likes

Poor YouTube. They’re so helpless to train their algorithms to make these distinctions. It’s not like they have tonnes of money and genius-level staff to throw at the problem.

These aren’t isolated outlier incidents, but something that even a child can reproduce. It’s just lazy training of a half-baked ML algorithm. Maybe making that the basis of your business model isn’t going to work out so well (see also Facebook, Twitter, Uber, etc., etc.)

If it is indeed impossible to do this on a purely programmatic basis, perhaps introduce some humans into the mix to evaluate edge cases and help train the ML algorithm. Around here they’re called professionally trained, adequately compensated moderators. But hey, those cost money.

Or maybe, y’know, just delete accounts (as opposed to individual videos) flagged by community members as promoting racist and sexist speech or inciting violence. Even at scale and allowing for bad actors abusing the system it’s not impossible, especially if partner organisations like the ADL and the SPLC are brought in to help.

15 Likes

Another iteration of “they should censor things I don’t like but not anything else.”

The problem is, if it’s a large global platform, this quickly degenerates to “censor anything that any significant number of people finds controversial, because you’re not special.”

The other obvious way out of it is to censor nothing, but that’s not, on the surface, compatible with keeping their paying advertisers. Maybe they could get around this by, rather than outright demonetizing videos, just classifying them by the ways they’re potentially controversial, and letting individual advertisers choose which classes of controversial videos they do and don’t want their ads to show up on. One advertiser might be OK with advertising on queer videos and not gun videos, and another might be the reverse. Even if they did something as obviously neutral as this, though, I’m sure plenty of people would find a way to be outraged about it.

3 Likes
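As a sketch of how that opt-in scheme could work (the category names, advertisers, and data shapes are all invented for illustration):

```python
# Sketch of per-advertiser content-category preferences; every name
# and data shape here is invented for illustration.
VIDEO_TAGS = {
    "coming-out-story": {"lgbtq"},
    "rifle-review": {"firearms"},
}

ADVERTISER_EXCLUSIONS = {
    "acme-outdoors": {"lgbtq"},      # fine with gun videos, not queer ones
    "rainbow-coffee": {"firearms"},  # the reverse
}

def eligible_advertisers(video: str) -> list[str]:
    """Advertisers whose exclusions don't overlap the video's tags."""
    tags = VIDEO_TAGS[video]
    return [name for name, excluded in ADVERTISER_EXCLUSIONS.items()
            if not (tags & excluded)]

print(eligible_advertisers("coming-out-story"))  # ['rainbow-coffee']
print(eligible_advertisers("rifle-review"))      # ['acme-outdoors']
```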

The linked spreadsheet has a link to a more detailed report:

Although it shows that there is a bias against LGBT words, along with some other things like get-rich-quick schemes and brand names, the methodology is very limited.
They assume an extremely simple model by measuring words in isolation, while recognizing that the algorithm probably takes word combinations into account.
This means that what they are measuring is not exactly each word’s acceptability, but mostly noise.

The report’s findings from substituting some LGBT words with a neutral word, like “happy” or “friend,” are better at showing that the words themselves are the reason for the demonetization.
But it still has the same problem as the first approach: the variation is too small, and we are trying to find simple patterns in a sea of noise.
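A small sketch of why that measurement is confounded, assuming (purely for illustration) a model that scores word combinations rather than single words:

```python
# Invented bigram scorer to illustrate the confound: substituting one
# word changes every bigram containing it, so a single-word swap can't
# isolate the word's own effect from its combinations.
def bigrams(title: str) -> set:
    words = title.lower().split()
    return set(zip(words, words[1:]))

original = bigrams("proud queer artist speaks out")
swapped = bigrams("proud happy artist speaks out")

# Four bigram features changed, not one word feature:
print(original ^ swapped)
# {('proud', 'queer'), ('queer', 'artist'), ('proud', 'happy'), ('happy', 'artist')}
```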