Last year, Google reported revenues of 110.8 billion dollars. I find the concept that they “can’t afford” anything ludicrous at best.
Or maybe, y’know, just delete accounts (as opposed to individual videos) flagged by community members as promoting racist and sexist speech or inciting violence. Even at scale, and allowing for bad actors abusing the system, it’s not impossible, especially if partner organisations like the ADL and the SPLC are brought in to help.
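Roughly the shape of what I mean, with every name, weight, and threshold made up purely for illustration:

```python
# Hypothetical sketch of account-level moderation driven by community flags.
# Weights, thresholds, and names are invented; this is the shape of the idea, not a spec.
from collections import defaultdict

TRUSTED_FLAGGER_WEIGHT = 5      # e.g. vetted partner organisations
COMMUNITY_FLAG_WEIGHT = 1
REVIEW_THRESHOLD = 25           # weighted flags before a human reviews the *account*

account_flag_score = defaultdict(int)

def queue_for_human_review(account_id):
    # A person decides on deletion, not the algorithm.
    print(f"Account {account_id} escalated to the moderation team")

def flag_account(account_id, flagger_is_trusted_partner=False):
    """Record a hate/violence flag against an account rather than a single video."""
    weight = TRUSTED_FLAGGER_WEIGHT if flagger_is_trusted_partner else COMMUNITY_FLAG_WEIGHT
    account_flag_score[account_id] += weight
    if account_flag_score[account_id] >= REVIEW_THRESHOLD:
        queue_for_human_review(account_id)

# Example: a handful of trusted-partner flags is enough to put the account in front of a human.
for _ in range(5):
    flag_account("hate_channel_123", flagger_is_trusted_partner=True)
```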
Again, you really have a skewed sense of just how much content is uploaded to YouTube. There are over 500 hours of content uploaded to the site every minute. That’s 82 years of video a day. How do you address that with human moderation? You’re being incredibly unreasonable to expect a site that, again, already bleeds money to be able to do this and not shut down in a few weeks. It would bankrupt the entirety of Google to do that, and they’re one of the richest tech companies in the world.
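To spell out the arithmetic (the 500-hours figure is the only input; the rest follows from it):

```python
# Back-of-the-envelope check on the scale claim.
HOURS_UPLOADED_PER_MINUTE = 500

hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24   # 720,000 hours of video per day
years_per_day = hours_per_day / 24 / 365               # ~82 years of footage per calendar day

print(f"{hours_per_day:,} hours/day, about {years_per_day:.0f} years of video every day")
```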
None of you really seem to understand what it’s like to run a product at this scale. The tone I get from this article and community is that YouTube engineers (many of whom are LGBTQ+ people themselves) are bigots and unfairly targeting queer people. That’s a conclusion that might make you feel better because it’s easy to look at YouTube and Google as the big bad evil, but the problem is not as simple as you are all making it out to be. It also ignores the real problem here, which is not algorithms or YouTube moderation or even the Internet. It’s the normalization of bigoted viewpoints and a culture of hate that runs rampant on the Internet. Look at all of the big sites: Facebook, Twitter, Reddit, etc. They are ALL ineffective at stopping this because there’s too damn much of it and these groups come up with new words and phrases and memes to bypass the software.
Last year, Google reported revenues of 110.8 billion dollars. I find the concept that they “can’t afford” anything ludicrous at best.
And none of that is made from YouTube. In fact, YouTube cuts into that number.
All-or-nothing arguments just sound like, “If we’re going to let a twenty-year-old talk about wanting to go to a gay prom, well, I guess we’ll just have to put up with random explicit snuff videos.”
C’mon.
And none of that is made from YouTube. In fact, YouTube cuts into that number.
YouTube makes revenue. That revenue is not split out as a separate line item from other Alphabet revenue.
It’s probably structured so that YouTube operates at a “loss” with ad revenue directed into other parts of Alphabet.
They should nerd harder.
All-or-nothing arguments just sound like, “If we’re going to let a twenty-year-old talk about wanting to go to a gay prom, well, I guess we’ll just have to put up with random explicit snuff videos.”
C’mon.
I never made that argument. The unfortunate answer to this problem, and the one YouTube seems to be taking, is to deprecate monetization entirely and only advertise on hand-picked channels that have been manually vetted by YouTube. Not a great outcome for small content creators, but again, it’s the only way YouTube can really keep the trust of advertisers, because human moderation of 82 years of video every day is impossible and algorithmic moderation will always fail. Expect YouTube to look more like cable TV in the years to come.
Hm. If they were being lazy, a training set of hate videos that used LGBTQ keywords would tend to train the algorithm to recognize those keywords as hate flags.
Did they forget to include non-hate videos that also used LGBTQ keywords?
I never made that argument.
That post wasn’t responding to you.
Hm. If they were being lazy, a training set of hate videos that used LGBTQ keywords would tend to train the algorithm to recognize those keywords as hate flags.
Did they forget to include non-hate videos that also used LGBTQ keywords?
Unfortunately, if I had to guess, there is a LOT more hate content being uploaded (and posted on the Internet in general) that uses LGBTQ keywords than non-hate content. How you delineate the two is another problem that I don’t think ML can solve. There are plenty of crypto-fascists who say incredibly hateful things in a nice voice, with no cursing or charged language, using coded language, etc. It’s basically a genre on YouTube: people who call themselves “skeptics” but are really bad-faith actors trying to spread white supremacist viewpoints.
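To make that concrete, here’s a toy sketch with made-up data (and obviously nothing like YouTube’s real pipeline): if nearly every training example that contains LGBTQ vocabulary is labelled as hate, and no benign examples use those words, a bag-of-words model learns the vocabulary itself as the hate signal.

```python
# Toy demo of a skewed training set. Every example containing LGBTQ-related words
# is labelled "hate", and no benign examples use those words. Invented data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "gay people are destroying this country",     # hate, uses the keyword
    "transgender ideology is a disease",           # hate, uses the keyword
    "lesbian couples should not raise children",   # hate, uses the keyword
    "my favourite pasta recipe",                   # benign, no keyword
    "unboxing the new phone",                      # benign, no keyword
    "guitar lesson for beginners",                 # benign, no keyword
]
train_labels = ["hate", "hate", "hate", "ok", "ok", "ok"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A benign video gets flagged purely because of its vocabulary, while an
# ordinary hobby video sails through.
print(model.predict(["a twenty year old talking about his gay prom"]))  # -> ['hate']
print(model.predict(["my favourite guitar unboxing"]))                  # -> ['ok']
```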
It’s immaterial if they never intended to discriminate - they did discriminate. Now their choice is whether they wish to continue to discriminate knowingly.
Intent won’t matter to your users as an excuse - and it won’t pass a disparate impact test.
Again, you really have a skewed sense of just how much content is uploaded to YouTube.
I wasn’t talking about content uploaded in the bit you quoted, I was talking about user accounts – a much smaller problem set to be addressed.
None of you really seem to understand what it’s like to run a product at this scale.
With approx. 25 years in the industry, including advising some major content sites in their earlier stages, I do understand the challenges of scalability. I also know that there are ways to address these problems more effectively – if the company is willing to assign the appropriate resources in what is an on-going battle with bad actors (I’ve spent all too much time trying to get corporate boards and major shareholders to understand that it’s worth it from a business POV).
It’s not a trivial problem, to be sure. And addressing it won’t be anywhere near cheap. But it’s not impossible if the corporate will is there to make the platform better and, ultimately, more beneficial to both users and advertisers in the long term (as opposed to making the quarterly numbers).
The tone I get from this article and community is that YouTube engineers (many of whom are LGBTQ+ people themselves) are bigots and unfairly targeting queer people.
Then you’re misreading the tone, because what’s being discussed here is the laziness, tunnel-vision, corporate corner-cutting, Californian Ideology free-speech absolutism, and techno-utopian Libertarianism that’s plagued the tech industry since at least the early 1990s – all of it mixed with a horribly flawed engagement-based advertising business model.
It also ignores the real problem here, which is not algorithms or YouTube moderation or even the Internet. It’s the normalization of bigoted viewpoints and a culture of hate that runs rampant on the Internet.
The Internet, and especially the monopolistic social media walled gardens that a disturbingly large number of people experience it through, is in large part responsible for the normalisation of those viewpoints over the past 20 years. These companies want all the benefits of an old-fashioned lowest-common-denominator mass media business without any of the (costly) responsibilities.
And none of that is made from YouTube. In fact, YouTube cuts into that number.
That’s why I think Alphabet’s first order of business, if it’s broken up, is to jettison YouTube (which, due to its comments section, has always been a cesspool). But right now, financial losses taken into account, it serves the company’s purposes in other ways.
They should nerd harder.
They should spend some bloody money on the problem instead of just hoping that machine-learning systems can instantly replace the human nerds.
Another iteration of “they should censor things I don’t like but not anything else.”
No, another iteration of a privately owned social media platform that’s cheaping out on trying to give the appearance of not tolerating intolerance, and failing spectacularly in the process.
Maybe they could get around this by, rather than outright demonetizing videos, just classifying them by the way they’re potentially controversial, and letting individual advertisers choose which classes of controversial videos they do and don’t want their ads to show up on.
They run into the same basic problem of what and/or who is doing the classification. It’s going to cost them money to do that, and they’re not willing to spend it if they can pretend an algorithm can do it.
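The plumbing for that kind of opt-in isn’t the hard part; a crude sketch, with every category, class, and advertiser invented:

```python
# Crude sketch of the opt-in idea above: tag videos with *why* they might be
# controversial and let each advertiser exclude categories, instead of YouTube
# blanket-demonetizing the video. All names, categories, and data are made up.
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    controversy_labels: set = field(default_factory=set)   # assigned by classifiers/reviewers

@dataclass
class Advertiser:
    name: str
    excluded_labels: set = field(default_factory=set)       # chosen by the advertiser

def eligible_advertisers(video, advertisers):
    """Advertisers whose exclusions don't overlap the video's labels."""
    return [a for a in advertisers if not (a.excluded_labels & video.controversy_labels)]

video = Video("Commentary on this week's election news", {"political_commentary"})
advertisers = [
    Advertiser("FamilyBrandCo", {"political_commentary", "graphic_news"}),
    Advertiser("EdgyEnergyDrink", set()),
]
print([a.name for a in eligible_advertisers(video, advertisers)])
# -> ['EdgyEnergyDrink']: one advertiser sits it out, but the video stays monetized
```

Accurately assigning those labels in the first place is the bit that costs real money, which is exactly the corner being cut.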
Again, you really have a skewed sense of just how much content is uploaded to YouTube. There are over 500 hours of content uploaded to the site every minute. That’s 82 years of video a day. How do you address that with human moderation?
This would be a great point if anyone you’re responding to were actually calling for pre-screening every video that gets uploaded. Instead, people are calling for things like moderating videos that have been flagged by users:
Or maybe, y’know, just delete accounts … flagged by community members as promoting racist and sexist speech or inciting violence.
YouTube is the bad guy here in that they allow advertisers to select “I don’t want to be associated with LGBTQ+ content.”
Doesn’t matter if their automated system is working or not… making that an option is discriminatory.
Ehh, I hate to defend YouTube but the framing of this article is really dishonest. The algorithm can’t tell context, it can only pick up on specific keywords
From my post, once more unto the breach…
Tech companies sometimes like to hide behind the suggestion that algorithms—computers making automated decisions—can’t be bigoted. This is an example that makes clear how empty that argument is, and how an automatic process can baldly reflect human bigotry.
Whether they feel responsible for their code’s decisions is immaterial. They will be held responsible for it, one way or another.
You’re vastly underestimating how impossible it is to scale any human moderation system with the amount of content that is uploaded to YouTube.
It’s not so much that people underestimate the cost of scaling human moderation. They’re demanding an end to the bigoted behavior, automated or otherwise, irrespective of what that costs YouTube. If the cost ruins it, too bad.
I guess that YouTube will develop a more fine-grained ad system that deemphasizes monetization status. YouTube might not want to do this because its current advertising products are stable and clearly differentiated. So perhaps YouTube will ultimately just allow the discrimination in a more formal way and take its chances with the backlash.
If their business model depends upon discriminating - they have an illegal business model. Sorry - close up shop or fix it.
You’re being incredibly unreasonable to expect a site that, again, already bleeds money to be able to do this and not shut down in a few weeks. It would bankrupt the entirety of Google to do that, and they’re one of the richest tech companies in the world.
You say this like it’s a bad thing. I’m not sure that’s a unanimous view.
The unfortunate answer to this problem, and the one YouTube seems to be taking, is to deprecate monetization entirely and only advertise on hand-picked channels that have been manually vetted by YouTube.
Couldn’t they simply enforce a set of guidelines, and fine the owners of any channel that posts material that doesn’t adhere? Sort of like how the FCC fines TV channels that allow stuff to be broadcast that breaks the rules?*
*I’m not saying I agree with all the FCC guidelines, but I do agree that when someone provides a global platform for people to amplify their views, and gets revenue from it, there should be some accountability for the content.
They should spend some bloody money on the problem instead of just hoping that machine-learning systems can instantly replace the human nerds.
Sure. China solved this problem a while ago; there’s no reason why Google, with all their money, can’t do the same thing.