Google and YouTube executives ignored warnings on toxic video content, now we're all paying the price

Living in SF back in the day, I would of course routinely meet Google engineers. Then there came a period when the only new Google people I met were marketing bros, and then I had a good sense of where the company was going.



The takeaway from both articles is that, however hard the mods work, the executives are undermining them because they’re more concerned with keeping toxic users and their content on the platform to keep up engagement and MAUs and advertising revenue.


Need to update my core economic principles:

  1. Theft is the most profitable business model.
  2. Slavery is the most profitable labor model.
  3. Toxic content is the most profitable media model.

A couple of years ago somebody sent me a link to a very long YouTube video entitled “Adolph Hitler vs the Jew World Order.” It’s exactly as horrible as it sounds from the title, with every antisemitic lie anyone has ever come up with. I just checked – it’s still there. It doesn’t look like Google really cares even about actual hate speech, as long as it draws eyeballs.


Bloomberg? Did they ever retract the story about the Chinese implanting magical little chips inside hardware that give the Chinese government access to everything?

If not, I don’t see how you can trust that this story isn’t also made up to move the market.


Regardless of what you think the rules should be, I think transparency about the rules is essential to prevent abuse. If you don’t think it belongs on your platform, change the rules, remove it, and make clear what you did and why. What this employee was asking for is a separate set of secret rules that would be applied arbitrarily, and I think not doing that was the correct response.


This is the same YouTube whose Chief Product Officer gave an interview to the NYT last week, denying the rabbit hole effect of YT recommendations. Either he doesn’t understand how the product he oversees works (where autoplay is enabled by default, periodically re-enables itself without your permission, and can lead to even the most benign forays quickly devolving into algorithm-hacking conspiracy theories if allowed to run on its own*), or he’s a lying sack of shit.

If you believe him when he says that the autoplay recommendation doesn’t use play time and engagement as a signal to drive what gets queued up for automatic delivery, I have a bridge for sale that you might be interested in.

* As an example, yesterday I had to dig up a Minute Physics video on why the Earth is fucking round because a Discord server I help moderate got invaded by an honest-to-god flat-earther, and the autoplay recommendation beside that video of “10 reasons why the Earth is round” was an hour-long conspiracy theory video about the Earth actually being flat.

Links to Wikipedia beneath the player window are not going to fix this.


Those two options are not mutually exclusive. Many people have demonstrated that they are perfectly capable of being both ignorant AND liars.

He may or may not be lying out of ignorance, but he’s almost certainly lying to avoid taking blame for not shutting down the extremist rabbit hole. I’m betting their speaking policy is “deny first, deny often, and double down on denials when pressed.” (He probably had a hand in writing that, too.)


Toxic user-generated content, mind you. The kind you don’t even have to pay for the production of!


Start with the conspiracy theories they try to pass off as news on the dying Boomer-controlled media first. :roll_eyes:


Scientists in the '90s: We cloned a sheep! We landed a robot on Mars!

Google launches: 1998
YouTube launches: 2005
Facebook launches: 2004

Scientists in the '10s: Look, for the last fucking time; the Earth is a sphere, and vaccines are safe and effective.



I don’t get it.

Just what price are they paying? A stern finger-wag?



What is an MAU?

You believe that Bloomberg is trying to manipulate the stock market?

It’s a metric for growth on a Web site, short for “Monthly Active Users”. The execs want it to keep going up-up-up, even if some of them are Nazis or incels or whatever.
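For what it’s worth, MAU is usually computed as the number of distinct users with at least one activity event in a given month, so repeat visits by the same user count only once. A minimal sketch, assuming a simple hypothetical log of (user, date) events:

```python
from datetime import date

# Hypothetical activity log: (user_id, date of any interaction).
events = [
    ("alice", date(2019, 4, 1)),
    ("bob",   date(2019, 4, 15)),
    ("alice", date(2019, 4, 20)),  # repeat visit -> still one active user
    ("carol", date(2019, 3, 28)),  # outside April, not counted
]

def mau(events, year, month):
    """Count distinct users with at least one event in the given calendar month."""
    return len({user for user, d in events if (d.year, d.month) == (year, month)})

print(mau(events, 2019, 4))  # -> 2 (alice and bob)
```

Which is why it’s such an easy number for execs to chase: every user counts the same toward the total, whether they’re watching cat videos or Nazi propaganda.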


The headline reads “now we’re all paying the price”.


I have a long history of watching companies that produce things I like enough to purchase at what I consider a fair price go bankrupt. It’s likely my tastes are too rarified, or even more likely, a fair number of people like the product but are unwilling to spend what it takes to keep it in business, especially when there are free alternatives.

This leads me to wonder: what if you can’t actually have what I consider a decent social media platform? What if business practices that are compatible with a healthy society mean that all social media beyond a thousand or so people die in bankruptcy?

Certainly that was the model for many a year. A community would slowly grow, then it would reach a certain point and as the costs mounted but the income didn’t, it would collapse. There simply wasn’t a business model for being decent and being free. (Most of the communities I belonged to were bankrolled by a hobbyist until it became too costly or was no longer fun.)

I hope I’m being pessimistic, but I think there’s a decent chance that the awful practices of most social media success stories may be the result of the fact that “good” companies cannot exist for long, so “bad” companies are the only ones we see.

I’ve always thought the business model for most of these companies didn’t make any sense. And then I assumed I was wrong (not surprising; there are reasons I’ll never be a millionaire). But just perhaps I was right, and the business model doesn’t make any sense - unless you toss away all the ethical strictures that I had assumed went into the running of a company, but may also be what stands between bankruptcy and viability.