It’s not even that, IMHO.
Most of these networks have policies that sound good in theory - either promising open, uncensored communication, or promising heavy moderation of certain types of speech. While both sound good on paper (and indeed, we use heavy moderation here), neither scales, because:
1 - “free speech zones” will inevitably be infested by Nazis or trolls, and
2 - “heavily moderated zones” will fail at scale because they either will not have the size of moderation team required, will try algorithmic moderation (which has yet to be successful), or will be forced to water down their moderation because of those two issues.
“Revenue” isn’t the driver here - just the plain realities of the population at large, and the realities of community inertia.
I’m fairly certain, knowing what I do about moderation, that one of the biggest issues facing the FB/Twitter/YT-sized giants is the reality of nuance in messaging. We keep our community guidelines here fairly nonspecific because the stricter you make them, the easier it is for trolls to rules-lawyer you. The more precisely you try to define hate speech, the more nuance lets crap like the FB post you posted above slip through. But make your guidelines too broad, and you get stupid shit like posts critical of hate speech - or even just discussing it - censored instead.
I’m not sure what the solution is here, other than smaller communities gathered around common interests, and perhaps curated, targeted discussions. These don’t really exist yet outside of places like the BBS or similar smaller groups, and worse, they risk creating concentrations of like-minded folks who are never exposed to novel thought. But I don’t see how you could even attempt to get all of humanity into one system, given how different cultural and societal norms can be even within a single group, let alone at the scale of the big guys.