Twitter fails to moderate hatred, again

There’s a lot of hate for Twitter out there. Any BBS topic about the company will have several replies with some version of “Twitter is a raging dumpster fire and I’m glad I avoid it like the plague.” That usually bothers me, because I’ve seen a lot of the good Twitter can do. I’ve seen people come together to support each other, emotionally and financially, in hard times. I’ve seen valuable information shared, and friendships forged between participants half a world away from each other. If used responsibly, Twitter can be a force for good in the world.

But if any online community is only as good as its moderation team, then yes, I must say it: Twitter is a raging dumpster fire. I’ve seen countless examples of rules applied inconsistently, where some accounts are banned over innocent comments, while others dedicated to spewing hate and vitriol get away scot-free. I have a prime example of the latter to show you all.

Twitter has a variety of “safety practices” in place to protect users on their platform, including the following:

So you’d think, if you saw an account pulling this nonsense, it would naturally be a violation of said policy, right?

Apparently not, as my report of the account got this response:

Twitter has been aware of this account since January 29th, according to the first Tweet I managed to find complaining about it, which included both Twitter Safety and Twitter Support in its mentions. Yet somehow, The Powers That Be at the Birdhouse consider its existence perfectly acceptable and compliant with their rules, based on their silence and inaction.

Because of the high volume of content Twitter handles, reports are often evaluated by computer algorithms instead of human beings. In my humble opinion, this does not excuse Twitter from its legal and ethical responsibility of maintaining safety on its website; clearly the AI they use isn’t trained well enough to recognize egregious violations of their own rules. I’ve heard other users say they had success in sending follow-up emails protesting the erroneous decisions, at which point the case comes to a human’s attention and is dealt with appropriately. This failed for me, as the email address I replied to is “unmonitored” and therefore ignored.

So what exactly does Twitter consider offensive enough to act on? Here’s a recent tweet that earned the user a suspension under the Hateful Conduct rule (a suspension that has since been reversed):

You’d think a simple statement of facts wouldn’t be considered problematic enough to garner official attention and action. But vocal advocates for LGBTQ+ rights find themselves brigaded by Gender Critical folk and other opponents more and more often, and for whatever reason, these mass-report campaigns are given priority over accounts that literally advocate for hatred and discrimination in their username.

To be fair, one anecdote is not data. However, this can easily be seen as one droplet in the growing riptide of harassment and persecution of trans people in particular and LGBTQ+ people in general, propagated by Republicans, Gender Criticals/TERs, and other reactionary forces. As a major social media platform, Twitter has legal and ethical obligations to stand in defense of its users, obligations that it seems to be willfully ignoring. I feel it is the responsibility of those of us with functioning consciences to call out that ignorance and inaction wherever and whenever we can, in order to facilitate much-needed improvement and change in the systems we inhabit, both online and in the Real World™.

(And if I haven’t said it recently, I’m incredibly grateful for @orenwolf and all of the moderation team of the BBS, who go to great efforts to ensure that such hatred and bigotry does not find a toehold in this forum. I’m also grateful to everyone in the community who makes the effort to flag bad posts. Thank you, and please, continue to fight the good fight. You are appreciated!)
