Twitter "sorry" for "mistake" of posting "Kill All Jews" as a trend

I don’t think anyone has said that, especially at Twitter. They have made it much less likely for this kind of thing to happen over the last 10 years. I’ve had a few of my own Tweets marked as false positives.

I think this is like any job: solutions to problems look easier from the outside. My partner works in automation and I work in software, and we both come home with stories. People with partial knowledge make suggestions that seem obvious to them: “Why don’t you make this part of the process manual?”, or “Just tweak this part of the algorithm”.

You probably know from moderating that there is a lot of grey area when it comes to rules, context, wording, intention etc.

I promise you guys, they are working to fix it, and I’m sure we can all relate to times when our jobs looked easy to someone on the outside. They didn’t intentionally post “Kill all Jews”, it’s just the result of an imperfect system.

Pointing the finger at neutral third parties detracts from the real causes of racism like inequality and poor leadership.

Nope, not good enough. They have massive resources, and if someone had said “We’re not going to allow hate to trend”, they’d have found a solution sooner than this. Or do you think hate appearing in tweets is a new problem?

If Jack Dorsey walked into a meeting and said “Zero tolerance for trending topics that violate our guidelines on hate speech”, it would happen. Something as crude as a filter that flags trends containing common hate-speech words and routes them to a moderation queue before they’re displayed would have caught this. Such a solution doesn’t scale, but that’s not the point. You start from a moral position (“we are not going to allow this”), then you work towards automating the solution so that it’s scalable and functional going forward.
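The crude filter described above could be sketched in a few lines. Everything here (the function names, the word list, the routing labels) is illustrative only, not Twitter’s actual system:

```python
# Minimal sketch of a "watched words" trend filter.
# All names and the word list are hypothetical, for illustration only.

WATCHED_WORDS = {"kill", "exterminate"}  # illustrative, would be much larger


def needs_human_review(trend: str) -> bool:
    """Return True if the trending phrase contains any watched word
    (case-insensitive, whole-word match)."""
    return any(word in WATCHED_WORDS for word in trend.lower().split())


def route_trend(trend: str) -> str:
    """Hold flagged trends for manual approval; publish the rest."""
    return "moderation_queue" if needs_human_review(trend) else "publish"
```

A phrase like “Kill All Jews” would land in the moderation queue for a human decision, while ordinary trends pass straight through. The obvious design trade-off is exactly the one argued over in this thread: whole-word matching is cheap and catches the blatant cases, but it can’t read context or intent, so it needs humans behind it.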

The problem is that Twitter, like so many other platforms, approached it from the other direction: allow everything and deal with the issues later. That makes them culpable not only for enabling this behaviour, but for amplifying it.

I call bullshit. We are capable of both attacking the root causes of bigotry and those that enable the behaviour at the same time.

There is a moral responsibility to choose what sort of user-generated content your platform will carry, and to act on that as a priority. Twitter was late on both counts, and does not deserve compassion just because solving the problem algorithmically is difficult. People die from Twitter bullying; lives are ruined by bigotry and by misogynists attacking others. Twitter makes a choice when it puts profits above solving the moral dilemmas of its platform.

It’s really that simple.


It’s much harder now than it used to be for racists on Twitter. I’d be willing to bet it’d be easier to bully and make bigoted comments on Discourse. I’ve seen some pretty nasty bullying right here, but we blame Twitter and make excuses for Discourse.

In this case, they actually didn’t allow hate to trend. It sounds like what was trending was the phrase “Kill All Jews”, taken out of context from people commenting on a story, many of them sympathizers.

It would be like the trend: “…terrorize Americans” coming up after 9/11. It’s not an instruction, but a common phrase among sympathizers.

I completely agree that corporations should put morals before profits. However, I don’t think this is a problem that can be completely solved immediately with any amount of resources. They absolutely need to put large amounts of effort into this, but I also think what you’re asking is impossible without a computer that understands the meaning of language.

Twitter (and many other organizations) would love to be the first with this technology. It’s a holy grail, but no amount of yelling from their CEO will make it happen overnight. Sometimes advanced, civilization-changing technologies can’t be built on a whim. It doesn’t mean they aren’t working hard on it.


Again, Discourse is a dozen engineers. I’ll bet Twitter has that many employees responsible for their lunch menu. Resources matter in these situations.

Context matters. “Kill all Jews” out of context is a hateful term, and Twitter made a decision not to care whether trends like that are displayed. The “why” doesn’t matter. They’re just excuses for decisions they’ve made about what they’re willing to police.

This is the perfect-solution fallacy. Could Twitter create a list of “watched words” that requires a human to approve a trend? Of course they could. Would it require a lot of resources initially? Yes. Then a lot of people much smarter than I am about big data would find ways to make that process less labour-intensive. Approaching it from a “once is too many” perspective, instead of refusing to slow the process down while waiting for a perfect solution, would be the right thing to do.

I’ve worked for organizations where senior management made a simple edict: “This can’t happen, and we won’t allow it to happen. Fix it now with whatever means are necessary, then find a sustainable fix afterwards.” That is not easy, but it is worlds away from a problem that cannot be solved “with any amount of resources”.

What’s missing is the will. And that’s been a recurring issue with engineering-focused platforms such as Twitter and Facebook. Moral imperative (or the slow realization that their products have real effects on the world) is changing that. But, IMHO, too slowly.
