Twitter suspends academic who quoted feminist STEM research

Originally published at: https://boingboing.net/2018/09/29/platform-censorship.html

3 Likes

But harassment, racism, and outright misogyny are a-okay, because FREEZE PEACH! /s

13 Likes

#DELETETWITTER

something something something

6 Likes

Orwell’s version of Newspeak was very clunky and low-tech compared to the version we have today. But the overall intention still carries through, framed in this instance in terms of shareholder value.

1 Like

This thread got the poster suspended today (he’s back now).

You, of all people, can probably guess why (hint: Twitter’s reporting system is really easy to gamify).

9 Likes

You think Twitter has human mods? It would be interesting to calculate their throughput - I’ll bet it’s in GB/sec.

3 Likes

And it’s across all platforms - I’ve read people complaining about how their Patreons are getting shut down or restricted because they’re perceived, often incorrectly, as being vaguely adjacent to “adult” content - ASMR, links to romance novels. (Just being something that women might like seems to make content suspect, or the target of false reports.) Meanwhile, hate speech is considered fine.

All reporting systems seem to have been set up to be easy to gamify. Not necessarily intentionally, but…

5 Likes

Substitute the generic default “man” with the more accurate modern “person” and it’s as relevant today as it was when Russell was thrown in jail.

Also worth noting: you may be smarter in every way than a lot of people, but no matter how smart you are, you’re almost certainly not the smartest person in every way, so none of us are immune to this trap.

Add in mods required to skim or speed-read content flagged by algorithms trained by flawed humans, and the results are entirely predictable. Which isn’t to say we must give up. But this isn’t a problem that can be solved by technocracy.

10 Likes

this isn’t a problem that can be solved by technocracy.

Knowing full well that it will be attempted, what CA modules can fix it? Next MVP from the author of Technically Wrong (the book)? Is BB missing out on the market for “No I will not Fix Your Computer^W^W^Wmoderate your Incel Forum” collaborations between gothic blouse-makers and Effin’ Birds?

4 Likes

Re: But, Peterson writes, he also believes “that the implementation of these policies and processes can’t be this dumb.”

Short answer: Yes, they can.

Longer answer: If the machine learning courses at Stanford are any indication, this is the state of the art. Just about all of it was Bayesian analysis of word sequences leading to a “scoring” with regard to some factor. This works well enough for some aggregate answers. The class assignment, for example, involved looking at movie reviews and counting the positive and negative ones.
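
For a sense of scale, the whole technique fits in a screenful. Here’s a minimal sketch of that kind of assignment (my reconstruction, not the actual Stanford exercise; the training snippets and smoothing constant are invented):

```python
import math
from collections import Counter

# Toy naive-Bayes-style sentiment scorer. Training data is invented
# for illustration; a real assignment would use thousands of reviews.
train = [
    ("a wonderful thrilling masterpiece", "pos"),
    ("charming and funny throughout", "pos"),
    ("dull plot and terrible acting", "neg"),
    ("a boring mess I walked out of", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
totals = {"pos": 0, "neg": 0}
for text, label in train:
    for word in text.lower().split():
        counts[label][word] += 1
        totals[label] += 1

vocab_size = len(set(counts["pos"]) | set(counts["neg"]))

def score(text, label, alpha=1.0):
    """Sum of log-smoothed word likelihoods under the given label."""
    return sum(
        math.log((counts[label][w] + alpha) / (totals[label] + alpha * vocab_size))
        for w in text.lower().split()
    )

def classify(text):
    return "pos" if score(text, "pos") >= score(text, "neg") else "neg"

reviews = ["a charming masterpiece", "a terrible boring mess"]
print(Counter(classify(r) for r in reviews))  # tally positives vs negatives
```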

Think of it as a Fermi solution. Take a bunch of reasonable enough estimates and hope that the pluses and minuses cancel out. For rating movies with lots of reviews, odds are machine learning does a good enough job. If the algorithm misses some sarcasm or a “NOT!” at the end of the review, it washes out when taking the mean.
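
You can watch the washing-out happen with a quick simulation (the 70% true-positive rate and 15% error rate below are invented numbers, just to illustrate):

```python
import random

random.seed(0)

TRUE_POSITIVE_RATE = 0.70  # fraction of reviews that are genuinely positive
ERROR_RATE = 0.15          # chance the classifier flips any single review

def observed_positive_fraction(n_reviews):
    hits = 0
    for _ in range(n_reviews):
        truly_positive = random.random() < TRUE_POSITIVE_RATE
        flipped = random.random() < ERROR_RATE  # missed sarcasm, a "NOT!", etc.
        if truly_positive != flipped:
            hits += 1
    return hits / n_reviews

for n in (1, 10, 100, 10_000):
    print(n, round(observed_positive_fraction(n), 3))
# A single review is a coin flip; with thousands, the estimate settles
# down (around 0.70*0.85 + 0.30*0.15 = 0.64 here), so movies with lots
# of reviews still rank sensibly even though individual calls are wrong.
```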

This kind of thing doesn’t work for one-offs, like deciding whether one particular post is offensive enough to ban the posting party. Maybe if the algorithm looked for a pattern of posts it might be useful, but machine learning is statistical in nature, not semantic. It doesn’t have a clue.
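
The same point in miniature: a judgment on one post rides on one noisy score, while an average over a posting history is at least stable. The scores and thresholds below are hypothetical:

```python
from statistics import mean

# Hypothetical per-post "offensiveness" scores from some classifier (0..1).
user_history = [0.12, 0.08, 0.91, 0.15, 0.22, 0.10, 0.05, 0.18]

SINGLE_POST_THRESHOLD = 0.8  # act on any one bad score: brittle
PATTERN_THRESHOLD = 0.5      # act on a sustained average: more stable

flagged_once = any(s > SINGLE_POST_THRESHOLD for s in user_history)
flagged_pattern = mean(user_history) > PATTERN_THRESHOLD

print(flagged_once)     # True  - one quoted study or sarcastic post trips it
print(flagged_pattern)  # False - the history as a whole looks fine

# Even the pattern version is still statistical, not semantic: it can't
# tell quoting research apart from actually harassing someone.
```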

This gets us to the real problem. Sites like Twitter and Facebook are publishers. They just don’t want to admit it. If they did, then they might be guilty of libel or defamation and subject to lawsuits for damages. Plan A was to just post stuff with no filtering, and we’ve seen where that takes us. Plan B seems to be running a Stanford undergraduate class exercise over the comments and having it ban the “worst”. The argument, apparently, is that it isn’t exercising editorial control if the editorial staff can be replaced by something an undergraduate CS student writes as part of a weekly problem set.

Some sites admit they are publishers, and they hire, bribe, or cajole humans to vet all comments before they are published - for example, Amazon or the Washington Post, both Bezos companies. They accept that as the price of having user-provided content. The tech companies are loath to do this. If nothing else, it costs money, and it opens them up to claims of editorial bias. I suppose they are imagining a Plan C where the software gets better and better, way beyond Bayesian, and maybe AI can reliably do language comprehension and exercise judgement. This is a dead end, though. If their AI gets good enough, they are publishers, just publishers with a non-human editorial staff. I think they know they are stuck, but they figure that every year they stay in denial is a year of good, solid profit and stock price.

3 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.