It’s almost like building a system whose scale and scope are beyond the human capacity to manage or administer, and then trusting a bunch of incredibly stupid adding machines to run it competently instead, was actually a bad idea.
We should probably shut down the internet entirely, as it has grown too large and unwieldy. It’s clearly a cesspool for racism. This is the only option, since shutting down Twitter would lop off only one of racism’s many ugly heads on the internet. Thank you for showing me that I was wrong about the Nazis; Twitter is clearly the issue here.
If you think that’s bad, make the founder a privilege-blind Libertarian free-speech-absolutist dudebro – the kind of guy who, had he been less lucky in winning the SV lottery, would be making ridiculously broken analogies in an Internet comment forum instead of on CNBC.
Not bad enough? Put him at the mercy of shareholders who only want to see higher quarterly growth and rock-bottom costs, even if the price is normalising the espousal of anti-Semitism and undermining liberal democracy.
Bullshit again. We’re not talking about hiring people to read ALL of Twitter. Just enough to moderate what goes trending and handle alerts and flags. A hundred teachers would be more than enough, and that’s not going to run you a million a year at the going rate.
What about BoingBoing? They’ve posted “kill the Jews” at the top of this page. Everybody who subscribes by RSS got that phrase in their feed. I’m sure some people shared this article on Twitter, so BoingBoing caused the phrase to appear in all their followers’ feeds. Does BoingBoing need to hire moderators to make sure offensive phrases don’t appear in their article titles?
What about it? There’s a difference between a site’s human editor/author manually posting what’s known as a “headline” over editorial content and an algorithm programmatically generating a trending topic from aggregated end-user input in a comments section.
Not everyone who uses RSS gets the headline because not everyone who uses it subscribes to BB; everyone who uses Twitter sees the trend.
The headline uses scare quotes, and if Twitter’s algorithm can’t parse that (or if Twitter skimps on human moderators) that’s on its crappy system.
This is the core of the issue: Twitter has the resources to develop more precise algorithms and/or hire and train more human moderators. It chooses to do neither, instead constantly and grudgingly saying “sorry” and doing minor tweaks to the algorithm (or simply adding new filters) that don’t solve the larger problem.
BoingBoing’s authors and publisher handle the article titles (or “headlines”); it hires very well-trained and experienced moderators (like @orenwolf) to make sure offensive comments and other posts that violate the ToS don’t last very long in the BBS section (user-generated content, similar to Twitter’s tweets).
I’m glad to further explain the difference between a site’s front-page articles written by editorial staff and its user-generated comments on said articles if you need help with the concept, but if you don’t need that then you might grasp the distinction between a site that’s pretty much all user-generated comments (e.g. Twitter) and a site that isn’t (e.g. BoingBoing).
I hope you now understand a bit more about the business, content, and technological issues underlying the discussion.
They do moderate and handle flags. This trend was removed while it was still at the local level.
@ildrk You’re totally correct, BBS has also displayed racist content, which was later removed by a moderator. That’s the standard response on the internet, no matter how your user content is organized or displayed.
The difference is that here on the BBS, people post something and it immediately goes out into the world, so the only thing you can do is react. Trends are algorithmically generated by Twitter itself based on user content. This means that someone can be manning the valve, so to speak, to prevent offensive things from showing up in the public trends list at any level, by manually vetting every trend before it leaves an internal quarantine zone. There is no reason a well-moderated system should ever “accidentally” surface racist/sexist/anti-Semitic/homophobic/etc. trends.
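The “valve” described above can be sketched in a few lines. This is a hypothetical illustration, not Twitter’s actual pipeline: all class and method names here are invented, and the point is only that algorithmically generated candidates need never be visible until a human signs off.

```python
# Hypothetical sketch of a trend quarantine: the algorithm proposes
# candidates, but nothing reaches the public list without human approval.
# Names and structure are invented for illustration.

class TrendQuarantine:
    def __init__(self):
        self.pending = []   # candidates awaiting human review
        self.public = []    # trends approved for display

    def surface_candidate(self, phrase):
        """The algorithm proposes a trend; nothing goes public yet."""
        self.pending.append(phrase)

    def review(self, phrase, approved):
        """A human moderator releases or discards a candidate."""
        self.pending.remove(phrase)
        if approved:
            self.public.append(phrase)

q = TrendQuarantine()
q.surface_candidate("#WorldSeries")
q.surface_candidate("Kill All Jews")      # invisible until sign-off
q.review("#WorldSeries", approved=True)
q.review("Kill All Jews", approved=False)  # never goes public
```

The design choice is the whole argument: with a hold-then-release queue, an offensive trend can only appear through a human decision, never through an algorithmic “mistake.”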
The BoingBoing BBS programmatically shows latest and top threads.
[quote]Not everyone who uses RSS gets the headline because not everyone who uses it subscribes to BB; everyone who uses Twitter sees the trend.
…
[/quote]
Actually, the article uses regular quotes around the phrase, not scare quotes.
[quote=“gracchus, post:131, topic:132429, full:true”]Finally, BoingBoing’s authors and publisher handle the article titles (or “headlines”); it hires very well-trained and experienced moderators (like @orenwolf) to make sure offensive comments and others that violate the ToS don’t last very long in the BBS section (user-generated content, similar to Twitter’s tweets).
[/quote]
But then didn’t they fail? If quoting the phrase itself is the problem irrespective of the reason it ends up being quoted, then they shouldn’t have let it through. In the present case, it was people talking about an act of vandalism. Similarly, in the “headline,” BoingBoing was talking about the trending topic.
My only real gripe here is with the suggestion that this trending topic surfaced a racist trend on Twitter.
This conversation is a silly false equivalence. We are not the size of Twitter. Even here on Discourse we can hold posts for review if they contain certain keywords. Twitter certainly has the technical expertise to create a list of trending keywords that require a manual OK before appearing as a trending topic.
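The keyword hold described above can be sketched trivially, along the lines of Discourse’s “watched words” feature. The keyword list and function name below are illustrative assumptions, not anyone’s production code:

```python
# Minimal sketch of a keyword-triggered hold: if a candidate trend
# contains a flagged keyword, it is routed to manual review instead of
# going live. The keyword list here is a tiny illustrative stand-in.

FLAGGED_KEYWORDS = {"kill", "nazi"}  # a real list would be far larger

def route_trend(phrase):
    """Return 'hold' if the phrase needs a manual OK, else 'publish'."""
    for word in phrase.lower().split():
        if word.strip('#"\'.,!?') in FLAGGED_KEYWORDS:
            return "hold"
    return "publish"
```

For example, `route_trend("Kill All Jews")` routes to `"hold"`, while an innocuous hashtag passes straight through. A crude filter like this over-flags, of course, which is exactly why the manual-review step exists behind it.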
The “a computer did it, therefore it is too hard to fix” angle to this conversation completely ignores the fact that this is fixable, and it reflects how we got into this situation in the first place: Silicon Valley treating problems as purely engineering ones instead of making sure their shit does the right thing and directing engineering effort toward that, rather than declaring it “programmatic” and therefore not worth improving.
Shows the headlines, doesn’t generate them. I’ll let someone else explain the difference.
Around the phrase “Kill All Jews.” This is done so that it’s clear the author himself doesn’t believe that, any more than he believes Twitter is actually sorry or made a mistake (hence the scare quotes around those words).
No. The authors are given latitude to write their headlines by the publisher. It’s clear to the human reader (at least one with more than an 8th grade reading level, which you may not have) what the author is trying to convey.
I think I’ll do the same with this conversation, since one way or another you’re not understanding basic concepts or, now, grammatical structure. Good luck.
I can only imagine how frustrating this excuse-making for Twitter must be to a professional who does this day-in and day-out. The Discourse folks whose software supports your work must be similarly disgusted by Twitter’s attitude to fixing the problem.
I’m not sure that one is me. Scare quotes are quotation marks used to convey irony or skepticism. In this headline, “kill all Jews” requires quotation marks because it is a quote. They are performing their intended purpose of letting the reader know this is exactly what was said.
“Sorry” and “mistake” are a little more interesting. You can call those scare quotes because the quotation marks are not only there for the purpose of setting off a quotation but to emphasize the writer’s disbelief. The quotes in that case make the headline closer to “Twitter claims it is sorry for so-called mistake of posting ‘Kill All Jews’ as a trend…”
Sometimes a quotation mark is just an inverted comma.