Let’s say a shop’s wall has been vandalized with messages of hate.
You could then point at the shop and say, “Look at the message that shop is displaying!”, or, “if this shop can display such messages, it shouldn’t exist”.
If everything that can be abused for hatred didn’t exist, we wouldn’t have anything. As we’ve seen, any online service can be abused. We could take your rationale a step further and say the internet shouldn’t exist because it “posts” hatred.
The real problem is the Nazis, not Twitter, or the vandalized shop, or the internet.
The point is that pinning things on the algorithm is a convenient excuse for Twitter’s management, especially since their own privilege-blind and complacent (but probably not anti-Semitic) technologists created and trained the algorithm.
It also allows the company to focus the discussion on the algorithm (which can presumably be improved) rather than on the lack of properly trained, bottom-line-impacting human moderators needed to work in concert with it.
Finally, pinning things on a supposedly neutral codebase allows Dorsey and his privileged team to continue claiming they’re trying to champion free speech through technology when all they really want is quarter-on-quarter MAU growth (which requires making the platform somewhat welcoming for anti-Semites, bigots, misogynists, etc.).
It’s not a neutral statement once it’s released to and impacts the public. Algorithms like this are trained or have filters hard-coded into them by their creators. The problem in this case is not anti-Semitic intent by the code engineers but their naive and privileged assumption that “kill all [insert ethnic or religious group here from standard corpus]” would not show up as a trend at all. The deeper problem is that when it does show up, Twitter embraces it as just another opportunity for user engagement and marketing targeting.
Twitter could consult with experts in mass media about such matters – anyone who’s worked in broadcasting could have helped them figure out what to watch out for and shunt into a reporting area not available to the general public.* It could and should accomplish something like this through a combination of algorithmic and human moderators, both properly trained.
[* I can see Twitter recording the trend for internal use, and perhaps releasing it as a group of related trends through the reputable news media. The point is not to enable a toxic feedback loop with the users or to profit from it.]
Or when the shop owner sets up a totally neutral, cost-saving Photobot to take the snaps, then uses that to deflect blame and claim an innocent devotion to “free speech” as the reason for anything they do, before grudgingly stopping the sale of certain photos.
We can refine the analogy further to accurately reflect Twitter’s business and development practices, but under the circumstances I don’t know if it’s worth the effort.
Also, for some odd reason I feel compelled to share this again:
Yes, the shop counter inside is for advertising sales, because the shop owner paints their statements on the wall based on which photos the advertiser likes.
And I have a feeling yours won’t be the last comment to revisit the highly flawed analogy.
That’s a crappy analogy. Twitter didn’t get vandalized. They advertised the hate message that was being promulgated by a number of users. A correction to your analogy: It’s like a hardware store where some neo-Nazis were looking for improvised weapons to go kill some Jews with. The store has a big electronic sign out front that randomly selects parts of conversations that are going on inside the store, to advertise them and direct more users to join the conversation going on inside. It picked up the conversation between the neo-Nazis and flashed it repeatedly on the electronic sign on the front of the store. As a result, a bunch more neo-Nazis came into the store to help with the shopping.
The store then apologized for the hate speech that they advertised on the front of their building, but didn’t actually change how it worked, ensuring that something similar will certainly happen again.
We’re also “conveniently” ignoring the many examples of commerce which have been proscribed because they - on balance - cause too much harm compared to the benefits which accrue to a few suppliers. Asbestos, tetra-ethyl lead, chlorofluorocarbons, lead(II) carbonate, lawn darts, insider trading … they are all nominally useful products which are gone because the cost isn’t worth the benefit. The companies which delivered them are free to diversify into other activities, but they can’t keep selling social poison.
As other people mentioned, the shop isn’t really in the business of social media. My point was that anything you can think of can be abused for hate.
Unlike the shop, Twitter’s business is offering a service that includes displaying popular tweets in a locality. I kind of like being able to see what hashtags are trending in my area, as do many others.
With today’s limited technology and the popularity of Twitter, there is no way to offer this feature while checking every tweet and hashtag for hateful meaning.
We don’t want to block all hashtags with the word “Jew” or all hashtags with the word “Kill”, and many users want to see what local trending hashtags are. So until the technology is good enough to analyze the meaning of a sentence, they have to take a reactive approach.
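To make that concrete, here’s a rough sketch of the difference in Python. Everything in it is made up for illustration (the word lists, the “two or more risky words” rule, the function names); it is nothing like whatever Twitter actually runs, but it shows why blunt keyword blocking is unacceptable and why “flag it into a private review queue instead of promoting it as a trend” is the kind of reactive, algorithm-plus-moderator compromise being described:

```python
import re

# Toy word lists for illustration only; real moderation lists are far larger
# and maintained by trained humans.
RISKY_TERMS = {"kill", "jew", "jews"}
BLUNT_SUBSTRINGS = ("jew", "kill")


def split_tokens(hashtag: str) -> set:
    """Split '#KillAllJews' into lowercase words: {'kill', 'all', 'jews'}."""
    return {t.lower() for t in re.findall(r"[A-Z][a-z]*|[a-z]+", hashtag)}


def blunt_block(hashtag: str) -> bool:
    """The approach nobody wants: suppress any hashtag containing a blocked
    substring. This also kills #JewishHeritageMonth and #KillerWhales."""
    lowered = hashtag.lower()
    return any(s in lowered for s in BLUNT_SUBSTRINGS)


def needs_human_review(hashtag: str) -> bool:
    """The reactive compromise: publish by default, but shunt hashtags that
    combine two or more risky words into a private queue for trained
    moderators instead of promoting them as a trend."""
    return len(split_tokens(hashtag) & RISKY_TERMS) >= 2


if __name__ == "__main__":
    for tag in ("#KillAllJews", "#JewishHeritageMonth", "#KillerWhales", "#WorldCup"):
        print(f"{tag:24} blunt_block={blunt_block(tag)!s:5} review={needs_human_review(tag)}")
```

Even the review rule will miss plenty, which is exactly why it has to hand off to trained human moderators rather than act on its own.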
Your stance seems to ignore the good Twitter has done for activism. If we’re being utilitarian and consider that they remove trends like this but not trends related to human rights, I think you’d find they’ve done more good overall.
Pointing out flaws in the analogy that you started is not a valid way to argue your point.
Bullshit. As has been pointed out many, many times, Twitter and Facebook both have the capacity and resources to use extensive, trained, human moderation to augment their algorithms. They choose not to.
Asbestos is an amazing, incredible material. It makes efficient brake pads, it’s a great building material, it’s heat-proof, it can be used to lag pipes and seal joints, and it serves a host of other really important and valuable purposes.
It’s still not worth the cost.
And, amazingly, other products have moved in to fill the gaps, at least as well as asbestos and in some cases better. It’s the same story with tetraethyl lead and CFCs: wonderful products that simply aren’t worth the cost and which have had their niches filled as well or better.
Despite the good uses to which it can be put, if Twitter can’t or won’t fix the problems they created with their own product then they can fuck off. Someone else (or, even better, someone elses) will figure it out.
For sure, if another product moves in that can detect and remove bigotry immediately, I would happily use it. That said, blaming Twitter for not using a technology that doesn’t exist is blaming the wrong group of people, when compared to say… the Nazis…
I didn’t point that out, I acknowledged it. It would be flawed to suggest an analogy is exactly like the subject. If it were exactly the same, it wouldn’t be an analogy.
I’ve seen a couple of people suggest they could hire moderators to read posts, which probably number in the billions a day. Even if they could, human error would allow stuff like what’s above to get through, and boingboing would have another article on how Twitter “posts” “Kill All Jews”.
I haven’t seen any evidence that the technology exists for a computer that actually has a deep understanding of our language. That would be really big news. Our best entrants in Turing-test competitions are generally gimmicks. I don’t think conscious computers that understand the world as we do are predicted to arrive for at least another 5-10 years.