It is unlikely to help that "Content ID" and friends are notoriously opaque and twitchy, so any "anti-harassment" mechanism based on similar concepts will take about 4 seconds to become a playground for trolls to lead victims into triggering it and being silenced (which is, after all, half the fun).
This would be interesting if I had ever written that we should use a system like Content ID to stop online harassment. But I never suggested that - I just used it as an example of how, when money is on the line, companies make things happen fast. I get the feeling that maybe you just read Jeong's article and not mine (which isn't linked, as far as I can see, so people can't see my argument in full). Honestly, I think a retraction is in order.
Here's my original article - you can see I never made any such argument.
Yeah, the article is a little different from how it's portrayed.
A couple of questions/comments, though:
And it shouldn't need to rise to being a question of constitutional law.
Seems to me that deciding whether online speech is or isn't the same as speech IRL in terms of harassment is an entirely valid use of SCOTUS's time.
And one other point:
If these companies are so willing to protect intellectual property, why not protect the people using your services?
Just to be clear: blocking someone from threatening someone else isn't the same as protecting them, in the same way that talking Craigslist out of accepting sex-worker ads doesn't end sex work.
Tech companies can and should provide better tools for their users to block harassing speech, but I don't think that should be cast as providing any real protection.
On the other hand, if you want to somehow create a monitoring system to automatically report things to LEO, a) Cory's comments are germane, and b) that probably needs the aforementioned noodling by SCOTUS.
The most ludicrous thing is that you call the Guardian's favourite clickbait writer (you know this because of just how often the Guardian tweets and Facebooks an article of hers) "normally sensible".
I'm very pro-feminist, but I have my suspicions that Jessica Valenti is actually a shill for the patriarchy, because she is so utterly absurd and badly thought out, and almost every article is clearly written just to provoke a response by being so stupid and un-nuanced. She is far and away the worst regular feminist writer for the Guardian; she would be the worst regular writer full stop, but they do employ Owen Jones, who seems to also be on a crusade to discredit the progressive left.
Her latest one is a joke - she complains at the start that we shouldn't buy into the consumerism of Christmas, then blames her husband for not buying into or caring about the consumerism of Christmas, gifts, and nonsense as she does. She is projecting her own issues onto a supposed wider one because her husband has different values to her.
[quote="Greg_Sheppard, post:6, topic:47973, full:true"]
…[/quote]
[citation needed]
That sure seems like your intent, although you can argue that you only meant that somebody like YouTube should use something like Content ID. That would be missing the point.
Those tools you use as examples work terribly. They cause pain and suffering every day. The task of finding popular songs is infinitely easier than the task of accurately identifying harassment. And the pain and suffering caused by censoring speech is vastly greater than the pain of censoring music.
No algorithm will ever understand the nuances that separate humor from harassment; even humans are terrible at the task. And what happens when the next President decides that political criticism is a form of harassment? What if Richard Nixon had had this technology at his command?
I just read the article, and I'm inclined to agree. Real shame I have zero pull here.
EDIT: I mean… here we are, on a forum, built by someone who's trying to build a platform for civilized discourse. And IIRC, it already has auto-banning for certain types of behavior, though I'm not sure if that's relevant… anyway… I'm wondering why Cory read this piece, thought, "hey, she's proposing a Content ID-style system," and felt the need to write it up that way. When I think of people going off half-cocked, I don't think of Cory Doctorow.
Good golly, that's some weapons-grade hyperbole.
Is it? One hundred hours of video are uploaded per minute now. If an average video is ten minutes long, thatâs 600 uploads per minute, or about a million uploads per day.
Even if Content ID was 99% accurate, there would be 10,000 false positives every day - 10,000 perfectly legal videos blocked for no reason. Maybe a few hundred legitimate, non-infringing users permanently banned every day.
You think thereâs any pain and suffering involved?
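The back-of-the-envelope math above can be sketched out directly. A minimal sketch - the figures (100 hours uploaded per minute, a 10-minute average video, 99% accuracy) are the post's own assumptions, not measured values:

```python
# Rough estimate of daily false positives from an automated filter,
# using the assumed figures from the post above.
hours_uploaded_per_minute = 100   # upload rate cited in the post
avg_video_minutes = 10            # assumed average video length
accuracy = 0.99                   # assumed filter accuracy

uploads_per_minute = hours_uploaded_per_minute * 60 / avg_video_minutes  # 600
uploads_per_day = uploads_per_minute * 60 * 24                           # 864,000
false_positives_per_day = uploads_per_day * (1 - accuracy)               # ~8,640

print(f"{uploads_per_day:,.0f} uploads/day, "
      f"~{false_positives_per_day:,.0f} wrongly flagged/day")
```

Rounding up to "about a million uploads" gives the post's figure of roughly 10,000 wrongly flagged videos per day, even at an optimistic 99% accuracy.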
I agree totally. It just conjured up a loop of "Don't Fear the Reaper" in my head, which is going to take a little while to fade out. I'm certainly not complaining, though.
Jessica, I did read your article when it was first published, and I didn't write it up at the time because I thought you'd made some really significant errors that I didn't have time to go into, so when I saw that Sarah had addressed those errors in depth, I linked to her piece.
Here is a closer set of reactions to your piece:
When money is on the line
It wasn't money, it was an existential threat to the system itself. The context for the current Content ID rules is Viacom's $1B lawsuit against Google, through which the company intended to have control of Youtube transferred to it (internal emails released in discovery reveal Viacom execs arguing over who would head up Youtube when it was a Viacom division).
The distinction matters, because the context that created the extremely problematic Content ID system is a belief that anyone who creates a platform where anyone can speak should have to police all speech or have their platform shuttered.
Content ID was an attempt to "show willing" for judges and the court of public opinion, but it's obvious at a cursory glance that Content ID makes no serious dent in copyright infringement.
internet companies somehow magically find ways to remove content and block repeat offenders.
No, they don't. Youtube can disconnect your account for repeat offenses, but not you - indeed, the number one rightsholder complaint about automated copyright takedown is that people just sign up under different accounts and start over (the exact same complaint that is fielded about online harassment).
just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they get taken down.
As Sarah points out, Youtube is full of Daily Show clips and music videos that haven't been taken down - but the analogy is a false one, since the whole set of "works that Youtube is policing for" can be contained in a database, while "harassment" is not a set of works or words, but nuanced behaviors and contexts.
Interestingly, Content ID is asked to police within these sorts of boundaries in the case of fair use. When your material is taken down because of a Content ID match, but you believe that the use is fair, Content ID has a process to address this.
And this process is easily the least-functional, most broken, least effective part of Content ID.
In other words, the part of content monitoring that most closely resembles an anti-harassment system is the part that works worst.
But a look at the comments under any video and it's clear there's no real screening system for even the most abusive language.
Again, that sounds right to me. A system that just blocked swears would not make much of a dent in actual harassment (as we've seen since AOL's chatrooms and their infamous anti-profanity filters, it is trivial to circumvent a system like this).
Meanwhile, swearing - even highly gendered and misogynist swearing - isn't the same thing as harassment. For starters, people who have been harassed often need to directly quote that harassment, and an automated system that blocks "the most abusive language" would prevent that.
There is also the problem of substring matching ("Scunthorpe"), discussion about words themselves ("Here is my video on the etymology of the word 'whore'"), etc.
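A naive blocklist filter illustrates all three failure modes at once. A toy sketch, not a description of any real platform's system - the word list and function are hypothetical:

```python
# Toy substring-based abuse filter, showing the "Scunthorpe problem"
# (innocent text wrongly flagged) and trivial evasion (abuse slipping past).
BLOCKLIST = {"whore", "cunt"}  # hypothetical list of "most abusive" words

def is_blocked(text: str) -> bool:
    """Flag text if any blocked word appears anywhere as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

# False positive: an innocent place name contains a blocked substring.
print(is_blocked("I grew up in Scunthorpe"))               # True - wrongly blocked

# False positive: discussing a word is not using it as abuse.
print(is_blocked("My video on the etymology of 'whore'"))  # True - wrongly blocked

# False negative: a trivial misspelling sails straight through.
print(is_blocked("you wh0re"))                             # False - abuse gets past
```

Exact-substring matching over-blocks quotation and discussion while under-blocking anyone willing to swap a single character, which is why such filters have been circumvented since the AOL chatroom days.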
"If Silicon Valley can invent a driverless car, they can address online harassment on their platforms."
As Sarah points out, Silicon Valley can't invent driverless cars.
But while Twitter's rules include a ban on violent threats and "targeted abuse", they do not, I was told, "proactively monitor content on the platform."
This sounds right to me. How would you "proactively monitor" more than 1,000 tweets/second? (Unless, of course, you were using something like Content ID, which you say you're not advocating.)
==
To sum up (and to reiterate my original post): there are things that Silicon Valley can do to fight harassment, but none of the suggestions in your column:

- Adapting Content ID for harassment
- Blocking "abusive" language
- Pro-actively monitoring tweets

are things that would be good for this, and all pose significant threats to free speech.
Further, all the examples in your column of hard things that Silicon Valley has done that are similar in scope to blocking harassment are not things that they've actually done:
- Terminating repeat offenders
- Blocking music videos and Daily Show clips
- Making a self-driving car
These three problems are actually canonical examples of the kinds of problems that no one has figured out how to solve on unbounded, open systems:

- Building a judgement system that can cope with rapidly changing contexts and a largely unknown landscape (Google's cars are a conjuror's trick: http://boingboing.net/?p=339976)
- Uniquely identifying and interdicting humans (rather than users or IP addresses)
- Preventing people from re-posting material that they believe should be aired in public
There are good and bad reasons to perfect all these technologies (yes to self-driving cars, no to better military drones; yes to detecting corporate astroturfers, no to unmasking whistleblowers; yes to blocking harassment, no to blocking evidence of human rights abuses). They are all under active research at universities and corporations and governments.
But none are actually the kinds of things that we can expect to do much about harassment today, and today is when we need things done about harassment.
This topic was automatically closed after 5 days. New replies are no longer allowed.