Tech companies should do something about harassment, but not this

Jessica, I did read your article when it was first published. I didn’t write it up at the time because I thought you’d made some really significant errors that I didn’t have time to go into, so when I saw that Sarah had addressed those errors in depth, I linked to her piece.

Here is a closer set of reactions to your piece:

“When money is on the line”

It wasn’t money, it was an existential threat to the system itself. The context for the current Content ID rules is Viacom’s $1B lawsuit against Google, through which Viacom sought to have control of Youtube transferred to it (internal emails released in discovery show Viacom execs arguing over who would run Youtube once it became a Viacom division).

The distinction matters, because the context that created the extremely problematic Content ID system is a belief that anyone who creates a platform where anyone can speak should have to police all speech or have their platform shuttered.

Content ID was an attempt to “show willing” for judges and the court of public opinion, but it’s obvious at a cursory glance that Content ID makes no serious dent in copyright infringement.

“internet companies somehow magically find ways to remove content and block repeat offenders.”

No, they don’t. Youtube can disconnect your account for repeat offenses, but it can’t disconnect you – indeed, the number one rightsholder complaint about automated copyright takedowns is that people just sign up under different accounts and start over (the exact same complaint that is fielded about online harassment).

“just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they get taken down.”

As Sarah points out, Youtube is full of Daily Show clips and music videos that haven’t been taken down – but the analogy is a false one, since the whole set of “works that Youtube is policing for” can be contained in a database, while “harassment” is not a set of works or words, but nuanced behaviors and contexts.
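To make that asymmetry concrete, here’s a purely illustrative sketch (the fingerprint function and the reference entries are invented for the example; Youtube’s real matching is fuzzier than a hash lookup). The point is only that one question can be answered by consulting a finite table, and the other can’t:

```python
import hashlib

# Hypothetical catalogue of reference works a Content ID-style system polices for.
KNOWN_WORKS = {
    "3f8a...": "Music video X (Label Y)",
    "9c41...": "Daily Show clip, first aired 2014-05-02",
}

def fingerprint(upload: bytes) -> str:
    """Stand-in for an audio/video fingerprint; a real matcher is approximate."""
    return hashlib.sha256(upload).hexdigest()

def matches_known_work(upload: bytes) -> str | None:
    """Bounded problem: look the upload up in a finite catalogue."""
    return KNOWN_WORKS.get(fingerprint(upload))

def is_harassment(message: str) -> bool:
    """Unbounded problem: the same words can be a threat, a joke between
    friends, or a victim quoting an abuser. The answer depends on sender,
    target, history and context, none of which is in the message itself."""
    raise NotImplementedError("there is no finite reference set to consult")
```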

Interestingly, Content ID is asked to police within these sorts of boundaries in the case of fair use. When your material is taken down because of a Content ID match, but you believe that the use is fair, Content ID has a process to address this.

And this process is easily the least-functional, most broken, least effective part of Content ID.

In other words, the part of content monitoring that most closely resembles an anti-harassment system is the part that works worst.

“But a look at the comments under any video and it’s clear there’s no real screening system for even the most abusive language.”

Again, that sounds right to me. A system that just blocked swears would not make much of a dent in actual harassment (as we’ve seen since AOL’s chatrooms and their infamous anti-profanity filters, it is trivial to circumvent a system like this).

Meanwhile, swearing – even highly gendered and misogynist swearing – isn’t the same thing as harassment. For starters, people who have been harassed often need to quote that harassment directly, and an automated system that blocks “the most abusive language” would prevent that.

There is also the problem of substring matching (the “Scunthorpe problem”), of discussion about the words themselves (“Here is my video on the etymology of the word ‘whore’”), and so on.
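All of these failure modes fall straight out of any word-list approach. Here’s a toy sketch (the blocklist and the example messages are invented, and real filters are somewhat smarter, but the underlying problem doesn’t go away):

```python
# Toy "block the most abusive language" filter, for illustration only.
BLOCKLIST = {"whore"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked (simple substring match)."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

# False negative: trivial obfuscation (AOL-chatroom style) sails straight through.
print(naive_filter("you wh0re"))  # False

# False positives: it blocks a victim quoting their harasser and a video about
# the word itself -- and a richer blocklist would start catching place names
# like Scunthorpe that merely contain a banned substring.
print(naive_filter("he called me a 'whore', and here is the screenshot"))  # True
print(naive_filter("Here is my video on the etymology of the word 'whore'"))  # True
```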

“If Silicon Valley can invent a driverless car, they can address online harassment on their platforms.”

As Sarah points out, Silicon Valley can’t invent driverless cars.

“But while Twitter’s rules include a ban on violent threats and ‘targeted abuse’, they do not, I was told, ‘proactively monitor content on the platform.’”

This sounds right to me. How would you “proactively monitor” more than 1,000 tweets/second? (Unless, of course, you were using something like Content ID, which you say you’re not advocating).
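For a rough sense of scale, take the 1,000 tweets/second figure at face value; the five seconds of human review per tweet is an assumption I’ve invented purely for illustration:

```python
# Back-of-envelope: what "proactively monitoring" ~1,000 tweets/second implies.
tweets_per_second = 1_000
tweets_per_day = tweets_per_second * 60 * 60 * 24            # 86,400,000

review_seconds_per_tweet = 5                                  # invented assumption
moderator_seconds_per_shift = 8 * 60 * 60                     # one eight-hour shift

moderators_on_duty = (tweets_per_day * review_seconds_per_tweet) / moderator_seconds_per_shift
print(f"{tweets_per_day:,} tweets/day -> ~{moderators_on_duty:,.0f} moderators, every day")
# 86,400,000 tweets/day -> ~15,000 moderators, every day
```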

==

To sum up (and to reiterate my original post): there are things that Silicon Valley can do to fight harassment, but none of the suggestions in your column:

  • Adapting Content ID for harassment

  • Blocking “abusive” language

  • Pro-actively monitoring tweets

would be any good for this, and all of them pose significant threats to free speech.

Further, the examples in your column of hard things that Silicon Valley has supposedly done that are similar in scope to blocking harassment are not things it has actually done:

  • Terminating repeat offenders
  • Blocking music videos and Daily Show clips
  • Making a self-driving car

These three problems are actually canonical examples of the kinds of problem that no one has figured out how to solve on unbounded, open systems:

  • Build a judgement system that can cope with rapidly changing contexts and a largely unknown landscape (Google’s cars are a conjuror’s trick: http://boingboing.net/?p=339976)

  • Uniquely identify and interdict humans (rather than users or IP addresses)

  • Prevent people from re-posting material that they believe should be aired in public

There are good and bad reasons to perfect all these technologies (yes to self-driving cars, no to better military drones; yes to detecting corporate astroturfers, no to unmasking whistleblowers; yes to blocking harassment, no to blocking evidence of human rights abuses). They are all under active research at universities and corporations and governments.

But none are actually the kinds of things that we can expect to do much about harassment today, and today is when we need things done about harassment.
