Originally published at: http://boingboing.net/2016/09/19/jigsaw-wildly-ambitious-g.html
…
Interesting stuff!
Hopefully some of these tools will make it out of the Googlesphere, perhaps even as standalone or open-source projects, so we can use them without leaking data to Google right and left…
Unfortunately, I imagine that’s hopeless for most of them, especially anything using machine learning, where they’ll likely want to keep refining the tool on all the new data they scoop up.
I am cautiously optimistic, for now.
I disagree: I think a lot of the solution will just be getting enough pattern recognition in place; trolls are, by their nature, pathologically inclined towards specific behaviors, and are thus identifiable.
People are not as complex as popularly believed; they’re just complex enough to be difficult for us to understand fully as observers.
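To make the idea concrete, here’s a minimal sketch of the kind of pattern recognition I mean: a toy classifier over invented per-account behavior features (scikit-learn assumed). Nothing like whatever Jigsaw actually runs, obviously:

```python
# Toy sketch only: invented per-account features and labels, scikit-learn
# assumed. A real system would be vastly more elaborate (and more careful).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per account: [posts per day, share of posts aimed at strangers,
# share of posts containing insults]. All numbers are made up.
X = np.array([
    [120, 0.90, 0.70],  # troll
    [80,  0.80, 0.60],  # troll
    [5,   0.20, 0.00],  # regular user
    [12,  0.30, 0.05],  # regular user
])
y = np.array([1, 1, 0, 0])  # 1 = troll, 0 = not

clf = LogisticRegression().fit(X, y)

# A new account posting 100 times a day, mostly at strangers, mostly insults:
print(clf.predict([[100, 0.85, 0.50]]))  # very likely [1]
```

Whether the real behavioral signals are as cleanly separable as these made-up numbers is, of course, the whole question.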
It’s an interesting hypothesis! The question remains, though, whether all that scooped-up data will prove too tempting to stay away from…
This sounds like a step in the right direction. I hope it’s not the last step. Even if these tools are abominably bad, Google has told the world these are problems worth solving.
The person they open the article with seems herself to be a bit of a troll:
In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a “vitriolic cryptoracist who spends 20 hours a day on the Internet yelling at women.”
The ill-advised late-night tweet was, Jeong admits, provocative and absurd…
Trolling is not harassment, of course. I don’t want trolling censored – it’s a wonderful art form for exposing hypocrisy and unexamined positions. Actual harassment (such as she underwent) has no upside, but far too often I’ve seen trolling, or even mere disagreement, reported as harassment. I worry very strongly about how these AI tools will be trained.
It’s sad when we need AIs to regulate our moral behavior. But that’s been true since humanity began. Maybe, if they’re good enough, they’ll be the role models that our leaders fail to be.
Oh, against.
Ah! Censorship by algorithm. What an amazing idea. I mean, goodness knows that YouTube’s automatic copyright enforcement just always works. And it would sure suck if a form of activism (you know, with slogans and similar behaviors, and so on) got classified as ‘harmful’ and hidden from view.
I do not want unaccountable private institutions deciding on the implicit moral code of the future, I do not want them limiting what can be said in public discourse, and I especially don’t want them doing so using algorithms they can pretend are impartial (machine learning is a method for automating bias, after all).
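A toy example of what I mean by automating bias: train a classifier on labels that already encode a prejudice, and it will faithfully reproduce that prejudice behind a veneer of neutrality (invented data, scikit-learn assumed):

```python
# Toy illustration of bias laundering: if the human-produced training labels
# encode a prejudice, the "impartial" model reproduces it. All data invented,
# scikit-learn assumed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Suppose the moderators supplying labels flagged activist slogans as
# "harmful" while waving equally heated speech from elsewhere through.
comments = [
    "join the protest downtown tonight",        # flagged
    "we demand accountability, march with us",  # flagged
    "those people deserve whatever they get",   # not flagged
    "lovely weather for a barbecue",            # not flagged
]
flagged = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, flagged)

# The model now quietly suppresses protest language while looking like a
# neutral algorithm. Likely prints [1]:
print(model.predict(["march with us downtown for accountability"]))
```

The point isn’t the four fake comments; it’s that the model can only ever be as fair as its labels, and nobody outside gets to inspect the labels.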
Which is rather the fundamental problem mentioned at the start of the OP. If you don’t want the above, and you don’t want to institute a Government Office of Rightthink, then you are stuck with a world in which harassment is possible. Someone has to decide when telling people they are terrible, terrible human beings is legitimate speech and when it is harassment, absent obvious threshold points (threats and so on, which are already against the law). And whoever gets to decide will abuse it, because this is the ultimate ring of Gyges: irresistible power to shape the planetary discourse while outwardly being the picture of virtue.
We must certainly be on guard against machine learning as a tool to render bias “impartial”. As for human moderators, I’d say that transparency and accountability can, at least to an extent, counter megalomaniacal tendencies. Examples of entirely unmoderated discussion on the internet are somewhat discouraging, too.