Perspective is an API that makes it easier to host better conversations. The API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give realtime feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information, as illustrated in two experiments below. We’ll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as “toxic” to a discussion.
Google is trying out an engine for rating comments as unhelpfully toxic or not.
Their intro page lets you browse example comments and filter their visibility by toxicity threshold, as well as enter sample text to see how toxic the model judges it.
Partners listed include Wikipedia, The New York Times, The Economist, and The Guardian.
It will be interesting to see this in the field, where trolls can try to adapt to it.
The developers’ page requires agreeing to a number of legal conditions before you can use it, such as “Do you have people under 13 using your product, whose data would be submitted to this API? *” and “I’ll protect private data. For non-public data, or any data I would not like to not be stored by Google, I will use the doNotStore flag described in the API documentation. If I have not enabled the doNotStore flag, I request that Google store data submitted to the API to, for example, provide scores related to the context of a conversation and/or improve model performance. *” What?
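For context, that doNotStore flag is a field in the JSON body sent to the Comment Analyzer endpoint behind Perspective. The sketch below builds such a request body; the endpoint URL, the TOXICITY attribute, and the doNotStore field follow the public API documentation, but the helper function name and the API-key placeholder are illustrative assumptions, not code from Google.

```python
import json

# Comment Analyzer endpoint (replace YOUR_API_KEY with a real key).
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def build_analyze_request(text, do_not_store=True):
    """Build the JSON body for a comments:analyze call.

    Setting doNotStore=True asks Google not to retain the
    submitted comment text, per the terms quoted above.
    """
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": do_not_store,
    }

body = build_analyze_request("You are a troll.")
print(json.dumps(body, indent=2))
```

POSTing that body to ANALYZE_URL returns a toxicity score between 0 and 1 under attributeScores in the response, which is what the intro page's demo surfaces.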
If it leads to the evolution of a better class of abusive cyberbullying by trolls who can really use their badinage and puckish epigrams to best effect, well, that’s a win.