Google's new product identifies whether a comment could be perceived as “toxic” to a discussion

Probably not, but if they had, there’s room for actual workable sentiment analysis. There are certain misspellings (some unintentional, but some definitely intentional “ironic” spellings) along with choices of vocabulary that are mainly used by toxic people. These are so prevalent that I could probably judge toxicity from the chi-squared statistics of such words and phrases alone, without even doing any kind of whizbang go-to-fuck-yoself machine learning on any of it. Granted, the vast majority of words and phrases would almost certainly classify as non-toxic.
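To make the chi-squared idea concrete, here's a minimal stdlib-only sketch: build a 2x2 contingency table per word (toxic vs. non-toxic comments, word present vs. absent) and compute the chi-squared statistic for each. The toy corpora and word lists are entirely made up for illustration.

```python
from collections import Counter

# Hypothetical toy corpora standing in for real labeled comment data.
toxic = ["u r dumb lol", "dumb idiot lol", "total idiot"]
clean = ["great point", "thanks for the link", "great thread"]

def chi2_scores(toxic_docs, clean_docs):
    """Score each word by its chi-squared statistic over a 2x2
    contingency table: (toxic / clean) x (word present / absent)."""
    tox = Counter(w for d in toxic_docs for w in set(d.split()))
    cln = Counter(w for d in clean_docs for w in set(d.split()))
    n_tox, n_cln = len(toxic_docs), len(clean_docs)
    n = n_tox + n_cln
    scores = {}
    for w in set(tox) | set(cln):
        a = tox[w]        # toxic docs containing w
        b = cln[w]        # clean docs containing w
        c = n_tox - a     # toxic docs without w
        d = n_cln - b     # clean docs without w
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        scores[w] = num / den if den else 0.0
    return scores

scores = chi2_scores(toxic, clean)
# Strongly class-associated words ("idiot", "dumb") outscore
# incidental ones ("u", "thanks") even on this tiny sample.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Note the statistic is symmetric: it flags words strongly associated with *either* class, so you'd still threshold and keep direction (toxic-leaning vs. clean-leaning) separately.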

The next step would be using words and phrases in context. This could be used to lift talking points out of the background noise, but would definitely require some machine learning (Support Vector Machine, maybe?) over a cleaned set of statistically significant features. This would still be relatively hit-or-miss, but could be serviceable.
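For the SVM step, a linear SVM trained with Pegasos-style sub-gradient descent is enough to sketch the idea; the two features here (count of "toxic-leaning" words, count of "clean-leaning" words, as selected by something like the chi-squared filter above) and the training data are invented for illustration.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style training of a linear SVM (no bias term).
    X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # Regularization shrink, then a hinge-loss step if the
            # example violates the margin.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Feature 0: count of chi2-selected toxic words in the comment;
# feature 1: count of chi2-selected clean words. Labels: +1 = toxic.
X = [[2, 0], [1, 0], [0, 2], [0, 1]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
```

In practice you'd use a ready-made implementation (e.g. a linear SVM from scikit-learn) over thousands of cleaned features rather than this toy, but the hit-or-miss caveat stands: context windows blow up the feature space fast.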