I’ve been experimenting with feeding the corpora of commenters’ posting histories through a Bayesian filter: essentially a spam filter that’s trained to classify trolling rather than spam.
It’s a very effective approach, but by no means foolproof. I have limited resources, but the trials I’ve run are very promising: digesting walls of text in political forums, flagging personal abuse, tagging ideologues’ talking points plagiarised from professional pundits.
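For the curious, here’s a minimal sketch of the idea, assuming an off-the-shelf multinomial naive Bayes classifier from scikit-learn; the posts, flags, and variable names below are invented placeholders, not my actual corpus or pipeline.

```python
# A minimal sketch, not my actual pipeline: train a multinomial naive Bayes
# model on posts that human moderators have already flagged, then score a
# commenter's concatenated posting history against it. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: a post (or an entire history collapsed into one
# document) paired with the flag a human applied to it.
posts = [
    "you people are all idiots and should be banned",
    "wake up sheeple, stop parroting the pundits",
    "here is a source that covers both sides of the bill",
    "I disagree, but that is a fair reading of the data",
]
flags = ["troll", "troll", "ok", "ok"]

vectoriser = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectoriser.fit_transform(posts)

model = MultinomialNB()
model.fit(X, flags)

# Score a new history; a human moderator reviews whatever scores high.
# The tool never acts on its own.
history = ["wake up sheeple, the pundits already wrote your opinions for you"]
probabilities = model.predict_proba(vectoriser.transform(history))[0]
print(dict(zip(model.classes_, probabilities.round(3))))
```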
After working with it for quite some time, I think several things: it would be a very useful tool for moderators, and it’s imperative that it not be automated, because robots do heavy lifting and discourse is not heavy lifting;
Our discourse should not be governed by those who can figure out how to manipulate or evade a Bayesian filter, or train it to censor speech they merely disagree with. The filter only knows how to recognise what people flag as trolling, or fallacies, or personal abuse, or “shitposting”, as the toy example further down illustrates;
Forgetting, or never learning, how to recognise sociopathic and narcissistic behaviour makes us all more susceptible to being taken advantage of, not less;
Training sociopaths and narcissists to mask the easily recognisable behaviours that act as warning signals only sharpens their skill set.
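On the point that the filter only knows what people flag, here is the toy example I mentioned: a self-contained sketch with made-up posts, again using scikit-learn, not a description of my setup.

```python
# A self-contained toy for the point above: the filter has no notion of
# "trolling", only of whichever flags it is trained on, so inverting the
# flags makes it target the opposite kind of speech.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["wake up sheeple", "fair point, here is a source"]
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(posts)
test = vectoriser.transform(["wake up and read a real source, sheeple"])

for flags in (["troll", "ok"], ["ok", "troll"]):   # same posts, flipped flags
    model = MultinomialNB().fit(X, flags)
    print(flags, "->", model.predict(test)[0])     # the verdict follows the flags
```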
I think the article, and my own experience, make the argument that moderators of fora are skilled, and that theirs is a hazardous profession.