General Moderation Topic

“It looks like you’re a fascist simp. Have you considered fucking off?”


31 Likes

“Full Self Moderating” is about as far along as “Full Self Driving.”

24 Likes

Where I think AI can help (and we’ll be testing this soon, more to come!) is in recognizing tone and intent in responses. Most posts by sealions or the disruptive types tend to be insensitive and combative - Discourse has a shiny new plug-in to try that might help us to catch those before they can derail conversations. We shall see.
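Not the Discourse plug-in itself (I haven't looked under its hood), but the general shape of that kind of check is probably something like this - run each draft post through an off-the-shelf toxicity model and hold the combative-looking ones for a human. Detoxify here is just a stand-in example, and the threshold is made up:

```python
# Rough, hypothetical sketch - NOT the actual Discourse plug-in.
# Uses the open-source Detoxify model as a stand-in toxicity scorer.
from detoxify import Detoxify

scorer = Detoxify("original")

def needs_human_review(post_text: str, threshold: float = 0.8) -> bool:
    """Hold a draft post for moderator review if it scores as combative/toxic."""
    scores = scorer.predict(post_text)  # dict like {"toxicity": 0.97, "insult": 0.91, ...}
    return scores["toxicity"] >= threshold or scores["insult"] >= threshold
```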

25 Likes

Can we teach AI disdain, loathing, and sarcasm? Sure. At a level high enough for content moderation? Probably not in my lifetime.

23 Likes

I assume - but can’t say for sure, since I have never tried to type them, much less post them here - that there are many, many words in addition to the “T” word on the list of things the BBS checks for before letting a post through, which is how it blocks obvious, blatant hate speech.
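Pure guesswork on my part, but I’d imagine the watched-words check is roughly this simple - scan the draft against a configured list before the post is allowed through (the list below is obviously a stand-in):

```python
# Hypothetical sketch of a watched-words filter; the real list and matching
# rules are whatever the BBS admins have actually configured.
import re

WATCHED_WORDS = ["example-slur-1", "example-slur-2"]  # stand-ins, obviously

def held_for_moderation(post_text: str) -> bool:
    """Send the post to the moderation queue if it contains a watched word."""
    return any(
        re.search(rf"\b{re.escape(word)}\b", post_text, flags=re.IGNORECASE)
        for word in WATCHED_WORDS
    )
```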

Acting like a pinniped (third try to figure out why this very post goes to moderation :grin:) and other techniques, though, are hard even for humans with context to detect right away. The idea is that by staying within the letter of the rules you derail and muddy things over multiple posts, which I can’t imagine AI having an easy time with. At this point having community flags and human leaders and mods keeping an eye on things seems like the best option.

I mean, “Was a law broken?” could be a perfectly valid thing to ask in many threads - only with a lot of context does it become clear that, in a certain thread, accompanied by other posts, someone is not acting in good faith.

ETA: Yep! that was it!

12 Likes

I’ll be on the lookout for Captchas with those elements.

21 Likes

“Please select the items or persons below that are worthy of your scorn”

23 Likes

I want to be clear that I wasn’t suggesting

8 Likes

… maybe they can train pigeons to peck at bad comments on the screen :bird:

18 Likes

Please select the boxes containing only genuine expressions of extreme views.

15 Likes


COO…L

16 Likes

It might be worth also checking into one that can recognise comments that parrot back garbage from Russian and other disinformation sites. I wouldn’t be surprised if someone’s already fed that corpus o’ crap into a model.
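If someone has, the check could be as blunt as comparing each comment against that corpus with sentence embeddings and flagging anything suspiciously close. A totally hypothetical sketch - sentence-transformers is just one way to do it, and the “corpus” here is a couple of made-up stand-ins:

```python
# Hypothetical sketch: flag comments that closely echo known disinformation text.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Tiny stand-in corpus; a real one would hold the actual scraped talking points.
disinfo_corpus = [
    "example talking point one",
    "example talking point two",
]
corpus_embeddings = model.encode(disinfo_corpus, convert_to_tensor=True)

def echoes_disinfo(comment: str, threshold: float = 0.75) -> bool:
    """True if the comment is semantically close to any known disinfo snippet."""
    comment_embedding = model.encode(comment, convert_to_tensor=True)
    return bool(util.cos_sim(comment_embedding, corpus_embeddings).max() >= threshold)
```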

14 Likes

They don’t know what comments are fowl, they just wing it.

10 Likes

You’re talon the truth!

12 Likes

And snark, don’t forget snark

10 Likes

Exactly what an AI trying to slip under the radar would say…:thinking:

Kind of. I think almost every time I’ve seen someone go down the road of equating ‘legality’ with ‘morality,’ they are not coming from a place of good faith. In the cases where it’s asked in good faith, the tone is totally different.

Nooooooo! You will pry my snark from my cold, dead hand!

18 Likes

I mean, sure, he had a Nazi flag on his wall, and Nazi memorabilia all over the place, and a sign that said “No Jews Allowed” on the door, but were any laws broken? I think not! (/s)

21 Likes

I agree! What I was trying to get at was that I think people (at least today) do better at detecting “Kind of” than machine learning tools do. Especially if you are talking about tools that prevent someone from posting something, or send it to moderation before allowing it through.

I was just pulling the least obvious of that poster’s posts, though by the time they said it, it was already obvious from context to us meat folks what they were doing.

Sorry if I wasn’t clear about that! :sweat_smile:

9 Likes

Oh, ya, I got your point, and agree. It’s the nuance that makes the idea of AI moderation so tough. I was just expanding on the notion, because those legality/morality debates really get my goat when they’re made in bad faith.
Thinking out loud, it is totally legitimate to be curious about the actual legality of stuff. It’s shitty to pretend that’s the only bar we should be holding things to, and that’s where this certain kind of tr0ll usually shows their hand.

11 Likes