So let me start off with this as an example: https://bbs.boingboing.net/t/guns-dont-kill-people-toddlers-do/
Is that thread not a bubbling mess of… blah and blec?
Now think of some innocent BBS member entering this thread, unsure of what to expect, only to end up running straight into a frothing open sewer of personal opinions and egos. Not only is this a wholly unpleasant experience for a newer BBS member, but if they have come here as a refugee from one of the more repulsive corners of the internet (such as 8chan, 4chan, 7chan, or certain parts of reddit [the dark web shall remain unmentioned]), this experience might trigger their PTSD.
“So… how would you solve this problem @TailOfTruth ?”
Well, thank you for asking! BoingBoing BBS should have a built-in warning system that acts as a trigger warning, letting people know what kind of thread they are entering. For instance, in the case of the example given above, I would have anyone entering the thread first view a warning page with an “I understand and accept that the following thread could trigger Internet PTSD and will likely be filled with egotistical assholes who all think they are right.” button, closely followed, or preceded, by this gif:
While I support your request not to have triggering stuff sprung on you without warning, I can’t help feeling that you’d have to be very naive to expect a post about guns (really, anything about guns) not to descend into insanity in the comments section, considering that’s what happens every single time the subject comes up on the Anglophone internet.
There’s no point in using trigger warnings, because only responsible commenters will use them; trolls will ignore the convention and continue, with impunity, to participate in the discussion only by rubbishing others’ ideas rather than contributing anything constructive.
This might be related to @beschizza proposing a special “high risk discussion” mode. Because it is certainly true that some topics are just inherently controversial.
The key observation is that regulars, known users, are generally OK and can be trusted with these kinds of topics, but the general public… not so much.
Thus, a lockdown where new users have to do a bit more to post (a rough sketch of how such a gate might fit together follows the list):
maybe they have to prove they read every prior comment
maybe they have to prove they read the original article
maybe all new posts by new users have to be approved
maybe there are even more restrictions on what new users can post
etc.
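For concreteness, here is a minimal sketch of that gate in Python. Every name in it, from `gate_new_post` down to the trust-level cutoff, is an assumption invented for this post, not Discourse’s actual plugin API:

```python
# Hypothetical sketch of the "lockdown" checks listed above. All of these
# names are invented for illustration; none are Discourse's real API.

from dataclasses import dataclass, field
from enum import Enum

class Gate(Enum):
    ALLOW = "post goes live immediately"
    MUST_READ_FIRST = "user must read all prior comments first"
    HOLD_FOR_APPROVAL = "post is queued for moderator approval"

@dataclass
class User:
    trust_level: int                          # 0 = brand-new account
    read_post_ids: set = field(default_factory=set)

@dataclass
class Topic:
    high_risk: bool
    post_ids: list = field(default_factory=list)

def gate_new_post(user: User, topic: Topic) -> Gate:
    """Decide how a new post is handled in a (possibly high-risk) topic."""
    if not topic.high_risk or user.trust_level >= 2:
        # Regulars, and ordinary topics, get no extra hoops.
        return Gate.ALLOW
    if not set(topic.post_ids) <= user.read_post_ids:
        # New users must prove they read every prior comment.
        return Gate.MUST_READ_FIRST
    # Even then, new users' posts are held for moderator approval.
    return Gate.HOLD_FOR_APPROVAL
```

The point of the layering is that an established account short-circuits every check, so all the friction lands on new users, exactly as intended.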
Reading through my reply to that topic, I still favor a basic algorithmic check to see if a topic is “high risk” and, if so, trigger this high risk mode, which really only affects new users…
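As for what a “basic algorithmic check” could look like: a keyword heuristic over the title and first post is about the simplest possible version. This is only a sketch; the term list and threshold are invented here, and a real check would be tuned against the site’s own flag history:

```python
# Sketch of a trivially simple "is this topic high risk?" heuristic.
# The keyword list and threshold are made up for illustration.

HIGH_RISK_TERMS = {"gun", "guns", "abortion", "gamergate", "vaccine"}

def looks_high_risk(title: str, first_post: str, threshold: int = 1) -> bool:
    """Flag a topic as high risk if enough trigger terms appear."""
    words = (title + " " + first_post).lower().split()
    hits = sum(1 for w in words if w.strip(".,!?\"'()") in HIGH_RISK_TERMS)
    return hits >= threshold

# Example: the topic linked above trips the check on its title alone.
assert looks_high_risk("Guns don't kill people, toddlers do", "")
```

It would misfire now and then, as keyword lists always do, but since the only consequence is extra friction for brand-new accounts, a false positive is cheap.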
btw @TailOfTruth this is an excellent meta topic, it’s exactly why we create this category on every install of Discourse. All real community is predicated on internal discussions of “why are we doing this, and should we?” by people that, y’know, care about what goes on there.
I really like the idea of a “high risk” category combined with new accounts needing to have their posts approved on those threads. It will go some way to mitigate the drive-bys whilst not unduly penalising new posters with a genuine interest in discussion.
And you could have the “high risk” tag generated by an algorithm, or have the original poster apply it up front. Because, let’s be honest, we know which comment threads are likely to be the problematic ones.
Having a “high risk mode” does actually sound like it might fix some of the more controversial threads/topics, but would it incorporate this wonderful gif as part of the warning? A gif is worth 1000 × [# of frames] words… and thus this gif is worth quite a few thousand words. I think it could be a wonderful addition to any future “high risk mode”. No?
I suggested three days of moderation for all new accounts, and you basically told me to go get bent. It is good to see that when a Boing Boing author/editor suggests it, you consider it…