Probably a “what have the Romans ever done for us, except [fill in list]” sort of post.
Back to topic: Discourse isn’t half bad now, what?
Something clicked when I read this. Which is that replying to a troll is in itself a violation of community guidelines, and thus would itself be flaggable.
It seems like users who reply to trolls fall into two basic groups—
New or naive users who just don’t seem to recognize trolling, and/or aren’t fully aware of the community guidelines, and who inadvertently take the bait.
Long-time users who recognize trolling, and know the community guidelines, and who nevertheless reply.
Users in group 2 seem to be sub-trolling, helping the trolls do their work. I’ve flagged trolls, but it never occurred to me before to flag replies to trolls; now I think it would be a good idea if that were encouraged.
I would be.
I agree that it might be nice to have the option of finding out what/how/when status changed but I wouldn’t want to get a ‘warning’.
The physics of trolling are extremely unforgiving: if only one of a thousand users takes the bait, the game is afoot and the replies start rolling in. And the more replies you get, the more likely it is that even more replies-to-replies will roll in as everything snowballs out of control.
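That snowball is easy to quantify. A minimal back-of-the-envelope sketch, using the post’s own one-in-a-thousand bait rate as the per-reader probability (the function name is mine, purely illustrative):

```python
# Probability that at least one of n independent readers takes the bait,
# given each replies with probability p: 1 - (1 - p) ** n.
def p_any_reply(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# With a 1-in-1000 bait rate, a thread seen by 1000 users is already
# more likely than not to feed the troll at least once.
print(round(p_any_reply(0.001, 1000), 3))   # ~0.632
print(round(p_any_reply(0.001, 5000), 3))   # ~0.993
```

Which is the point: even a tiny per-user failure rate, multiplied across an audience, makes at least one reply nearly certain.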
Which means that from a game-theory perspective you basically can’t win a troll fight. There’s no way to guarantee every single logged-in user will follow the “don’t feed the trolls, please flag it instead” rule to the letter. It is a hugely important philosophical guideline, for sure, and it should be taught to everyone… but it is not a panacea.
There’s a great article on this I highly recommend everyone (in this topic) give a close read, since if you’re already reading this, you’re in the target audience.
Because the physics of trolling are so unforgiving, I’d say there is no substitute for active moderation. Now, when making those active moderation decisions, whether you…
… that depends on the whims of the site owners, doesn’t it? In any case it is my very strong recommendation that all important moderation decisions be handled collaboratively by a group of at least three people. From a community perspective, “the absolute law of one man” always ends badly in my experience.
That’s definitely not the case. What you might be reacting to is “key stakeholder decisions”.
Also, I found this exchange on Hacker News fascinating.
I think the relevant guideline puts it well:
Comments should get more civil and substantive, not less, as a topic gets more divisive.
The inadequacy of this guideline, and the habit of couching most moderation along its lines, is why the problem and the ‘dynamics’, as tptacek puts it, exist in the first place.
The site selects for and breeds civil, substantive racists and misogynists (along with the hyper-sensitized responses) like a hospital breeds antibiotic-resistant superbugs.
I can see selects for, but breeds seems a stretch. Unless you mean breeds civility within racists and misogynists, which seems beneficial?
Yes, mostly the second thing. It’s the opposite of beneficial: because the guidelines say ‘don’t be a meanie/obvious blowhard’, and most people who get called out for anything are called out for something along those lines, bigots who adapt to these can and sometimes do last on the site for years.
HN’s mods put a great deal of effort in and are surprisingly successful at containing the far more basic and common human impulse to be a jerk to strangers online. They have rules, they enforce them, they publicly shame rulebreakers, etc. You are explicitly not allowed to be an asshat on HN and everyone knows it. The place would be better if ‘don’t be a bigot’ got the same treatment. All-caps users and transgressors against HN’s fundamentalist quotation-marks cult are exposed to more public opprobrium than your typical “human biodiversity” sea lion.
I had never thought about it this way, but he’s right – a racist, misogynist, or bigoted line of argument is far more dangerous when it is draped in the robes of overt civility. So to the extent that we are teaching people …
Hey, it’s OK to say racist / sexist / bigoted things, as long as you say them in a civil, substantive manner
… we are indirectly creating much more powerful racists / sexists / bigots, who will become immune to less sophisticated moderators who will only see “well, what this person is saying is kind of morally abhorrent, but they aren’t saying it in a mean way…”
Some long-time members of the BBS were eventually asked to leave because they had learned “civil disobedience”: they would work very hard to follow the letter of policy while violating the spirit of it. Only through careful observation by an active mod team (which in these cases often included regulars helping to compile data) would these patterns come to light and these “attention/energy vampires” be successfully exorcised.
True, but I don’t get demoted by FB. There aren’t any consequences that affect me in a way I care about (though I assume it may affect the feed algorithm one way or another). I don’t lose any function on FB by not visiting it.
As to the changes, I really appreciate the “BB” for easy main site navigation. A simple but very handy change.
I do look forward to an ignore feature. There is a person in the forum who really doesn’t like me, and it would be good for both of us to put each other on ignore. One question about ignore, though, is whether posts that quote that person will be visible. I think posts by people on the ignore list should be click-to-view: both their original posts and any quotes of their posts (the rest of a post containing such a quote should display normally; only the quote itself should require a click to view).
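To make that proposed rule concrete, here is a minimal sketch; `Post`, `render_state`, and their fields are hypothetical illustrations of the suggestion, not real Discourse internals:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    quoted_authors: list = field(default_factory=list)  # authors quoted in this post

def render_state(post: Post, ignored: set) -> dict:
    """Suggested rule: a post by an ignored user is fully click-to-view;
    a post that merely quotes an ignored user stays visible, with only
    the quote itself collapsed behind a click."""
    return {
        "body_collapsed": post.author in ignored,
        "collapsed_quotes": [a for a in post.quoted_authors if a in ignored],
    }

ignored = {"troll42"}
print(render_state(Post("troll42", "bait"), ignored))
print(render_state(Post("regular", "reply", ["troll42"]), ignored))
```

The key design point is that ignoring someone collapses their words everywhere without hiding the surrounding conversation.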
Really like the quote system here. It is much faster to use for specific quotes than other comment-forum or BBS systems. And an ignore feature as clever as the quote system would really put it over the top.
You absolutely do, they just don’t tell you anything about it – ever.
I just want to politely point out that this is exactly the sort of thing the private TL3 lounge area was intended for
The problem was that the users in question were hanging out and participating enough they made it to TL3…
I think I could be convinced by a list of all the mods since tribes and which software was running when they left, if that supported your assertion. Which I don’t believe it would, but I am always willing to reconsider in light of new data.
That is what I’d say many advanced amateur and professional trolls do: they learn the rules to the letter and then push them to the very limit to goad other people into crossing the line in response. It’s their game, and one played by regulars who have some friends on the forum, and who can gang up on others by vindictively flagging posts; multiplied by their trusted status, that can get their “enemies” bounced quickly when they respond to the deliberate goading, allowing the trolls to take over a forum to one degree or another. Clearly, creating and maintaining a nice culture on a forum is hard work, work that can easily be undone by just one or a few trusted people if they aren’t supervised.
How so, other than the feed algorithm I mentioned? I’ve heard vague mentions of trust scores, but I don’t know what the consequences to me are, nor whether how often I visit FB affects it.
Hmm, thinking back, maybe it should have been called the “Situation Room” instead of the “Lounge” … I’m afraid we took “Lounge” seriously and hung out and seriously enjoyed it
I hope we’ll learn more about how an ignore feature would work. I’m not familiar with them.
Couldn’t it potentially make conversations kind of disjointed? I mean, people posting without knowing what some others have already said in the thread could get kind of weird. It could become like people talking past each other, no?
I recall something here not too long ago, where Person A posted something pretty far into a thread, and Person B got offended because it was similar to something that Person B had already posted earlier in the thread. Some assumptions were made, and as I recall it Person B wanted acknowledgement of their earlier post and an apology.
It turned out (not too surprisingly) that Person A had replied to an early post as they were reading along, without having read through the whole thread, and was unaware that Person B had already posted the same thing. So…we do already have the potential for that to happen, if people don’t read the whole thread before responding—but if it isn’t even possible to see someone else’s posts, I could see that kind of weirdness and misunderstanding happening more often.
Or am I not understanding how an ignore feature works?
If said “something” is a link, Discourse will warn you that the link was already posted earlier in the topic, at least.
Thanks for asking and congratulations on BBS’s meta-caek day. Your willingness to listen to the site’s users is one of the many things that makes this forum the only one I comment in on non-business matters. The other thing I particularly appreciate about the system is how well it supports excellent and thoughtful moderators like @orenwolf and his former colleagues like @falcor. As far as I’m concerned everything is working and the only suggestions I can offer go to the kaizen process made evident by your requests.
The main improvements I would like to see (triviality level of implementation aside) are those that give the mods more fine-tuned analytics and feedback tools to do their jobs (esp. the one Orenwolf describes above to combat the problem you describe here), specifically:
Item 1: That the Ignore User feature you are implementing tie back to the mods’ admin and analytics dashboard, with threshold/alert settings that let the mods see when users of various trust levels are ignoring (and thus refusing to engage with) a particular user. This would make it easier for a mod to zero in on a potential problem user and perhaps solicit more feedback on why the user is seen that way before taking action.
In order to prevent abuse of such a feature, and to encourage open discourse, I would further suggest that the number of ignored users per user be capped at various levels based on trust level. I doubt anyone would find this to be a hardship. For example, I currently use the Tampermonkey/Greasemonkey muting script mentioned here, but the quality of discourse on the BBS is such (thanks to the mods and the system) that at any time I’ve muted fewer than 7 obnoxious, no-value-added users (sometimes including suspected duplicate accounts). On a site of this quality you could cap it at 10 ignored users per TL3 user at any given time with no issue.
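A minimal sketch of how the cap and the dashboard alert from Item 1 might fit together; every name and number here is illustrative, not an actual Discourse setting:

```python
# Hypothetical per-trust-level caps on ignore-list size, plus a threshold
# for surfacing a much-ignored user on the mods' dashboard.
IGNORE_CAP_BY_TL = {0: 0, 1: 3, 2: 5, 3: 10, 4: 20}  # illustrative numbers
ALERT_THRESHOLD = 5  # distinct ignorers before the dashboard flags a user

def can_ignore(current_list_size: int, trust_level: int) -> bool:
    """May this user add one more person to their ignore list?"""
    return current_list_size < IGNORE_CAP_BY_TL.get(trust_level, 0)

def needs_mod_review(ignorers_of_user: set) -> bool:
    """Should the dashboard surface this user for a closer look?"""
    return len(ignorers_of_user) >= ALERT_THRESHOLD

print(can_ignore(9, 3))    # True: a TL3 user may ignore a 10th person
print(can_ignore(10, 3))   # False: the TL3 cap of 10 is reached
print(needs_mod_review({"a", "b", "c", "d", "e"}))  # True
```

The cap discourages using ignore as a bulk blacklist, while the alert turns many independent ignores into a moderation signal.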
On a similar note,
Item 2: A special new flag called “Bad-Faith Argument” would be welcome fine-tuning. This flag option would have a required text element (similar to the current “Something Else”) where the flagger would be required to explain why she believes a particular comment is being made in bad faith. This flag (as I hope others already are) would be tied to the mods’ admin and analytics dashboard so that the mods could determine the validity of the complaint and begin to track patterns of commenters who are behaving in a consistently trollish manner on specific topics or across the BBS. If possible such a flag would not autodelete a comment that exceeds the threshold of bad-faith flags but rather put it temporarily on hold pending a judgment by the mod of the consistency and validity of the flag(s).
In order to prevent abuse of such a flag, I would suggest that a limited number of them be handed out each month to users based on Trust Level, and that the text element require a minimum number of characters (perhaps the average length of a sentence). This would cut down the number of flag cases the mods have to review and provide them with site-policy-based cause to ban anyone who abuses the flags through other means.
An argument might be made that “Something Else” already does this job, but given the uptick we’ve seen in bad-faith, fallacious, and truth-challenged arguments in larger society I believe that a separate and more specific flag might be useful.
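For concreteness, the mechanics of the proposed flag might look something like this; all names and thresholds are hypothetical, chosen only to illustrate the suggestion:

```python
# Hypothetical "Bad-Faith Argument" flag: a required explanation of a
# minimum length, and a threshold that puts a post on hold for mod
# review rather than auto-deleting it.
MIN_EXPLANATION_CHARS = 80   # roughly an average sentence; illustrative
HOLD_THRESHOLD = 3           # valid flags before a post is held; illustrative

def accept_flag(explanation: str) -> bool:
    """Reject flags whose required explanation is too short to review."""
    return len(explanation.strip()) >= MIN_EXPLANATION_CHARS

def post_status(valid_flag_count: int) -> str:
    """On hold pending a mod's judgment; never auto-deleted."""
    return "on_hold" if valid_flag_count >= HOLD_THRESHOLD else "visible"

print(accept_flag("too short"))   # False
print(post_status(3))             # on_hold
print(post_status(1))             # visible
```

The minimum-length requirement does double duty: it filters out drive-by flags and leaves the mods a written rationale to judge.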
Finally, a bit outside the scope of the discussion and (understandably) most controversially…
Item 3: I would suggest that the tech, moderation, and business management explore the possibility of implementing a very modest annual or perhaps one-time non-refundable fee to give a user the ability to comment at any trust level. Based on other sites I’ve seen that use this system it seems that USD$5-12/year is the standard.
Acknowledging that even a nominal fee may be a hardship for and barrier to entry to many users, I would further suggest that any such system must include a sponsorship mechanism so that another paid member might be allowed to cover the costs for a limited number of people (between 1 and 3) who can’t afford it. This might be done through direct requests to individual users or through contributions to a general pool to be distributed by site management on a vetted-request basis.
Based on the other places I’ve seen this working, I believe such a policy and mechanism would immediately eliminate the large number of “drive-by” and casual trolls we’ve seen invading the site in the past year or so via various avenues (troll farms, astroturfers, hobbyhorse searches via FB and Google news alerts, etc.), creating less work for the mods and less needless and tiresome turmoil on the BBS. It would likely reduce the number of duplicate and sockpuppet and ban-evasion accounts on BBS. Finally, it would help fund moderation and development of what most of us agree is a high-quality BBS.
To be clear, I don’t see any of these suggestions silencing those voicing “unpopular” or “politically incorrect” or “edgy” opinions, just ones propounded in bad faith and/or in a consistently disrespectful manner (all too often those doing the latter here defend themselves by claiming they’re being flagged due to one of the former – a bad-faith defense of a bad-faith comment).
Thanks again for five great years. Looking forward to a year 6 of similar quality d/Discourse.
I don’t think we need a return of the TL3 Regulars Lounges; they caused a lot of needless turmoil. Nothing good came of people who were consistently criticised in the Lounges gaining TL3 and seeing what other trusted members really thought of them.
The main thread of ha ha you hang out here too much was fun though.
Badges are amusing. Perhaps it might be worth introducing more badges for earning Popular Link or Nice Reply and so on a certain number of times? Or would that be nuts?
Of course there are trolls everywhere and I could never say this place is magically troll-free, but to some extent it works both ways, does it not? Are there not those who will pounce on something that seems to bear the slightest whiff of dissent, something innocent crafted with no cunning whatsoever, and declare that a person who probably agrees with them completely on most things must actually be the lowest form of nut-job bottom-feeder?
The Tampermonkey script I mentioned in my comment above simply hides the Muted user’s comments from view. Combined with the existing ability to mute alerts about responses from a given user, it means the Muted user effectively vanishes from the BBS for the person using the functionality.
The Muted user may find it odd at first that someone is not responding to them but that can happen without an Ignore/Mute feature. I’ve never had an issue with it and no-one I’ve Muted has ever contacted me via DM to ask what’s going on.
I really don’t see a lot of this on the site, again largely due to the mods and a system that’s thoughtfully designed to support them. What’s pounced on for the most part is not dissent but bad-faith arguments, willful ignorance and disrespect.
If the system gives the mods more fine-tuned analytics and alerting tools in connection with flags and Ignore, the rare instances of what you describe will likely become rarer.
Isn’t every thread the questions thread?