Moderation policy change: unfounded assumptions


I wasn’t asking a question, I was confirming that I understand the distinction that many in the psychiatric community make between a personality disorder and a mental illness (“mental state” being an even broader ontological category).

I do think it would be fair to say, for example, that the “president” over his career and as documented by his biographers displays many of the textbook traits associated with NPD.

That’s not me giving him an official or remote diagnosis of NPD (although some professionals have labelled him as such in reputable media outlets), and it’s also not me saying that a broader category (e.g. personality disorder, cluster B PD, mental illness) is responsible for those traits. That’s me making an observation that his well-known and well-reported behaviours check off all of the boxes in the description of NPD in the psychiatric community’s literal textbook.

If someone wants to flag me for that statement I’d understand, and I’d just rephrase to focus on that link and similar ones. But given that I’ve made statements to that effect here many times since 2015, I’m not too worried. We’ll see how it plays out.


I admire your confidence.


I see. Can I then assign disorders invented for the occasion that don’t correspond to established mental illnesses? For example, can I say that the same president suffers from confusion syndrome, a disease I just made up? It is characterized by not being able to tell the difference between allies and enemies or the difference between real and fake news. The patient also sometimes utters words devoid of meaning, like “covfefe”. :innocent::innocent::innocent:


Folk should by all means be discouraged from unfounded assumptions, and I can kind of see the argument against “contentless posts” (though if a post only takes 0.2 seconds to read, who cares). But the thing is, once a post is flagged, if it falls foul of the letter of any of the rules, then it’s going to be deleted even if 90% of users wouldn’t want that. Plus (unless this has been fixed), any replies are deleted along with it. If we’re going to get super-detailed, it feels like there should be some notion of misdemeanours vs. capital offences.


I know what you’re saying but am not too worried. It looks like both the mods here and designers of Discourse trust that flags – especially those that are weighted more heavily – will be thrown sparingly and with a sense of perspective by members of the community. It’s very rare that the mods do top-down content deletion without at least some community feedback via flags.

A new conduct standard might throw that off, of course, but this one seems reasonable. I’m willing to see how it works out.


Another example of the reason why I’m here and not on a site like Reddit.


Yeah, I didn’t mean that mods were running around hitting people with rulers – I was talking about what happens when a user flags a post. Like, if you say “I hope Turmp explodes”, and one user flags that as advocating violence, the mod will probably think they’re being oversensitive, but of course they’re going to err on the side of doing what the rules actually say.

The upshot is that just one person can potentially shut down a whole conversation. That it doesn’t happen more often is to the BBS population’s credit, but it does happen. I’ve seen threads where someone keeps trying to defend an unpopular view, and lots of people are arguing constructively with them, but someone decides to just spuriously flag all their comments. That stops everyone else having a discussion, and it possibly makes some people feel hounded for trying.


That’s one of the reasons flags are weighted and limited by the system’s Trust Level mechanism. Most long-term users here wouldn’t flag a hyperbolic comment like they would an actual call for violence. The small handful of bad actors here who’ve managed to stick around and get a high trust level are too sneaky to draw attention from the mods by abusing flags and shutting down threads.

That’s not to say that there aren’t flaws in the system that can be gamed to temporarily shut down threads, but from what I’ve seen the mods are aware of them and give regular feedback to the Discourse devs to close the loopholes where they can.

I’ve seen those, too, but the flags are usually justified and based not on the unpopular view but on legit violations of the conduct terms.


That’s why I think adding rules makes the problem worse; it makes it easier to find a nominal violation in any post that you don’t like. Kind of like how US Federal prosecutors work.

As it happens, after posting to this thread, my next two posts were an image of Claude Debussy crying (contentless post) and an accusation of Kremlin trolling (unfounded assumption). So both could justifiably be flagged, even though they are among the best and noblest posts the internet has ever seen.

I didn’t know there was a trust threshold for flags, though. That’s probably helpful. IMO it would also be good to leave convicted posts in a permanently flagged state (without deletion) if they’d already been replied to. Which is sort of what happened on the old forum IIRC.


There are relatively few rules here as a deliberate decision on the parts of the mods and owners, who enforce them fairly loosely and at their own discretion. This is the first time since I’ve been here that a new one has been added.

I doubt that either one will be flagged or top-down modded away so I wouldn’t worry about it.

In some cases, I’ve seen comments deleted while the replies remain. It’s apparently an option available to the mods at their discretion, although most of the time they’ll just delete the whole sub-thread.

I can understand why people would be upset if a carefully thought-out reply that adds general value is deleted. The good news is that the comment still exists in the user’s downloadable archive and can always be reposted, perhaps after slight editing and reframing.


We try not to be robots, but understand intent as well. We have a low tolerance for violence or personal attacks, but even my Canadian sense of humour is not without some understanding of nuance. :wink:

Someone can shut down the conversation with a single flag, but only temporarily - if the flag is overbroad, we will remove the flag, the post will be restored, and that user loses a significant amount of “trust” for future flags, making it far less likely that their single flag can shut down conversation in the future.

We really do have an amazing toolset here with Discourse. And they’re not done yet!
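The flag mechanic described above can be sketched roughly in code: a flag from a trusted user temporarily hides a post, and an overturned flag restores the post while cutting the flagger’s trust so a single flag from them won’t hide anything next time. This is a hypothetical illustration only; the class names, thresholds, and penalty factor are invented, and Discourse’s real trust-level system is considerably more involved.

```python
HIDE_THRESHOLD = 1.0  # assumed: total flag weight needed to auto-hide a post

class User:
    def __init__(self, name, trust=1.0):
        self.name = name
        self.trust = trust  # flag weight; drops when a flag is overturned

class Post:
    def __init__(self, text):
        self.text = text
        self.hidden = False
        self.flaggers = []

    def flag(self, user):
        # A flag hides the post once the combined trust of flaggers
        # crosses the threshold -- one high-trust user is enough.
        self.flaggers.append(user)
        if sum(u.trust for u in self.flaggers) >= HIDE_THRESHOLD:
            self.hidden = True

    def mod_review(self, flag_valid):
        if flag_valid:
            self.hidden = True  # flag upheld; the post stays down
        else:
            # Overbroad flag: restore the post and penalise the flaggers,
            # so their next solo flag won't shut down a conversation.
            self.hidden = False
            for u in self.flaggers:
                u.trust *= 0.5
            self.flaggers.clear()

# Example scenario matching the thread's description:
alice = User("alice")
post = Post("I hope Turmp explodes")
post.flag(alice)                    # hidden: alice's trust (1.0) meets the threshold
post.mod_review(flag_valid=False)   # restored; alice's trust halves to 0.5
post.flag(alice)                    # stays visible: 0.5 < HIDE_THRESHOLD
```

The key design point mirrored here is that the penalty is applied to the flagger, not the post, which is what makes repeat spurious flagging self-limiting.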


It’s worth pointing out here that this attitude tends to be the key component missing from moderation at toxic dumps like Facebook and Twitter. They try to cheap out with clunky and inflexible automated systems that generate lots of false positives and/or with human moderators who are “trained” by being ordered to blindly follow guidelines (usually written by highly risk-averse lawyers) that are simultaneously overly strict and overly permissive.

Rules and standards and technology systems should exist to support human moderators who are invested in the community and its values and who are trusted and thus empowered by the publishers of the sites they keep an eye on.

Big Tech’s active-moderation promise is also a potential source of eternal commercial advantage over newcomers.

Give me six lines written by the hand of the most honest of men, and I will find in them something with which to hang him.

Attributed to Armand Jean du Plessis, Cardinal-Duc de Richelieu et de Fronsac.


like “confusion syndrome”?


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.