Maybe. I believe people are mostly good, but also conflict-averse. I know that hateful speech chills dissent. I don’t think bad users cause good users to be bad; it’s more that good users leave or only lurk when bad users are around, except for those who fight trolls.
Unlike in a broken-windows neighborhood, moderation can remove the “damage” as it happens. I’m not saying send moderators to trolled forums; I’m saying well-moderated online spaces don’t have that chilling effect on other speech. An online community is not the same as a physical space in the first place.
However, I could see how the comparison might hold. Do you have a better solution, or a better model for the problem?
Apologies, I wasn’t meaning to imply that there was anything wrong with the theory you’ve been putting forward; it just occurred to me that if you interpret hateful comments present on a board as “visible damage”, and moderators removing those comments as people “fixing” that visible damage, then it seems like a very similar argument, and I thought I’d put that out there to see what people thought.
And yes, the “broken windows” theory of crime isn’t very popular right now, at least among folks who like evidence. But there’s still something about the idea that makes it seem intuitively logical, like it ought to be true. And maybe it is, for virtual worlds. (And perhaps for the real world as well; perhaps any effect was simply lost amidst the larger, nationwide crime-rate plummet which was happening at the same time. I guess we don’t really know for sure whether it might have had a weak effect. And I’m absolutely not advocating for it to be tried again in order to find out; I think we all saw the serious problems that came from that style of enforcement the first time.)
I don’t have a better solution to suggest; benevolent-dictator moderators absolutely do seem to be the most effective current method of keeping discourse civil… but we haven’t actually seen many of those in online worlds which aren’t discussion boards. I don’t know how you could wrangle enough moderators to moderate (for example) League of Legends, or whether that would even work. Riot and Blizzard have both been experimenting with algorithmic approaches to reducing the toxicity of their player bases, but my impression has been that not a lot has really changed. It often seems like once you have enough people to overwhelm the moderation staff, civility breaks down.
My impression has been that once a community reaches a large enough size, it becomes a target for people who want to exploit it for their personal amusement. The things the nicest communities I’ve been a part of have in common are (a) they’re small enough for everyone to know everyone else, and (b) they have moderators to keep the peace. And you can often get away without the moderators, if the community is particularly small.
I think the Riot and Blizzard approaches are incredibly interesting and hope they release more data on the subject after running those systems for a couple of years.