The platforms suck at content moderation and demanding they do more won't make them better at it -- but there ARE concrete ways to improve moderation

Originally published at: https://boingboing.net/2019/04/30/no-magical-thinking.html

3 Likes

I’m hearing that a $5B fine is no big deal for FB. So why would 10,000 added moderators at $50K each - $500M per year, one-tenth of that fine - be much of a burden? (I know the fine is about privacy and this is about moderation - the point is that they do have the resources.)

I don’t think AI can make difficult calls, but I do think it can immediately dismiss 99% of FB traffic as unrelated to anything but family and BBQs and so forth. FB may have a billion people, but I bet only 10% of them post at all on a given day. Divide by 100 again with AI, and you have 10,000 moderators skimming 1 million posts a day, a hundred each.

I’m just tossing around a very rough estimate - to the order of magnitude - here, but my point is that I think they can afford decent moderation. They obviously would rather not spend $500M per year, and won’t until regulated.
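
Spelling out that rough estimate as a little calculation (every number here is the guess above, not a real FB figure):

```python
# Back-of-envelope estimate using the guessed figures above (not real Facebook data).
daily_users = 1_000_000_000          # assumed user base
posting_share = 0.10                 # guess: only 10% of users post on a given day
ai_escalation_rate = 0.01            # guess: AI passes only 1% of posts to humans

posts_per_day = daily_users * posting_share             # 100,000,000 posts
escalated_posts = posts_per_day * ai_escalation_rate    # 1,000,000 posts for humans

moderators = 10_000
salary_per_moderator = 50_000

posts_per_moderator = escalated_posts / moderators      # 100 posts each per day
annual_cost = moderators * salary_per_moderator         # $500,000,000 per year

print(f"{posts_per_moderator:.0f} posts per moderator per day, ${annual_cost:,} per year")
```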

7 Likes

That’s a lot of pretty words that don’t take into account the harsh realities of the world. The idea of applying one global community standard is going to immediately run into problems with a million regional cultural standards. Do you ban swastikas globally because of the Germans, but then anger the Hindus? Do you ban images of Winnie the Pooh globally or are you ok with getting shut out from 1.4 billion people? Are you going to go to court to defend yourself against charges of being pornographers because teenagers know how to flip bits in the settings?

Overall the article advocates a very light hand with censorship, but doesn’t seem to do a good job of answering the question “but what about all of the garbage that gets dumped on platforms with light censorship?” Basically it suggests you build Reddit, but with looser standards, and then “work with experts” to solve the whole “nationalist trolls and conspiracy theorists are flooding my discussion forum” problem.

6 Likes

have we tried cloning orenwolf ?

or cross-breeding orenwolf with some sort of anger badger with wifi ?

9 Likes

But that’s the problem. Their desire to be global is what creates the issue. They can financially afford moderators, but they can’t politically/commercially afford to commit to critical decisions if they want to be the one-stop-shop for the entire world. It’s a novel downside of monopolies in social media (all the other ones are well-trodden) that as soon as they become de facto privately-owned-public spaces they open themselves up to needing to be democratically responsive. And that is impossible without re-creating the apparatus of government, a task that governments aren’t even super consistent at yet!

ETA:
A specification for an interchangeable standard for social media sharing (with email as a model), with localized hubs that can tap into the flow and present, shuffle, curate, moderate, and reorder however necessary, seems like the only way forward.
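
Purely as a sketch of what one of those localized hubs might do (the class, fields, and flag threshold below are hypothetical, not any existing spec):

```python
# Hypothetical sketch of a "local hub" in an email-like federated model: posts flow in
# from many servers, and each hub applies its own community's rules before presenting them.
from dataclasses import dataclass

@dataclass
class Post:
    author: str        # e.g. "alice@example.social", an email-style address
    origin: str        # server the post arrived from
    text: str
    flags: int = 0     # community flag count, a made-up moderation signal

class LocalHub:
    def __init__(self, blocked_origins: set[str], flag_threshold: int = 3):
        self.blocked_origins = blocked_origins
        self.flag_threshold = flag_threshold

    def curate(self, incoming: list[Post]) -> list[Post]:
        """Drop posts from blocked servers, hide heavily flagged ones,
        and reorder the rest however this community prefers."""
        visible = [
            p for p in incoming
            if p.origin not in self.blocked_origins and p.flags < self.flag_threshold
        ]
        return sorted(visible, key=lambda p: p.flags)  # least-flagged first

# Two hubs can consume the same federated stream and present it completely differently.
hub = LocalHub(blocked_origins={"spamfarm.example"})
feed = hub.curate([Post("alice@example.social", "example.social", "hi"),
                   Post("bot@spamfarm.example", "spamfarm.example", "buy stuff")])
```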

5 Likes

Ultimately, moderation is broken on the big platforms because of business decisions predicated on unlimited and endless growth, monopoly-seeking, business models that treat users as products, and the desire to create centralised walled gardens. Interchangeable standards for sharing, distributed hubs, and tools that empower users fly in the face of those business priorities; the same goes for spending money on more and better-trained moderators and on systems that support them.

Decentralised and federated platforms like Mastodon and Diaspora would solve a lot of problems in the way you describe, but they won’t stand a real chance until the government imposes anti-trust legislation on Facebook and Twitter, which will break their stranglehold enough for big non-social-media brands to feel comfortable offering their users federated instances as alternatives to those craptastic platforms.

The executives of these companies would gamble on getting a $5-billion fine (or not) rather than deliberately dedicate 10% of that amount to a cost centre. In the first case, they can always blame the bad ol’ gubmint if the shareholders complain; in the second their jobs are on the line over a reduction in profits that they can’t pawn off on anyone else.

(also, the privacy and moderation issues are somewhat related)

8 Likes

This seems like an odd contrast to leaked Twitter meetings discussing how they can’t tackle white nationalism aggressively or else it will de-platform “legitimate” racists. It’s almost as if they shirk responsibility onto “balanced” policy when the issue doesn’t have a natural power balance, and are not interested in actual moderation - which is a hard job, but it’s easy enough to say that the solution is to treat moderation as a real job.

The article even states that the problem is farming out moderation to underpaid, overworked, and burdened moderators; so treating “anyone saying there is a simple solution” as something to be suspicious of seems odd. Pay people a fair wage with fair working hours and good benefits to sift through the shitty online garbage; the only reason that doesn’t happen is a strictly capitalist lens of how expensive it would be.

As with many modern perils, we are not in a position to strike things down out of hand because of the cost when the current system is a drain on society, nature, and the economy as it is.

4 Likes

Yeah, my use of “only way” didn’t include a definition for “competitive in the current environment” :frowning:

I somehow missed that post of Cory’s and the idea that ActivityPub is an email-like standard that enables sharing outside of even Mastodon. Do you know of any more digestible reading than “How to Implement a basic ActivityPub Server” (which I will be reading regardless…) to get a look at the contents and intention of the spec itself?

ETA:
curse me, I can’t even procrastinate thoroughly: ActivityPub
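
For a quick taste of what the spec traffics in, this is roughly the shape of a minimal ActivityPub “Create” activity wrapping a “Note” (the URLs and content are made-up examples; see the spec for the full vocabulary and required fields):

```python
# Roughly the shape of a minimal ActivityPub "Create" activity wrapping a "Note".
# The identifiers below are invented examples, not a complete or authoritative payload.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/users/alice/statuses/1/activity",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "id": "https://example.social/users/alice/statuses/1",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, fediverse!",
    },
}
# Delivery works a bit like email: the sending server POSTs this JSON document
# to each recipient server's "inbox" endpoint.
```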

2 Likes

I’d defer to our fellow user @seandiggity of Yale Privacy Lab, who wrote the article to which I linked.

That’s the other, even more addressable aspect of the platforms’ suckitude: Dorsey’s and Zuckerberg’s willingness to allow some “respectable” bigots a platform in the name of freeze peach and “balance”. Complete privilege-blind Randroids are basically calling the shots at both companies.

3 Likes

Without getting rid of Faux News none of these solutions will help. IMHO.

1 Like

They got part of it:
Let users decide what they want to see.

If you do that, you don’t need to censor anything, people can maintain their own lists of people/topics they don’t want to see or deal with, and collaborate to share lists. If something violates your local community standards, block it yourself instead of working yourself into a froth about the fact that someone else is reading it.
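
A minimal sketch of what that user-side filtering could look like, assuming a hypothetical shareable block-list format (nothing here is an existing Twitter or Facebook feature):

```python
# Hypothetical sketch: users keep their own block lists, merge lists shared by
# people they trust, and filter their own feed client-side - no central censor.
class UserFilter:
    def __init__(self):
        self.blocked_users: set[str] = set()
        self.blocked_topics: set[str] = set()

    def subscribe(self, shared_list: dict):
        """Merge a block list published by someone you trust."""
        self.blocked_users |= set(shared_list.get("users", []))
        self.blocked_topics |= set(shared_list.get("topics", []))

    def allows(self, author: str, tags: list[str]) -> bool:
        return author not in self.blocked_users and not (set(tags) & self.blocked_topics)

# Collaborate by importing a community-curated list, then apply it to your own feed.
mine = UserFilter()
mine.subscribe({"users": ["spammer123"], "topics": ["conspiracy"]})
posts = [("spammer123", ["deals"]), ("friend", ["bbq"])]
feed = [p for p in posts if mine.allows(p[0], p[1])]
```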

Another suggestion, though it probably won’t scale to twitter or facebook:
Slashdot has an extremely good crowdsourced moderation system. It is not robust against the entire community taking a turn towards rightwing assholery, which has happened to Slashdot in recent years, but it is a very effective way of crowdsourcing moderation based on the overall standards of the community using it.
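
For anyone who hasn’t used it, a very simplified sketch of how that Slashdot-style system works: randomly chosen community members get a small budget of mod points to nudge comment scores up or down, and every reader sets their own display threshold. The numbers and function names below are illustrative, not Slashdot’s actual implementation:

```python
import random

SCORE_MIN, SCORE_MAX = -1, 5   # Slashdot clamps comment scores to roughly this range

class Comment:
    def __init__(self, text: str, score: int = 1):
        self.text = text
        self.score = score

def grant_mod_points(users: list[str], points: int = 5, fraction: float = 0.1) -> dict:
    """Hand a small budget of mod points to a random slice of active users."""
    chosen = random.sample(users, max(1, int(len(users) * fraction)))
    return {u: points for u in chosen}

def moderate(comment: Comment, delta: int, budget: dict, user: str):
    """Spend one mod point to move a comment up or down within the clamp."""
    if budget.get(user, 0) > 0 and delta in (-1, +1):
        comment.score = min(SCORE_MAX, max(SCORE_MIN, comment.score + delta))
        budget[user] -= 1

def visible(comments: list[Comment], threshold: int) -> list[Comment]:
    """Each reader picks their own threshold instead of relying on a global censor."""
    return [c for c in comments if c.score >= threshold]

# Usage: the crowd does the scoring, the reader does the filtering.
users = [f"user{i}" for i in range(50)]
budget = grant_mod_points(users)
comments = [Comment("insightful take"), Comment("obvious spam", score=0)]
moderate(comments[1], -1, budget, next(iter(budget)))
print([c.text for c in visible(comments, threshold=1)])   # ['insightful take']
```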

1 Like

Exactly what I do. I don’t need moderators, I moderate what I consume myself.

1 Like

With this post it’s finally clicked in my head: a basic tenet baked into the internet might be the root of why content moderation is a problem that cannot be solved with existing tools.

All users are anonymous.

Who you are on the internet is decoupled from the you that is a law-abiding citizen. All of the problematic behaviors that platforms see come back to the fact that law enforcement cannot do anything about them.

If new twitter accounts keep sending you hateful messages, there is no recourse for you. The costs on the attacker side are nil.

This comes back to the fact that there are no concrete ways of discovering who is creating those accounts.

All users are anonymous.

Is respectful discourse compatible with absolute anonymity? Or will bad actors fill the platform with noise?

1 Like

This site operates under the assumption that it is (although of course there’s no such thing as absolute anonymity, even here – the moderators can see IP addresses and require an e-mail address, both of which can form the basis of a trace). The moderators and the designers of the Discourse BBS software put a lot of thought into making it work that way, and the results are pretty impressive. It’s not 100% trolley-free, but with a small handful of exceptions the bad actors don’t last long here.

Contrast with Facebook, which does have a real-name policy but also is a swamp of misinformation, disinformation, and other abusive and driving trollies behaviour. The difference isn’t simply due to scale but also due to crappy tech, crappy policies, and a crappy and exploitative philosophy.

3 Likes

Setting aside utopian thinking about public debate and free speech absolutism, it looks like it’s almost impossible to have useful public conversations without active and intelligent moderation.

3 Likes

My epiphany is that a base assumption of the internet is that platforms must forensically reconstruct a misbehaving actor’s identity. My open question is: is that desirable? Because of that base assumption, platforms must do their own policing, and that forensic capability is far too close to their crown jewels for any outsider to look at.

With platforms in charge of their own policing that means the users of those platforms have no say in how those platforms are policed.

Shifting topics, I am reminded that crypto’s big idea was that it could replace functions of government. That is, crypto promised us we could replace authorities with math. But it turns out we like to have people involved in banking and law.

There is an isomorphism here: the idea that math could save us from misbehaving users.

The Facebook real-name policy was never about “Real Names”; it was a fiction, a dodge of responsibility, something they would never invest in because it’s not a service that can be monetized. Providing a pseudo-identity is a function of a government.

This is my initial idea: a pseudo-identity would function like a corporation, in that it might not be publicly known who is behind it, but if necessary that anonymity can be pierced.

There are most likely things wrong with this idea, or parts that aren’t feasible. I think we need to look for our base assumptions and question them. I don’t think we can fix content moderation with math. As an example, SmarterEveryDay had a segment on coordinated inauthentic behavior on YouTube. It’s a never-ending fight of countermeasures and counter-countermeasures. The only way of fixing that is to change the game. The current ruleset is broken.
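
To make the idea concrete, here is one hypothetical way a pierceable pseudonym could work: the platform only ever sees an opaque handle, while a separate escrow (a government registry or trusted third party, which doesn’t exist today) keeps the handle-to-person mapping sealed unless due process authorizes unsealing. Everything below is invented for illustration:

```python
# Hypothetical "pierceable pseudonym" sketch - purely illustrative, not an existing system.
import hmac, hashlib, secrets

class IdentityEscrow:
    """Stands in for a government registry or trusted third party, not the platform."""
    def __init__(self):
        self._key = secrets.token_bytes(32)
        self._sealed: dict[str, str] = {}   # pseudonym -> real identity, kept off-platform

    def issue_pseudonym(self, real_identity: str) -> str:
        pseudonym = hmac.new(self._key, real_identity.encode(), hashlib.sha256).hexdigest()[:16]
        self._sealed[pseudonym] = real_identity
        return pseudonym

    def unseal(self, pseudonym: str, court_order: bool) -> str | None:
        """Anonymity is pierced only with due process, like piercing the corporate veil."""
        return self._sealed.get(pseudonym) if court_order else None

# The platform stores and moderates only the pseudonym; it never holds the mapping.
escrow = IdentityEscrow()
handle = escrow.issue_pseudonym("Jane Q. Public")
print(handle, escrow.unseal(handle, court_order=False))   # without an order: None
```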

1 Like

Mate, have you seen the shit people like your nan post on pastebook, under real names? There is no technological solution to what is ultimately a human problem of lack of empathy, not if a site has ambitions of being the biggest there ever has been. The call is coming from inside the building.

3 Likes

If you do that, you don’t need to censor anything, people can maintain their own lists of people/topics they don’t want to see or deal with, and collaborate to share lists. If something violates your local community standards, block it yourself instead of working yourself into a froth about the fact that someone else is reading it.

Many of the issues people decry (e.g. “fake news”, “Russian propaganda”, “anti-vaxx”, or “ISIS recruiting”) are exactly people getting upset that other people are reading it.

3 Likes

They try to balance that here somewhat with the flagging and now the Ignore system. As I recall the mods discussed a recent shift here to more emphasis on community-based moderation, which supplements and informs their decisions via the Discourse system.

Ultimately, though, the buck should (and on this site does) stop with the owners and moderators. When a site abdicates responsibility or cuts corners or just decides that bad actors are good for engagement or MAU growth or adopts a Freeze Peach absolutist stance, FB and Twitter are what you get – real-name policy or not.

One of the ways the big platforms cut corners is by offloading moderation and user-focused curation onto supposedly objective algorithms. The results of this approach, especially on YouTube, have been very bad.

You’re getting at something interesting here in terms of data ownership and sharing under a standardised (and one hopes well-designed and well-shielded) ID, but I don’t think it would do much to solve the issue at hand beyond making it easier for a platform to, for example, permanently ban a user once it’s determined that he’s a problem.

You’ve got that right. While it won’t fix things 100%, a move to federated and distributed social network platforms would be a good change in that regard.

2 Likes

I’d put it this way: people are getting upset that the major platforms are lending legitimacy to those positions by giving them equal weight to reality-based and/or good-faith ones.

That’s a reflection of a larger societal problem, but the big social media platforms as they currently operate turbocharge it. We now have dangerous epidemics like measles and right-wing populism spreading in the U.S., and the major platforms have helped that spread.

1 Like