An important, elegant thought experiment on content moderation regulation

Originally published at:


Why not put a cap on how many subscribers can belong to any one service? Plenty of room to haggle on the numbers. Anyone can still syndicate their own content. There really isn't room on my feed for more than 50 or so individuals anyway, so why do I need to belong to a club with hundreds of millions of subscribers? All that does is leverage the platform's power and minimize mine.

1 Like

The platforms are not the problem.

The people who own them are wealthy beyond measure.

Capital has created a wealth gap so large it has become a moat between them and labor.

1 Like

Imagine you are Twitter. What do you do about any of it? Do you delete the trends? Do you keep them up? Do you move the ad? Do you make sure those two trends NEVER get lined up together to prevent bad optics?

Twitter would do nothing, as long as keeping things as-is continues to generate revenue without adding any costs. Dorsey doesn’t give a damn about proper moderation, and never has – especially when it comes to right-wing populist content.

U.S. anti-trust law isn’t currently set up to impose that kind of solution, although a change in the underlying philosophy could allow it.

Really, the only current solutions I see are regulations along the lines of the new ACCESS Act, which has bipartisan support:

The ACCESS Act would require large tech platforms (defined as those with more than 100 million monthly users, which would include every major tech platform and any operating at a scale capable of monetizing user data) that monetize user data to make it easy and painless for a user to not only export their data, but import it to a competing platform (e.g., moving photo uploads from Instagram to … uh, some other photo service).
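As a rough illustration of what "easy and painless" portability implies, the core requirement is just a lossless export format that a competitor can parse. This is a sketch only; the JSON shape and field names below are invented, not anything the ACCESS Act or any real platform actually specifies.

```python
import json

# Hypothetical portable export format; field names are illustrative,
# not taken from the ACCESS Act or any real platform's API.
def export_posts(posts):
    """'Platform A' serializes a user's posts to a portable blob."""
    return json.dumps({"version": 1, "posts": posts})

def import_posts(blob):
    """A competing 'Platform B' ingests the blob, keeping only the
    fields it understands."""
    data = json.loads(blob)
    return [{"author": p["author"], "text": p["text"]}
            for p in data["posts"]]

original = [{"author": "alice", "text": "hello", "likes": 12}]
restored = import_posts(export_posts(original))
print(restored)  # [{'author': 'alice', 'text': 'hello'}]
```

The regulation's leverage is exactly this round trip: once the export is machine-readable, "moving photo uploads from Instagram to some other photo service" becomes a parsing problem rather than a lock-in problem.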


Enough of those kinds of regulations in place, and we might see federated, decentralised platforms gaining traction. You’d be more focused on the 50-150 individuals you care to follow, any issues with moderation connected to scale would be reduced, and bad actor individuals and instances would be relegated back to the fringe where they belong.


It’s also a problem that the president would really like to talk about lynching. I don’t think that’s healthy.


Even right here on this forum, I see people who understand all the pitfalls of automated moderation when it comes to EU copyright filters, and still think that it’s only intransigence that keeps the platforms from censoring things they don’t like, not the impossibility of doing effective moderation at scale.


I think we won't be able to make progress on this question until we abandon ad-based monetization.
Making every kind of service available for free was very good, but it got us stuck in a local maximum.
Until we can easily finance the services we use directly, all the initiatives will be constrained by the same problem: serving the people is not what drives a company, but serving ads is.


Whether Twitter, Facebook or any of the others does anything about content moderation, it doesn’t matter as long as they can continue to deny that they are publishers and should be thought of that way. There is a theory promoted by modern publishers that if an algorithm makes the choice of what is published and how it is promoted, then somehow or other the people who wrote, vetted and chose that algorithm have no responsibility for what they publish. Anyone who has written or even used an algorithm knows that this is not true. We are dealing with the modern version of William Randolph Hearst and his ilk, but, to Hearst’s credit, he admitted what he was.


Is it ok if I have a tiny daydream about a mob ACTUALLY attempting to lynch him, just so I can see his terrified face in my mind? That’s ok, right?

Am I missing something or is this article missing a link to the analysis in question?


We all have to get through this dark time somehow.


I think the best solution would be not a platform, but a protocol. Everyone pays to host their own social media page, and it conforms to a spec that allows a protocol to provide the social-media-like functionality. That way there’s no central source where spies and censors can apply pressure. You could have a lot of features, like block lists, that allow you to control what you see, but nobody could actually be censored without some legal action, and encryption could make mass surveillance much harder.
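A minimal sketch of that idea (all names and the feed structure here are invented for illustration, not a real spec like ActivityPub): the client, not a central server, merges self-hosted feeds and applies the reader's own block list.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def build_timeline(feeds, following, blocked):
    """Merge self-hosted feeds client-side. Filtering is the reader's
    choice, applied on the reader's machine, not a central moderator's."""
    timeline = []
    for author in following:
        if author in blocked:
            continue  # user-controlled block list, not platform censorship
        timeline.extend(feeds.get(author, []))
    return timeline

# Each person hosts their own feed; the client just fetches and merges.
feeds = {
    "alice": [Post("alice", "hi")],
    "spammer": [Post("spammer", "buy now")],
}
tl = build_timeline(feeds, following={"alice", "spammer"}, blocked={"spammer"})
```

The key property is that `blocked` lives with the reader, so "moderation" becomes a personal filter rather than a decision imposed by whoever runs the central service.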

1 Like

Oh hey, they fixed it.

What you describe is essentially the www of the 90’s. It worked fine - great even. But as more people came aboard, it became apparent that the basic hosting everyone was doing independently could be centralized, and not only would this mean lower cost for the user, it would allow accelerated development of new features. On top of that, centralization meant that the platform could make money from advertisers, and so could subsidize both the hosting and the development of the platform.

Independent operators can still run their own services - in fact the tech for self-hosted social networking is way better now than it was in the 90’s. But it’s been utterly eclipsed by the sheer features and capabilities of the reigning stacks. To the point that even people who totally hate the stacks can’t find any comparable alternative in terms of functionality.

Alternative efforts are out there - take Scuttlebutt, which is an ingenious decentralized protocol/platform for social networks. Thing is, Scuttlebutt’s entire yearly budget is equivalent to the salary of a few Facebook engineers. Ultimately the problem is not technical, it’s economic.

Some say this means we should nationalize the stacks, but this seems nightmarish for a different reason (the Chinese model). The best I can come up with is massive grants to public-interest software development non-profits to do the hard work developing stuff like the Scuttlebutt protocol to reach feature-parity with the stacks. Basically, the subsidized development of a fully open-source, decentralized stack.


Because, with caps, you aren’t going to get billions of people to open multiple accounts to talk to their varied or newfound friends, or whatever notable persons they want to follow.

In many ways, that’s like trying to limit tv broadcasts for the same reason.

In TV land (in countries that actually care about this stuff, anyway), there is oversight relief if a broadcast goes off the rails. This doesn’t exist for the social media networks once you grow to a certain size, until you really go off the rails.


Yeah, a good protocol could make a real impact, but as @zikzak mentioned, the problem is more economic/cultural than technical.
RSS can do almost all that is needed, but even working as intended it is not capable of sustaining a site economically.
It is mostly offered by sites that can offer something else (comments, or the full content vs. a truncated one in the feeds), or it is there because the website was not configured correctly (you can find the feeds, but there is no mention of them anywhere on the visible site).

And even if everyone had their own page/blog/feed, there is still the problem of who pays for the bandwidth.
Currently, the provider covers it until that is no longer viable, and then they have to limit user access or start charging much more than it costs the user today.
I think something like Patreon, but where you can pay cents per month, or something that divides a monthly amount between whatever you access/subscribe to, like Flattr, could let us stop using advertising while still sustaining what we consume.
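The arithmetic of that Flattr-style idea is simple. A sketch, with invented site names and visit counts, of dividing one fixed monthly budget proportionally to what you actually read:

```python
def split_budget(monthly_cents, visits):
    """Divide a fixed monthly budget among sites, proportional to
    visit counts. Integer division keeps every payout in whole cents."""
    total = sum(visits.values())
    return {site: monthly_cents * n // total for site, n in visits.items()}

# E.g. a $5.00/month budget split over one month's reading habits
# (all numbers here are made up for illustration).
payouts = split_budget(500, {"blog-a": 30, "blog-b": 10, "zine-c": 10})
print(payouts)  # {'blog-a': 300, 'blog-b': 100, 'zine-c': 100}
```

Because of the rounding-down, the payouts can sum to slightly less than the budget; a real service would have to decide where those leftover fractions of a cent go.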

1 Like

That was the kind of situation faced by early telephone adopters: the networks were electrically incompatible, and initially, yes, you did need as many phone subscriptions as there were phone companies to reach everyone.

But internet service is all electronically compatible with all other internet service. The only thing keeping my Facebook posts from reaching Twitter subscribers is the walled-garden terms of service that those services enforce, which have nothing to do with the physics of the internet.

Things like Mastodon and RSS completely ignore Zuckerberg’s paradigm, and if the law ever catches up with the technology, they will outcompete the walled-garden paradigm right into extinction.

1 Like

As a different thought experiment, imagine that you have clamped your mouth over the end of a raw sewage outlet pipe, and you wish to build a filter that preferentially selects the most scrumptious chunks, based on your personal tastes.

If it makes you think about Maxwell’s demon, my hunch is that’s just the kind of fundamental flaw that affects Twitter’s theory of moderation. You start by including literally everything, then filter it by the metric “how much will the user want to have seen this?”. But to measure that, the user first has to have seen the content. By definition, the extent to which news is new – the extent to which information informs you – is the inverse of how well it can be filtered in advance. To truly be your own editor means not having an editor at all; you have to look at everything.

So if Twitter’s theory can’t work, what do their filters accomplish? For one, you tend to see narrow variations on what you’ve seen before, so that fleeting, highly specific thoughts (“oh yeah, I remember that Rainbow Brite ad”) are amplified to the point of madness, while broader and deeper mental currents are obliterated in the white noise. And on Twitter, especially, there is a bias toward flocking behavior, because the system of “followers” creates strong feedback, which engineers in any other industry would understand as an obvious defect caused by lack of damping (i.e., the filter should actively prevent topics from “blowing up” exponentially).
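A toy model of that damping point (the update rule and its parameters are invented for illustration, not anything Twitter actually computes): pure positive feedback grows exponentially, while even a small quadratic damping term keeps visibility bounded.

```python
# Toy feedback model: each step, a topic's visibility is amplified by
# `gain` (followers resharing to followers) and suppressed by
# `damping * visibility**2` (the stabilizing term an engineer would add
# to stop topics from "blowing up" exponentially).
def simulate(gain, damping, steps=20):
    visibility = 1.0
    for _ in range(steps):
        visibility += gain * visibility - damping * visibility ** 2
    return visibility

undamped = simulate(gain=0.5, damping=0.0)   # exponential blow-up (~3325)
damped = simulate(gain=0.5, damping=0.01)    # settles just under gain/damping = 50
```

With no damping the growth is just compound interest; with damping, the system has a stable equilibrium where amplification and suppression balance, which is the "obvious defect" fix the post is gesturing at.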

In case it sounds like I’m being pious, I had no problem with Twitter when it started, and to this day I don’t think it’s wrong to experiment and come at things from new angles. The college-studenty premise was that the worst thing is other people telling you what you should see, and the experiment asked, what if the only rule was “no editors”? That was a fine thing to ask. But the results came in at least five years ago, and the result was (/checks calculations) :poop:. It’s not science any more, it’s just the one kid still fixatedly burning stuff with the magnifying glass after the bell rang.

Still, viewed as an experiment (rather than the final, permanent endpoint that bloggers and columnists have decided it to be), there’s value in the results. It affirms, for example, that mass communication does need subtraction as well as addition: as well as authors, you need editors of some stripe to take things off the menu.

Perhaps that can still be done in a distributed, early-Wired-magazine way, I don’t know. But there’s no reason to expect or want Twitter to be the laboratory for it. They’re just assholes at this point.


Well, sure, but there was also only the alternative of telegraph or snail mail, so that’s not really a parallel to the modern era here. :slight_smile:

I’d like to believe that, but I don’t. Mastodon or Diaspora have not yet faced any sort of concerted effort by bad actors (or even “undesirables”) trying to get on the network. And when they do, all hell will break loose, because the “free speech zones” will refuse to remove those individuals, which will lead to nodes refusing to talk with other nodes, which once again leads to operators deciding who can talk to whom and how, and possibly leading to folks not being able to talk to one another without jumping ship.

Facebook and Twitter became popular because 1) everyone was there, 2) it was easy and fun. Item 1 is now a barrier to item 2 no matter how hard you push item 2. You are not going to succeed in creating the “next Facebook” without a path towards item 1 at this point. (This, too, is why the BBS will never become reddit-sized, say.)


That’s Mastodon or Diaspora working by design, though. I’m not saying it will go smoothly (ask Wil Wheaton), but the assumption is that the “freeze peach” and other unsavoury instances (e.g. Nazis, paedophiles, incels etc.) will be blocked, probably by a lot of other nodes. The platform’s tech standards will still remain, and there will still be plenty of more respectable instances that remain federated.

That means that nothing prevents Joe Incel of the HeManWomanHaters instance from setting up a separate ID on another, more mainstream instance to talk with normies. As long as he doesn’t get his account there banned by talking openly about his garbage and breaking the rules, he’ll be fine.
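In other words, blocking happens per instance, not network-wide. A tiny sketch (instance names invented) of how a federation blocklist differs from a global ban:

```python
# Each instance keeps only its OWN blocklist, so a user shunned by one
# instance can still register a fresh account on an unblocked one.
def delivers(origin, target, blocklists):
    """Does `target` accept posts federated from `origin`?"""
    return origin not in blocklists.get(target, set())

blocklists = {"mainstream.example": {"hemanwomanhaters.example"}}

# The mainstream instance refuses the unsavoury one...
print(delivers("hemanwomanhaters.example", "mainstream.example", blocklists))  # False
# ...but an account on an unblocked instance federates with it fine.
print(delivers("normie.example", "mainstream.example", blocklists))  # True
```

This is exactly the Joe Incel scenario above: the block follows the instance he posts from, not him personally, so good behaviour on a mainstream instance keeps his second account reachable.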

What Mastodon and similar services are lacking right now is the “easy” part, not only because the UI/UX isn’t quite there but also because there aren’t enough mainstream brands hosting instances and educating their users.

Something for which we’re all thankful.

Still, BoingBoing as a site could also host an instance of Mastodon or another platform for its users (BBS or otherwise). It would have its costs, especially the added burden that would fall on you or perhaps a new hire for quality moderation (a must for a good node), but if the management were so inclined it’s possible, and it wouldn’t turn the site into reddit or (worse) Twitter or FB.

I’d love to hear more of your criticisms and concerns about the decentralised and federated platforms, by the way. If they’re going to improve, the devs and enthusiasts need to hear the insights of veteran mods and sysadmins like yourself.

1 Like