Why free-speech platforms all end up moderating content

Was there, can confirm. :face_vomiting:

8 Likes

I was hearing the same thing back in the Usenet days, too. Believed it myself for a bit, before “Spamford” Wallace and Serdar Argic/Zumabot quickly disabused me of that notion.

It’s the techno-utopianism inherent in the Californian Ideology: libertarians (small L and large) waiting for the deus ex machina to rescue freeze peach from any censorship. Ironically, in my decades on the Internet the most consistent bad-faith commenters and tr0lls have always been Libertarians.

9 Likes

This is why I find conservative calls to revoke section 230 so ironic. If I were a tech mogul, I would call their bluff in any congressional hearing, right after reciting the Supreme Court Decision(s) that private companies can’t violate the 1st Amendment in my opening statement:

“The Senator has stated the desire to revoke section 230. I say, go ahead. Because tech companies will be liable for anything one of our users posts, we will need to delete any post that makes a negative statement about any person, place, or thing, and block any user who even hints at violence, or we will have to stop offering a platform to everyone. Online platforms for most conservative voices will disappear overnight, as will platforms for many liberal voices. Only the most bland of political positions will be allowed, and social networks as they are understood now will cease to exist. In short, it will be a return to the days before the internet. You may think that’s a good thing, but when you finish the hearings today and tweet to your followers how you stood up to Big Tech, remember this: the moment you revoke section 230, your ability to reach thousands, if not millions of followers and constituents at no cost will be gone. Your voice will be gone, and it will be because you tore out your own larynx.”

14 Likes

Consequences matter.

1 Like

The problem is that ML systems tend to amplify existing bias. A small unconscious bias in the original training material can get self-reinforced into a solid bias for or against some group. This is not much different from what happens to people, but with an ML system you can’t really ask why it made a choice, or ask it to think about its choices once a bias has been exposed.
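
A toy numeric sketch of that self-reinforcement (the groups, the rates, and the 1.5× “extra scrutiny” factor are all invented for illustration, not taken from any real system):

```python
# Toy feedback loop: a small initial labeling skew against group B grows when
# each retraining round treats the previous model's flags as ground truth and
# reviews the "worse-looking" group harder. Numbers are made up for demonstration.

TRUE_BAD_RATE = 0.10                  # both groups actually break the rules equally often
flag_rate = {"A": 0.10, "B": 0.12}    # the original training data is slightly skewed against B

for generation in range(8):
    for group in ("A", "B"):
        perceived_excess = flag_rate[group] - TRUE_BAD_RATE
        # Extra review of the "suspicious" group produces extra flags (including
        # false positives), which then become the next round's training labels.
        flag_rate[group] = min(1.0, TRUE_BAD_RATE + perceived_excess * 1.5)
    print(f"gen {generation}:  A = {flag_rate['A']:.3f}   B = {flag_rate['B']:.3f}")

# Group A stays at 0.100 forever; group B's flag rate climbs every generation,
# and at no point can you ask the "model" why.
```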

3 Likes

Usenet depended more on end-users filtering out unwanted posts and posters. That was gradually lost thanks to Microsoft and Google. I forget, but I think some software even supported published killfiles, so that you could block by someone else’s list, if you trusted them. I used a proxy program in front of my newsreader to killfile and block Hipcrime sporgery floods.
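
For anyone who never used one, a killfile was roughly this, done client-side; a minimal sketch (the header format and example patterns are hypothetical, not any specific newsreader’s syntax):

```python
# Killfile-style client filtering: drop articles whose headers match patterns
# in your own killfile, optionally merged with a killfile published by someone
# you trust. Patterns and addresses here are invented examples.
import re

my_killfile = [
    ("from", r"spamford@"),                  # block a prolific spammer
    ("subject", r"(?i)make money fast"),     # block a recurring flood subject
]

trusted_killfile = [                          # someone else's published list
    ("from", r"@sporgery\.example$"),
]

def is_killed(article, killfiles):
    """True if any killfile entry matches the article's headers."""
    for entries in killfiles:
        for header, pattern in entries:
            if re.search(pattern, article.get(header, "")):
                return True
    return False

articles = [
    {"from": "spamford@wallace.example", "subject": "MAKE MONEY FAST"},
    {"from": "regular@poster.example",   "subject": "Re: season finale"},
]

visible = [a for a in articles if not is_killed(a, [my_killfile, trusted_killfile])]
print([a["subject"] for a in visible])        # -> ['Re: season finale']
```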

4 Likes

Dealing with bias is one of the reasons for retraining ML models, because bad training data results in a bad model, no question. Output that the ML labeled inaccurately is also corrected and added to the training data to improve its accuracy on retraining. ML is an ongoing process that can improve as it is retrained, and any kind of ethical ML does include the reasons for its labels.
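
As I read it, the cycle looks something like this schematic sketch (the function and field names are placeholders, not any particular ML framework’s API):

```python
# One retraining round: a reviewer corrects the model's inaccurate labels, the
# corrections (with the reason for each label) go back into the training set,
# and the model is retrained on the larger set.

def retraining_round(training_data, model_outputs, review, train):
    for item in model_outputs:
        corrected_label, reason = review(item)          # human moderator's call
        training_data.append({"text": item["text"],
                              "label": corrected_label,
                              "reason": reason})        # keep the rationale for audits
    return train(training_data)                         # retrain on the corrected data

# Tiny usage example with stand-in review/train functions.
data = [{"text": "buy pills now!!!", "label": "spam", "reason": "unsolicited ad"}]
outputs = [{"text": "my doctor changed my pills", "label": "spam"}]   # a false positive
new_model = retraining_round(
    data, outputs,
    review=lambda item: ("ok", "medical discussion, not an ad"),
    train=lambda examples: f"model retrained on {len(examples)} labeled examples",
)
print(new_model)   # -> "model retrained on 2 labeled examples"
```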

The way I sketched out the architecture, it can probably still be gamed by dedicated trolls. The idea is that there’s a weight to likes and dislikes, and users with similar reaction history are grouped together and see similar things. An automatic echo chamber, and a very uphill battle to see anything new.
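
Something like this rough sketch (not actual code for that architecture; the similarity threshold and the toy data are arbitrary):

```python
# Group users by the similarity of their like/dislike histories, then serve
# each group the posts its own members reacted well to -- the automatic echo chamber.
import math

# reaction history: post_id -> +1 (like) / -1 (dislike)
users = {
    "ana": {"p1": 1,  "p2": 1,  "p3": -1},
    "bo":  {"p1": 1,  "p2": 1,  "p3": -1},
    "cyn": {"p1": -1, "p2": -1, "p3": 1},
}

def similarity(a, b):
    """Cosine similarity of two reaction-history vectors."""
    posts = set(a) | set(b)
    dot = sum(a.get(p, 0) * b.get(p, 0) for p in posts)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_of(user, threshold=0.5):
    """Everyone whose reaction history is 'close enough' to this user's."""
    return [u for u in users if similarity(users[user], users[u]) >= threshold]

def feed_for(user):
    """Posts liked by anyone in the user's group."""
    group = group_of(user)
    return sorted({p for u in group for p, v in users[u].items() if v > 0})

print(group_of("ana"))   # ['ana', 'bo'] -- cyn reacts differently, never gets grouped in
print(feed_for("ana"))   # ['p1', 'p2']  -- ana rarely sees anything her group dislikes
```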

You cannot accurately moderate billions of messages.

Even with community help, you have to moderate the moderators. Slashdot figured that out ages ago with their meta-moderation, though you can argue about its results.
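
The meta-moderation idea, very loosely sketched (simplified; this is not Slashdot’s actual algorithm, just the general shape of it):

```python
# Ordinary moderations are themselves reviewed by other users, and moderators
# whose calls are repeatedly judged unfair lose the ability to moderate.
from collections import defaultdict

moderations = []                 # (moderator, comment_id, score_change)
fairness = defaultdict(list)     # moderator -> fair/unfair verdicts from meta-moderators

def moderate(moderator, comment_id, change):
    moderations.append((moderator, comment_id, change))

def meta_moderate(moderation_index, fair):
    moderator, _, _ = moderations[moderation_index]
    fairness[moderator].append(fair)

def can_still_moderate(moderator, min_fair_ratio=0.7):
    verdicts = fairness[moderator]
    if not verdicts:
        return True                      # nobody has judged this moderator yet
    return sum(verdicts) / len(verdicts) >= min_fair_ratio

moderate("alice", "c42", +1)             # alice upmods a comment
moderate("bob", "c42", -1)               # bob downmods the same comment
meta_moderate(0, fair=True)              # meta-moderators review both calls
meta_moderate(1, fair=False)
meta_moderate(1, fair=False)
print(can_still_moderate("alice"), can_still_moderate("bob"))   # True False
```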

Machine learning only goes so far, though Craig Newmark is of the opinion that hard AI is going to arise from the struggle of spammers vs. spam fighters.

Maybe you can be selective as to who gets to participate, who you let in the door and get a referral system for that.

1 Like

But Usenet isn’t a monolith. One group or hierarchy could be fine, another one not. “We own the words, we own the space”. Some groups were high traffic, but enough people would pounce on the bad stuff to keep it all clean. Another group worried less, and could be overrun.

Any newsgroup needed newcomers. They kept the group active. Once new people stopped coming, a group would devolve into itself, still active but with members more interested in in-talk than in the topic, or just fading away. rec.arts.tv still has traffic, but little of it is about TV, just old-timers talking together, mostly politics.

One hierarchy has been destroyed, ironically, by someone who thought it needed moderation. Adding a moderated group never created a space for content; it stayed empty. So he decided to flood the hierarchy with content from the web. Initially it was just clutter, but eventually everyone left. Abuse of Usenet.

One thing: there are people who can live without rules, and people who can’t. People who can’t either abuse the medium or insist on rules. For a while Usenet was about the only game in town, so people adapted. When alternatives came along, they moved there. Usenet faded because the participants weren’t there anymore.

… and for a long while, hardcore-punk bands also. “Karp” is an acronym for (read 2nd paragraph):

According to the guy who started it, when he was interviewed on the Behind The Bastards podcast, he created it because on 4chan you needed an admin to create a new board, whereas on 8chan any user can create a new board. He’s subsequently done more than just about anyone to get it kicked off the internet (he doesn’t control it, or its successor, 8kun).

2 Likes

Twice I have been involved in early discussions to buy land and create a living community that is inclusive of non-traditional families. Both times it was frightening how fast liberal folks who hate HOAs suddenly started talking like HOAs.

5 Likes

Newspapers, similar to librarians, maintain a steadfast commitment to free speech. As a result, the digital comments sections of the LA Times, NY Times and others are torrents… TORRENTS of nothing but shite.

2 Likes

@beschizza
Back in the day ‘moderation’ would be considered editorial.

Perhaps this is a positive move, insofar as I understand The Guardian’s and Boing Boing’s overt political bias, and the below-the-line comments can be insightful, so these are my choice until I find some platform more suited to ‘my’ beliefs.

1 Like

There’s an unstated major premise in these stories that I’ve always disagreed with- that 100% unfettered free speech is a good thing in the first place. The only people who start with that premise seem to be straight cis white men who have been listened to by the whole world their whole lives anyway.

Free speech is not the right to a platform. Saying “it’s noble to let even Nazis and racists have their say” is a fine idea in 1750 when the worst they can do is be that annoying guy on the street corner. When every crackpot now has a global megaphone, we should be discussing what free speech is and we need more nuance on the consequences of it. It’s the old “shouting fire in a movie theater” argument, except everything is shouting “fire” now and we’re always in movie theaters.

Erring in the direction of total free speech is a very good idea for governments, because they are prone to making bad choices when allowed to restrict speech. But that should be as far as it goes. Americans need to let go of this idea that the 1st Amendment guarantees a platform. It never did and shouldn’t.

15 Likes

Colbert had a little joke ad for “the original social media”: writing things on the wall of a bathroom stall. This is one of those “it’s funny because it’s true” things. If you don’t moderate your platform, that’s all it is.

6 Likes

Librarians carefully curate what they put on the shelves; that’s their job.

An unmoderated forum doesn’t look like a library in any way.

7 Likes

But…
couldn’t you just have a Night Of The Long Knives where you make them fight to the death?

10 Likes