Facebook will subject all of its users to "trustworthiness scores," similar to China's Citizen Scores

Originally published at: https://boingboing.net/2018/08/23/weaponized-whuffie-3.html

7 Likes

Facebook: not even once.

Seriously, if you haven’t already deleted your account, what will it take?

20 Likes

It’s interesting to consider money in this light: a score of your “value”, which can be improved by worthwhile deeds, but also by nefarious means. Those who have more of it find it easier to get more. If you have a higher score, you’re allowed more stuff.

6 Likes

What a coincidence! I assigned Facebook a “trustworthiness score” ages ago. It failed, and is blocked at the router.

24 Likes

The Discourse system that powers BB BBS also has a trust metric to aid the mods, but it’s not as intrusive or gameable and doesn’t have the same disastrous potential to leak into a user’s life off-site. Facebook’s version, by contrast, offloads more moderation duties to a blind and unforgiving black-box algorithm instead of making the metric a true complement to human moderation. If this is one of FB’s and Twitter’s corner-cutting “solutions” to abuse of their systems by bad actors, the two services can’t be broken up or go out of business soon enough.
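
For what it’s worth, Discourse’s trust levels are driven by simple, visible activity signals rather than an opaque model. A toy sketch of that kind of metric in Python (the thresholds and field names here are invented for illustration; they are not Discourse’s actual values or API):

```python
# Toy illustration of an activity-based trust level, loosely in the spirit of
# Discourse's trust levels. Thresholds and field names are invented for this
# example; they are NOT Discourse's real values or API.
from dataclasses import dataclass

@dataclass
class UserActivity:
    days_visited: int
    topics_entered: int
    posts_read: int
    flags_confirmed: int  # moderator-confirmed flags against the user

def trust_level(a: UserActivity) -> int:
    """Return a coarse trust level from 0 (new user) to 3 (regular)."""
    if a.flags_confirmed > 0:   # confirmed bad behavior caps the level
        return 0
    if a.days_visited >= 50 and a.posts_read >= 500:
        return 3
    if a.days_visited >= 15 and a.topics_entered >= 20:
        return 2
    if a.days_visited >= 1:
        return 1
    return 0

print(trust_level(UserActivity(days_visited=20, topics_entered=30,
                               posts_read=200, flags_confirmed=0)))  # -> 2
```

The point is that a user or a mod can see exactly which behaviors move the score, which is what makes it a complement to human moderation rather than a replacement for it.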

9 Likes

I, for one, welcome this latest boon from our social media overlords!!

2 Likes

I think if folks have not gotten off Facebook by now, they never will. They are consigned to living with the dystopian future in the here and now.

4 Likes

It’s an easy comparison, but I’m not sure it’s a fair one.

You can (and should) quit Facebook; there is absolutely nothing of value there that can’t easily be replaced elsewhere. People are just too lazy to even try.

But you can’t quit being a Chinese citizen subject to the brutality of that government; well, not and keep living, anyway.

7 Likes

I’ve come to accept the fact that there are just hundreds of millions of Americans out there who are unsophisticated, complacent, and ruled by inertia. Strange as it seems to people like us who knew FB was bad news from the start, many of them would welcome this development as well, just as many in China have embraced the Social Credit system there: “if you’re not doing anything wrong you have nothing to fear”, amirite?

4 Likes

Contrarian opinion: I recognize the danger here, and I have very little faith in Facebook’s engineers and programmers to make moral decisions. But something needs to be done to deal with the ever-growing flood of false information.

If we’re looking at science fiction as a lens for this issue, Neal Stephenson’s Anathem had a brief aside describing a previous era on a parallel earth wherein algorithms were running amok on their version of the internet, creating a hundred pieces of slightly-wrong information for every piece of true information. Bus schedules that were off by 20 minutes. Movie reviews that listed the wrong actors. Eventually their internet became utterly unusable because not a single scrap of information was trustworthy. The only solution was to create a further set of algorithms, closer to the human interface layer, that could judge what was real content and what was algorithmically bogus content.

If my aunt shares nothing but Q and Pizzagate garbage on her Facebook feed, why shouldn’t her content be rated as less trustworthy?

7 Likes

Counting on Facebook or Twitter to do those things willingly or effectively is a non-starter. They’ll always lean toward keeping Aunt Pizzagate as a user because they care more about their MAU numbers than they do about the poison she spreads. Her “trustworthiness score” will reflect this priority, where a service with more integrity would simply ban her outright for spreading hateful and violence-inciting speech.

False information will always be a fact of life on the Internet so the best we can do is work to eliminate the market power of semi-monopolistic aggregators that act as force multipliers for disinformation campaigns.

3 Likes

But the reason aggregators rule the internet is that 95% of humans are unable or unwilling to aggregate their own content. I consider myself reasonably tech- and media-savvy, but I have a job and a family; I don’t have time to manually trawl a hundred trusted LiveJournals like I did when I was a kid. I like being able to use Twitter and Reddit to curate content for me, and I would like it more if they did a better job of minimizing Aunt Pizzagate’s audience. I also fully realize that this desire leads down a dangerous path… I just don’t see the alternative right now.

Yes, much like their “real name” policy a few years ago, which was supposed to deal with similar issues (trust and transparency), the reality was that the people who had good reasons for not using their real names (because they were being stalked or victimized IRL, or because they were trans) were the ones who ended up getting screwed over by the policy. I have no doubt that this will end up the same way.

This will likely devolve into people using it to silence actual diversity of opinion (and it won’t be the QAnon theorists or the Nazis who end up being silenced).

Really? Because you can’t just ignore or skip content that you know is shit? And it’s not like this stuff goes away if you or I don’t read it, yeah? In addition, it’s the curating that got us into this mess in the first place. Because of the way the algorithms developed by the tech giants sort our viewing/reading habits, we’re already getting a curated internet.

Plus, I’d argue that even if you are able to get something more curated for yourself, the problem of this disinformation is STILL there, and it’s STILL going to have its effect even if you and I aren’t reading it, because it’s not meant for US, it’s meant for them (the people who are already into that kind of BS content).

I wish I knew the answer here, but I don’t think the tech companies can fix this with the very same methodologies that have already failed. I do think you hit on a good point about having more human involvement in the curation… but of course even that is going to end up biased and messy.

5 Likes

I agree, although it’s more about curation than aggregation. Either way, that still doesn’t necessitate the monopoly status of Facebook or Twitter and the outsized power that contributed to so many societal disasters over the last few years. Reduce their power in the market through a combination of anti-trust regulation and “delete/disconnect” campaigns and the problem, while not going away, will be far less concentrated.

2 Likes

Came for that comment alone, leaving satisfied.

6 Likes

They could simply disable sharing. That would cut the entire problem off at the knees.

Agreed, but isn’t that what this “trustworthiness score” is attempting to deal with? If someone is consistently spreading garbage, and the aggregator doesn’t want to simply ban them, isn’t the only other option to make their content show up on fewer feeds?

Again, I fully see the danger here. I’m happy when Nazis and Qs stop having their message spread. I’m happy when anti-vaxxers have a smaller audience to confuse. I’m less happy when Facebook critics get silenced on Facebook, or when Satanists can’t show pictures of their Baphomet statue.

But perhaps we can come up with a way to differentiate “dissenting opinion” from “obvious and dangerous falsehoods”?
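
Mechanically, the “show up on fewer feeds” option is easy to picture: weight each shared item by the sharer’s score before the feed is sorted. A minimal, purely illustrative sketch in Python (the 0.0–1.0 trust scale and all the names are assumptions for the example, not anything Facebook has published):

```python
# Illustrative feed downranking by sharer trust score. All names and the
# 0.0-1.0 trust scale are assumptions for this sketch, not a description of
# Facebook's actual ranking system.
from dataclasses import dataclass

@dataclass
class FeedItem:
    post_id: str
    engagement: float    # baseline ranking signal (likes, comments, shares)
    sharer_trust: float  # 0.0 (untrusted) .. 1.0 (trusted)

def rank_feed(items: list[FeedItem]) -> list[FeedItem]:
    """Sort so low-trust sharers reach fewer feeds without an outright ban."""
    return sorted(items, key=lambda i: i.engagement * i.sharer_trust, reverse=True)

feed = [
    FeedItem("pizzagate-meme", engagement=900.0, sharer_trust=0.1),
    FeedItem("local-news", engagement=300.0, sharer_trust=0.9),
]
for item in rank_feed(feed):
    print(item.post_id)  # local-news first, despite far lower raw engagement
```

Which is exactly where the line-drawing problem lives: everything hinges on who sets the trust score and how.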

1 Like

You get what you pay for.

Well, the hard part is drawing the line.

Not because it’s hard to make a judgement call about what’s harmful and what’s not.

It’s because it’s hard for FB to deplatform content that makes them good money. And they’ve demonstrated time and again that they’d rather make money than deplatform harmful content unless required to by law.

4 Likes

Is this already in place? I just logged into FB and found 0 BS political posts in my entire feed. It’s amazing.