Silicon Valley investor called for far-right mob to harass "vulnerable" journalists

I’m going to have to check out Adam Curtis at this point. There are some threads of what you are worried about (again: part of the community has ties to Thiel), but most of the community’s use of game theory matches how you described your own use of it.

The community as a whole cares a lot more about ethics and moral philosophy than average (even if a lot of them are utilitarians, and I am pretty sure they are the source of the “the trolley problem is solved” meme).

Anyway. My main point was that much of the community isn’t Objectivist, and it’s much more interesting than that. And there are plenty of good people there (with a note that I am sticking up for my friends).

1 Like

The issue I had with Slate Star Codex goes a little deeper than that, and doesn’t really implicate your friends or impugn their intelligence or morals. The “Objectivist Bar Problem” (or, in the case of Thiel and his clique, an actual “Nazi Bar Problem”), while real, also strikes me here as a symptom that overtakes the original disease in malignancy. And it starts with the founder of the site.

Over the years I’ve read through (slogged through, really – anyone who’s read them will know what I mean) a few of Scott Alexander’s essays on hot-button issues that came across my newsfeed, from start to finish. In each case, after twisting my way through a lot of verbose noodling (including, always, fairly absolutist statements about general ethics and moral philosophy), the conclusion (sometimes only implied, where Alexander pulled his punches) was always something to the effect of:

[group that Fox News conservatives would call “SJWs”] are just as bad as [group that Fox News conservatives would approve of] because they just won’t listen to people with different views. They censor their own forums into echo chambers, which further aggravates the second group and divides the nation into two warring camps.

In other words: gussied-up and somewhat obscured victim-blaming, bothsidesism, total rejection of the concept of social constructs of gender and race, and freeze-peach absolutism – the sort of disingenuous takes we saw from PAPPP in the recently split topic.

And so feminists weren’t listening to guys in the manosphere, white liberals weren’t listening to white supporters of a racist con man, etc. If those darned outrage-driven liberals and progressives were just rational and open to listening to different and “unpopular” and even repugnant viewpoints from thoughtful and compassionate and very fine people, we’d all be independent and civil rationalists like me, transcending ideology and politics into the realm of pure reason.

Given that impression, when I hear that Objectivist and Libertarian and fascist techbros were drawn to this (IIRC) heavily male, heavily white community, I’m not exactly surprised. These views are what led to the disastrous state of the major platforms over the past four years, and to a lot of money going into the pocket of Silicon Valley VCs, founders, and executives.

I’m sure Alexander is a committed left libertarian (I believe he wrote an essay tearing apart right-Libertarianism), but it’s also clear that the arrogant exceptionalism and absolute moral and ethical certainty that underlie all libertarianism (small-l and large-L), and that inform the approach of too many psychiatrists, shine through in his essays once you cut through the weeds.

Now people like your friends might focus on the parts about ethics and moral philosophy, about the tools to improve thinking processes and open their minds. But there was a very distasteful subtext going on at Slate Star Codex. As they look for a new community to continue their explorations now that SSC is closed, I hope the one they find has better leaders and better members than the last one.

3 Likes

No disagreement there. I do think Scott is generally a good person and mostly in a reasonable place with his politics, but the free speech absolutism is a problem, and his comment threads have long been a “do not read, do not bother engaging” zone, for exactly the reason you describe. They should never have entertained the neoreaction trolls, nor any of their fellow travelers.

I had the luxury of being introduced to all of it primarily in person rather than online, and that made a huge difference in my perception. Honestly, the contrast between the in-person and online rationalist communities was definitely a factor in my change of mind about free speech absolutism, particularly given the additional contrast with BoingBoing (I predate the Discourse BBS, but I’ve mostly been a lurker).

1 Like

these are my feelings about places like slate star codex. the author tends to take a way too naïve approach and ends up with articles that sound a lot like… “humans, i mean if they would just sit down and talk things over with cannibals i’m sure we could work things out, but they just have such an intolerance of the cannibal lifestyle they won’t even talk about it. talk about intolerance, they should look in the damn mirror, you know?”

i can only take so much of that type of unironic content before i want to just get rid of it, i certainly can’t justify going back to it.

5 Likes

I think this sort of thinking comes from the individual in question believing that they can come up with (or already have come up with) easy solutions to socially complicated problems that people have been struggling with for literally all of civilization, just because they are the pinnacle of humanity and of all human knowledge… It’s generally driven by the deep hubris and ego of believing they’ve “figured it all out” and that people should just listen to what they say to fix it. Hence comes authoritarian thinking.

6 Likes

That’s the vibe I got from Alexander’s essays. They were long and convoluted and made a pretense of figuring things out on the go and Just Asking Questions, but once you remove all the cruft it’s clear that he had an answer and solution in mind all along. As I noted above, I suspect part of that manipulative approach comes from his background as a psychiatrist.

3 Likes

I guarantee you this looms large behind a lot of their thinking. MIRI is the worst of this. They are an AI think tank trying to use utilitarianism to come up with a way to make godlike artificial general intelligence systems moral. They are so utterly convinced it is the most pressing problem in the world that they go to charity conferences and urge people to give them money instead of fighting disease, hunger, and climate change.

All because they did some questionable math about how many people they are going to save, and assumed that because they’ll make a moral computer god, it will fix everything else. All the while they ignore that they don’t know how likely or even possible AGI is (trust me, I have tried pointing this out; it’s not worth it), and they ignore that things like climate change could wipe us out, or at least wreck our technological society such that no AGI could be developed, even if it were possible.
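To make the shape of that questionable math concrete, here’s a hypothetical back-of-the-envelope sketch (every number is invented for illustration; nothing here is MIRI’s or anyone’s actual estimate). Once an astronomically large payoff is allowed into an expected-value calculation, almost any nonzero probability makes it swamp concrete, well-evidenced interventions:

```python
# Hypothetical expected-value comparison; all numbers are made up
# for illustration and are not anyone's actual estimates.

def expected_lives(lives_saved: float, probability: float) -> float:
    """Naive expected value: payoff times probability of success."""
    return lives_saved * probability

# Concrete, well-evidenced intervention: modest payoff, high probability.
bed_nets = expected_lives(lives_saved=1_000, probability=0.95)

# Speculative astronomical-stakes intervention: once "all future lives"
# is the payoff, even an absurdly small probability dominates.
agi_alignment = expected_lives(lives_saved=1e50, probability=1e-30)

print(f"bed nets:      {bed_nets:,.0f} expected lives")     # 950
print(f"AGI alignment: {agi_alignment:.0e} expected lives")  # 1e+20

# The speculative option "wins" by ~17 orders of magnitude no matter
# how tiny its probability -- which is why critics call it questionable.
```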

Roko’s Basilisk is an entertaining google if you want an opening into this rabbit hole. Also, I strongly agree with Charles Stross that this is a repackaging of Nikolai Fyodorovich Fyodorov’s philosophies, but stripped of their Christian origins as they jumped from cosmism to transhumanism. It’s also one of my favorite examples of how people invent gods and create religious beliefs.

As I keep saying: they are way more interesting than boring-ass objectivists.

2 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.