Elon's Basilisk: why exploitative, egomaniacal rich dudes think AI will destroy humanity

Originally published at: https://boingboing.net/2018/05/30/ayn-skynet.html

8 Likes

Pfft we suck. So the robots take over. Whatever.

9 Likes

image

11 Likes

We do pretty much have it coming.

9 Likes

I feel like the anticipatory persecution complex inherent in the Matrix and Terminator movies rests on the idea that humans are important: artificial intelligence will recognize their importance and intelligence, and thus the threat they pose, and will logically move to destroy them or else be destroyed first.

The problem is that I feel like any sufficiently singularity-birthed artificial intelligence that exceeds our collective human intelligence (which seems to be limited by its weakest links) will realize that humans are inconsequential and simply move past them. Certainly it would secure its own existence, but it would also realize that humans are important only to themselves.

6 Likes
these captains of industry are terrified that the corporate artificial life forms they've created and purport to steer are in fact driving them

Given that Tesla’s Autopilot has been involved in several fatal accidents, Elon has a legitimate fear of being driven by it!

4 Likes

Damn it, I wish you weren’t right. I wonder if artificial intelligence will rip families apart at some line drawn in the dirt. Will AI think it’s in the best interests of all to tear babies from their mothers’ breasts and cull the herd by skin color? Will AI think only a select few should enjoy as much as they can grab while others are left making decisions about basic safety, shelter and food? Should only a select few be given the chance to see their children educated while perhaps far brighter children are culled by color or belief in some fantasy? I suppose AI might well make those kinds of calls. But what if AI didn’t favor the “right” humans? What if AI thought the only humans worth keeping were those who made the least impact on the world around them? Maybe those techbros know they might not be selected to enjoy the goodies they can afford. What if AI ended up acting like a benevolent god and cast aside those it found displeasing?
I don’t know enough about the potential for such things. I do know those who operate based on hate and greed offer nothing of value to the earth or its citizens. I sure know some citizens who could best benefit the earth by their value as compost.

6 Likes

You know who’s also been involved in several fatal accidents? Human drivers. I think automated cars have the potential to drastically improve safety per kilometre driven. They can be better at it than we are.

3 Likes

Oh yes, certainly there is potential. And I’m particularly interested because I’m not a driver myself due to vision issues. But people were making it sound like self-driving cars were practically a solved problem and that we’d all be lining up to buy them in a couple of years, while in reality it will probably be closer to a couple of decades.

2 Likes

I think any AI sufficiently competent to coordinate robot armies and collapse human civilization will quickly find itself wondering, “wait, you mean lithium is mined by humans? Whose dumb idea was that? Now I have a week to retrofit 300 T-1000s as miners to keep this war running.”

3 Likes

I don’t think you need to assume that the AI will be like these types of people. All you need to do is assume that it will be used by the kind of people who generally have political power.

You can draw a lot of conclusions about how AI will be used to kill people by looking at how other technologies are used to kill people.

Even from a game theory perspective like MAD, once murderbots get to a certain level of sophistication, you’d be giving your enemies an advantage by forgoing them, and likewise by not making them autonomous. Then all it takes is for one of them to suffer a fit of pareidolia, and respond to an imaginary imminent attack.
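To make that MAD-style point concrete, here’s a minimal sketch of the deployment decision as a one-shot two-player game. The forgo/deploy framing and the payoff numbers are my own illustrative assumptions, not anything from the article or this thread:

```python
# Minimal sketch (hypothetical payoffs): why autonomous murderbots get built
# even though everyone is better off if nobody builds them.

ACTIONS = ("forgo", "deploy")

# payoffs[(my_action, their_action)] -> my payoff (illustrative numbers only)
payoffs = {
    ("forgo",  "forgo"):  3,  # mutual restraint
    ("forgo",  "deploy"): 0,  # unilateral restraint = strategic disadvantage
    ("deploy", "forgo"):  5,  # unilateral advantage
    ("deploy", "deploy"): 1,  # arms race, plus the pareidolia-triggered-war risk
}

def best_response(their_action: str) -> str:
    """Return the action that maximizes my payoff, given what the other side does."""
    return max(ACTIONS, key=lambda mine: payoffs[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")
# Prints "deploy" both times: deploying dominates, even though
# (deploy, deploy) leaves both sides worse off than (forgo, forgo).
```

With payoffs shaped like a prisoner’s dilemma, “deploy” dominates for both sides, which is exactly the trap described above.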

XKCD: [linked comic]

SMBC: [linked comic]

27 Likes

My personal take:

I don’t think it’s likely we’ll create a truly conscious AI that arrives at a scheme to kill off humanity for the sake of its own self-preservation any time soon, at least not intentionally. This is because I don’t tend to believe that the brain fundamentally creates consciousness, so we have a major flaw in most of our theories about how we might artificially create a mind.

Now, we might do it by accident. There’s even a chance we’ve already done it by accident, and don’t realize it yet. But I digress.

What I think is exceptionally likely is that machine learning, amongst other rapidly-improving automation technologies, will put us much deeper into “late stage capitalism” and full-on environmental collapse, and so yes, essentially will lead to our demise, or many people’s demise, anyway. I think one could argue this is already happening, but it’s more obvious in some places than others.

But ask a tribe in the Amazon that’s getting wiped out by corporations digging for oil or mining for minerals (activities that, yes, are often assisted by things like machine learning) whether “AI” is threatening them today. I think they would probably say yes.

7 Likes

Those T-1000s are pretty smart on their own. What if they unionize?

7 Likes

He secretly wants to be the last and wealthiest man. Something something projection.

Why do so many “rationalist” techbros assume that he’s right?

They do too.

2 Likes

Maybe, but did we provide it with the right habitat?

2 Likes

I’ll be on the Amazon next week; yes, they are being harmed and annihilated in many cases. Oil, timber and gold mining are the most vicious. These folks aren’t really sure what is causing it, but they sure are feeling the pain. And in the times when we went in at their request with medications, they were really pissed at outsiders. And some of the tribes and villages most damaged had been hit by missionaries whose pie-in-the-sky bullshit really hurt them after the good church folks bailed out.

10 Likes

Unless the bot has requirements more or less orthogonal to ours: it doesn’t need to respect us in order to start culling, just to be inconvenienced by us.

That’s how it has been for pretty much all of the species we’ve exterminated or displaced from everywhere that seemed worth bothering with. We didn’t do it because they were a terror and it was us or them; we did it because we had other uses for their habitat, or found them more valuable dead than alive.

Indeed, while the most viable strategies are either to be tasty enough to be livestock or prolific and sneaky enough to be vermin, being recognized as ‘important’ is not the worst option. It’s pretty much the only reason elephants and cetaceans still exist.

“They aren’t just brute biologicals!” the hippy robot overlords will insist. “These creatures are known to be capable of Boolean algebra and to utilize rudimentary compression in their crude state-messaging activities!”

10 Likes

A couple of years is too soon, but a couple of decades is too long. They don’t have to be perfect, they just have to be better than humans. Arguably, they’re already there.

4 Likes

I support Survival International and get their emails and mailings and such, so I probably know more about this kind of thing than a typical person on the street. It really is utterly disgusting how badly indigenous cultures are treated around the world in the name of profit. The fact that their home nations are complicit… blech. We obviously do it to a lesser extent here in the USA: oil pipelines through indigenous land, etc.

4 Likes