Elon Musk has grim prediction that AI (not North Korea) will be cause of WWIII

becoming a bamf.

I wasn’t taking this seriously until:

So is that your explanation of Fox News?

1 Like

Like a spell-checker?

1 Like

If we don’t develop murderbots, someone else will. Once we’ve got murderbots running our military response, there’s an incremental advantage to be gained by giving them the ability to react fast, themselves, kind of like high-speed trading. If one of those black-box neural nets suffers a bout of pareidolia everything could go to hell before humans even have time to react.
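
A minimal sketch of that failure mode, with an invented classifier, names, and thresholds (nothing here reflects any real system):

```python
# Hypothetical illustration of the scenario above: an automated response
# loop that acts on a black-box classifier's verdict faster than any human
# could review it. All names, numbers, and thresholds are made up.
import random

def classify_radar_blip() -> tuple[str, float]:
    """Stand-in for a black-box neural net; very rarely it 'sees' an attack
    in noise (pareidolia) and returns a confident wrong label."""
    if random.random() < 0.001:
        return "incoming_strike", 0.97   # confidently wrong
    return "flock_of_birds", 0.99

def autonomous_response_loop(ticks: int) -> None:
    for t in range(ticks):
        label, confidence = classify_radar_blip()
        if label == "incoming_strike" and confidence > 0.9:
            # No human in the loop: the speed advantage is exactly what
            # makes the mistake unrecoverable.
            print(f"t={t}: counter-strike launched (confidence {confidence})")
            return
    print("no incidents this run")

autonomous_response_loop(10_000)
```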

Are you aware of OpenAI?

They’re pretty much that.

As the tagline says, “Discovering and enacting the path to safe artificial general intelligence.”

They’re in the process of building open-source AI so that it will be available to everyone, not just the exclusive property of government and big corps - and, insofar as possible, designed to “fail safe” rather than “fail catastrophically.”

They’re deeply concerned about the potential dangers of AI, but they know that it’s inevitable - and that it’s potentially vastly powerful and beneficial.

Like nuclear power, it’s a genie that won’t go back in the bottle once it’s released. OpenAI is the equivalent of asking, back in the early days of nuclear power, “shouldn’t we maybe see if we could make a reactor that can’t melt down? Or maybe one that doesn’t need or create weapons-grade fissile material? Or one that doesn’t produce troublesome piles of dangerous radioactive waste?”

And OpenAI was founded by… that’s right, Elon Musk!

Also, BTW, despite the usual err-on-the-side-of-hysteria clickbait headline, Elon did NOT “predict that AI will be the cause of WWIII.” He said:

“Competition for AI superiority at national level most likely cause of WW3 imo.”

Notice the difference? Elon isn’t predicting that WWIII will happen; he doesn’t imagine that WWIII is inevitable; he just thinks that, IF IT DOES OCCUR, the most likely (not “only possible”) cause will be national-level competition over AI superiority - not necessarily AI itself.

Humans fighting over AI.

Humans fighting each other, using AIs with no safeguards.

Humans fighting, basically.

7 Likes

When the machines take over our jobs and do most of the menial tasks and basic decision-making, life will basically be a boring RPG with computers as the game masters. One day all it will take is for someone to piss off the GM computers, and they’ll nuke us in retaliation.

“I am sick of your Paladin’s shit, Dave.”

3 Likes

Once we’ve got murderbots running our military response, there’s an incremental advantage to be gained by giving them the ability to react fast, themselves, kind of like high-speed trading. If one of those black-box neural nets suffers a bout of pareidolia everything could go to hell before humans even have time to react.

Good thing humans never do anything like that.

2 Likes

Maybe Elon is concerned about someone setting their self-driving cars to have an acceptably small accident rate, of which the majority involve the death of %demographic%.

Isn’t the US nuclear arsenal ultimately controlled by teenagers using floppy discs?

2 Likes
5 Likes

If, by the time any AI exists that is sufficiently smart and powerful to be the kind of threat Musk is talking about, any human whatsoever still has enough control over it to be meaningfully said to “use” it, then the problem was already solved: the AI is under control, and we don’t need to worry.

1 Like

I can’t comment on the likelihood of an AI starting WWIII, but I would like to point out that the current NK “crisis” is purely caused by the US. I lived in SK for three years, and I know how difficult it is, outside the country, to get news about the movements of the US troops stationed there.

I work as a sound recordist and worked with foreign crews on documentaries on the political situation and atmosphere there, met people from the ministry of reunification, worked regularly with several NK refugees and toured some US military bases. Based on what I saw, NK only ever reacted to US / SK military escalation, not the other way around.

Edit: Just to clarify: I’m not defending NK external or internal politics. I think the country is a humanitarian crisis and needs to be treated as such.

5 Likes

A counter-fire for what? He might just be wrong, or nervous; he doesn’t need malicious intent to fall into that kind of “techno-millenarianism” thinking.

It seems to me that blaming AI is much like claiming atomic bombs will cause the next war: despite everything, AI is a tool. Should we achieve sentient silicon, that would be something different. And true intelligence will come slowly.

To take a page from Iain Banks’ Culture series, we are still worlds away from developing Minds, much less the intelligence of their drones. If we are lucky, our self-driving cars will be as intelligent as horses, not some superintelligent Overminds.

Methinks Elon Musk is stuck in the WarGames mentality, conflating calculations with intelligence.

2 Likes

With the extreme fuzziness in the use of “AI,” I’m never sure what he’s talking about. “AI” is used right now to talk about pretty dumb systems, and certainly nothing with any awareness. Me, I’m worried about dumb autonomous software taking jobs and taking control of the systems that are vital for the continued existence of our civilization. You know, like has already happened. I’m not sure that’s what he’s talking about, though.

I dunno, why would we set up vital processes to depend on the output of inscrutably complex algorithms whose workings we don’t actually understand? Let’s ask the financial sector, and all the other corporate and government entities that do just that, right now.

There would be no separation, though - the military’s information gathering and transmission would be computer-mediated, so it’s all part of the AI system. During the Cold War, there were multiple incidents where the US and Soviet systems misinterpreted some event (a system malfunction, weird weather, a flock of birds, the sun rising) as a nuclear strike by the other side. Only because people ignored what all the evidence told them was happening did we avoid nuclear armageddon. That’s not always possible - during the Iraq war, the US shot down more friendly aircraft than the enemy did, thanks to autonomous and semi-autonomous anti-aircraft missile systems.
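
A minimal sketch of the human-in-the-loop safeguard those incidents relied on, with invented names and logic (not any real system):

```python
# Hypothetical contrast to a fully autonomous loop: the automated system
# can only recommend, and a person weighs its alarm against the rest of
# the evidence before anything escalates. Names and logic are illustrative.
def sensor_reports_attack() -> bool:
    return True           # e.g. sunlight on clouds read as a launch plume

def corroborating_evidence() -> bool:
    return False          # other radars, satellites, and channels see nothing

def decide_response() -> str:
    if not sensor_reports_attack():
        return "stand down"
    recommendation = "retaliate"          # what the automated system wants
    if not corroborating_evidence():
        return "treat as false alarm"     # the human judgment call
    return recommendation

print(decide_response())   # -> "treat as false alarm"
```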

2 Likes

Of course AI is riskier than North Korea. This North Korean thing just showed up recently. Can we have a little perspective? Also, a nuclear war would be a setback, but AI is an existential threat to mankind. It’s the lack of respect for AI’s potential by some that increases the risk. It’s the same stupidity that let humans ignore climate change for so long. The difference is that AI may be opportunistic in ways that nature never would be.

Musk reminds me of the boy who cried wolf.
There are actual, real problems with AI, automation, the internet, etc. You are far more likely to experience a DDoS attack than an AI-induced nuclear war, and as improbable as a war with NK is, it’s still a more urgent problem, but I might be wrong.
And Musk, with his irrational fear (given the current state of the world and the technology), is distracting everyone from the real solutions we could have for all of those real problems.

It’s like his freaking hyperloop. If the US needs trains, just build trains; look at the TGV and the Shinkansen. You don’t need a hypothetical, over-engineered solution to that problem when you can buy the solution from another country or build your own.

3 Likes