Elon Musk has grim prediction that AI (not North Korea) will be cause of WWIII

Originally published at: https://boingboing.net/2017/09/04/elon-musk-has-grim-prediction.html

1 Like

Sounds like somebody saw T2 3D and got freaked out.

13 Likes

I prefer XKCD’s version…

Mouse-over: It took a lot of booster rockets, but luckily Amazon had recently built thousands of them to bring Amazon Prime same-day delivery to the Moon colony.

42 Likes

If we set the AIs on playing Tic Tac Toe, they will realize the futility of war.

26 Likes

Awww, I really wanted it to be North Korea that ended the world.

6 Likes

“Would you like to slay again?”

8 Likes

Something that bothers me about most contemporary discussion of the risks of warfare is how lacking in nuance it tends to be. It's polarized between the two extremes of “no war” and “complete anomie where everybody dies”, and in my experience ANY drastic polarization of that sort is a gross oversimplification that isn't very helpful, especially when discussing risk assessment, strategies, etc. Why do people assume that world war is the default? That war between just a few countries is unlikely or even impossible?

As a whole, that sort of discussion overlaps very little with what I would think of as political or military goals. Admittedly, I am not any kind of expert - but who is? It puzzles me that so many people seem drawn to this kind of vaguely catastrophic thinking. Whatever the real risks of any given technology or military problem may be, fear and worry seem like the closest thing there is to a guarantee of bad decisions.

14 Likes

Alarmism about the dangers of AI from the one person whose AI actually killed someone, and which was rolled out too soon on his say-so. Is that irony, or just garden-variety hypocrisy? Ironic hypocrisy?

16 Likes

I find Elon Musk’s persistent calls to regulate AI very suspicious: in his position, he must know that the AI we currently use has nothing in common with what he keeps brandishing as a danger to humanity.
It is all based on a misunderstanding: anything with an “if… then…”, a neural network, or a Markov chain can be called AI. But none of this is new, the applications are rather dumb and easy to fool, and they will never evolve into a self-conscious, autonomous entity.
However, Elon Musk and some of his pals also know that these old technologies are very effective when it comes to selling us things we didn’t know we wanted…
…unless regulation prevents them from using our data to manipulate us.

I suspect it’s all a smokescreen.
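To illustrate what that comment means by “old, dumb” AI, here is a minimal toy sketch of a first-order Markov chain text generator. The corpus, function names, and output are invented for illustration only; they don't come from the original post or any real product, but they show how such a model only parrots the statistics of its training text and understands nothing.

```python
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words observed to follow it.
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start, length=10):
    # Walk the chain, picking each next word at random from what was seen.
    word, output = start, [start]
    for _ in range(length):
        choices = table.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

# Tiny made-up corpus; the "AI" just echoes its statistics back.
table = train("the robot sells the ad and the robot sells the future")
print(generate(table, "the"))
```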

7 Likes

grim prediction

It’s too late for me, save yourselves…


12 Likes

Hasn’t James Bond taken this guy out yet?

7 Likes

Of course AI is going to cause WWIII, just like it caused WWI and WWII; we’re all just a simulation, right?

5 Likes

Not sweating it. You see, I’m insured against robot attacks.

9 Likes

Or will AI just take over for our own good?

15 Likes

You just know Elon Musk is the sort of person who never gets tired of his parents and aunties making him retell the story about that one precocious thing he said or did when he was three. “Elon, tell your uncle again about that time you said AI would kill us all! It was the most precious thing!”

If people keep asking him (or Stephen Hawking) for his meaningless opinion on this subject, he will keep giving it, but it won’t become a useful insight through repetition.

17 Likes

“A strange game; and why do the humans think that their survival is a victory condition?”

12 Likes

We must program a Stanislav Petrov.

7 Likes

Re-watched it this weekend. Has aged surprisingly well, still worth watching.

9 Likes

Why does Elon Musk think people are going to put AI in charge of our nuclear missiles without human oversight? The idea of giving a computer full control over any important system has freaked people out since the 1960s, at least, to the point where there are three or four different Star Trek episodes about why computers shouldn’t be in charge of things in the original series alone.

I can see someone putting AI in charge of strategic planning, and even that’s probably not smart at this point, but who the hell does he honestly think is going to let the AI push the button?

If he’s worried the AI will advise a nuclear strike because it’s the most probable path to victory… well, that’s nothing a human hasn’t already advised in the past, so it’s not much different from the current state of affairs. The main difference is that, if you tell the AI what the acceptable loss rate and the probability of retaliation are, it’s probably less likely to make that suggestion, because it’s better at math and doesn’t have any irrational urge to blow people up.
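To make that last point concrete, here is a purely hypothetical sketch of the kind of arithmetic the comment describes: an advisor that only recommends a strike when the expected losses stay under a human-supplied ceiling. The function name and all numbers are invented for illustration; with any realistic retaliation probability, the answer comes back negative.

```python
# Hypothetical toy sketch of an expected-loss check. All names and numbers
# are made up for illustration only.
def recommend_strike(p_retaliation, losses_if_retaliation,
                     losses_if_no_retaliation, acceptable_losses):
    # Expected losses = probability-weighted average of the two outcomes.
    expected = (p_retaliation * losses_if_retaliation
                + (1 - p_retaliation) * losses_if_no_retaliation)
    return expected <= acceptable_losses

# 80% chance of retaliation costing 50M lives vs. a 1M "acceptable" ceiling:
print(recommend_strike(0.8, 50_000_000, 10_000, 1_000_000))  # prints False
```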

11 Likes

Good thing he warned us about this; it’s not like anyone else has thought of it and written/filmed about it extensively over the last 50 years. :roll_eyes:

19 Likes