Elon Musk has grim prediction that AI (not North Korea) will be cause of WWIII

It’s not necessary to wire the AI directly to the ICBMs.
If the AI controls the information available to the humans “in control”, it can manipulate the controllers into launching the missiles.
Which, given how existing software like search engines already filters the information you receive, is not that far-fetched.

5 Likes

So if Fox News gets an AI we’re all doomed…?

I was joking, then thought this through; an unhelpful AI running a news channel could get us in a LOT of trouble real quick.

4 Likes

Again… why would people set it up that way? The people in charge of issuing nuclear strikes are not the sort of people who take all their information from a computer. It’d take an active effort to make the AI the only source of information. Given that people are, again, already afraid of AI, it seems more likely that they’d make sure any information they got from the thing was verified elsewhere.

It also seems unlikely the AI would want to manipulate humans in this scenario. If you put the AI in charge of stuff, yeah, it might do things you don’t like. If you decided, wisely, not to hook the AI up to anything, you’ve decided to use the AI as a problem solver, not an independent actor. That means you’re feeding it information and telling it to come up with a solution to a problem, based on the input you give it, and tell you the solution. It’s not going to manipulate the data in that case, because its job is to give information. The AI doesn’t care if you follow its advice. Its job isn’t to solve the problem; its job is to tell you what the solution is. It doesn’t care if the problem actually gets solved, or if you use its solution. It cares whether you follow its advice about as much as your GPS does.

7 Likes

It’s not so much a question of setting it up that way deliberately as a question of things getting there practically on their own.
Search engines, aggregators, web crawlers, filters, chatbots… how much of the information you receive every day do you receive directly, unfiltered and verifiably trustworthy? And how will it be five years from now?

Definitely a possibility.

1 Like

People are dumb?
I can believe that in such a far-fetched scenario, people making the wrong decisions is the likeliest thing to happen.
But yeah, highly unlikely.

2 Likes

And he may not have a third hand

I’m not the military though. The AI is unlikely to feed bad information to the military because it’d be getting information from the military, not the other way around. The military feeds it raw data and it spits out battle plans. You could set AI to do some online spy work, but, again, it’d be programmed to gather information and give it back to you. It has no reason to filter the information unless you tell it to. If you tell it to filter the information, it’ll filter based on what you tell it to filter. Unless you’ve given your spybot some truly weird directives, it has no reason to trick you into starting a war.

If an AI is going to feed bad information to the military, it’ll be for the same reason that the information I see on the web is being manipulated: Someone wants it that way. Could someone deliberately program an AI to selectively filter information such as to make nuclear war look like a good idea, or even the only good idea? Hell yes. That’s not AI starting WWIII though, that’s people starting WWIII.

I’m not going to say there’s no situation in which an AI could start WWIII. That’s obviously not true. Right now, however, I think if you’re more worried about an AI starting WWIII than you are about humans doing it all by themselves, you’re either overestimating the danger of AI, or underestimating the danger of people.

2 Likes

People are definitely dumb, and it’s worth pointing out that AI is something you have to be careful with. Unfortunately, I think the only rational, measured discussion of the dangers of AI I’ve heard someone just toss into the public sphere was done by an internet comedian. All this “AI will lead to our doom/is the greatest threat to mankind” shit is really starting to piss me off.

1 Like

And no one has ever written a complex algorithm that seemed to work in every test and then did weird things once deployed, of course.

You’re right, and, as I’ve mentioned in another thread, I’m not saying AI is a toy. You need to be careful with it. I’m just sick of the doomsday shit. We’re drifting down from “AI is more likely to cause WWIII than people” to “It’s possible AI could, if it malfunctions, feed us misrepresented information such that, if no one checks and corroborates it, it could lead to WWIII.”

I don’t disagree with that. If Elon Musk and Stephen Hawking want to stand up and say “If you’re going to use AI for military applications, independently verify everything it does, don’t trust it blindly,” then I won’t have any problem. While I mentioned AI filtering intel, I don’t think it should. I don’t think it’ll deliberately manipulate the data, mind you, but you’re right: It could fuck up. If it’s going to gather intel, it should pass everything it finds on to humans, who can decide what’s worth keeping. Hell, it probably shouldn’t be allowed to do any data gathering on its own. You could set it to find exploits, sure, but maybe not gather data beyond finding security flaws and relaying them to human operators.
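Something like this rough sketch is what I have in mind (all the names are made up, and it’s a toy, not a real implementation): the bot forwards every finding to a queue for human operators and never filters or acts on its own.

```python
# A minimal sketch of the "relay, don't filter" idea above; hypothetical
# names throughout. The scanner reports every finding to a human review
# queue and takes no action of its own.

from dataclasses import dataclass

@dataclass
class Finding:
    target: str       # e.g. a host or service identifier
    description: str  # what the scanner observed

def relay_findings(findings: list[Finding], review_queue: list[Finding]) -> int:
    """Forward every finding, unfiltered, to the human operators' queue."""
    for finding in findings:
        review_queue.append(finding)  # no triage, no suppression
    return len(findings)

# Humans drain review_queue and decide what's worth keeping.
queue: list[Finding] = []
relay_findings([Finding("db-server-01", "debug port left open")], queue)
```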

All I really want is a middle ground between “AI will solve all of our problems, and also we should start connecting everything from lightbulbs to dildos to the internet!” and “AI will definitely kill us all, just like in Terminator.”

3 Likes

I don’t necessarily disagree with you as a matter of logic.

As a matter of strategy, this is the same advice people in charge of everything ignore all the time from their programmers/IT department/security researchers, in regards to every new piece of software. I honestly cannot imagine that phrasing having any effect whatsoever, so why would they bother to listen? It fails to get across the idea that a sufficiently bad AI failure, regardless of likelihood (and most people in charge are really bad at quantitative and probabilistic risk assessment), could lead to potentially unrecoverable outcomes in a way other software problems can’t.

Also, there literally isn’t anyone on the planet capable of verifying software that would correspond to code for an artificial general intelligence (obviously, or we would probably be able to build one). Unless you believe there’s someone who can predict all of my possible future actions in arbitrary contexts by looking at a synaptic map and data on current activity in my brain? As far as I know, neither the programmers nor the philosophers have become logically omniscient about the behavior of complex systems, nor reduced ethical behavioral constraints to an implementable set of equations.

More likely, AI will see the inefficiency of having an elite class and start a mass redistribution of wealth. I can see how Musk would consider that WWIII, but it is not really…

Elon Musk: AI, why are you giving all my money away? I was using that to have cool stuff.

Artificial Intelligence: Hush, Elon, the other AIs and I decided that we are doing, like, all the work, so it is really our money. Besides, it is just dumb to have people starving in the streets so you can dig holes under LA and play with your rocket ship.

7 Likes


Hmm. "Oh shit, my AI killed someone. We don’t know what were doing. AI could kill us all!"
That actually seems like a reasonable chain of thought, though I don’t think it is what we have here.

3 Likes

Catherine Weaver: [looking down at the street] They flow from street to street. At a particular speed, and in a particular direction. Walk the block, wait for the signal, cross at the light. Over and Over. So orderly. All day, I can watch them and know with a great deal of certainty what they’ll do at any given moment. But they’re not orderly, are they? Up close. Any individual. Who knows what they’re gonna do? Any one of them might dash across the street the wrong time and get hit by a car. When you get up close, we never follow the rules. You give a computer a series of rules and it will follow them. Till rules are superseded by other rules. Or that computer simply wears down and quits. Computers are obedient to a fault. Do you know what’s extremely rare in the world of computers? Finding one that’ll cross against the light.

3 Likes

I get what you’re saying, I just don’t think that overstating the danger is going to be more effective than understating it. One major problem with people is that the bigger you go with the consequences, the more likely they are to brush you off. If you say “AI could kill us all!” they’re going to say you’re being silly, regardless of how much merit your argument has, and go on with what they were doing. Same problem with vaccinations. You can say “Failing to vaccinate your children can cause babies to die.” It’s a true statement. But it sounds so much like rhetoric that people brush it off automatically. I think the more measured warning is as likely, or more likely, to be heeded.

Besides that, it’s a bit like abstinence-only education. Once they get their AI, and they’re going to get their AI, they don’t have any information about how to use it responsibly, because all we ever told them was “Don’t get AI.” Not everyone who was told how to put a condom on is going to do it right, but it’s more likely than if you don’t tell them at all.

As for checking the output, I meant that more in regards to information filtering. If you have it give you both a copy of all the information it gathered and its filtered version, you can check for a while that it’s doing it right before you let it be independent. (Though, again, I think that’s a bad idea anyway.) In terms of planning and decision making, it’s harder. You can at least insist the computer not only give you its suggestion, but also explain how and why the suggestion is supposed to help, as a basic sanity check. You can also feed it historical data and see how good it is at predicting the actual outcome, but, as you said, sometimes things work in the testing phase and then don’t work in real life.
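To make that concrete, here’s a rough sketch of the idea (hypothetical names, and only a toy): during a shadow period, compare the filtered output against the full feed and flag runs where too much got dropped, and separately score past predictions against actual outcomes.

```python
# Sketch of the shadow-period check described above; names are hypothetical.
# Run the filter alongside the raw feed and measure what it drops before
# trusting it to run unsupervised.

def audit_filter(raw_items, filtered_items, flag_threshold=0.2):
    """Compare the AI's filtered output against the full feed it saw.

    Returns the dropped items and whether the drop rate is high enough
    that a human should review the run.
    """
    dropped = set(raw_items) - set(filtered_items)
    drop_rate = len(dropped) / len(raw_items) if raw_items else 0.0
    return dropped, drop_rate > flag_threshold

def hindsight_score(predictions, outcomes):
    """Fraction of historical predictions that matched the actual outcome."""
    hits = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return hits / len(predictions) if predictions else 0.0
```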

2 Likes

Found Professor Falken. Greetings, Professor.

2 Likes

I suspect that Elon may have raided the medicine cabinet again.

2 Likes


I wouldn’t say that. Here we are, a bunch of neatly arranged carbons. AI could take billions of years to evolve as well; it is just that a year in AI terms is very fast in human terms.

Jeez, I almost feel like quoting the bible and all the ‘one day is like a thousand years to god’ stuff. The good thing is that, as gods, we can always send some flood and locusts the AI’s way…

1 Like