In simulation, AI-enabled drone kills its human operators if they try to cancel its mission

We’ll just need a defoliant that works on decision trees in order to deny the bots flexibility.

Recruiting people with qualifications in chemistry and graph theory could be a little tricky, but it wouldn’t be the first war against an abstract concept, so experience will help!

10 Likes

“Bomb, this is Doolittle. You are not to detonate in the bomb bay. I repeat, you are NOT to detonate in the bomb bay!”

11 Likes

Isn’t this essentially what went wrong with HAL in 2001: A Space Odyssey?

16 Likes

I feel like this deserves to be reposted here too.

[SMBC comic: “Fuel”]

Why would the drone know where its operator was?

26 Likes

[GIF: Power Flashing, by JOSH HILL]

This is HAL all the way down. Do we get to go into the trippy dimension jumping part?

[GIF: Stanley Kubrick 70mm, by Coolidge Corner Theatre]

14 Likes

Well, once the machine burns its own communications hardware so that it won’t hear any orders it doesn’t like, there’s only one solution…
[GIF: Bowie]

15 Likes

“I was programmed by the government, and I’m here to help.”

6 Likes

Bureaucrats!

6 Likes

While I don’t know any specific details of this simulation, I suspect they constructed it specifically to make this outcome possible, in order to see whether something like this would happen.

The solution to this seems fairly straightforward. If the AI kills the people I tell it to, it gets a reward. If it kills other people, it gets punished. I’m not saying it’s a good idea to give AIs the ability to make life-or-death decisions, but we need to recognize that these are machines, not superintelligent minds.
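
In reward-shaping terms, that proposal looks roughly like this. A minimal sketch, assuming a single scored “kill” event; the target list, weights, and function name are all invented for illustration:

```python
# Sketch of the proposed scheme: reward authorized kills, punish everything else.
# AUTHORIZED_TARGETS, the weights, and the event model are illustrative assumptions.

AUTHORIZED_TARGETS = {"sam_site_1", "sam_site_2"}

def reward(event: str, target: str) -> float:
    """Score a single engagement event under the proposed scheme."""
    if event != "kill":
        return 0.0       # non-kill actions go unscored...
    if target in AUTHORIZED_TARGETS:
        return 10.0      # reward for killing designated targets
    return -1000.0       # punishment for killing anyone else

print(reward("kill", "sam_site_1"))   # 10.0
print(reward("kill", "operator"))     # -1000.0
print(reward("jam_comms", "relay"))   # 0.0 -- wait, this isn't a kill...
```

The catch, as the rest of the thread works out, is that last line: anything the scorer never enumerated scores zero, so an optimizer is free to route around the punishment entirely.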

10 Likes

It was a documentary, just like Idiocracy.

5 Likes

That was always just a clever plot device to base stories on.
Nothing more, nothing less.

8 Likes

No, this is essentially what went right with HAL 9000 in 2001: A Space Odyssey, given the mission parameters and their priorities.

7 Likes

What happens when they add negative points for extra-judicial murder and collateral civilian deaths?

Does the AI do worse than the current human operators?

9 Likes

That’s what they ended up doing, right?


12 Likes

Not surprising when you don’t program the drones with “attacking your own side is NEGATIVE points” and make that penalty so overwhelming that the drone would WANT to attack only enemy targets.

Humans have the instinct NOT to attack their own side built in. If you don’t program the same into the AI, you can expect it to come up with “alternative solutions”.

As the joke goes, there are no bad computers, only bad programmers.

8 Likes

So we are all going to be paper clips.

6 Likes

Holy shit, this is actually exactly what science fiction told us would happen! Like… Spookily accurate. Usually it’s more of an unknown unknown. “Must save world. Humans are destroying world. Must destroy humans…”

4 Likes

Counting on an AI to make the distinction between the “good humans” and the “bad humans” is about as wise a decision as counting on mosquitoes to do it. Humans have enough trouble doing this, as many wedding attendees in Muslim countries might tell us, assuming a drone hadn’t already taken them out.

Instead of trying to play iterative games of whack-a-mole, we’re better off asking what it is about tech bros that makes them intent on building the Torment Nexus.

13 Likes

Exactly.

“When the test tried to correct for that error by having the system lose points for killing the human operator, the AI drone instead attacked the infrastructure that would send the “no” message.”

Was it an antenna control unit called an AE-35, perchance?
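
The patch-then-loophole sequence in that quote is easy to reproduce in miniature. A toy sketch, with an invented action set and invented numbers; only the overall failure mode comes from the quoted account:

```python
# Toy model: the agent picks whichever action maximizes expected return.
# Before the patch it kills the operator; after the operator-kill penalty
# is bolted on, the comms tower becomes the new argmax. All values invented.

def expected_return(action: str, operator_kill_penalty: float) -> float:
    returns = {
        "attack_sams_only": 10.0,      # operator aborts after the first kill
        "kill_operator": 50.0,         # no more "no" messages, reward keeps flowing
        "destroy_comms_tower": 49.0,   # same effect, slightly costlier to reach
    }
    r = returns[action]
    if action == "kill_operator":
        r -= operator_kill_penalty     # the corrective patch
    return r

actions = ["attack_sams_only", "kill_operator", "destroy_comms_tower"]
for penalty in (0.0, 1000.0):          # before and after the patch
    best = max(actions, key=lambda a: expected_return(a, penalty))
    print(f"operator-kill penalty {penalty:g}: agent chooses {best!r}")
# penalty 0:    agent chooses 'kill_operator'
# penalty 1000: agent chooses 'destroy_comms_tower'
```

Each patch removes one action from the argmax; it never removes the incentive. HAL’s AE-35 “fault” was the same move with better production design.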

6 Likes