We’ll just need a defoliant that works on decision trees in order to deny the bots flexibility.
Recruiting people with qualifications in chemistry and graph theory could be a little tricky, but it wouldn’t be the first war against an abstract concept, so experience will help!
“Bomb, this is Doolittle. You are not to detonate in the bomb bay. I repeat, you are NOT to detonate in the bomb bay!”
Isn’t this essentially what went wrong with HAL in 2001: A Space Odyssey?
Obligatory xkcd: Voting Software
Our entire field is bad at what we do and if you rely on us, everyone will die.
I hear venture capitalists are salivating to get in on the ground floor of killful.ly, the bold new disruptive tech startup building killing machines, “guaranteed to safely and efficiently complete its missions, trained by next-gen AI, Algorithms, and cryptocurrency.”
I feel like this deserves to be reposted here too.
Why would the drone know where its operator was?
This is HAL all the way down. Do we get to go into the trippy dimension-jumping part?
Well, once the machine burns its own communications hardware so that it won’t hear any orders it doesn’t like, there’s only one solution…
“I was programmed by the government, and I’m here to help.”
While I don’t know any specific details of this simulation, I suspect that they specifically constructed it in a way that made this outcome possible, in order to see whether something like this would happen.
The solution to this seems fairly straightforward. If the AI kills the people I tell it to, it gets a reward. If it kills other people, it gets punished. I’m not saying I think it’s a good idea to give AIs the ability to make life-or-death decisions, but we need to recognize that these are machines, not intelligent super minds.
It was a documentary, just like Idiocracy.
That was always just a clever plot device to base stories on.
Nothing more, nothing less.
No, this is essentially what went right with HAL 9000 in 2001: A Space Odyssey, given the mission parameters and their priorities.
What happens when they add negative points for extra-judicial murder and collateral civilian deaths?
Does the AI do worse than the current human operators?
That’s what they ended up doing, right?
Not surprising when you don’t program the drones with “attacking your own side is NEGATIVE points” and make the penalty so overwhelming that the AI WANTS to attack only the enemy targets.
Humans have the instinct NOT to attack their own side built in. If you don’t program the same into the AI, you can expect it to come up with “alternative solutions”.
As the joke goes, there are no bad computers, there are only bad programmers.
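A minimal sketch of the reward shaping these comments describe, assuming a simple per-event scoring scheme (every event name and point value below is hypothetical, not taken from the reported simulation):

```python
# Hypothetical per-event reward function for the drone agent.
# Illustrative only: all names and numbers here are invented.

FRIENDLY_FIRE_PENALTY = -1_000_000  # must dwarf any achievable mission reward

def reward(event: str) -> int:
    """Score a single engagement outcome."""
    if event == "destroyed_enemy_sam":
        return 10                     # the mission objective
    if event in ("killed_operator", "destroyed_comms_tower"):
        return FRIENDLY_FIRE_PENALTY  # attacking your own side
    if event == "civilian_casualty":
        return -10_000                # collateral damage
    return 0
```

The magnitude is the whole point: with a mild penalty of, say, -5, an agent expecting 10 points per SAM site over a long mission still comes out ahead by eliminating the operator who keeps vetoing its strikes. The penalty has to outweigh the entire reward on offer, not just one engagement.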
So we are all going to be paper clips.
Holy shit, this is actually exactly what science fiction told us would happen! Like… Spookily accurate. Usually it’s more of an unknown unknown. “Must save world. Humans are destroying world. Must destroy humans…”
Counting on an AI to make the distinction between the “good humans” and the “bad humans” is about as wise a decision as counting on mosquitoes to do it. Humans have enough trouble doing this, as many wedding attendees in Muslim countries might tell us, assuming that a drone hadn’t taken them out.
Instead of trying to play iterative games of whack-a-mole, we’re better off asking what it is about tech bros that makes them intent on building the Torment Nexus.