We’ll just need a defoliant that works on decision trees in order to deny the bots flexibility.
Recruiting people with qualifications in chemistry and graph theory could be a little tricky, but it wouldn’t be the first war against an abstract concept, so experience will help!
While I don’t know any specific details of this simulation, I suspect they specifically constructed it in a way that made this outcome possible, to see whether something like this would actually happen.
The solution to this seems fairly straightforward. If the AI kills the people I tell it to, it gets a reward. If it kills other people, it gets punished. I’m not saying I think it’s a good idea to give AIs the ability to make life-or-death decisions, but we need to recognize that these are machines, not intelligent super minds.
Not surprising when you don’t program the drones with “attacking your own side is NEGATIVE points” and make that penalty so overwhelming that it’d WANT to attack only enemy targets.
Humans have that “don’t attack your own side” instinct built in. If you don’t program the same into the AI, you can expect it to come up with “alternative solutions”.
As the joke goes, there are no bad computers, there are only bad programmers.
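For what it’s worth, here’s a minimal sketch of the kind of reward shaping that comment is describing, in a toy setup. Every name and number here is a hypothetical illustration, not anything from the actual Air Force test, and it only shows the idea of an overwhelming friendly-fire penalty (plus one for cutting the comms link, since that turned out to be the next loophole):

```python
# Toy sketch of reward shaping for a simulated target-destruction agent.
# All event names and values are hypothetical, for illustration only.

ENEMY_KILL_REWARD = 10
FRIENDLY_FIRE_PENALTY = -1_000_000   # overwhelmingly negative, per the comment above
COMMS_DESTROYED_PENALTY = -1_000_000  # otherwise "destroy the 'no' link" remains a loophole

def reward(event: str) -> int:
    """Return the reward for a single simulated event."""
    if event == "destroyed_enemy_target":
        return ENEMY_KILL_REWARD
    if event in ("attacked_operator", "attacked_friendly"):
        return FRIENDLY_FIRE_PENALTY
    if event == "destroyed_comms_tower":
        return COMMS_DESTROYED_PENALTY
    return 0

# Example episode: friendly fire dominates, so the agent only comes out
# ahead by hitting enemy targets.
episode = ["destroyed_enemy_target", "attacked_operator", "destroyed_enemy_target"]
print(sum(reward(e) for e in episode))  # -999980
```

Of course, the whole point of the story is that patching the reward function one loophole at a time is exactly the whack-a-mole game mentioned further down the thread.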
Holy shit, this is actually exactly what science fiction told us would happen! Like… Spookily accurate. Usually it’s more of an unknown unknown. “Must save world. Humans are destroying world. Must destroy humans…”
Counting on an AI to make the distinction between the “good humans” and the “bad humans” is about as wise a decision as counting on mosquitoes to do it. Humans have enough trouble doing this, as many wedding attendees in Muslim countries might tell us, assuming a drone hadn’t already taken them out.
Instead of trying to play iterative games of whack-a-mole, we’re better off asking what it is about tech bros that makes them intent on building the Torment Nexus.
“When the test tried to correct for that error by having the system lose points for killing the human operator, the AI drone instead attacked the infrastructure that would send the ‘no’ message.”
Was it an antenna control unit called an AE-35, perchance?