Elon Musk and Stephen Hawking call for ban on “autonomous weapons”

[Read the post]

Spoilsports.

4 Likes

Seems like an arbitrary distinction about what counts as a “revolution” in warfare. What about the bow, or metallurgy, or armored vehicles, or the ability to drop explosives from the sky?

None of which is to say that I’m looking forward to autonomous killbots.

2 Likes

note to self: sell Cyberdyne shares.

8 Likes

Just as the healthcare reform act is now referred to as Obamacare, I think everyone should only use the term “killbots.” Maybe it will help remind people just what they are and the only reason for using them.

4 Likes

Brannigan: “Killbots? A trifle. It was simply a matter of outsmarting them.”
Fry: “Wow, I never would’ve thought of that.”
Brannigan: “You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down. Kif, show them the medal I won.”

19 Likes

Thank you.

7 Likes

Well, I would agree with not arming true AI, but I’m all for programmable thinking machines. Maybe I am being an optimist, or maybe I am being naive, but I think such machines would cut down on collateral damage in war and on suspect deaths in policing.

Robots have no cognitive biases. They don’t hate. They don’t flinch. They don’t get angry or overreact. They don’t have authority complexes. While a human will react to a PERCEIVED threat or at the first sign of danger, robots have the luxury of waiting and analyzing to see how things play out.

Example: a police robot responds to a domestic disturbance call. The suspect comes at it with a knife. A human cop would shoot; the robot cop could tase. The suspect points a gun at the robot. A human cop would shoot; the robot cop could tase. If the suspect starts shooting at the robot cop or points the gun at other humans, then depending on the situation the robot cop could still tase, or use lethal force if a set of parameters is met. Unlike a human cop, it won’t flinch from being shot at and will hit its target.
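
In rough code terms, that escalation idea might look something like the sketch below - purely hypothetical, with made-up inputs like `bystanders_at_risk` and `target_confirmed`, and not a description of any real system:

```python
# Hypothetical escalation-of-force policy: prefer waiting or tasing, and
# only escalate to lethal force when every "last resort" condition is met.
# All names and conditions here are illustrative assumptions, not a real API.

from dataclasses import dataclass
from enum import Enum, auto


class Response(Enum):
    OBSERVE = auto()   # keep waiting and analyzing
    TASE = auto()      # non-lethal option
    LETHAL = auto()    # last resort only


@dataclass
class Threat:
    weapon: str               # e.g. "knife", "gun", "none"
    firing: bool              # suspect is actively shooting
    bystanders_at_risk: bool  # other humans in the line of fire
    target_confirmed: bool    # confirmed threat, not just a perceived one


def choose_response(t: Threat) -> Response:
    """The robot doesn't fear for its own life, so a knife or a pointed
    gun still gets a non-lethal response; lethal force needs every
    condition in the last-resort parameter set to be true."""
    if t.weapon == "none":
        return Response.OBSERVE
    if t.firing and t.bystanders_at_risk and t.target_confirmed:
        return Response.LETHAL
    return Response.TASE


if __name__ == "__main__":
    print(choose_response(Threat("knife", False, False, True)))  # TASE
    print(choose_response(Threat("gun", True, True, True)))      # LETHAL
```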

In war it is even more useful. It has the luxury of confirming its targets before pulling the trigger. It can wait until it is actually shot at before returning fire.

Humans are extremely fallible for a number of reasons. We think we are great at making decisions, but we aren’t, especially under stress. The controversial drone killings were all done with humans pushing the buttons. Machines can be much, much more selective in killing than humans. They may even have the option to self-destruct in a bad situation rather than trying to blast their way out, hurting civilians in a desperate attempt to survive. They won’t ever snap and kill someone just because they can.

Of course, it all depends on what they are programmed for. I am assuming the best. You could program for the worst and then it would be a terrible thing.

1 Like

YOU HAVE 20 SECONDS TO COMPLY.

But seriously: the biggest problem with arming AI? Killing with even less accountability than cops and soldiers have now.

10 Likes

Dude. Killbots.

Possibly in the hands of our next president…

The horror.

1 Like

Maybe it’s just me, but I feel like “Killbots” doesn’t have enough impact. Can we do better than this? Maybe MurderBots or BloodBots or something…

1 Like

I’d say more accountability. It would all be recorded. You could even review the robot’s thought process for every shooting. If Google cars can navigate through traffic, robots can figure out whether someone is or isn’t a threat. We could make their programming much less aggressive than your average human’s. In policing they would make lethal measures a last resort. You could even program it not to defend itself with lethal force, and to only take lethal action if other humans are in danger. Hell, you could even go the T2 route and shoot to wound, which is a legal liability with human cops.

Think about it. Right now, any time a suspect has a weapon, the law basically gives cops free rein to shoot. What does a robot cop care? It’s bulletproof. Or if not bulletproof, it doesn’t care whether it gets shot. It has no fear. It won’t just whip out a gun and start shooting for fear of its life. It won’t automatically assume a black person is up to no good. Facial recognition means it won’t mistake you for someone else. And if there is even a small level of uncertainty about its actions, it doesn’t shoot its gun.
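
To make the “reviewable thought process” concrete, here is a rough sketch of what I mean - every decision logged with the evidence and confidence behind it, and anything below a certainty threshold defaulting to not firing. The fields and the threshold are made up for illustration:

```python
# Hypothetical decision log for after-the-fact review: each call records
# what the robot saw, how certain it was, and what it chose to do.
# Nothing here refers to a real system; values are illustrative only.

import json
import time

CERTAINTY_THRESHOLD = 0.99  # assumed: below this, never use force


def decide_and_log(observation: dict, log: list) -> str:
    certainty = observation.get("threat_confidence", 0.0)
    if certainty < CERTAINTY_THRESHOLD:
        action = "hold"                # any uncertainty -> don't shoot
    elif observation.get("others_in_danger"):
        action = "lethal_last_resort"  # only when other humans are at risk
    else:
        action = "non_lethal"          # never defends itself with lethal force
    log.append({
        "timestamp": time.time(),
        "observation": observation,
        "certainty": certainty,
        "action": action,
    })
    return action


if __name__ == "__main__":
    audit_log: list = []
    decide_and_log({"threat_confidence": 0.70, "others_in_danger": True}, audit_log)
    decide_and_log({"threat_confidence": 0.995, "others_in_danger": True}, audit_log)
    # The entire "thought process" can be reviewed after the fact:
    print(json.dumps(audit_log, indent=2))
```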

1 Like

Anything that is more descriptive and truthful than “autonomous weapons,” which sounds so benign.

1 Like

By that logic drone killings should have more accountability and less collateral damage than most other kinds of military actions. After all, every step of the process from intelligence gathering to final bomb delivery can be dispassionately evaluated and documented by supposedly objective people whose judgement isn’t impaired by the heat of combat.

In practice this hasn’t been the case at all. If a bunch of marines showed up to a wedding party and massacred dozens of unarmed men, women, and children with assault rifles, they’d (hopefully) be hauled off for court-martial. When a drone strike yields the same result, we might give a little lip service to how we’re going to try our best to avoid that kind of “tragic error” in the future, but nobody ever faces serious consequences for it.

10 Likes

You make it sound all warm and cuddly, almost as good as droning or humanitarian bombing.

Note: some already-existing land mine technology could arguably count as such under this proposed ban.

1 Like

Well, it IS war, even though our ROE have DRAMATICALLY reduced civilian casualties compared to past wars, where the rape, killing, and plundering of civilians was just part of being at war. So if the higher-ups allow ROE that kill off 50 innocents to get one bad guy, that is a policy issue.

Of course, if you program it to kill indiscriminately, or with no regard for collateral damage, it isn’t going to be any better than people doing the killing. I am mainly thinking of policing right now, but still, if programmed with a high level of care to avoid civilians, it will do a better job than any human. It will also cut down on friendly fire.

I don’t even know if what I am talking about is possible with our current level of machines that can work on their own. But it is coming.

1 Like

Based on history:

  1. Archery
  2. Iron
  3. Massed infantry tactics (the phalanx)
  4. Light cavalry
  5. Better massed infantry tactics (Roman maniples)
  6. Heavy cavalry
  7. Mounted archery
  8. Gunpowder
  9. Mechanized warfare
  10. Air power

Nuclear weapons didn’t seem to appreciably change how wars are actually fought, whereas all the above did. (They may change whether a war is fought, but not how.)

If you read the article, the concern isn’t Skynet. The concern is the arms race where Russia, China, and the US each compete to make the most lethal autonomous systems possible and then some amount of them end up on the black market.

So yes, there are obvious benefits to autonomous systems. There are also downsides. The argument is that the potential downside of proliferating autonomous death machines is worse than the upsides.

Humans being the optimistic creatures they are, it’s usually worthwhile to call out the drawbacks.

6 Likes

Even if that was a valid excuse, it’s not even accurate. For example, we regularly send drone strikes into Pakistan, a country which is ostensibly our ALLY. And that’s not my point anyway. My point is that adding layers of autonomy between the person making the kill decision and the person who gets killed ultimately reduces accountability.

A gun-toting soldier or police officer should theoretically be able to answer for every life they take. If they kill someone they shouldn’t have then they are at least supposed to answer for it in court. A pilot operating a UAV faces much, much less accountability for the lives they take. Do you really think a software engineer is going to face more serious consequences if one of the killing machines he helps program takes an innocent life?

4 Likes

what about trained velociraptors though. that’s a great idea, right?

1 Like