Pentagon reassures public that its autonomous robotic tank adheres to "legal and ethical standards" for AI-driven killbots

Originally published at: https://boingboing.net/2019/03/09/atlas-shot.html

2 Likes

Brought to you by the same group of people that made tear gas illegal for use against military humans but just fine for use against plain humans.

25 Likes


I’m rethinking the Luddite philosophy.

27 Likes

autonomous

autonomous

autonomous

10 Likes

Actually, the USA disputed that law in Vietnam and used tear gas and vomiting agents against the enemy. (And if you think that was humane, it was just to render them helpless before going in with guns to shoot them.) Not a good example of the US military following the rules.

15 Likes

Don’t worry, we always have a human in the loop.

29 Likes

The code of ethics for computing professionals developed by the ACM used to say “avoid harm”. But an update last year changed this to “avoid unintentional harm”. The implication seems to be that it’s now okay to harm someone with your technology as long as that’s what you intended.

Ethics aren’t what they used to be.

31 Likes

That is why I chose the comparison. Civilian LEO use of robots has different standards, I suppose.

3 Likes

So they’d be referring to Asimov’s Three Laws of Robotics? :thinking: …maybe not

4 Likes

I like how somebody left their water bottle on the multi-million-dollar death machine… it’s not a coffee table!

6 Likes


Such a relief.

1 Like

Reading about the kind of PTSD that military drone pilots can get despite facing no risk to themselves, I got to thinking about centuries of patriotic tradition going down history’s toilet. Proud warriors would invite the next generation to put their own lives at risk to protect their families from foreign invasion.

But now the stakes are much diminished. Honest recruitment ads would invite the next generation to put their consciences at risk, for a few more percentage points on the GNP.

10 Likes

Good to know that we will adhere to the highest ethical standards when deploying murderbots.

3 Likes

M247 Sergeant York, automatic targeting: “…In February 1982 the prototype was demonstrated for a group of US and British officers at Fort Bliss, along with members of Congress and other VIPs. When the computer was activated, it immediately started aiming the guns at the review stands, causing several minor injuries as members of the group jumped for cover.” — from Wikipedia’s M247 Sergeant York article; see its reference 18 (a PDF) too.

A number of volunteers I used to work with came from the defence industry. One relayed a story about an autonomous targeting test that went wrong, and from his description I think it was the one above. As he told it, the incident was significantly more dramatic than the text above suggests (the truth likely lies somewhere between the two accounts), and the only thing that saved the audience was a series of failures — though I can’t back that up with a reference. After a reset it eventually targeted the ground and began firing, ending the demonstration.

13 Likes

Ethical slippery slope:

  1. Autonomous robots will only kill after a human has approved the shot.
  2. Autonomous robots will only kill after a human has approved a kill zone (only enemies/terrorists will be moving around in this kill zone, therefore all shots are legal). Think of the approaches to a military base.
  3. Autonomous robots will only kill after a human has approved a conflict zone.
  4. The definition of “conflict zone” becomes flexible.
  5. Conflict zones come to include peaceful protests, or just non-immediate cooperation.

Ultimately autonomous robots will also become self directed landmines, making certain areas impossible to enter or traverse.

Stanislaw Lem predicted the natural endpoint of this stuff decades ago - which is that conflict zones become unsurvivable for anything that is not an autonomous robot, and they are in an escalating arms race of absurdity.

18 Likes

The old lie: dulce et decorum est pro patria mori. There were honourable people who charged at machine guns on horseback, waving a sabre, but they did not last long. Gas and minefields were proposed back then as clean solutions that could deny territory to others without risking your own people. They were even seen as the humane option, as they left survivors, where men with rifles fought to the last man. They are the precursors of these robots: a thing sent against an enemy rather than a human.

The idea that sending a real person into the battlefield is somehow honourable lives on yet, despite everything they do when they get there. The soldier is gallant provided they are risking their own lives, even when this could be avoided.

I have no solutions here. Until we have no need of weapons, perhaps the best weapons are ones that never have to be used. Deploy robots, but tell the enemy they are somewhere out there, and that they will kill anyone they see until they are stood down. That makes them like a minefield, but without the clearing up afterwards.

Oh, why can’t people just be a bit nicer?

2 Likes

Killbots are easy to deal with.

8 Likes