San Francisco supervisors vote "yes" on police robots' use of deadly force

“Robot dogs don’t kill, coders do, your honor”

10 Likes

JFC, first it was cops claiming to be afraid of unarmed people with their backs turned. Now we’ll be hearing about the “failed software update / unpatched model” defense after they kill people. :grimacing:
Help us, Hackers for Humanity…:nerd_face:

locked out hack GIF by MANGOTEETH

10 Likes

I’m slightly heartened that two top officials need to sign off on this, as that could greatly reduce its actual use, and also limit future expansion of its use. I mean, it’s still fundamentally awful and cops are totally going to lie about the circumstances to get approval, but it stops it from being something that gets used routinely. (At least until they change their policy again.) It’s kind of sickening to be thankful for that, but…

Oh, and the latest “reassurance” is that it won’t have bullets, but explosives with which to kill suspects. So collateral damage, ahoy!

4 Likes

At least it isn’t like RoboCop 2, where they controlled the murder bot with a deliberate crack addiction. This is much better!

:cry:

7 Likes

This is all so wrong, they got it all wrong. They’re supposed to have two prototypes to demonstrate before the committee: one robot and one robot/human hybrid. Then the robot is supposed to shoot one of the committee members, so the committee chooses the hybrid. That’s how it’s supposed to work.

1 Like

Dr. Asimov is not smiling, or wouldn’t be if he were still alive.

1 Like

But there’s still a human operator at the controls, no? These robot things can’t have advanced far enough to make their own decisions like that. There is a joystick with a red button on top. A trained police officer is at that control. I can’t see how they get a pass.

I mean, yes, I can totally see it. But how stupid are the politicians to approve this BS?

Again… obviously very dumb… basically, everyone from manufacturer to policymakers to operator has a hand in this… remarkable.

I found this video to be pretty thought-provoking regarding Star Wars droids, self-awareness, and how they are treated like total shit. A very thoughtful examination with very good points.

8 Likes

I am going to do the bad thing: I am going to try to defend this. Not this particular decision, about which I know too little to comment (and I suspect the worst), but we could discuss the choices involved in designing and deploying robots with a deadly option.

Suppose we have a Boston Dynamics robot with a gun. How would you control it? This is a lot like controlling a Mars rover. You haven’t the time or the bandwidth to tell it “Move your left leg. Now move your right leg.” You tell it to move in general terms.

You cannot tell it to “Shoot the Bad Man”. Its reactions are probably better than yours. It could have twenty eyes looking in all directions, many guns, and tiny reaction times. It should respond to a threat autonomously. But this could be subject to many pre-coded rules, such as “Do not shoot any person who shows their empty hands”. Where possible, disable the weapon rather than the person. If not, shoot to disable rather than to kill. The robot should be tested with many simulations, just as self-driving cars were tested on artificial roads before being let loose on public ones.
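To make the pre-coded rules concrete, here is a toy sketch. Every name and rule in it is invented for illustration; a real system would need vastly richer perception and many more rules:

```python
from dataclasses import dataclass

@dataclass
class Subject:
    """What the robot's perception stack believes about one person in the scene."""
    shows_empty_hands: bool
    holds_weapon: bool
    is_threat: bool

def choose_response(s: Subject) -> str:
    # Hard constraint first: never shoot anyone showing empty hands,
    # and never shoot anyone not assessed as a threat.
    if s.shows_empty_hands or not s.is_threat:
        return "hold_fire"
    # Where possible, disable the weapon rather than the person.
    if s.holds_weapon:
        return "target_weapon"
    # Last resort: shoot to disable rather than to kill.
    return "shoot_to_disable"
```

A rule ladder like this can be run through millions of simulated scenes before a single robot is deployed.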

An early robot will not have consciousness as we recognise it. It will not shoot because it is afraid. It will not (or should not) be racist, and this could be tested in simulations before deployment. The training data should be made public (there is not much point knowing the robot will never shoot anyone showing the palms of their hands if you don’t tell anyone). It will not get a kick out of being tough, or kill the weirdos because God told it to. As with self-driving cars, the job is not to have zero accidents, but to be significantly better than the existing solution. And that bar seems rather low on BoingBoing most days.
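And the “test it in simulations” step could look something like this toy check, again with everything invented for illustration:

```python
import random

def lethal_rate(policy, scenes):
    """Fraction of simulated scenes where the policy chooses its most severe response."""
    decisions = [policy(scene) for scene in scenes]
    return decisions.count("shoot_to_disable") / len(decisions)

# Stand-in policy and scene generator, purely for illustration.
def toy_policy(scene):
    return "shoot_to_disable" if scene["armed_threat"] else "hold_fire"

def make_scenes(n, seed):
    rng = random.Random(seed)
    return [{"armed_threat": rng.random() < 0.1} for _ in range(n)]

# Two scene sets drawn from the same distribution, differing only in the
# subject's appearance. A fair policy should give near-identical rates;
# a measurable gap fails the pre-deployment test.
rate_a = lethal_rate(toy_policy, make_scenes(10_000, seed=1))
rate_b = lethal_rate(toy_policy, make_scenes(10_000, seed=2))
assert abs(rate_a - rate_b) < 0.02, "decision rate varies with group"
```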

That Star Wars Droids video is about human storytelling. Humans are too squishy and fragile for long space travel. Real droids may go to the stars without us. I wish them well.

Why? Why not a Taser, Mace, or any other less-lethal gadget?

Why do we want to jump to shooting the first person the robot encounters, given that that person is innocent until proven guilty? The person in the robot’s sights may not speak English, or may have mental health issues that make them a split second slower to comply than the robot’s programming allows.

9 Likes

Or, and I’m just spitballing here, none of the above.

10 Likes

Why does the robot have to shoot at all? It’s a robot…it could be both difficult to damage and expendable. So self-defense isn’t a priority and in theory you could try making one that focuses on other less deadly ways of apprehending suspects. Why start with the assassin droid?

13 Likes

JFC, if they really are allowing autonomous robots to use lethal force, that is an egregiously horrendous decision.

The robot could take care of its own movement, aiming, etc., but IMO you’d have to have a human operator who can bear the responsibility of pressing the button to take the shot.
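Roughly this shape, as a toy sketch (all names invented): the robot can automate perception and aiming, but the only code path that fires runs through a human decision.

```python
class Operator:
    """The human who carries the responsibility for the shot."""
    def confirms(self, threat_description: str) -> bool:
        answer = input(f"Threat assessed: {threat_description}. Authorize? [y/N] ")
        return answer.strip().lower() == "y"

def engage(threat_description: str, operator: Operator) -> str:
    # The robot may track, aim, and assess autonomously, but there is
    # deliberately no branch here that fires without human confirmation.
    return "fire" if operator.confirms(threat_description) else "hold"
```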

Cop judgement is certainly not great - hopefully, the lack of personal danger would help improve that - but letting an algorithm decide to shoot someone is hellish dystopia territory.

9 Likes

This is Hollywood talk: “You have one second to comply”. But there is a good question behind it.

I introduced the straw man of a Boston Dynamics robot with a gun to give an idea of what a real robot might do. That does not stop us from replacing the gun with a taser. Tasers are presented as the Vulcan nerve pinch that puts people to sleep, but they can and do kill, and they can be ineffective. They may not be fast enough if you have a gunman with a hostage at a distance. To start the discussion, if a policeman has a gun, I give the robot a gun. We improve things as we go.

I do not suggest shooting the first person it sees: I suggest the opposite. The robot should take in the whole scene and identify threats and non-threats. It should not shoot people because they can’t speak English; its reactions may be so fast that no one has had time to say anything, and its response to speech may be faulty. It will respond to visible threats, which may include not shooting at hostages with an empty gun superglued to their hand. Responding to speech may be too risky, so the controller may handle that rather than the robot.

If we don’t think about it, the robots will be made in Texas by some billionaire who wants to keep America Strong and Pure, and the training data will never get into the public domain.

You’re talking about a level of AI that far surpasses autonomous vehicles, and we’re nowhere near the introduction of safe autonomous vehicles.

What would be far, far more effective than any weapon? A speaker and a microphone. Unlike on TV, the best tool for saving people in a hostage situation is communication.

14 Likes

Not what I said. I was talking about people who didn’t understand the robot’s commands.

“Freeze! Police robot! Show your hands!”
“No comprendo.”
Bang bang bang bang bang!

“Freeze! Police robot! Show your hands!”
“Sure. Just let me take off my mittens.”
Bang bang bang bang bang!

“Freeze! Police robot! Show your hands!”
“Begone, foul demon! This cross is my protec…”
Bang bang bang bang bang!

6 Likes

high five big hero 6 GIF

This is a much nicer idea.

5 Likes

… they’ll train the neural network with video of previous times cops killed people :vhs:

6 Likes

Is anyone else deeply tired of rhetorical invocations of the ‘unimaginable’ as preemptive and retroactive justification, when it’s so frequently intended to short-circuit discussion of matters that may be distasteful but are eminently imaginable?

It’s especially bad when the ‘extraordinary circumstance’ they are non-specifically gesticulating at is either left unspecified because it’s too ill-formed to survive definition, or is actually something common enough that you don’t need to imagine it because it happened last Tuesday. If your position is that mass shooters are too dangerous for SWAT teams without robotic support, quit emoting and just say that. Even with the best of intentions (by no means assured, but let’s pretend), you can’t possibly hope to formulate sound policy if you stop thinking and start feeling hard within a few sentences; and the profound unseriousness of presuming to craft a lethal-force policy for something you refuse to even speak of is just repulsive. When you go ahead and build policy on something, every part of it you elide is functionally a part you are lying about.

Be specific: a gunman, multiple gunmen, hostage situations, bombs (planted, vehicle-borne, or worn), use of chemical weapons, atypically high-density vehicular homicide. All of these are matters of historical record, not even requiring imagination. Go on, articulate them and start crafting your policy response. It’s not hard; it just requires a shred of intellectual honesty.

7 Likes

This was a great movie, but it does not need a sequel set in San Francisco.

4 Likes

This seems like a deeply unreasonable set of assumptions about how such systems will work, and we have a lot of good evidence from existing tech deployments. I’ll take a few of them and discuss alternative ways it can play out.

No, it will shoot because the existing corpus of training data is based on the beliefs of the cops who oversee the purchasing. It will be trained on what current experts call a high-risk situation. And the experts a large swathe of our society accepts for that role are the current murderous cops.

It doesn’t have to be, and you can test until the end of time and it won’t prepare you for racist implementation. To use an obvious example, Tasers don’t have the ability to determine race, but somehow manage to be consistently applied in a racist manner. That is even easier to have happen with nominally neutral complex systems. There was a really good study on traffic tickets in Cleveland. As will shock roughly no one, the tickets went disproportionately to black drivers. Early discussions focused on either the cops or the drivers, but it turned out black drivers were actually more likely to obey speed limits, and cops were roughly equally likely to ticket an individual speeder of either race. When they dug into the numbers, the bulk of the difference came from unequal deployment: streets with larger numbers of black drivers simply had more cops stationed on them. The people making those placements were looking at neutral numbers, seeing a higher rate of tickets, and assigning more cops to that area. So an explicitly racist history sparks a change that keeps reflecting in nominally neutral decisions for decades. It is incredibly likely that killer bots will be put in neighborhoods labeled high-crime now, and that won’t be a decision free of racism.
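The arithmetic of that feedback loop is simple enough to sketch (all numbers invented for illustration):

```python
# Every officer behaves identically: no individual bias in this toy model.
tickets_per_officer_per_week = 10

# Historical placement decisions put three times as many officers in one area.
officers_area_a = 12   # predominantly black neighborhood
officers_area_b = 4    # predominantly white neighborhood

tickets_area_a = officers_area_a * tickets_per_officer_per_week   # 120
tickets_area_b = officers_area_b * tickets_per_officer_per_week   # 40

# Three times the tickets with zero difference in any individual officer's
# behavior. Staff next year based on "where the tickets are" and the gap
# feeds itself.
print(tickets_area_a, tickets_area_b)
```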

It won’t be. They don’t even make most of the contracts for current police surveillance tech public, even where required by law. They will fall back on claims that a public training set would allow people to commit crimes without being caught.

And this is one of the examples where our tech history says we’ll almost certainly make mistakes. We regularly ship facial recognition systems (and even automatic sinks) that can’t manage detection tasks on darker-skinned people. Why should we assume the murderbot run by the least accountable gang will act any differently?

No, they won’t do it for those reasons; they’ll do it because the department needs to appear tough and a programmer with a god complex told them to.

And that is pretty heavily begging the question because it assumes they will be better. Police beta testing on the public is a bad idea.

8 Likes