Originally published at: San Francisco Police Department wants to use robots to kill suspects | Boing Boing
…
There’s a vital ‘splitting of hairs’ argument here, and it falls on the issue of how autonomous the robot is at the point of lethal force. There’s a scary spectrum from a legally responsible human pressing a button on a remote control panel as a remote gun’s trigger, all the way to writing a bunch of image-interpretation code that ends up algorithmically ‘pulling the trigger’. They need to be very clear about which end of this spectrum they’re permitting. Pessimistically, one may assume they aren’t even up to thinking about this distinction -sigh-
Not the gif I was going to use, but it’s exactly what I was going to say.
That’s not the only troubling implication in this article.
What are the police doing with bubble gum and why don’t they want us to know about it?
Obviously, robots will soon be killing kids blowing bubbles.
I…suppose you are probably right. Damn, that’s horrible.
I’d rather not be.
Impossible. A robot would never misidentify something.
(edited to fix the gif)
Yeah. I wish the SFPD had a little bit of your clarity instead of getting so excited about new ways to kill. It seems like military weapons are fun new toys for police departments.
Cops at war with the people they are allegedly here to protect. What else is new?
Can’t wait to get a hold of the white papers for the Knightscope k9 and see what laughable wireless security features they’ll have in place.
That ‘protect and serve’ police mantra is broken by the requirement to limit it to three words.
The missing word at the end is, of course, ‘ourselves’.
Now instead of officers getting away with murder because “I feared for my safety” it’s gonna be “I feared that the suspect was going to damage the remote-controlled robot if I didn’t have it shoot first.”
None of the above. Cops are rarely held legally liable for pulling the trigger of their own weapon. Giving them drones to do the dirty work? Oh, hell no. The extra layer of distance and unreality of the victim will make them more likely to shoot and less culpable when they do.
For example, what do you think the outcome would have been if ICE cops had had a dozen of these in Portland during the BLM protests that ran every night for over a year? They would have turned it into a free-for-all protester-killing video game is what.
I agree that distinction is important in a general sense, but I don’t think it needs to apply here.
This is a terrible idea even if a real cop is pulling the trigger remotely. Cops are terrible at hitting the right people in ideal conditions. To now give them the ability to try and kill “bad guys” through a blurry night vision camera lens with a 25° field of view and no depth perception? No fucking way.
Ever tried to drive a car or do any other skill-dependent physical thing through a camera? It’s incredibly difficult. Cameras are not the same as moving your eyes to a new location. They are a terrible, extremely misleading version of vision.
On top of that, we are to believe a robot can aim quickly and smoothly enough via remote control to hit a target that does not wish to be hit? Puh-leeze, SFPD. Get bent.
The military has similar robots for battlefield operations, so apparently killing “bad guys” isn’t the problem. The problem is that a police officer’s primary mission is to protect the public, not to deliver lethal force. If you shoot at some enemy soldiers and hit a different one than the one you were aiming for, that’s a whole different situation than shooting at a criminal suspect and hitting an innocent civilian.