Completely leaves out that it’s entirely powered by the body parts of homeless people.
Surely there is some science and research behind spending taxpayer money on even one of these.
Do city folk there know of any future crime waves these may be used in?
It feels as if they’re already handing sentences to future encounters.
In Detroit this year, COVID relief money was going to be used to purchase gun detection tech (ShotSpotter); it was struck down, and the police had to purchase said tech out of their own coffers. Wah wah.
Update: Detroit City Council approves contract to expand ShotSpotter surveillance technology
That, and the owner class.
An ethics question for sure, mostly along the lines of “do cops really need this kind of hardware?” but not a question of autonomous robots killing humans. These machines are remotely operated, Asimov’s laws do not apply here (as if they ever really did anyway).
But, like others have said: the ability to judge whether lethal force should be used, whether you have the right target, and whether you have a clear shot is not something drone operators always get right, so why should we expect cops to?
“Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD.”
Quite aside from the fact that that sentence doesn’t really make sense, it’s pretty clear that in the real world this will shortly translate to “We will use a killer robot whenever we feel like it.”
“What’s the situation, sergeant?”
“Guy’s in his house, sir. Behind a closed door.”
“No way to tell what he’s got in there?”
“No, sir.”
“I mean, he could have, I dunno, a bazooka? An M-60? Eight child hostages?”
“Yessir.”
“So what are our use of force options?”
“We could go in there with a ram, ballistic shields …”
“And?”
“We could get shot. Blown up. Stabbed.”
“Imminently.”
“Imminently stabbed, sir.”
“Are you thinking what I’m thinking, sergeant?”
“Killer robot?”
“Killer robot.”
“OK, guys, you heard the captain. We’re using the robot again.”
I expect new Denver Police Union T-shirts with a picture of a Dalek and the text “We Get Up Early to Eliminate the Crowds”
All that, and it would be for an unpaid parking ticket.
For a house down the street.
Nada (They Live) disclosed his use of bubblegum; the police should be held to the same standards.
So the progression has been: “We’re getting robots, but don’t worry, they positively, absolutely will never be used for anything but bomb disposal. Never ever - we promise! Oh, and we might get some drones, for spotting suspects… and surveillance. Ok, we might kill someone, but only if there’s definitely no other alternative, and they’re an immediate danger to others. According to us.” Next, it’ll be, “We used the robot to kill the suspect because we didn’t know what he was up to, and we didn’t want to ‘risk the life’ of an officer finding out.”
Yeah, my immediate thought was that the weapon itself really does eliminate the justification for using it… so now they need a new justification (“protecting the public!”) or, scarily, no justification at all. (Or perhaps, more accurately, giving up the pretext that it was justified.) It seems like, in response to the whole BLM movement, the cops have often gotten worse - feeling a need to be seen exerting their power (i.e. through brutality) over the population they police.
The problem is, that’s only true in the minds of the public - it’s not true legally speaking, nor is it true in terms of police policy or operations…
They’ve been very clear - it’s only for situations of “imminent” danger… you know, after the cops have called for the robot to be delivered to the scene and have deployed it, setting up an encounter where they expect the suspect will be an “imminent” danger… at some point.
“Our public camera surveillance system algorithms flagged the suspect as engaging in suspicious behavior, so we deployed the robot. If he didn’t want to get shot, he shouldn’t have engaged in suspicious behavior. No, we don’t disclose to the public what behaviors our algorithm treats as suspicious.”
For your family first, assholes.
they always start with “disaster response”
“… we started with a random network and then trained it on video of other times we killed people”
Additional info, context, links to the proposal, statement The Register obtained from the SFPD.
I agree with your larger point, but would like a citation on this part. I’m still skeptical that robots would be any good for this at all, not counting area-effect weapons or, of course, drones that shoot guided ordnance.
San Francisco Police Department wants to use robots to kill
Me too, sometimes! All I have to do is ask?
San Francisco Police Department wants to use robots to kill suspects
Judge Dredd style! No need for a trial, or presumption of innocence - straight up killing people suspected of crime. Ratchet up the police impunity, expand that license to kill! Robots are coming for our jobs.
We need laws and we need them now, people. Yeesh.
Yeah, that’s always the go-to justification for the technology by the people developing it, whether on the academic level or in industry, even as it clearly is going to get used by the military. (“Why are we building our ‘disaster response’ robot with a giant gun? To, uh, blast debris out of the way!”) The actual police departments that buy these things liked to give iron-clad assurances that they were going to have limited use in bomb disposal, to keep anyone from objecting to the purchases. Once they actually got them, objections from outside parties mean nothing, so we’re just going to see further escalation, because who’s going to stop them?
Yeah, law enforcement and courts are already using neural networks to codify prejudices into completely opaque systems that everyone can pretend are without prejudice (because algorithms!).
In this context, you end up with:
The cops: “We can’t disclose what behaviors are flagged as suspicious (because we have no fucking clue ourselves, but we treat the output as unimpeachable truth anyways).”
Independent analysis: “As best we can tell, the most ‘suspicious’ behavior a person can engage in, according to the system, is being Black…”
I was reading about software doing gait analysis to spot people walking suspiciously 20 years ago… gods know who is actually making use of it now… We really are about one step away from “system detects ‘suspicious’ person at sensitive location, police are automatically sent out, clueless police deploy robot to kill ‘suspicious’ person without even knowing why they’re there in the first place.”
I read that line and, like @anon11942186, the Enforcement Droid Series 209 (ED-209) boardroom scene came to mind. The robot pictured looks like an old ’80s bomb disposal robot made before BattleBots was a thing…
SFPD will be improving the microphones on their bot, right? Right!?