When it comes to “Predictive Policing”, using algorithms and models to attempt to predict how likely someone is to commit a crime, or to algorithmic sentencing, I’m right with you.
What those programmes are attempting to do is look for suspicion, and it’s going to be impossible to convince someone that, even though the computer flagged you as a match for suspicious behaviour, you aren’t actually suspicious. Even with a 0.01% false positive rate, if you routinely run it against a population of a million people, you’re going to ruin the days of a hundred random non-suspicious people every single day by treating them as suspicious. Dressing “He seems like a wrong’un to me, send 'im down” up in data science doesn’t make it a less unjust sentiment.
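To put a rough number on that, here’s a minimal back-of-the-envelope sketch in Python. The 0.01% false positive rate and the million-person population are just the illustrative figures from the paragraph above, not statistics from any real deployment.

```python
# Rough arithmetic for the claim above: even a tiny false positive rate,
# applied to a large population every day, flags a lot of innocent people.
# Both inputs are the illustrative figures from the text, not real numbers.

false_positive_rate = 0.0001          # 0.01% expressed as a fraction
population_screened_per_day = 1_000_000

flagged_innocent_per_day = false_positive_rate * population_screened_per_day
print(f"Innocent people flagged per day: {flagged_innocent_per_day:,.0f}")        # 100
print(f"Innocent people flagged per year: {flagged_innocent_per_day * 365:,.0f}")  # 36,500
```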
But this is about recognising someone’s face, not trying to predict a person’s future behaviour on the basis of a bunch of data points. And there’s nothing inherently suspicious about happening to have a face that looks like the face of a wanted criminal. You go “oh it’s not them” and move on to the next one. It’s a tool that can help by screening down a vast number of faces to a small number that can be reviewed manually.
It’s a tool that can be misused, like any tool. If what the police were doing was dragging off everyone the system highlighted, kicking the shit out of them, and casually murdering some of them, then we’d all be up in arms and the newspaper headlines would be ABOUT the base rate fallacy, not woefully ignorant of it. But these are British police we’re talking about here, not American ones.
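For what it’s worth, the base rate fallacy mentioned above can be made concrete with the same sort of sketch. Assuming, purely for illustration, that 1 in 10,000 of the people walking past the camera is actually on the watchlist, a 99.9% true positive rate, and the 0.01% false positive rate from earlier, Bayes’ rule tells you what fraction of alerts are really the wanted person:

```python
# Positive predictive value of a face-match alert, via Bayes' rule.
# All three inputs are illustrative assumptions, not figures for any real system.

prevalence = 1 / 10_000        # fraction of passers-by actually on the watchlist
true_positive_rate = 0.999     # chance the system alerts on a genuine match
false_positive_rate = 0.0001   # chance the system alerts on an innocent face

p_alert = (true_positive_rate * prevalence
           + false_positive_rate * (1 - prevalence))
p_wanted_given_alert = true_positive_rate * prevalence / p_alert

print(f"P(actually wanted | system alerts) = {p_wanted_given_alert:.1%}")  # ~50%
```

Under those made-up numbers, roughly half the alerts are innocent people, which is exactly why the alert has to be the start of a manual check rather than the end of one.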