Jeeze. Even the computers are racist. It’s like the old RPG, Paranoia, but instead of the computer going mad it went full racist uncle.
Except that the people who are getting shot don’t have a backup six-pack of clones.
I’ll certainly agree that the color ranking in the game is spot on, though, and, going with your comment, the tendency of Friend Computer to see enemies everywhere and blame the Commies, Mutants and Traitors for its own failings… well, that speaks for itself.
Saying the decisions are being made by mathematical models is just a way to pretend the decision is purely objective. But where does the original data used to calculate the risk assessment come from? If you live in a high-crime area (ghetto), is that automatically a mark against you, and is that fair? In that case the computer is just reinforcing the long-standing racism already present.
Seriously, what the fuck is this shit?
This is some next level Orwellian, Minority Report, evil parallel universe, sci-fi dystopian bullshit right here.
Why are computer tests determining sentencing in the first place? It doesn’t matter WHAT their results are.
Yes and no? I mean, if I were buying insurance on something and I lived in a high-crime area, my premium would be higher… that’s not a coincidence, that’s based on a much greater chance of something happening to my property. I have a hard time seeing the difference between your tying “high-crime area (ghetto)” to racism and saying someone from Appalachia is therefore low-income and poorly educated. Statistically speaking, both are true.
Couldn’t we use this diagnostically and eliminate crime altogether?
#notAllComputers
All Bits Matter!
This is a fucking terrible argument, often used by straight-up racists.
If we go purely by statistics, there’s a higher chance that you’re going to be incarcerated if you’re a black male. Is this an argument that an average black person is more likely to be a criminal, and should be treated with much more scrutiny from the rest of society?
Should a black man pay a special “prison tax” to recoup some of the taxpayer costs of incarceration they’re more likely to use?
The “yes and no” here is a question of whether you think bigoted discrimination is a good thing.
Could we not encourage the quicker arrival of the inevitable dystopian hellscape by doing this stuff? This is like a cross between Idiocracy, where we get too dumb to do the thinking so the computers do it for us, and a choose-your-own Orwellian future where the boot on your neck is a statistical model. I think the combo works out to Brave New World, with the added insult that using soma gets you a prison sentence.
I’m sure that they excluded explicit race from their data set in order to “seem less racist”. I wonder, though: will that make the system more racist? What I mean is that the correlates of recidivism may be different in different groups. For example, lots of the questions from their survey are highly cultural. Also, is living in a high-crime neighborhood a weaker correlate for blacks than whites?
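If someone actually had the data, that last question is directly testable. Here’s a minimal sketch of how you might check it, with completely made-up data and hypothetical column names (group, high_crime_area, recidivated, nothing to do with the vendor’s actual inputs), using an interaction term in a logistic regression. A meaningful interaction coefficient would mean the same “neighborhood” variable carries a different weight for different groups:

```python
# Toy sketch of the "different correlates per group" question above.
# All data here is invented; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0/1 stand-in for two demographic groups
high_crime_area = rng.integers(0, 2, n)  # lives in a "high crime" neighborhood?

# Suppose the true effect of the neighborhood variable is weaker for group 1.
logit_p = -1.0 + 0.8 * high_crime_area - 0.5 * group * high_crime_area
p = 1 / (1 + np.exp(-logit_p))
recidivated = rng.binomial(1, p)

df = pd.DataFrame({"group": group,
                   "high_crime_area": high_crime_area,
                   "recidivated": recidivated})

# The interaction term asks exactly the question in my post:
# does "high crime neighborhood" predict recidivism differently per group?
model = smf.logit("recidivated ~ high_crime_area * group", data=df).fit(disp=False)
print(model.summary())
```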
No, they are more likely.
This is why I don’t like the idea of handing things over to automation. Not only does it not care about individual situations, it’s also only as impartial as its programming tells it to be, which in this case is not at all.
The problem here is that algorithmically it’s a black box and there seems to be no formal feedback, so the results are garbage.
But let’s say, hypothetically, someone developed an open-source model that incorporated feedback, was highly predictive of who would show up for their court dates, and even took the person’s financial assets into account to determine the optimal bail amount. Assume again that this system also consistently and significantly rated people of color as a higher risk than whites.
Would such a system be racist? Should it be used?
Part of me doesn’t like my own answer, but logically it seems that such a system would be acceptable, useful and less susceptible to bias than human judges.
Am I missing something?
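To make my own question a bit less hand-wavy: “would such a system be racist?” is at least something you can measure. One standard check is whether the error rates differ by group at whatever threshold the court uses. A rough sketch of that check, with invented data and field names (score, outcome, group), not anyone’s actual system:

```python
# Compare error rates across groups for a hypothetical risk model.
# Data and field names are invented for illustration only.
import numpy as np
import pandas as pd

def group_error_rates(df, threshold=0.5):
    """df needs columns: score (risk estimate, 0-1),
    outcome (1 = failed to appear / reoffended, 0 = didn't),
    group (demographic label)."""
    rows = []
    for g, sub in df.groupby("group"):
        flagged = sub["score"] >= threshold
        fpr = (flagged & (sub["outcome"] == 0)).sum() / max((sub["outcome"] == 0).sum(), 1)
        fnr = (~flagged & (sub["outcome"] == 1)).sum() / max((sub["outcome"] == 1).sum(), 1)
        rows.append({"group": g,
                     "false_positive_rate": fpr,
                     "false_negative_rate": fnr,
                     "base_rate": sub["outcome"].mean()})
    return pd.DataFrame(rows)

# Fake example data just to show the shape of the check.
rng = np.random.default_rng(1)
demo = pd.DataFrame({
    "group": rng.choice(["A", "B"], 10000),
    "score": rng.uniform(0, 1, 10000),
})
demo["outcome"] = rng.binomial(1, demo["score"] * 0.6)
print(group_error_rates(demo))
```

If the false positive rate for one group is much higher than another’s at the same threshold, “highly predictive overall” doesn’t settle the question.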
There’s no such open-source code available, and vendors generally haven’t been compelled to open their source to defense counsel.
Yes, it should be discontinued, and, no, it would not be a “useful tool” in the justice system.
I have no doubt that this was the intent behind the system: “Eliminate individual judgement, and you eliminate individual biases.” But as with mandatory sentencing, this just moves the problem from the unknowable inner state of an individual’s mind to a flawed and easily politicised process.
And in this example, nobody can have confidence in the results of the "calculation" because proprietary models are kept secret, and justice cannot be seen to be done.
Sure. In this case the methodology is terrible, so the results are garbage. Of course there is nothing in the article that indicates the system is less accurate or more racist than judges at setting bail.
My question is: what if the methodology was solid? What if the algorithms were peer-reviewed and the system was constantly being trained and updated with data from places that were not using it?
If that system still consistently and significantly rated people of color as a higher risk than whites should it be discontinued? Or would it be a useful tool?
Why are courts paying for it and using it without knowing whether it’s bunk? Seems that’s the point where scrutiny ought to be applied, not after Northpointe has swindled them. Northpointe (et al.) wouldn’t even need to reveal their algorithms, just provide some evidence of efficacy. How does the buyer benefit from any of this?
They might as well be buying patented witch-dunking stools.
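To be fair to the buyers, “evidence of efficacy” isn’t even hard to specify: hand over the risk scores plus what actually happened afterwards and grade the predictions, no peek at the algorithm required. A sketch of that, with fake scores and outcomes and standard off-the-shelf metrics:

```python
# What "evidence of efficacy" could look like without seeing the algorithm:
# the court collects black-box risk scores plus actual outcomes and grades
# the predictions. The scores and outcomes below are fake.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(7)
scores = rng.uniform(0, 1, 2000)                 # black-box risk scores, 0-1
outcomes = rng.binomial(1, 0.3 + 0.4 * scores)   # what actually happened

print("AUC (discrimination):", round(roc_auc_score(outcomes, scores), 3))
print("Brier score (calibration):", round(brier_score_loss(outcomes, scores), 3))
```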
Except that here a datapoint like “living in a high crime area” can only come from police records: did the police make more or fewer arrests than average in your neighborhood? If the police go out to white neighborhoods less because of racist stereotypes embedded in the police system, this data is going to be skewed. The result is that software built with this data is also going to be skewed (or racist, if you will).
Then there’s another problem, because the police (who fed the software racist data) use the same risk-assessment software to determine which neighborhoods to patrol. That causes them to visit black neighborhoods more, make more arrests there, and thus feed even more skewed data into the software.
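A toy version of that loop, just to show how little it takes: two neighborhoods with identical real crime rates, a history that starts slightly skewed, and patrols allocated from the recorded data. Everything here is invented; it’s not a model of any real department or any vendor’s software:

```python
# Toy simulation of the feedback loop described above: two neighborhoods with
# the SAME underlying crime rate, but patrols get allocated according to past
# recorded arrests, and you can only record crime where you're looking for it.
import numpy as np

rng = np.random.default_rng(42)
true_crime_rate = np.array([0.10, 0.10])   # identical in both neighborhoods
recorded_arrests = np.array([60.0, 40.0])  # but the historical data starts skewed
patrols_per_round = 100

for year in range(20):
    # "Risk assessment": send patrols in proportion to recorded arrests so far.
    patrol_share = recorded_arrests / recorded_arrests.sum()
    patrols = patrol_share * patrols_per_round
    new_arrests = rng.binomial(patrols.astype(int), true_crime_rate)
    recorded_arrests += new_arrests

print("Final recorded arrests:", recorded_arrests)
print("Final patrol split:    ", recorded_arrests / recorded_arrests.sum())
```

Even with this mild proportional rule the initial skew never corrects itself, because the data can never reveal that the neighborhoods are actually the same; a rule that concentrates patrols on the “highest risk” area makes it snowball outright.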