I’m not understanding your logic at all.
In any of your scenarios, what happens if you remove the police altogether? Do the racially profiled neighborhoods continue to have higher crime rates than their surroundings, or does everything level out everywhere? If it's the latter, then the obvious problem is the police and not the people. Somehow I suspect that's not the case, considering the amount of hard data you can find on various crime statistics. (Which, obviously, are somehow racially biased.)
The issue here isn't the people who are victims; it's the software. We can feel sorry for the people and try to help them all we can, but if nothing is done about programs like this, it solves nothing. Realistically, technology is outpacing our broader perception of the problems: humans are decent at pattern recognition, so a computer must be, or at least could be, better. I think the point @Timoth3y was going for was: how much data do you need, and how far can you go, with a system like this? The #1 issue with this system is that the accused provides their own answers. I'm guessing that on bad days a lot of people would skew heavily toward psychopathic/violent tendencies on a psychological self-evaluation questionnaire. But an outside evaluator isn't without fault either; we are all biased in some way. Hopefully the professionals in the legal field are trained well enough to look past those biases and make rational choices.
Think about this example. I tell you a person has been brutally murdered with a claw hammer and the perpetrator is in custody. They were found at the scene holding the hammer, and there is a witness to the event. You have already formed a mental image of that situation. Now add more information: the victim and the perp were related. Perhaps the scenario you pictured has changed. What if the perp is underage? What if they show signs of long-term physical abuse? The scenario keeps changing. Software is only as good as the data we can give it. In a perfect world, I don't see software that could psychoanalyze you to the point of gauging your truthfulness as racist or any other -ist. I will agree that hiding it all behind patents and corporate-provided data is ill advised, but rigorous third-party testing could go a long way toward addressing those issues. As for the here and now, all programs like this should be discontinued and banned until such time as AI is advanced enough to be indistinguishable from a real person.