Rules for trusting "black boxes" in algorithmic control systems

I strongly disagree with O’Reilly on #3. Take the “unintentionally” racist hiring algorithm. The algorithm’s consumers want it to be “unintentionally” racist; otherwise they would already be doing something about their existing hiring process. The only thing they’re trying to do is speed the process up and make human beings less accountable for it.

“Don’t blame us, it’s the algorithm. Who knows why it only shows us resumes of white guys or women whose names could be mistaken for a man’s. In fact, I bet it has nothing to do with your name or anything on your resume. Do you have a political opinion on Facebook?”
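To make that mechanism concrete, here’s a toy sketch (entirely made-up data, feature names, and numbers, not anyone’s actual system) of how a screener trained on past hiring decisions quietly reproduces the bias baked into those decisions through a proxy feature like a name-derived signal:

```python
from collections import defaultdict

# Hypothetical historical outcomes: (name_reads_as_western_male, years_experience, hired)
history = [
    (1, 3, 1), (1, 2, 1), (1, 5, 1), (1, 1, 0),
    (0, 4, 0), (0, 6, 0), (0, 3, 0), (0, 7, 1),
]

# "Train" the simplest possible screener: the historical hire rate per proxy value.
hire_rate = defaultdict(lambda: [0, 0])  # proxy value -> [hires, candidates]
for proxy, _, hired in history:
    hire_rate[proxy][0] += hired
    hire_rate[proxy][1] += 1

def score(proxy, years_experience):
    # Experience is ignored entirely; the model just echoes the past pattern,
    # so it looks "neutral" while filtering on a stand-in for race and gender.
    hires, total = hire_rate[proxy]
    return hires / total

print(score(proxy=1, years_experience=2))  # 0.75: little experience, "right" name
print(score(proxy=0, years_experience=7))  # 0.25: lots of experience, "wrong" name
```

Nobody wrote “discriminate” anywhere in that code, yet the ranking is exactly the pattern the buyers didn’t want to look at in their own process.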

People who make these algorithms have a much higher responsibility to humanity than the low bar Tim set with #3.
