NYC will require businesses to prove A.I. employment software isn't racist or sexist

Originally published at: NYC will require businesses to prove A.I. employment software isn't racist or sexist | Boing Boing


This is exactly why machine learning is a really bad tool for this task.

Automated decisioning systems have been around for a few decades now, and one of the most important properties they have had to demonstrate is explainability: you have to be able to show which factors went into a decision, and with machine learning that’s really difficult. Neural networks and similar systems are black boxes by design, so you can’t tell whether the system has simply constructed a proxy for race or gender and is deciding based on that.

And that’s before we even get into the problem of training these systems on past biased decisions in the first place.
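For contrast, here’s a minimal sketch of what explainable decisioning looks like (the factors and weights are made up for illustration, not from any real system): every factor’s contribution to the final score can be itemized, which is exactly what a black-box model can’t give you.

```python
# Toy "explainable" scorer: illustrative factors and weights only,
# not any real hiring system.
WEIGHTS = {"years_experience": 2.0, "relevant_degree": 5.0, "referral": 3.0}

def score_with_explanation(applicant):
    """Return the total score plus an itemized breakdown per factor."""
    contributions = {f: WEIGHTS[f] * applicant.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 4, "relevant_degree": 1, "referral": 0}
)
# 'why' shows exactly where the 13.0 came from:
# {'years_experience': 8.0, 'relevant_degree': 5.0, 'referral': 0.0}
```

A rejected applicant can be told precisely which factors drove the outcome, and a regulator can audit the weights directly.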


Cool. They should expand that requirement to healthcare, insurance, and banking/investment software, too:


Good. So this is a de facto ban, right?


The problem with machine learning is the usual one of “rubbish in, rubbish out.”

Your application contains clues to your “racial” status in your name, your address and the college you attended. The machine knows that people with particular sorts of names, addresses and colleges are less likely to be hired. It does not know that this is due to the structural racism of the historical, human-powered hiring process, so it assumes that such people are less likely to be hired because they are less suitable.

It shuffles your application to the bottom of the heap.
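You can watch that mechanism happen in a few lines of Python. In this sketch (entirely synthetic data and a deliberately simple “model”; all the numbers are made up), the model never sees group membership, only a zip code that correlates with it, yet it reproduces the historical bias anyway:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic applicants: zip code is a proxy for group, and the
# historical "hired" labels were biased against group B.
def make_applicant():
    group = random.choice("AB")
    # ~90% of group B lives in zip 2; ~90% of group A in zip 1.
    zip_code = 2 if (group == "B") == (random.random() < 0.9) else 1
    skill = random.random()
    hired = skill > (0.5 if group == "A" else 0.7)  # the baked-in bias
    return group, zip_code, hired

data = [make_applicant() for _ in range(10_000)]

# A naive "model" that never sees group: historical hire rate per zip.
tally = defaultdict(lambda: [0, 0])
for group, z, hired in data:
    tally[z][0] += hired
    tally[z][1] += 1
rate = {z: h / n for z, (h, n) in tally.items()}

# Score each applicant by their zip's historical hire rate: group B
# ends up ranked lower even though "group" was never an input.
avg = {g: 0.0 for g in "AB"}
count = {g: 0 for g in "AB"}
for group, z, hired in data:
    avg[group] += rate[z]
    count[group] += 1
avg = {g: avg[g] / count[g] for g in "AB"}
print(avg)  # group A's average score comes out clearly higher
```

The model looks “race-blind,” but the zip code carries the bias through intact.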


Any company that uses A.I. in its hiring should be required to state that publicly, so that I won’t feel bad if I game their system.


The problem with the people who program and design application processes is the intent behind evaluating any of that information in the first place. Name, address, and degree could all be simple yes/no questions. The racism, sexism, nationalism, and classism come out when those of us who don’t fit the stereotypes show up for an interview.

Oh, the hours I have spent enjoying the look of shock on hiring managers’ faces. :roll_eyes: Then there was the fun of fielding leading questions designed to determine exactly how their careful screening had failed. :imp: :weary: Now they probably ask for a photo or a video chat to save time in making sure candidates are “a good fit.” :woman_shrugging:t4:


You’d need one AI to scrub out any data that correlates strongly with race or gender before it gets submitted to the HR system.
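A rough sketch of that idea (pure Python, toy data, Pearson correlation with an arbitrary 0.3 cutoff; real fairness tooling is more involved, since a proxy can be a combination of features rather than a single column):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def scrub(features, protected, threshold=0.3):
    """Drop any feature column too correlated with the protected attribute."""
    return {name: col for name, col in features.items()
            if abs(pearson(col, protected)) < threshold}

protected = [0, 0, 0, 1, 1, 1]          # group membership (toy data)
features = {
    "zip_code": [1, 1, 2, 2, 2, 2],     # strong proxy -> gets dropped
    "skill":    [5, 6, 4, 5, 6, 4],     # uncorrelated -> survives
}
print(scrub(features, protected))  # only "skill" reaches the HR system
```

The catch, of course, is that this only catches single-column proxies; a model can reconstruct the protected attribute from several weakly correlated features combined.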

This topic was automatically closed after 5 days. New replies are no longer allowed.