Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist

Originally published at: https://boingboing.net/2018/10/11/garbage-conclusions-out.html

5 Likes

I do not welcome our sexist AI/bot overlords.

13 Likes

Too late-- they have already “captured” and “executed” you.

8 Likes

You’re so moody on Thursdays. But I like it.

7 Likes

Whether an AI is slow or (like this one) fast, the patriarchy always knowingly or blindly nurtures it by feeding it sexist garbage.

8 Likes

An algorithm is known by the company it keeps.

8 Likes

Do you work at Cyberdyne? Asking for an AI/bot friend…

5 Likes

It was a feature, not a bug.

1 Like

I think these attempts are still valuable, because machine learning makes hidden biases more obvious. A human may discriminate against women, but if asked he will rationalize it, claiming other reasons not to hire those women. A computer analyzing all the hiring decisions he has taken part in won’t try to hide the patterns it finds. With the bias out in plain view, it is then easier to train the staff to avoid basing decisions on factors that ought to be irrelevant. (Assuming the staff are honest, trying to find the best applicants, and the bias is unintentional.)
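For what it’s worth, a minimal sketch of what “out in plain view” can look like: fit a simple linear text classifier to past decisions and read off the terms it penalises. The toy resumes and outcomes below are invented for illustration; a real audit would use the employer’s own historical data.

```python
# Toy sketch: a linear classifier trained on past (biased) hiring decisions
# exposes the pattern directly in its learned weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",
    "women's rugby team captain, java developer",
    "led robotics society, python engineer",
    "women's coding society organiser, python engineer",
]
hired = [1, 0, 1, 0]  # the historical decisions the model is learning from

vec = TfidfVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Negative coefficients are terms that drag a resume's score down.
terms = vec.get_feature_names_out()
most_penalised = sorted(zip(clf.coef_[0], terms))[:3]
print(most_penalised)  # 'women' shows up with a clearly negative weight
```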

7 Likes

Yes and no. The algo down-ranking the word ‘women’, for example, isn’t necessarily obvious. Unless a human A) notices something wonky in the output, AND B) knows how to go looking for causes, AND C) cares enough to actually do B), the bias will go unchallenged. It’s most likely that a human will just respond, “well, this is what the algo is giving us.”

Note that in this case, for example, Amazon worked on it for four years, and in the end just biffed it in the bin. Few organisations have Amazon’s resources. Most organisations would build it, test it for a few weeks, call it good and ram it into production.

7 Likes

No, I’m strictly a Yoyodyne Propulsion Systems man.

6 Likes

This is a brilliant illustration of survivorship bias.

3 Likes

If only they’d left the training to an actual Amazon…:thinking:

5 Likes

Yeah, but Amazon sucks to work for; the really top engineers avoid it like the plague. So their resources just meant their swarm of middle-grade engineers had the budget to keep trying for a few years longer, is all.

4 Likes

Stalin: quantity has a quality all of its own

2 Likes

Reading the article, it looks like Amazon have used entirely the wrong approach to designing this system. The description of the program “learning” patterns makes it sound like they have used a neural network and trained it on their past hiring decisions.

That’s one of the infuriating problems with big data analysis: neural networks are known to have issues that make them unsuitable for several tasks, but those lessons constantly have to be relearned as more and more people jump on the Big Data bandwagon.

Specifically, the issues with NN in this instance are:

  • Opacity: it’s inherently difficult to tell which inputs the network has ended up keying on, or what it is really optimising for.
  • Known biases in the training sample.

In the fields that have been using data modeling the longest (pharma and credit analysis), there’s a wealth of data and experience in how to deal with these problems. For instance, it would be impossible to get approval to use a neural network in those fields unless the model drivers could be shown not to be illegally discriminatory, and the model would have to be updated and recalibrated as part of a controlled process, not allowed to “evolve” the way this one at Amazon seems to have.
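As a rough illustration of what “shown not to be illegally discriminatory” can look like in practice, here is a minimal sketch of an adverse-impact check along the lines of the US four-fifths rule. The group labels and decisions are made up, and a real review involves far more than one ratio, but it’s the kind of gate a model has to pass before it gets anywhere near production in those fields.

```python
# Minimal adverse-impact check: compare selection rates between two groups.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """True if the lower selection rate is at least `threshold` times the higher."""
    lo, hi = sorted((selection_rate(group_a), selection_rate(group_b)))
    return hi == 0 or lo / hi >= threshold

# 1 = model recommends interview, 0 = rejected (hypothetical model outputs)
men   = [1, 1, 1, 0, 1, 1, 0, 1]
women = [1, 0, 0, 1, 0, 0, 1, 0]
print(four_fifths_check(men, women))  # False: the model drivers need scrutiny
```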

And on the second point: where the training sample is biased in a known way, there are established methods to deal with that as well, such as reject inference (inferring outcomes for the applicants who were screened out and never got an outcome recorded, so the training sample better reflects the whole population).
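A minimal sketch of the simplest flavour of reject inference, on made-up data: train on the applicants whose outcomes were observed, infer labels for the ones who were screened out, then retrain on both so the model sees something closer to the whole population. Credit-scoring practice uses more careful variants (fuzzy augmentation, parcelling), but the shape is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Accepted applicants: features plus an observed outcome (1 = good, 0 = bad).
X_accepted = rng.normal(size=(200, 3))
y_accepted = (X_accepted[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Rejected applicants: features only; their outcomes were never observed.
X_rejected = rng.normal(loc=-0.5, size=(100, 3))

base = LogisticRegression().fit(X_accepted, y_accepted)

# Infer labels for the rejects from the base model (a hard cutoff here;
# "fuzzy" reject inference would instead weight each reject by its probability).
y_inferred = (base.predict_proba(X_rejected)[:, 1] >= 0.5).astype(int)

# Retrain on the combined, less biased sample.
X_all = np.vstack([X_accepted, X_rejected])
y_all = np.concatenate([y_accepted, y_inferred])
final_model = LogisticRegression().fit(X_all, y_all)
```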

Seeing problems that already have known solutions still cropping up like this is sheer stupidity.

3 Likes

Yes, it seems better as an HR diagnosis tool than an HR replacement. Probably no need for NNs, though; classical statistics would work fine for that kind of thing.
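For instance, a minimal sketch of the diagnostic version, using nothing fancier than a chi-squared test on historical hire/reject counts by gender (the counts here are invented):

```python
from scipy.stats import chi2_contingency

#           hired  rejected
counts = [[  120,    380],   # male applicants
          [   40,    460]]   # female applicants

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, p={p_value:.2g}")
# A tiny p-value means the hire rate differs by gender far more than chance
# would explain: a flag for humans to investigate, not a verdict.
```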

4 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.