Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist


#1

Originally published at: https://boingboing.net/2018/10/11/garbage-conclusions-out.html


#2

I do not welcome our sexist AI/bot overlords.


#3

Too late-- they have already “captured” and “executed” you.


#4

You’re so moody on Thursdays. But I like it.


#5

Whether an AI is slow or (like this one) fast, the patriarchy always knowingly or blindly nurtures it by feeding it sexist garbage.


#6

An algorithm is known by the company it keeps.


#7

Do you work at Cyberdyne? Asking for an AI/bot friend…


#8

It was a feature, not a bug.


#10

I think these attempts are still valuable, since machine learning makes hidden biases more obvious. A human may discriminate against women, but if asked he will rationalize, claiming other reasons not to hire them. A computer analyzing all the hiring decisions he has taken part in won’t try to hide the patterns it finds. With the bias out in plain view, it is then easier to train the staff to avoid basing decisions on statements that ought to be irrelevant. (Assuming the staff is honest and trying to find the best applicants, and the bias is unintentional.)
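The point about patterns being out in plain view can be made concrete: once decisions are machine-readable, even a crude tally exposes a disparity no interviewer would admit to. A minimal sketch, with entirely invented records:

```python
# Toy audit of historical hiring decisions. Because the record is
# machine-readable, the disparity is trivial to surface. Data invented.
decisions = [
    {"gender": "f", "hired": False},
    {"gender": "f", "hired": False},
    {"gender": "f", "hired": True},
    {"gender": "m", "hired": True},
    {"gender": "m", "hired": True},
    {"gender": "m", "hired": False},
]

def hire_rate(records, gender):
    """Fraction of applicants of the given gender who were hired."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

print(f"female hire rate: {hire_rate(decisions, 'f'):.2f}")  # 0.33
print(f"male hire rate:   {hire_rate(decisions, 'm'):.2f}")  # 0.67
```

Whether anyone acts on the tally is the human part of the problem, as the next reply points out.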


#11

Yes and no. The algo down-ranking the word ‘women’, for example, isn’t necessarily obvious. Unless a human A) notices something wonky in the output AND B) knows how to go looking for causes AND C) cares enough to execute B) then the bias will be unchallenged. It’s most likely that a human is just going to respond “well, this is what the algo is giving us.”

Note that in this case, for example, Amazon worked on it for four years, and in the end just biffed it in the bin. Few organisations have Amazon’s resources. Most organisations would build it, test it for a few weeks, call it good and ram it into production.
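Step B), knowing how to go looking, is only even possible when the model’s weights are inspectable at all (a big “if” with neural nets). A hypothetical toy scorer, nothing like Amazon’s actual system, with invented weights:

```python
# Hypothetical audit of a linear resume scorer's term weights.
# Bias like the down-ranking of "women's" only surfaces if someone
# actually goes looking for it. All weights invented.
term_weights = {
    "python": 1.2,
    "leadership": 0.8,
    "women's": -0.9,   # e.g. "captain of the women's chess club"
    "softball": -0.4,
    "executed": 0.6,
}

def flag_suspicious(weights, watchlist, threshold=-0.1):
    """Return watchlisted terms the model penalises beyond the threshold."""
    return {t: w for t, w in weights.items()
            if t in watchlist and w < threshold}

flagged = flag_suspicious(term_weights, {"women's", "softball", "diversity"})
# flags "women's" and "softball" as penalised watchlist terms
```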


#12

No, I’m strictly a Yoyodyne Propulsion Systems man.


#13

This is a brilliant illustration of survivorship bias.


#14

If only they’d left the training to an actual Amazon…:thinking:


#15

Yeah, but Amazon sucks to work for, the really top engineers avoid it like the plague. So their resources just meant their swarm of middle grade engineers had the budget to keep trying for a few years longer is all.


#16

Stalin: quantity has a quality all its own


#17

Reading the article, it looks like Amazon have used entirely the wrong approach to designing this system. The description of the program “learning” patterns makes it sound like they have used a neural network and trained it on their current decisions.

That’s one of the infuriating problems with big data analysis- neural networks are known to have issues that make them unsuitable for several tasks, but these lessons constantly have to be relearned as more and more people jump on the Big Data bandwagon.

Specifically, the issues with NN in this instance are:

  • Opacity- it’s inherently difficult to tell what parameters the network is optimised for.
  • Known biases in the training sample.

In the fields that have been using data modeling the longest (pharma and credit analysis), there’s a wealth of data and experience in how to deal with these problems. For instance, a neural network would be impossible to get approval for use in those fields unless the model drivers could be shown not to be illegally discriminatory, and they would have to be updated and recalibrated as part of a controlled process, not allowed to “evolve” as this one at Amazon seems to.

And on the second point- where the training sample is biased in a known way, there are methods to solve that as well, such as reject inference.

Seeing problems that have solutions still cropping up like this is sheer stupidity.
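On that second point, one of the simplest corrections for a known sampling bias is reweighting: if a group is under-represented in the training data relative to the real applicant pool, weight its examples up so the model doesn’t learn the skew as signal. This is a generic sketch with invented proportions, not the specific reject-inference procedure used in credit scoring:

```python
# Generic sketch of correcting a known sampling bias by reweighting.
# Proportions are invented for illustration.
from collections import Counter

sample = ["m", "m", "m", "m", "f"]      # composition of the training data
population = ["m", "m", "m", "f", "f"]  # composition of the real applicant pool

def reweight(sample, population):
    """Per-group weights so the weighted sample matches the population mix."""
    s, p = Counter(sample), Counter(population)
    n_s, n_p = len(sample), len(population)
    return {g: (p[g] / n_p) / (s[g] / n_s) for g in s}

weights = reweight(sample, population)
# "f" examples count double (0.4 / 0.2 = 2.0); "m" examples are discounted.
```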


#18

Yes, it seems better as an HR diagnosis tool than an HR replacement. Probably no need for NNs though, classical statistics would work fine for that kind of thing.
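For that diagnostic use, a plain two-proportion z-test on hire rates is often all you need. A stdlib-only sketch with invented counts:

```python
# Classical-statistics HR diagnostic: two-proportion z-test comparing
# hire rates between two groups. Counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z(hired_a, total_a, hired_b, total_b):
    """z statistic and two-sided p-value for a difference in hire rates."""
    p_a, p_b = hired_a / total_a, hired_b / total_b
    pooled = (hired_a + hired_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(hired_a=12, total_a=200, hired_b=30, total_b=200)
# A small p-value suggests the hire-rate gap is unlikely to be chance alone.
```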


#19

This topic was automatically closed after 5 days. New replies are no longer allowed.