Amazon trained a sexism-fighting, resume-screening AI with sexist hiring data, so the bot became sexist

Yes and no. The algo down-ranking the word ‘women’, for example, isn’t necessarily obvious. Unless a human A) notices something wonky in the output AND B) knows how to go looking for causes AND C) cares enough to actually do B), the bias will go unchallenged. Most likely a human is just going to respond “well, this is what the algo is giving us.”

Note that in this case Amazon worked on it for four years and in the end just biffed it in the bin. Few organisations have Amazon’s resources. Most would build it, test it for a few weeks, call it good and ram it into production.