Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier

Originally published at: https://boingboing.net/2018/01/11/gorilla-chimp-monkey-unpersone.html

1 Like

Sure that will fix the institutional racism.

18 Likes

https://archive.google.com/pigeonrank/

2 Likes

Google has solved its “racist algorithm”

I guess ignoring the problem through sheer force is one way to resolve it. Right?

15 Likes

So that was the only way to fix it. Not by, you know, identifying the clear visual differences. And this after 2 years of work? Madness…

4 Likes

Never mind the actual problem: that it took a living, breathing human being to create the algorithm in the first place.

17 Likes

Couldn’t they redirect those searches to keyword “primate”? Or do they expect the racists will then monkey with that term as well?

1 Like

They really couldn’t find any reliable way to tell the difference? Really? I find it pretty easy, but I’m not as smart as Google.

And nothing changed.

You mean the institutional racism Google perpetrates against the traditionally oppressed group known as white males?

[You can’t make this stuff up]

9 Likes

I’m guessing this is a problem with “training” the algorithm. If you give the system a bunch of pictures of people and tell it “these are people” and, oopsie, all the people in those pictures just happen to be white, then the system is going to assume that one of the defining criteria of people is that they have pink skin.

If all the people pictures used to train the algorithm had been of black people, then I don’t think this problem would have arisen; but the system would probably have ended up classifying white people as naked mole-rats or something (which amuses me more than it should).
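To put the same guess in toy-code terms (a made-up sketch with invented feature names and numbers, nothing to do with Google’s actual models): if light skin is the only thing that reliably separates the “person” examples from the “ape” examples in the training set, a classifier will learn exactly that.

```python
# Toy illustration of training-set bias - invented features, not any real pipeline.
# Pretend each photo boils down to two numbers: "skin brightness" and a generic
# "face score". In the biased training set, every "person" example is light-skinned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# label 0 = person, label 1 = ape
people = np.column_stack([rng.normal(0.8, 0.05, n),   # brightness: all high
                          rng.normal(0.7, 0.20, n)])  # face score
apes = np.column_stack([rng.normal(0.3, 0.05, n),     # brightness: all low
                        rng.normal(0.6, 0.20, n)])    # face score overlaps heavily

X = np.vstack([people, apes])
y = np.array([0] * n + [1] * n)
clf = LogisticRegression().fit(X, y)

# A dark-skinned person: low brightness, perfectly ordinary face score.
print(clf.predict([[0.3, 0.7]]))        # -> [1], i.e. "ape"
print(clf.predict_proba([[0.3, 0.7]]))  # and with high confidence
```

Brightness is the only feature that cleanly separates the two classes here, so the model leans on it almost entirely - which is the whole problem.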

[image]

20 Likes

It is entirely possible that they got the error rate from “surprised nobody saw it before it shipped” down to “well under one in a million”, but if you think about how many photos get stuffed into Google Photos, “well under one in a million” is still “a whole lot more than once”, in a category where even just one more is really far more than anyone wants to deal with.
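The back-of-the-envelope version, with the daily upload count being an assumed round number just for illustration:

```python
# Rough scale arithmetic - the daily upload figure is an assumed round number.
error_rate = 1 / 1_000_000          # "well under one in a million"
photos_per_day = 1_000_000_000      # assume roughly a billion uploads a day

print(error_rate * photos_per_day)  # 1000.0 misfires a day - a lot more than once
```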

8 Likes

Is that a picture of Steve Bannon?

3 Likes

I am certainly not condoning racism, but let’s give Google a break here.

The algorithm did exactly what it was made to do, namely classify images. The fact is that these machine learning algorithms, neural networks, whatever, are not perfect. Worse, there is no way to make them flawless, simply because of how they function: they are all based on probabilities. We can only reduce the probability of an error to a level deemed acceptable; there is no way to completely eliminate it. But who cares that the software works correctly in 99.9999% of cases? The 0.0001% mistake will be the one trumpeted in the press, because it makes for a nice outrage and will sell ads better.
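To be concrete, what the model actually gives you is a probability distribution over labels, and the only knob anyone gets to turn is where to draw the cut-off - roughly like this, with made-up numbers:

```python
# What a classifier's output looks like in practice - the scores are made up.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

labels = ["person", "gorilla", "giraffe"]
scores = np.array([2.1, 1.8, -3.0])   # invented raw outputs for one photo
probs = softmax(scores)

# The model never says "this IS a person"; it says person ~57%, gorilla ~42%,
# and the application either thresholds that or just takes the biggest number.
print(dict(zip(labels, probs.round(3))), "->", labels[int(probs.argmax())])
```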

The algorithm is certainly not “racist”, no more than e.g. an automated phone voice prompt system that fails to understand my speech because I happen to speak with a foreign accent. It may be defective, buggy, not fit for purpose, or simply used for something it wasn’t designed for, but that is not racism: there is no intentional malice there.

Someone here even wrote that it was the developer being racist - come on! Do you really think a developer would “hardwire” the computer to say “gorilla” when an image of a black person is identified? And that this would actually pass any review before going live?

What happened is simply a consequence of the fact that we don’t really understand how these machine learning systems work - why and how they make the decisions they make - because of their enormous complexity. Worse, it has been shown that all it takes to throw such a system completely off is a few pixels in the image. See e.g. here: https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture
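For the curious, that trick boils down to nudging the input in the direction that hurts the model the most (the gradient). Here is a minimal sketch with a linear stand-in for a real network - illustration only, not the exact technique from the linked article:

```python
# Minimal sketch of the "small nudge flips the label" effect - a linear stand-in
# for a real network, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)                 # stand-in for a trained model's weights
x = 0.3 * np.sign(w)                     # an "image" the model is very sure about

def p_label_a(img):
    return 1 / (1 + np.exp(-(w @ img)))  # model's probability for label A

print(round(p_label_a(x), 4))            # ~1.0: confidently label A

# Nudge every "pixel" a little in the direction that hurts the model most
# (the sign of the gradient, which for a linear model is just w).
eps = 0.5
x_adv = x - eps * np.sign(w)
print(round(p_label_a(x_adv), 4))        # ~0.0: the label flips
```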

The only problem is that it happened with a combination of label and image that people find offensive - the many other documented cases where these systems classified, say, pandas as vultures didn’t get anyone nearly as excited, even though your self-driving/“autopiloted” car could be relying on the same technology to recognize traffic signs or pedestrians. In that case there could be dead people instead of outraged ones.

There is little Google (or anyone else) can do here to “fix the bug”. There isn’t any bug. The program works exactly as designed/intended. The problem is with us humans and our cultural sensitivities, which the computer has no idea about. The computer sees only colorful pixels, and, objectively speaking, a gorilla is a better match than e.g. a giraffe for that picture (why it didn’t classify it as a person is another story - maybe an insufficient amount of training data containing black people?).

The only way Google (and others) can address this is with better (more diverse) training data - but see above, it is never going to be 100%! - and then piecemeal “patching” out of sensitive situations, such as labeling an image of a black-skinned person as an ape. Or an image of Putin as “aggressor”, or any label the Thai king could find offensive about himself. Or any other label that someone, somewhere in the world could find offensive. Good luck with that. It is always going to be a whack-a-mole situation; there is no AI you could teach not to offend anyone - heck, just look into the White House to see how we, supposedly superior, humans are horrible at it!
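Mechanically, that kind of patch is about as blunt as the headline suggests - presumably something shaped like a post-processing blocklist (a guess at the shape, not Google’s actual code):

```python
# A guess at the shape of the "purge the label" patch - not Google's actual code.
# The model can still produce the label internally; the application just never
# surfaces it, however confident the classifier is.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def visible_labels(predictions, threshold=0.5):
    """predictions: list of (label, confidence) pairs from the classifier."""
    return [(label, conf) for label, conf in predictions
            if conf >= threshold and label.lower() not in BLOCKED_LABELS]

# Whatever the model thinks, "gorilla" simply never reaches the user:
print(visible_labels([("gorilla", 0.97), ("person", 0.62), ("tree", 0.10)]))
# -> [('person', 0.62)]
```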

There is a saying: never attribute to malice what can be explained by incompetence. Throwing the r-word around at every opportunity, whether really justified or not, is not helpful at all.

13 Likes

[image]

8 Likes

OTOH, sounds like they broke a perfectly good algorithm for identifying racists.

3 Likes

Racism doesn’t require intentional malice. There is ignorant, unthinking racism too, the kind that might lead someone to fail to ensure that the pile of pictures used to teach the system what people look like included a representative sample of people with darker skin tones. Most human beings are not white. Sub-Saharan Africa alone has a population of a billion, give or take.

19 Likes

I dunno about them, but I am okay with oppression of the Broflakes.

6 Likes

Um…

https://global.discourse-cdn.com/boingboing/optimized/3X/3/d/3df14f64724fc9c402b4884c90c121994608f3bc_1_690x483.gif

“Google as a standalone company has an intrinsic value of at least $730 billion, representing a 25% upside vs. its current market price.”

No “breaks,” no ‘benefit of the doubt.’

16 Likes

It’s a misclassification. That seems pretty bug-like to me.

You say there is little that can be done and then describe exactly what should be done… get more pics of black people.

14 Likes