Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier

And tag/label them as PEOPLE.

16 Likes

There’s no guarantee the training set was, say, all-white or devoid of black people. Nor is that the only way this could happen. Google’s algorithm just lacks the tools to confidently distinguish some photos of black people from other primates.

  • Odds are very good that this algorithm wasn’t written; it was trained. That means it’s a statistical construct whose results not even the “authors” could precisely predict. Algorithms that simply enshrine a programmer’s judgment and foresight will reflect and amplify his unacknowledged biases and suffer catastrophically from his failures of imagination. But trained algorithms like this usually aren’t even auditable.
  • All of these algorithms are running thousands of times a second on huge sets of data on a server. Taking 200 ms instead of 100 ms might give you a slightly better chance of success, but probably not enough to justify doubling the cost of running the algorithms. Resource constraints will dictate how much effort goes into any one picture. Maybe the algorithm is generally sound, but times out on some more difficult images (“more difficult” in algorithmic terms being completely different from what a human would call difficult).
  • An image can be legitimately (probabilistically) sorted into any number of bins. The same image may turn up in both searches for “gorilla” (54% match) and “black human” (89% match). Maybe there’s some overlap – enough to pass an arbitrary threshold for inclusion in the results of either search – even if it’s not huge.
  • As the false positive rate goes down (searches for gorillas do not return black people), the false negative rate rises (searches for gorillas fail to return many images that are clearly gorillas). How much depends on the data and the algorithm, but to some extent it’s always true. Setting the thresholds is always a balancing act over whether a given set of images ends up in neither group or in both (see the sketch after this list).
  • Pictures are hard to analyze and train on, faces especially so. They can be cropped, distorted, color-shifted… basically you either have to aim for an algorithm that normalizes the image to account for those variations, or you need training sets that reasonably cover each of the niche subsets.
  • For example, it’s not enough to train on “human faces”, you need to train on “Asian faces head-on”, “Caucasian faces head-on”, “Black faces head-on”… “Asian faces three-quarter view”… “Caucasian faces profile”… “Black faces worm’s-eye-view”… “Asian faces tightly-cropped”… Even a very few orthogonal variables with enumerable values make the training set explode if you want similarly good results across all scenarios. (Or you can accept some risk of error and train on dramatically fewer human-curated inputs.)
  • And then you should validate the algorithm against a curated set of images (for each niche) that it wasn’t trained on – a set at least 10 times the size, to give some confidence (but still not great confidence).
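Here’s the sketch referenced above: a toy illustration of how the same image can clear the inclusion threshold for more than one label, and how moving the threshold trades false positives against false negatives. The confidence numbers are hypothetical, echoing the 54% / 89% example.

```python
# Hypothetical per-label confidences for one image.
scores = {"black human": 0.89, "gorilla": 0.54, "tree": 0.12}

def labels_above(scores, threshold):
    """Return every label whose confidence clears the cutoff."""
    return {label for label, p in scores.items() if p >= threshold}

print(labels_above(scores, 0.80))  # {'black human'} -- strict cutoff: fewer false
                                   # positives, but genuine gorilla photos that only
                                   # score ~0.7 get dropped from "gorilla" searches too
print(labels_above(scores, 0.50))  # {'black human', 'gorilla'} -- lenient cutoff:
                                   # better recall, but this image now surfaces in
                                   # searches for "gorilla"
```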

Basically, the story here is not so much that the algorithm made a mistake; the story is that they still have no good solution, which sounds surprising. Of course, if you’ve ever actually tried to get processed data to behave, it’s less of a surprise, because some problems actually are hard.

7 Likes

I’m guessing this was the very first thing they tried; maybe it didn’t work? There could be any number of reasons why it’s doing this, unrelated to the quality of the training data.

1 Like

Also – color photography was developed by white people, with serious built-in technological biases that affected how non-white skin was rendered.

15 Likes

It kind of didn’t, that’s the thing. There wasn’t a human being punching in parameters for facial feature dimensions; a person designed a program that can very broadly notice and remember patterns in any sort of image, and then they shoveled several million labeled pictures into it and let it learn on its own. Humans are responsible for releasing the algorithm in this state, but it’s unlikely that anyone made this happen intentionally. That doesn’t make it okay, of course.

From a technical standpoint, the problem is that distinguishing faces is really hard. We’re good at it because a substantial chunk of the human brain is dedicated exclusively to parsing faces. What’s probably happening here is that the algorithm isn’t sophisticated enough to distinguish human faces from ape faces by features alone, and the only distinction it can figure out is color.

The whole thing is a clusterfuck, and there is racism at work here, in that the devs didn’t think to include a broad cross-section of races in their test corpus before they declared the thing ready for prime time (a sketch of that kind of check is below). But if they still haven’t gotten it to work reliably yet, I suspect that’s because it actually is a hard problem.
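The release gate for that is cheap to write: break validation accuracy out per subgroup instead of reporting a single global number, and block the launch if any group lags badly. A minimal sketch, with a hypothetical record format:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (true_label, predicted_label, group) tuples, where
    group is a demographic category annotated in the test corpus. Reporting
    accuracy per group keeps a model that only works well for the majority
    group from hiding behind a good overall number."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true_label, predicted_label, group in records:
        totals[group] += 1
        hits[group] += predicted_label == true_label
    return {group: hits[group] / totals[group] for group in totals}
```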

5 Likes

Lots of things are easy for humans but hard for computers. Look at the current state-of-the-art in bipedal robots. How many billions of dollars and thousands of engineer hours has it taken to get a machine that’s almost as good at walking as any three-year-old?

6 Likes

I may not be technically inclined enough to know/use all the correct terms, but my meaning was pretty clear: the system itself was created by people, who either subconsciously or intentionally incorporated their own biases.

8 Likes

In defense of the algorithm, humans apparently suffer from an eerily similar affliction, implicit bias:

Eberhardt’s research suggests that these racialized judgments may have roots deeper than contemporary rates of crime or incarceration. In a series of studies, she has unearthed evidence that African Americans sometimes become objects of dehumanization. Specifically, Eberhardt has found that even people who profess to be racially unbiased may associate apes and African Americans, with images of one bringing to mind the other.

http://web.stanford.edu/~eberhard/about-jennifer-eberhardt.html

The research suggests that dehumanization of black people takes place subconsciously, even if you are a totally reasonable and non-racist person.

Unfortunately most people just pretend this totally isn’t a thing rather than go through the excruciating process of reflecting on how one’s cultural assumptions, thought patterns, privilege, etc directly contribute to or are a result of systemic racism.

One way or another (either via the human engineers or the dumb algorithm), it looks like Google has given more support to this theory.

1 Like

It’s highly unlikely (I’d venture to say impossible even) that the algorithm in question is complicated enough to encompass human subjective biases. It’s broken for very different reasons than human cognition is broken.

3 Likes

I doubt that they have finished trying yet. Quality and quantity of data are the main drivers of these algorithms, and gathering good-quality training data is hard.

3 Likes

It’s not the only thing, though; what’s going on in the layers – which operators are being used, whether they’re linear or feedback-based, etc. – can be just as important.

You shouldn’t use only linear layers in a NN; then it’s just a factorization of a linear map and equivalent to a single linear layer (see the sketch below). Having a non-linear activation function on your layers is what matters. Also, I’m not sure what you mean by feedback. (These aren’t recurrent – do you mean back-propagation?)
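For anyone who hasn’t seen why: a quick numpy sketch (shapes arbitrary) of the point that stacked purely-linear layers collapse into one linear map, while a single nonlinearity between them breaks the equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # a batch of 4 inputs with 8 features
W1 = rng.normal(size=(8, 16))    # "layer 1" weights
W2 = rng.normal(size=(16, 3))    # "layer 2" weights

# Two linear layers are just one linear layer in disguise: x @ W1 @ W2 == x @ (W1 @ W2)
print(np.allclose((x @ W1) @ W2, x @ (W1 @ W2)))      # True

# Put a ReLU in between and no single weight matrix reproduces the output
relu = lambda a: np.maximum(a, 0.0)
print(np.allclose(relu(x @ W1) @ W2, x @ (W1 @ W2)))  # False (in general)
```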

I don’t agree that those details are just as important, though. These are all function-approximation schemes, and if the data isn’t sampled from the right distribution, you don’t have a chance.

2 Likes

I meant either, or you could have an even more complicated multi-network system chained together.

Even with the best training sets these simple networks will still make lots of mistakes, and it’ll be easy to deliberately create edge cases they can’t classify (as has been shown recently – see the sketch below).

Computer vision doesn’t necessarily work based on what we think of as “clear visual differences.” If they’ve got a net trained that works well, they might use it for a while.
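On the “deliberately create edge cases” point above: those are adversarial examples, and the classic recipe is only a few lines. A minimal sketch, assuming some PyTorch classifier model and a normalized image batch (both hypothetical here):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel a tiny step in whichever
    direction increases the loss. The result looks unchanged to a human but
    can flip the classifier's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```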

A living, breathing human being almost certainly did not create the algorithm, at least not the way you’re thinking. Computer vision like this is done with a neural net that’s far too complex for a human to understand exactly what it’s doing, let alone make fine adjustments to. In a simple CNN, you can sometimes look at a filter and see what the net is seeing, or how it is seeing one specific feature (sketched below), but the ones Google is running now are enormous.
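For what it’s worth, that “look at a filter” trick is only really readable for the first convolutional layer of a small network; deeper layers respond to combinations that don’t map onto anything nameable. A sketch assuming a hypothetical PyTorch model with RGB input:

```python
import torch
import matplotlib.pyplot as plt

# Grab the first convolutional layer of some trained CNN (hypothetical `model`).
first_conv = next(m for m in model.modules() if isinstance(m, torch.nn.Conv2d))
filters = first_conv.weight.detach().cpu()          # shape: (out_ch, in_ch, k, k)

fig, axes = plt.subplots(4, 8, figsize=(8, 4))
for ax, f in zip(axes.flat, filters):
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)  # rescale to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0).numpy())           # one (k, k, 3) image per filter
    ax.axis("off")
plt.show()
```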

Probably this resulted from training with a dataset where black people were underrepresented in the “human” category. There would be no way to fix the net they’ve got without retraining on a more representative dataset. I just glanced at the imagenet dataset and it does seem to be a little bit skewed towards white people. That could be something to work on.

I suspect what happened is this: since a neural net isn’t like traditional code that you can make detailed fixes to, the fix got put somewhere else in the code (something like the filter sketched below), and then, even when they swapped in a new net, the fix was still sitting there and nobody remembered to remove it.
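If that guess is right, the “fix” is just a post-processing filter sitting between the net and the UI. A hypothetical sketch (only the “gorilla” term is confirmed by the headline; the names and plumbing here are made up):

```python
# Reported workaround, roughly: don't retrain the net, just suppress the label
# downstream of the classifier.
BLOCKED_LABELS = {"gorilla"}   # the term the article says was purged

def filter_predictions(predictions, threshold=0.5):
    """predictions: list of (label, confidence) pairs from the classifier."""
    return [
        (label, score)
        for label, score in predictions
        if score >= threshold and label.lower() not in BLOCKED_LABELS
    ]
```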

5 Likes

Just like when Obama was elected.

7 Likes

Computer vision has long struggled with embarrassing bugs, especially when it comes to non-white faces.

From not “seeing” people with dark skin:

To saying someone of Asian descent is blinking:

I don’t think the problem is inherent racism or unconscious bias; I think it’s more a lack of consideration in QA, which is of course not helped by the lack of diversity in tech.

In other words, computer vision is hard.

7 Likes

**Any sufficiently advanced incompetence is indistinguishable from malice.**

12 Likes

Christ on a fucking crutch, this shit is NOT that fucking hard. The algorithm does not work… the solution is to ditch the fucking algorithm and try again, not simply erase a few words from the list so they won’t be used. Again and again our current technology leaders seem hell-bent on trying to automate everything, despite constant and troubling failures to make it work right. Automating our news feed leads to Russian bots gaming the system; automating abuse measures often leads to banning the victims and not the abusers; automating our photo labeling leads to racist bullshit; next they’ll try to automate our voting booths, and boy, that will be a fucking fun day.

Automation fails. Repeat after me, you dumb ass, socially inept programmers who think people can be removed from every system: automation fucking fails. You’ll need to HIRE SOME PEOPLE to do the manual labor of data entry for this shit. Even then, people will make mistakes, but they fucking won’t label black people as gorillas. Jesus christ, I’m sick to death of this bullshit. Go ahead, call me a Luddite, I really don’t care, but this sort of stuff needs to stop NOW.

  • Note: I, too, am socially inept, introverted, autism spectrum. But I know you can’t remove humans from every system and expect them to work, we have to interact with each other to make a cohesive, functioning whole. My apologies to any programmers I’ve insulted.
5 Likes

I don’t see anything changed, it is still there :disappointed_relieved:

Just needs a MAGA hat shooped on there.

1 Like