Google AI researcher complained of discrimination in email. Then it fired her

Originally published at:


Don’t Be Evil?


Came here to see if anyone else missed the memo at Alphabet when they dropped the “Don’t”?


Research has found Evil and Evil Adjacent practices to be very profitable in the short term (and only term, should the practices be sufficiently Evil).
Evil maximization is a growing aspect of our core values, and in turn our core products.


Files paper for review so it can be published at a conference.
Sends demand to boss to finish review or she’ll quit on certain date.
Publishes paper at conference without review.
Boss says, “We accept your resignation, but you would have been fired with cause anyway.”


Google then proved her point by silencing her. Brilliant! (/s)


Ah, this must be the “Culture of consequences for speech” that everyone was so keen on getting employers to enforce a short time ago.


I get why people try to fix companies from within, but this one has shown many times that they are not only resistant, but hostile. Just another reason to bring that talent, effort, and money to their competitors.


According to Google that first line should say: Files paper for review, late, with only one day left for review before publication deadline

If true, one wonders if she did it deliberately in order to avoid review/editing of her paper. And the ‘do this or I’ll resign’ gambit does sound a tad like a self-engineered exit with intent.


Google has become worse than the worst Microsoft ever was.
It’s now the Donald Trump of big corporations.

As a friend of mine who is a researcher in a similar environment said when I showed her this article, “This person was looking for a way to quit without looking like the bad guy. She failed.”

Submitting a paper to an external conference without legal (and scientific) review is a justification for termination. Being insubordinate is a justification for termination (the letter to the staff telling them not to do their work and not to fill out the surveys).

If anyone was being evil here, it wasn’t Google.

If the email published on Platformer is about said paper (the part of the email where she repeatedly writes “you” but means “I”), then what Google writes is a fabrication. The whole thing she lays out is beyond the pale. So IMHO quite believable, because it is about Google.


With all the “we don’t know the inner workings” stuff assumed, I am not one to give Google that much benefit of the doubt. They have absolutely not earned it.


I initially read the headline as saying that the racist AI fired her. Which wouldn’t have been surprising either.


Yet you liked @willmore’s first post.

As soon as any company gets big enough to be powerful, it turns evil. It’s baked into the structure of corporate capitalism, and overrides the “niceness” of any founder or CEO.


I acknowledge what Google reported as being the case. I do not say I believe that over the researcher’s version, but I said the inner workings are currently not known.


To be honest, we don’t know it didn’t.


Why was the paper rejected for publication by Google?

What I see as having happened is that Google has an arbitrary rule that says, “We have to be able to read over your paper two weeks before you submit it,” while academic culture says that your coauthors never finish their parts of a paper until the last possible minute before it is due. My bet is that usually this is smoothed over and Google just says, “Well, next time try to get it done earlier.” The only difference this time is that the Google managers didn’t like what the paper said and would have rejected it if they had had the opportunity. So what does the paper say that Google doesn’t like?


I think there’s still more to this story, since Google’s version of events is substantially different than hers.

Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers).
Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

[…] the authors were informed that it didn’t meet our bar for publication and were given feedback about why.
Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback.

I’m sure both accounts leave out some details. But companies review papers before external publication for a reason. Sending it a day before the deadline and then submitting without a response is a potential reason for dismissal in itself.

And I understand how anonymous internal reviewers can be a form of censorship, but they’re also a form of peer review. Peer review is pretty sacred in academia; demanding that internal reviewers be outed isn’t really appropriate.

As for her complaints about the internal culture, they sound serious, but they’re also really hard to validate. There might be serious issues at Google, or she might simply be an example of a brilliant researcher who doesn’t play well with others.