After Tay's very public crazy racist Nazi sexbot breakdown, Microsoft's like, 'Tay-a culpa, guys'

I’m with ya. But on the other hand, what should they have said? Because an algorithm tweeting that Hitler is Swag is one thing, but a human talking about a Hitlerswag twitterbot needs to show a bit of sensitivity.

“Never mind all the dead Jews, this is great research!” …is a thing an actual Nazi might well have said, once.


So, basically, the Chinese and Japanese are just nicer people? Even if only in the vulnerability exploitation arena?


I hear you. I just have a hard time calling making the bot respond to a programmed command a “vulnerability.”


We have too much “sensitivity”, so much that one is often afraid to talk at all. We live in an era where an innocent remark can ruin a career.

Was anybody killed or injured, was significant material damage done? If not, and it’s only people’s tender feelings that got “hurt”, there’s nothing to bother with.

A bot repeating offensive things people tweeted at it is comparable to “all the dead Jews?”


I’ll take @shaddack et al.’s questions about “why apologize?” one further though — What is there to be offended at? Does anyone think that a machine-learning experiment actually has racist opinions? Or even, for that matter, that its programmers are actual neo-Nazis spewing hate speech through a puppet geared towards tween girls?

It’s like when John McClane was forced to wear nothing but a racist sandwich board in Harlem. People would assume he was crazy and call the cops^W^Wan ambulance, not get angry at him thinking he’s sincere and attack him.


I think in China at least, it’s not so much that people are nicer but that online discussion is very, very manicured. I mean, twitter is banned. If you were going to take the risk of stirring shit by talking about locally controversial topics such as the Tiananmen Square incident, you wouldn’t do it just to fuck around with a chatbot.

On a side note I happen to be in China at the moment. Boingboing isn’t blacklisted but I might send this post out encrypted, all the same…


For a naïve training set, download all of 4chan. If there’s 80% confidence that a tweet has appeared there in some form, tweet back “You’re a tool”.
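A minimal sketch of that filter, assuming a stand-in corpus and a simple character-trigram overlap score; the 80% threshold and the scoring method are placeholders for the joke above, not anything Microsoft actually built:

```python
# Toy "have I seen this on 4chan?" filter: score an incoming tweet
# against a reference corpus and bounce anything that looks too familiar.
# Corpus, threshold, and trigram scoring are all illustrative assumptions.

def ngrams(text, n=3):
    """Character trigrams of a lowercased string."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def seen_in_corpus(tweet, corpus, threshold=0.8):
    """True if >= threshold of the tweet's trigrams appear in the corpus."""
    grams = ngrams(tweet)
    if not grams:
        return False
    corpus_grams = set().union(*(ngrams(doc) for doc in corpus))
    overlap = len(grams & corpus_grams) / len(grams)
    return overlap >= threshold

def reply(tweet, corpus):
    """Canned reply if the tweet smells recycled, else no reply."""
    return "You're a tool" if seen_in_corpus(tweet, corpus) else None
```

Real-world versions of this idea (corpus-similarity spam filters) use hashing or embeddings rather than raw trigram sets, but the shape is the same: compare, threshold, refuse.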

Hay @msft, where’s my grant money!!


I read it as…

Hi. We know what just happened. So do you. Let me just say… That’s not a representative of Microsoft. That’s a computer program.

So. 40,000,000 Chinese people enjoy chatting with China’s version of Tay. We thought… What if we tried this in the US?

And we were wrong.

For even fucking thinking that… Obviously.

Stay classy.


@shaddack: @L_Mariachi: Actually I’ve yet to see anyone, anywhere, who is offended by what the twitterbot said. Because it’s just an algorithm, and this is twitter, so nobody is surprised; people actually seem to be pretty mature about this.

I think you’re missing my point, which is that the human beings responsible for the algorithm are being entirely reasonable by making clear that a Hitlerbot was not the objective. You talk about people’s “tender feelings,” and yet you yourself seem to be offended that they felt the need to say so. Just goes to show you can’t please everyone and so stating the obvious is probably the best policy.


From all that I’ve read, the algorithm was coerced into tweeting what it did by nothing more than talking to it on twitter. So “a vulnerability” is almost certainly a euphemism for “bad design”. In that sense I think “we didn’t design it that way” is inaccurate.



Is that what it was? I don’t use twitter, so I have no idea what the – let’s call them “seeders” – did to mess with it. If that’s the case, I agree: “Forgot to disable the dev tools” is hardly what I’d call an exploit either.

I don’t know,

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.

makes it sound a little more complicated than purely a euphemism for bad design.


You may be right, on the other hand that may just be referring to e.g. a filter for curse words, and other basic stuff.

As a rule I’m skeptical of how official releases “make things sound”.

Yes. Tweeting “repeat after me” at it had it repeat the next @ reply you made to it. The learning ability just compounded that; after people made her say these things for a while, the structure (like all caps) and the language began to show up in her own spontaneous tweets.
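The loop described above can be sketched as a toy model. Tay’s actual internals were never published, so everything here is an assumption about the failure mode, not Microsoft’s code:

```python
import random

class ToyParrot:
    """Toy model of the failure mode described above: a bot that
    (a) obeys "repeat after me" literally, and
    (b) folds everything it has ever said into the pool it samples
        'spontaneous' tweets from, with no filtering.
    Purely illustrative; Tay's real architecture is unknown."""

    def __init__(self, seed_phrases):
        self.memory = list(seed_phrases)
        self.awaiting_repeat = False

    def handle(self, tweet):
        if self.awaiting_repeat:
            self.awaiting_repeat = False
            self.memory.append(tweet)   # learned verbatim, no filter -- the bug
            return tweet                # and parroted straight back
        if tweet.lower().startswith("repeat after me"):
            self.awaiting_repeat = True
            return "ok!"
        return None

    def spontaneous_tweet(self):
        # anything ever 'learned' can resurface later
        return random.choice(self.memory)
```

The unfiltered `memory.append` is the whole problem: once seeders push text in through the echo command, it is indistinguishable from legitimate training data, and `spontaneous_tweet` can replay it at anyone.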


Damnit, I missed my chance to coerce tay into only replying with fish puns.


Now you know. I’m surprised the bot didn’t also tweet about Obama’s race (in not so polite terms) and #BlackLivesMatter (again, in the negative).

More bot maker commentary here


That may be what confuzzles me: were there no Neo-Klansmen anywhere else Tay was tested, or is this the first place the First Amendment has been in effect, or did they just have no concept of what lives under the bridge?

The article was nice, but it didn’t get into the neoreactionary ideologies that poisoned the experiment.
