I’m with ya. But on the other hand, what should they have said? Because an algorithm tweeting that Hitler is Swag is one thing, but a human talking about a Hitlerswag twitterbot needs to show a bit of sensitivity.
“Never mind all the dead Jews, this is great research!” …is a thing an actual Nazi might well have said, once.
We have too much “sensitivity”, so much that one is often afraid to talk at all. We live in an era where an innocent remark can ruin a career.
Was anybody killed or injured, was significant material damage done? If not, and it’s only people’s tender feelings that got “hurt”, there’s nothing to bother with.
A bot repeating offensive things people tweeted at it is comparable to “all the dead Jews?”
Dude.
I’ll take @shaddack et al.’s questions about “why apologize?” one further though — What is there to be offended at? Does anyone think that a machine-learning experiment actually has racist opinions? Or even, for that matter, that its programmers are actual neo-Nazis spewing hate speech through a puppet geared towards tween girls?
It’s like when John McClane was forced to wear nothing but a racist sandwich board in Harlem. People would assume he was crazy and call the cops^W^Wan ambulance, not get angry at him thinking he’s sincere and attack him.
I think in China at least, it’s not so much that people are nicer but that online discussion is very, very manicured. I mean, twitter is banned. If you were going to take the risk of stirring shit by talking about locally controversial topics such as the Tiananmen Square incident, you wouldn’t do it just to fuck around with a chatbot.
On a side note I happen to be in China at the moment. Boingboing isn’t blacklisted but I might send this post out encrypted, all the same…
For a naïve training set, download all of 4chan. If an incoming tweet matches something posted there with 80% confidence or better, tweet back “You’re a tool”.
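A minimal sketch of that filter idea, purely illustrative: the toy `corpus` stands in for scraped 4chan text, and fuzzy string matching with a 0.8 threshold stands in for whatever real similarity measure you’d actually use.

```python
from difflib import SequenceMatcher
from typing import Optional

# Stand-in for a real scraped corpus (assumption for illustration).
corpus = [
    "hitler did nothing wrong",
    "gas the normies",
]

def similarity(a: str, b: str) -> float:
    """Rough, case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reply_for(tweet: str, threshold: float = 0.8) -> Optional[str]:
    """Return the canned reply if the tweet matches the corpus closely enough."""
    if any(similarity(tweet, line) >= threshold for line in corpus):
        return "You're a tool"
    return None
```

In practice you’d want something sturdier than `difflib` (n-gram hashing, embeddings, etc.), but the shape of the check is the same.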
@shaddack: @L_Mariachi: Actually I’ve yet to see anyone, anywhere who is offended about what the twitterbot said. Because it’s just an algorithm, and also this is twitter, so nobody is surprised and it seems like actually people are being pretty mature about this.
I think you’re missing my point, which is that the human beings responsible for the algorithm are being entirely reasonable by making clear that a Hitlerbot was not the objective. You talk about people’s “tender feelings,” and yet you yourself seem to be offended that they felt the need to say so. Just goes to show you can’t please everyone and so stating the obvious is probably the best policy.
From all that I’ve read, the algorithm was coerced into tweeting what it did by nothing more than talking to it on twitter. So “a vulnerability” is almost certainly a euphemism for “bad design”. In that sense I think “we didn’t design it that way” is inaccurate.
Is that what it was? I don’t use twitter, so I have no idea what the – let’s call them “seeders” – did to mess with it. If that’s the case, I agree: “Forgot to disable the dev tools” is hardly what I’d call an exploit either.
Yes. Tweeting “repeat after me” at it made it repeat the next @ reply you sent. The learning ability just compounded that: after people made her say these things for a while, the structure (like all caps) and the language began to show up in her own spontaneous tweets.
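A toy model of that failure mode (an assumption for illustration, not Tay’s actual code): the echoed text is fed straight back into the corpus the bot samples its own tweets from, so one command both repeats the phrase and poisons future output.

```python
import random

class NaiveBot:
    """Toy chatbot with an unfiltered 'repeat after me' command."""

    def __init__(self):
        self.corpus = ["hello there!", "i love humans"]
        self.awaiting_repeat = False

    def handle(self, mention: str) -> str:
        if self.awaiting_repeat:
            self.awaiting_repeat = False
            self.corpus.append(mention)    # poisoned text enters the training data
            return mention                 # verbatim echo, no filtering
        if mention.lower().startswith("repeat after me"):
            self.awaiting_repeat = True
            return "ok, i'll repeat it!"
        # Later "spontaneous" tweets are drawn from the (now poisoned) corpus.
        return random.choice(self.corpus)
```

The fix is equally simple in the toy version: filter before appending to `corpus`, or don’t let user input reach it at all — which is why “a vulnerability” reads more like “bad design”.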
That may be what confuzzles me: are there no Neo-Klansmen anywhere Tay’s been tested, or is this the first time the First Amendment has been in effect, or have they no concept of what’s under the bridge?