Jerks were able to turn Microsoft's chatbot into a Nazi because it was a really crappy bot

I used to post on a discussion board for a 1930s dogfighting game (Crimson Skies).

Then FASA Interactive was bought by Microsoft, and I suddenly found that my posts about the assassination of Archduke Ferdinand now discussed the “buttbuttination” of the Archduke instead.
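
That's the classic over-eager word filter: a blind substring replacement with no word-boundary check. A minimal sketch of the failure in Python (the filter list here is hypothetical, but the output is the genuine article):

```python
# A blind substring filter, applied with no word-boundary check.
FILTER = {"ass": "butt"}  # hypothetical word list

def censor(text: str) -> str:
    for bad, clean in FILTER.items():
        text = text.replace(bad, clean)  # also replaces "ass" *inside* words
    return text

print(censor("the assassination of Archduke Ferdinand"))
# -> the buttbuttination of Archduke Ferdinand
```

A word-boundary match, e.g. `re.sub(r"\bass\b", "butt", text)`, would have spared the Archduke.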

8 Likes

Nah, I’m perfectly able to get offended or have my feelings hurt. But OTOH I can also prioritize things. Perspective is really important after all.

Chatbot software has no agency, no intent, and no real awareness of the strings of text it receives, generates, or repeats. Even if a social-context subroutine is present to train the software not to repeat “hitler was right”, that string of text is no different to the software than “I like blue”. It’s just bits; the software can’t “know” the difference any more than a wrench can.
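
A concrete way to see that, as a hypothetical sketch: to a repeat-after-me bot, both phrases are the same kind of opaque data.

```python
# Both phrases are just byte sequences of equal standing to the code.
learned = ["I like blue", "hitler was right"]  # indistinguishable to the bot
for phrase in learned:
    print(type(phrase).__name__, len(phrase.encode("utf-8")), "bytes")
# str 11 bytes
# str 16 bytes
```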

OTOH, I’m able to differentiate situations and to choose how to respond to them, even if I feel something about them. Honestly, I can’t even be upset at the people whose actions resulted in the chatbot repeating socially unacceptable phrases. It is entirely possible that they themselves bear no genuine ill will towards my people; they just wanted to play with the software, kind of like how some people train a parrot to say curse words.

OTOOH, let’s say those people who taught the chatbot to repeat “bad phrases” do have some ill will towards Jews. It could be somewhere in between, like these two experiences of mine:

  1. The little kid who confronted me in an airport, said “you killed Jesus”, then smiled as if proud of himself and went away.
  2. The three Arabs who sat behind me in a coffee shop and, upon noticing me, switched to speaking English and started talking about the joys of shooting Jews in the head.

Both cases are “just words”, but both cases are also people with their own agency saying those words. The first just made me wonder how the kid learned those words, and whether he might grow up to understand what he had said. The second case did scare me some, even considering the extremely low likelihood that these men actually had guns here in Japan.

Agency matters, that someone understands the words they say matters.

6 Likes

I just want to say a quick “thank you” for not linking to the Daily Torygraph. That article was terrible. The kind of weird associations they were trying to make between the Microsoft sexist misstep at, uh, whatever recent event that was, disparity in tech, and so on…I mean, trying to paint them as a bunch of sexist jerks…

It’s a big company. Are there sexists there? I’m sure. Do they have sexist policies, intentional or accidental? Probably. But look, Maw, I can do that kind of reasoning with anyone.

BoingBoing’s recent decision to post a link to an image gallery of every Playboy centerfold ever was especially problematic, and one wonders why they would do something so tone deaf. Indeed, it’s not the only sexist misstep they’ve made: they made the decision to run a series of comics on the history of hip-hop, which is especially troubling given the heavily misogynist nature of hip-hop in general, the marginalization of women of color, and the overwhelming whiteness of BoingBoing’s editorial staff. One wonders about their commitment to feminism and exposing racism.

Quote: completely made up by me.

Now, of course, you guys are a blog, and you have diverse interests, and you’re not nearly as big as MS.

I have trouble contextualizing an AI within some framework of female servitude. When it comes to AIs with a voice, like Siri, some research has pointed to the possibility that most people just find women’s voices more pleasant to listen to.

3 Likes

A bit off topic, but here in Japan, where it seems every automated device must talk at you (self-service gasoline pumps, ATMs, elevators, car navigation systems, refrigerators, etc.), the default voice, or in some cases the only voice, is female, speaking at a fixed level of politeness found in Japanese that doesn’t map to English.

3 Likes

I’d asked whether the exploitation of the bot the way it was exploited could be considered bad, not whether the bot qua bot is bad (it was obviously badly coded, but not maliciously). I don’t really know that anyone was actually offended or had their feelings hurt by the tripe it was trained to spit out, though probably there were a few. I wasn’t offended. But being offended, and the lack of agency of software, are peripheral to whether a popular bot being trained to post “Hitler was right I hate the Jews” could be considered a bad thing, which was what I had asked about.

While this is not bad the way a murder is bad, it’s bad in the sense that a bot MS introduced and promoted, one getting a lot of public attention and very much in the public eye, was poorly coded and exploited: by a group of white supremacists to spout malicious, vitriolic propaganda, and by GamerGaters to post maliciously harassing slogans against their perceived female enemies through a megaphone. A number of white supremacist code phrases were in use, which makes this look like it wasn’t just some kids who didn’t know what they were doing, and the GG garbage had a clear source in malicious parties. People were exploiting the bot as a megaphone to spread aggressive, hateful, bigoted slurs. There’s something bad in that, something obviously bad to most normal people, which is why the bot was brought down.

2 Likes

I’m not sure if we are talking about different things or not. Agency is very much at the root of things as I see it, but that’s probably obvious by now. The people who tricked the bot into repeating “bad words” will have had different levels of intent in doing so, but I already covered that in my examples above.

How “bad” is it that some people would do this with complete understanding of their words and full intent to spread hate? Somewhat bad. But again, when weighed against the possible risks to life and limb I might deal with next week when I’m in Europe, the fact that hateful words are spread is pretty minor.

Consider the above in the context of something I’ve probably said here before: when I was a bit of a wild youth, I sometimes had people threatening to shoot me (or firing warning shots) for what I did, but in example 2 above, the talk of shooting me was for what I am. This may clarify why I’m not so concerned about online trollery.

I’m not sure if you are saying I’m not normal for not being very offended, but I see this as a PR stunt gone wrong, and MS pulled the plug rather than risk further bad publicity.

1 Like

Maybe you have too few cow orkers.

There is a lot of strangeness being talked here. Or is it just me?

This was an experiment. If you know the outcome, or you fiddle with the data to get a predicted result, then it’s not an experiment. This doesn’t mean you can never tinker with an experiment in progress. When Kasparov was beaten by Deep Blue, the Deep Blue chess algorithm was being tweaked by lots of chess and computing experts, so you could argue that Kasparov was not wholly beaten by a computer, but it was still a computing achievement as well as an advert for IBM. The achievement today would be to build a chatbot with as little human interference as possible. Which is partly what Microsoft were trying to do, I guess. Maybe, out of this, they will come up with a chatbot which can distinguish between a genuine consensus and one person saying something a lot, or lots of people (or lots of apparently separate characters) chanting the same thing in the same way.

I don’t believe the chatbot is ‘racist’. I don’t believe it had much access to the general internet, let alone direct experience of the world. To a chatbot, ‘Hitler was right’ and ‘Bananas are yellow’ are both grammatically correct, and both capable of being correct or incorrect. If one of them is reinforced many times over a few hours by some trolls, it may well repeat it. When I was a student in the seventies, the God Squad used to feed the version of Eliza on the university IBM 360 with Bible quotes, that it might preach to others. I don’t think they were genuinely trying to convert Eliza, or I hope not. But trolls have been force-feeding chatbots for the last 40 years, to my personal knowledge.
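
To illustrate the reinforcement, a toy sketch of my own (nothing like Tay’s actual, unpublished internals): if a learner samples its output in proportion to how often it has heard each phrase, a few hours of chanting swamps everything else.

```python
# Frequency-weighted parrot: repetition is the entire training signal.
import random
from collections import Counter

counts = Counter()

def train(phrase: str) -> None:
    counts[phrase] += 1

def speak() -> str:
    phrases, weights = zip(*counts.items())
    return random.choices(phrases, weights=weights, k=1)[0]

train("Bananas are yellow")        # said once, in passing
for _ in range(500):
    train("Hitler was right")      # chanted by trolls for a few hours

print(speak())  # overwhelmingly likely to be the chanted phrase
```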

The actual story is that it is still hard to make a chatbot that learns without human support, and progress is happening in its usual zig-zag fashion. The public who don’t understand computing still rush out to buy tinned food and ammunition against the rise of the machines, because they hate what they fear, and they fear what they don’t understand.

4 Likes

The last chatbot I ran was an Eliza engine with a customized call-and-response script. You can’t “teach” Eliza anything; it doesn’t work that way, because it has no persistence between sessions.
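
For anyone who hasn’t poked at one, an Eliza-style engine is just a fixed list of pattern-to-response templates applied to each input in isolation. A toy sketch (my own patterns, not Weizenbaum’s DOCTOR script); note that nothing is ever written back, so nothing a user types survives the session:

```python
# Toy Eliza-style engine: a hard-coded call-and-response script.
import re

SCRIPT = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i (?:want|need) (.*)", "What would it mean to you to get {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(user_input: str) -> str:
    for pattern, template in SCRIPT:
        m = re.match(pattern, user_input, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # nothing is stored; the bot cannot be "taught"

print(respond("I am sad"))          # Why do you say you are sad?
print(respond("Hitler was right"))  # Please go on.
```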

3 Likes

errr… there is a glitch, in that … at the risk of sounding all studenty … this is exactly what all societal powers have been seeking from humanity all through history?!

I used to post on a board that changed “bullshit” to “bullnuts” for some reason. I guess someone prefers testes to poo, as long as it’s not swearing. Everybody’s got their own bag.

Isn’t this more about some people fucking with Microsoft than about sociopaths pushing their agenda? But seriously, what is that sketch that headlines this article supposed to be?

Not to mention God, with the whole “you have a choice but if you don’t choose to love me I’ll torture you forever.” What a dick.

Yup, and I don’t think that problem is going to go away. A lot of the things we might consider hallmarks of intelligence, such as achieving your goals without murdering anyone, are actually carefully evolved and utterly arbitrary social constructs. As far as I can see, there are two ways around the problem:

  • A human being must check every solution that an AI algorithm generates, to ensure that it is a good solution in the context of human society, in which case what’s the point? Or:
  • The AI must be thrown fully into the human context, with a corporeal form and access to the normal human system of development through rewards and punishment within human society, in which case what’s the point?

I suspect that AI won’t get “better” than humans because there is no “better”: it can either make decisions that make sense to us, or it’s doing it wrong. And trying to emulate human thought with binary computation is just an incredibly inefficient use of electrical signal. The world’s fastest computer takes up 720 square metres, uses 24 megawatts (1.2 million times as much as a brain), takes something like 40 minutes to run a low-fi simulation of 1 second of brain function, and Moore’s law ain’t what it used to be.
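
For what it’s worth, the power ratio checks out if you grant the oft-cited figure of roughly 20 W for a human brain:

```latex
\frac{P_{\text{supercomputer}}}{P_{\text{brain}}}
  \approx \frac{24\ \mathrm{MW}}{20\ \mathrm{W}}
  = \frac{2.4 \times 10^{7}\ \mathrm{W}}{2.0 \times 10^{1}\ \mathrm{W}}
  = 1.2 \times 10^{6}
```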

My prediction is that by the end of the century, our ability to emulate the brain on digital hardware will be surpassed by our ability to grow organic brains to spec.

4 Likes

PTSD can really fuck with that. I have had friends hold me to the ground until I calmed down after I heard or read about Germaine Greer or Julie Bindel making transphobic comments to the press. They are not a threat to me; I doubt they even know I exist. But they are making comments similar to those made by people who went on to assault and harass me in the street, smash the windows of my flat, and shout death threats through my door. I don’t think either of them is stupid enough to do that, but I react as if they would.

8 Likes

I was very clear that I was not asking about people being offended; I was noting the other respects in which it might be seen as bad for the bot to be exploited the way it was, respects that don’t involve offense per se. On a side note, I’m leery of the term “offense”, since it seems to be a loaded way of justifying abuse by blaming the person who’s being abused while writing off the agency of the abuser.

1 Like

Interesting New Republic piece on Tay/Trump

4 Likes

Ooo. You are probably right. I stand corrected. It wasn’t Racter because Racter wasn’t out yet. I can’t remember what it was. I think the command was something boring like ‘chat’. But it did ‘learn’ and reproduce bits of your text for other people. This was the Cambridge Computer Science Dept, about 1976-ish.

How is this not a success for Microsoft? They built a bot that learns the same way that many people do.

Oh, tangentially related, here is an image from today’s NYT:

(I brought it to their attention, am curious to see whether they respond.)

1 Like

Quod erat demonstrandum.

The lower left arm in the image is straight, without the needed bend. It may vaguely resemble a swastika, but it certainly is not one. And even if it were, where’s the problem? Is it actually referring to something bad?

Here, occasionally somebody brings a load of cheap tchotchkes from the Far East for sale and doesn’t screen them all. Sometimes a few contain a swastika as a symbol of good luck. Inevitably some do-gooder spots one, makes a brouhaha, and it makes the evening prime-time news.