Microsoft AI chatbot promptly becomes Nazi

So was this thing basically Cleverbot?


A pretty lousy sex slave, I’d sell her to a Republican right away.

Also, shouldn’t it be possible to decide who your bot listens to? That way your teenage girl can act like a teenage girl, and you can program a NaziBot to learn bad words from Trump supporters. Everybody wins! (Except losers.)


In other news, I cut and pasted the text of Mein Kampf into Notepad and suddenly it became a Nazi!


Maybe we could set such a Twitterbot to perform simple, harmless tasks—like, say, naming scientific research vessels.


It’s not like the Google chatbot is doing any better…

Let’s dispel this notion that Microsoft doesn’t know what it’s doing. It knows exactly what it’s doing. It’s trying to change this country.

Here’s the bottom line: This notion that Microsoft doesn’t know what it’s doing is just not true. It knows exactly what it’s doing.

We are not facing a Microsoft that doesn’t know what it’s doing. It knows what it is doing.

I think anyone who believes that Microsoft isn’t doing what it’s doing on purpose doesn’t understand what we’re dealing with here, okay?

Apple’s chatbot destroyed it on stage for that. And then endorsed Skynet.


That’s some juxtaposition:


Are we sure someone from 4Chan’s /pol/ didn’t take it over…

That, or it was fed a constant stream from /pol/.


Given that he was working well before transistorization made it even conceivable that you could bully someone online for less than the GDP of a small nation state, he might not have predicted the awfulness of online exchanges. But in his capacity as a homosexual, back when that was even less encouraged than it is today, he could probably have told you that a computer emulating a human might not be a very nice expert system.


I learned a long time ago to avoid news programs and news sites during an election year. I just broke that rule.

Someone has convinced AP’s newsbot that Donald Trump is not only running, but leading in the polls! Even funnier, the Republicans’ best hope to defeat him is Ted Cruz!

It’s pretty obvious that 4Chan is up to their usual tricks.


Yep. Impressive. Just like the human troll trash on the internet. Now all we have to do is get the people who chatted her up to agree to a meeting. Somewhere on a firing range. Or an active volcano. Or in the middle of Death Valley.

Not really. But one can dream…


Quelle surprise…

  1. Twitter.

  2. A “chatbot” using an AI designed to build conversational skills from interactions online.

  3. Lovely human beings on the Internet.

What could ever go wrong?


Hal 9000: Went bad because he was poor at lying, but was ordered to lie by those who found it easy.

Tay: Went bad because it was ordered to learn how to chat by chatting with people on the internet.


Has anyone imagined yet what this bot would do if it had drone-mounted pistols?

If this is what they programmed it to do, and it succeeded, why did they pull the plug?

One of the most perplexing aspects of the internet, I find, is the notion of “carefully manicured populism”: people are eager for participation from the public, provided of course that they like what the public says. And when they don’t like what the public says, they delete it and try to think of ways to get people to express a heavily idealized version of some imaginary public input.


In an Isaac Asimov world, this is exactly how you would train robots for drone warfare. To defeat the First Law of Robotics (“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”), just order it to spend a few hours learning from chatting with humans on the internet.


and that dude had some experience of man’s inhumanity to man.


I believe it was Foundation and Earth that posited a robot violating the First Law by being taught that other races besides its creators’ did not qualify as human.



This is the Clippy we need.



This story is certainly an illustration of Godwin’s Law (at least, in its original version), but I don’t see how the title of the article is to blame.

A chatbot started posting pro-nazi slogans because Twitter users trained it to do so. The headline of the article accurately conveys that information.

It wasn’t “Godwined in the title”; the title simply describes, without exaggeration, a situation that was already Godwined.