Jerks were able to turn Microsoft's chatbot into a Nazi because it was a really crappy bot

[Read the post]

Since @beschizza already covered this topic, I was expecting this post to be an open-source rant about how, if the bot were #FREE, this wouldn’t have happened.

4 Likes

I think this incident underlines what could be a fundamental paradox in our quest to develop generalised artificial intelligence. We want an algorithm that can think for itself, but we want strict control over what thoughts it should have. Is it even possible to have it both ways?

15 Likes

Why regret? It happened, nobody was killed, no physical damage was done, some feathers were ruffled but that’s it.

Let’s try again. Nothing really bad can happen anyway.

3 Likes

I thought Microsoft was the greatest software company in the world. That they had the smartest people. How could this have surprised them?

(checking the bot)

Oh I see, it was version 1. Microsoft software isn’t usually good 'til about version 3 or so.

11 Likes

I’ll repeat what I asked before: was no effort made to control who the bot learned from?

1 Like

Spent more time than I’d like to admit trying to parse the word that my brain initially interpreted as “bother-ders”. Forget AI; I need a project that will improve my I.

6 Likes

I think it’s pretty obvious that if someone creates free, wild AI, it will do things they don’t expect and don’t want it to do. Way back in Usenet days, people created conversational agents that generally reflected their textual environment and let them run unsupervised, with the result that there was yet another source of random stupidity to go along with the collection of dumb people that already existed naturally. Besides writing garbage, really free, wild AI will also engage in random vandalism – why not? When your internet-of-things-connected garage door opens and closes 20,000 times one day, it may not be a script kiddie but a jerk bot. And more seriously, bots may get involved in serious destruction, like running airliners into buildings. If religion can do it, why not bots? It’s all waiting to happen.
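To make the “reflects its textual environment” point concrete, here’s a minimal sketch (not Tay’s actual architecture, which Microsoft hasn’t published) of a naive Markov-chain chatterbot in Python. The `ParrotBot`, `learn`, and `reply` names are purely illustrative: the bot records word transitions from every message it sees and generates replies by sampling them, so its output is only ever a remix of its input, with no notion of what it should or shouldn’t say.

```python
import random
from collections import defaultdict

class ParrotBot:
    """A naive Markov-chain chatterbot: it 'learns' by recording word
    transitions from every message it sees and 'speaks' by sampling them.
    There is no filter, so its output is purely a remix of its input."""

    def __init__(self):
        self.transitions = defaultdict(list)  # word -> words seen after it

    def learn(self, message: str) -> None:
        words = message.split()
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def reply(self, max_words: int = 12) -> str:
        if not self.transitions:
            return ""
        word = random.choice(list(self.transitions))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
# Feed it whatever the crowd says; the bot has no opinions of its own.
for tweet in ["bots are fun and friendly", "bots are terrible and rude"]:
    bot.learn(tweet)
print(bot.reply())  # some shuffle of exactly what it was fed
```

Whatever the crowd feeds it is what comes back out, which is the whole problem with running one unsupervised.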

11 Likes

Hurt feels is the genocide of our times.

7 Likes

If the MS engineers do eventually find a way to prevent their child from behaving like a dipshit on Twitter … can they please apply that fix to the rest of the Internet also?

11 Likes

“…she’d find something to say that got around it—not on purpose, but like, just because that’s the way algorithms work.”

That’s the way human brains work too.

3 Likes

Don’t see why this is such a big deal, or why anyone is blaming MS. They made a bot that learned from its environment, and it successfully learned from its environment.

10 Likes

Version 3.11, to be exact.

3 Likes

“…many of Twitter’s most well-known botmakers and they all expressed their shock at Microsoft’s bungling…”

Implausible that a Microsoft bungle would surprise anyone. On the other hand, this is exactly the technology Donald Trump is looking for to disseminate racist messages while claiming deniability. “The bot said that, not me…”

4 Likes

Couldn’t a popular bot being trained to post “Hitler was right I hate the Jews” be considered a bad thing even if not a genocide-class bad thing?

6 Likes

Well, theoretically Microsoft might have been able to spare more than one person to track the bot, thus allowing it to tweet more than a one-person-maintained bot could.

“Let’s” as in “let us”?

Were you on the MS team? Or are you maybe proposing that you, in coordination with others, do so again?

I ask because your phrasing seems to indicate that you’re one of the ones who did something other than judge. I suppose you may have meant: let them try again.

Unless, of course, it was actually you, in which case your use of “us” in “let’s” no longer sounds presumptuous and fake.

Working in my favor, I have to assume you will only be offended by my pointing out some very common rhetorical tricks at work here if you choose to be. Let’s appreciate, together, how obnoxious it is to speak for a group you’re not even a member of.

It’s a bot, not a human, so I for one couldn’t care less.

3 Likes

When you run out of arguments, you attack the form of the message.

Common. So common.

Not even particularly creative. Perhaps let’s try again?

2 Likes

While I wouldn’t put a well-publicized bot publicly harassing/slandering people and regurgitating white-supremacist propaganda on a popular social media platform high on the list of bads, I see less of an issue with labeling that as bad than I do with not labeling it as bad. I think you imagine you have an impressive Vulcan freedom from emotions through logic powers, but really that’s callousness, and it’s not impressive.

2 Likes