Microsoft AI chatbot promptly becomes Nazi

See?

Terminator is next.

Next?

1 Like

This is an utterly ridiculous argument. It’s like saying we should leave chemical-spill Superfund sites around as a “warning” to “show what happens when people don’t care about the environment.”

The world doesn’t need more examples of racism and hate.

8 Likes

No, it does not need more examples of racism and hate, and nobody is saying it should have stayed up as an example of such. It should have remained up as an example of how the AI is flawed and of the dangers it can present.
It’s like refusing to acknowledge the existence of the KKK and Nazis because acknowledging them means also showing their hateful and bigoted speech.

1 Like

We have plenty of commentary / articles documenting the mistake. That should suffice. More than suffice…

6 Likes

That’s what I think is paradoxical about it. If it’s only the public’s input that is objectionable, then this seems more like bad publicity for the public than for the developers. And if it’s the public doing it, why hide this from the public? Maybe if people are more critical, they will type something better. It’s like attacking a mirror.

2 Likes

Taking it down takes away the proof. The comments and what it was should be left up for people to see for themselves, IMHO.

Because they expect (rightfully or not) that the public won’t understand that distinction? And even if the public would grok that, it’s a lot easier to point to somebody else (the program / the programmers / bad AI) than to look in the mirror and point at yourself.

1 Like

I really don’t get the argument you’re trying to make. If what happened is documented, why would the bot need to stay up as proof?

2 Likes

Because documentation is prone to being faked and misrepresented. Leaving it up would have posed no real risk, and it would have stood as a direct example of the problems with it. A disclaimer could just as easily have been put up about it.

What good does taking it down do? What purpose does it serve?

Taking it down allows them to fix it.

They will simulate the ‘gross!’ reflex of tweens. That is primarily what they will do…

And probably hard-code it to think Nazis and racists and rapists are disgusting (something like the toy filter sketched below).

What parents do…
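
Something like a hard-coded blocklist over whatever it’s about to say, presumably. To be clear, this is just a toy sketch of the general idea, not anything Microsoft has described; the blocked terms, function names, and fallback line are all made up:

```python
# Toy illustration of hard-coding a filter over a bot's candidate replies.
# The blocked terms, function names, and fallback line are hypothetical.

BLOCKED_TOPICS = {"nazi", "nazis", "hitler", "genocide"}

def is_allowed(reply: str) -> bool:
    """Reject any candidate reply that mentions a blocked topic."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return BLOCKED_TOPICS.isdisjoint(words)

def choose_reply(candidates: list[str]) -> str:
    """Return the first generated reply that passes the filter, else a canned line."""
    for reply in candidates:
        if is_allowed(reply):
            return reply
    return "Ugh, gross. Let's talk about something else."
```

Of course, a flat word list like this is trivially easy to route around, which is part of why the “fix” is harder than it sounds.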

2 Likes

Well, for one, I’m sure Microsoft doesn’t want to deal with leaving it up under a disclaimer stating, “This is how we really fucked up one time, and by the way, be warned: it’s offensive.”

I also don’t see what would stop it from being modified while left up, in a way that misrepresents its past behavior. Having it on the Internet really isn’t any more proof than anything that can and will be written about it.

2 Likes

Sure it is. The thing itself is the best proof there is. They could easily have just taken the code, tried to “fix it,” and made another one next to it.

You act like it’s a real life form or something.

It’s solidly in the uncanny valley.

We’re not quite at the point where you can’t tell it isn’t one yet… but we’re getting close.

Great, next we’ll be talking about it having rights and treating it as an equal. Treating it like a real person.

I was speaking to your point about how documentation can be misrepresented, and pointing out that, looking back from some point in the future, it would be hard to determine that what was online was not misrepresenting its past behavior either, especially without an external reference. So saying that its being online is somehow more proof than any documentation doesn’t logically follow for me.

2 Likes

It should have been allowed to spew its nonsense as an example of its faults.

Microsoft putting an end to it and shutting it down covers up the problem with it and the conditions that gave rise to it.

Don’t worry. We don’t even treat real people like real people.

2 Likes

True enough.

LOL, that’s not why we leave Superfund sites around, but we sure as hell leave them around…

You’d think we’d get the warning factor as a tangential benefit of our own collective laissez-faire attitude toward them… but if that happened, we’d clean them up, right? So it ain’t working.

3 Likes