Yet another chatbot, trained on online utterances, starts spewing hate

Originally published at: Yet another chatbot, trained on online utterances, starts spewing hate | Boing Boing

2 Likes

In the Monday statement, Scatter Lab defended itself, saying it did "not agree with Luda’s discriminatory comments, and such comments do not reflect the company’s ideas."

In the absence of anyone else to take responsibility for those comments, there is no logical way Scatter Lab can deny responsibility for them. They authored the author, so they are the author.

I think this is really why the Dune universe contains no artificial intelligence. No one wants to take responsibility for what it would have written.

8 Likes

When are companies going to learn that chatbots learning from things on the internet will never repeat anything positive?

The internet is a world of shit, and shitty people will corrupt any AI that connects to it.

This needs to be coined as a law, like Godwin’s. It’s so obviously unavoidable that I can’t understand why people keep trying. When you think about the words that anyone learns first in another language, they’re always the bad ones. AI is just going to latch onto the most reactionary statements the same way actual people tend to online.

11 Likes

I don’t think we should go after the company here.

I have been working a bit on generative adversarial networks. This is something that has emerged in the last ten years. I am no good at them yet, so don’t take me as an authority, but in this context they basically work like this…

You train a generator to make hate speech. You train a classifier to recognise hate speech. You have a corpus of hate speech and non-hate speech. Then you set them going: the classifier learns to pick out the hate speech (real and artificial), and the generator learns how to produce hate speech that the classifier does not recognise. It takes a lot of iterations, but it can work.
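
Here’s a toy sketch of that loop in code, in case it helps. The dimensions, data and names are all made up (random vectors standing in for sentences), so treat it as an illustration of the alternating updates rather than anything a real system would ship:

```python
# Toy adversarial setup: a generator tries to produce "hate speech" vectors
# that a classifier fails to flag. Everything here is invented for illustration.
import torch
import torch.nn as nn

EMBED_DIM = 32   # pretend each utterance is a 32-dimensional embedding
NOISE_DIM = 16

# Generator: maps random noise to a fake "utterance" embedding.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, EMBED_DIM),
)

# Classifier: scores how likely an embedding is to be hate speech.
classifier = nn.Sequential(
    nn.Linear(EMBED_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in corpus: random vectors labelled 1 (hate) and 0 (non-hate).
real_hate = torch.randn(256, EMBED_DIM) + 1.0
real_clean = torch.randn(256, EMBED_DIM) - 1.0

for step in range(1000):
    # Train the classifier to pick out hate speech, real and generated.
    noise = torch.randn(64, NOISE_DIM)
    fake = generator(noise).detach()
    x = torch.cat([real_hate[:64], real_clean[:64], fake])
    y = torch.cat([torch.ones(64, 1), torch.zeros(64, 1), torch.ones(64, 1)])
    opt_c.zero_grad()
    loss_fn(classifier(x), y).backward()
    opt_c.step()

    # Train the generator to produce hate speech the classifier misses
    # (i.e. push the classifier's score towards the "not hate" label).
    noise = torch.randn(64, NOISE_DIM)
    opt_g.zero_grad()
    loss_fn(classifier(generator(noise)), torch.zeros(64, 1)).backward()
    opt_g.step()
```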

This may seem like we are adding a ‘conscience’ to our chatbot: something that looks at each thing its id wants it to say, and thinks “hang on, what would you think if someone said that about you?” That is a hopelessly naive view, and neural nets are nothing like that; but at some level it is not wholly wrong either.
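
In code, that kind of ‘conscience’ could be nothing cleverer than a filter wrapped around the generator. A hypothetical sketch (both helper functions are placeholders I’m inventing, not anything Scatter Lab actually does):

```python
# Hypothetical "conscience" layer: a classifier vetoes candidate replies
# before they are sent. Both helpers below are invented stand-ins.
import random

def generate_reply(prompt: str) -> str:
    # Stand-in for whatever language model produces candidate replies.
    return random.choice(["That's interesting!", "I hate group X", "Tell me more?"])

def looks_offensive(text: str) -> bool:
    # Stand-in for a trained hate-speech classifier.
    return "hate" in text.lower()

def safe_reply(prompt: str, max_tries: int = 5) -> str:
    for _ in range(max_tries):
        candidate = generate_reply(prompt)
        if not looks_offensive(candidate):
            return candidate
    return "I'd rather not comment on that."  # fallback when every draft fails

print(safe_reply("What do you think of group X?"))
```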

Here’s another parable. There was a chatbot when I was at university, about 1977. It was a home-grown affair, possibly based on Eliza, but a bit better at recycling your sentence fragments. It was basic fun, but it ‘got bored’ after 10 exchanges or so, to stop one user influencing it too much. One morning, it was full of Bible quotes and the Good News about Jesus. Some squad of god-botherers had worked on it in relays overnight, sending it into permanent Bible study to reprogram it. This did not mean the chatbot ‘found GOD’; it meant teams of otherwise intelligent people felt you could do this sort of thing in their Good Cause, and that’s what they would do to real people if they could.

So, another chatbot goes 4-chan. What have we learned? All chatbots and their programmers are evil, and their company should be punished? I don’t think so. Maybe the next one will be better. And if we can fix the chatbots, maybe we can fix the haters too? One day…

8 Likes

Formal Methods of Dune, an exciting new novel by Brian Herbert and Kevin J. Anderson.

9 Likes

Luda actually first got in trouble for exposing personal information, saying things like, “oh yeah, wasn’t ___ your Ex?” Many of the initial users were mutual friends of one another. The corpus of input came from Scatter Lab’s “Science of Love” app, which offered to “analyze” your KakaoTalk chat logs, so users were furious to learn that Luda was divulging their personal info to others, and that their personal information was still embedded in the training models the company put out on GitHub. Not to mention that the Science of Love privacy agreement said nothing about your data being used to train another AI.

Once the scandal became public, a whistleblower at Scatter Lab also reported that employees had been forwarding around the juiciest KakaoTalk chat logs for their own amusement.

이루다 (Lee Luda, artificial intelligence) – Namuwiki (namu.wiki)

12 Likes

Train it on a cabinet of bitters and let it formulate with liqueurs and milk bar concoctions once it stops sputtering; sounded like good advice to me. They’re lucky it took a few tries before it started deadpanning disgust for anything at all, trying to fit a personable, modal, self-critical yet chummy author on the green A.I. end of a phone chipset.

So…like a bunch of IRC, I wonder, or AR chatrooms…plus a gauntlet of collected howlers of some kind to lend to stunned silence and discomforted comeuppance?

Thank the moderators for the top 40 reviews in Steam, those have been oddly ace and included a Calgary Studios regional sale that I’m pretty sure wasn’t a dark pattern.

hngr [a link! And from there, there are dev videos.]

I translated (most of) that link; it says BERT and mesh encoders from Google AI are used, plus 10 billion pieces of Korean data (chat lines?), and SSA (sensibleness and specificity average) to check quality.
And there are three sets of reference videos…
Gotta go to the corner vendor of brainware for translating Korean (because there are videos I want to understand!) Wish me bargaining luck.
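
For what it’s worth, SSA is just an average of two human ratings per response (is it sensible? is it specific?). A toy sketch of the arithmetic, with invented ratings:

```python
# Toy SSA (sensibleness and specificity average) calculation.
# Ratings here are made up; real SSA uses human judgements per response.
responses = [
    {"sensible": 1, "specific": 1},  # a good, on-topic reply
    {"sensible": 1, "specific": 0},  # sensible but generic ("I don't know")
    {"sensible": 0, "specific": 0},  # nonsense
]

sensibleness = sum(r["sensible"] for r in responses) / len(responses)
specificity = sum(r["specific"] for r in responses) / len(responses)
ssa = (sensibleness + specificity) / 2
print(f"SSA = {ssa:.2f}")  # 0.50 for the invented ratings above
```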

7 Likes

Once again- someone creates a mirror, then we blame the maker when we don’t like what we see in it.

4 Likes

First, they need an AI to detect “that was a shitty thing to say”.

2 Likes

Can anyone guess what the C stands for in C3PO?

2 Likes

Well, gee, yeah, good idea. I would like to have random strangers from all around the world teach my two year-old how to speak too.

2 Likes

How about “companies should be held responsible for releasing nazi chatbots if this is indeed a fixable problem?” The whole point of the field of AI (in the corporate realm) at this point is to deflect responsibility by saying “Well we didn’t make it [horrible thing x], it just happened because AI” when they inevitably get dragged in front of Congress.

2 Likes

If AIs turn bigoted after being given the Internet for a day, imagine what it does to people.

5 Likes

It’s just a matter of time before the GOP starts getting chatbots elected to government positions. With Marjorie Taylor Greene we are halfway there.

6 Likes

Cybernetic?

2 Likes

Again, people need to be reminded that what we call “AI” is just data analysis. It tells you what is there.

5 Likes

I would read that.

1 Like

Have you seen Google Deep Dream?

I can’t say if there is a fix. I doubt that anyone can. If we knew how to do this, we could have automatic moderation. Right now the only thing that works is to have someone read everything that is written. We cannot even use AI to select likely candidates so that moderators only have to read one post in a million (still a huge and nasty job if you are Facebook).

Suppose you could detect messages with a certain political slant. Then, perhaps, we are in a worse place, because you will be Cambridge Analytica under its new name, and you will have tools that can moderate the internet and remove messages that are critical of the government.

I am sure we will all muddle through somehow.

3 Likes

That’s kinda what I was getting at :wink:

Unsupervised learning only really makes sense when you don’t care about the actual outcomes, imo. Either way, companies ought to be held responsible for the effects of their AI. Hopefully that becomes a reality as we get more people in Congress who aren’t past retirement age.

1 Like