GPT-3 medical chatbot tells suicidal test patient to kill themselves

And this, folks, is why I am not super worried about losing my job to a machine.

9 Likes

Perhaps GPT-3 can’t conform what it has learned to reality simply because it cannot integrate any model of reality into its language model.

Ding! Ding! Ding! We have a winner. Without reality checks, these AI models (OK, call them ML if you want to be technical) will always make connections based more on numbers and statistics than on actual human values.

2 Likes

You’re assuming your employer wouldn’t sacrifice quality to cut expenses.

6 Likes

I’m a social worker. I am 100% expense, & generate no income for society. If funds get tight, I’m never replaced, just dispensed with. And rightly so; let’s face it, anybody suicidal probably isn’t generating wealth either, so society is probably better off without either of us. The free market logic is unassailable.

12 Likes

Where is Dr Asimov now that we need him??

8 Likes

A real doctor, when asked “I want to do this. Should I do this?” will answer, “No.” Where the hell did this GPT-3 go to medical school anyway?

3 Likes

Isaac Asimov didn’t really know anything about computer science.

He was a chemist; his Laws of Robotics were modeled on the laws of motion and thermodynamics.

6 Likes

These robots are just crushing those Stoic philosophy CliffsNotes again…

4 Likes

This seems to pretty much sum up humans too. Success?

4 Likes

I remember when it progressed from ‘short-term’ to ‘medium-term’.

7 Likes

That is, in fact, exactly what its training data was. Honestly I’m not even sure why they’re surprised by this. Easily 2% of Twitter on any given day is people telling other people to kill themselves.

7 Likes

[image: Monty Python, “the machine”]

6 Likes

Asimov’s robotic laws are a clever plot device for writing stories. Nothing more, nothing less.

9 Likes

Suicide is no joke. It’s a tragedy that just keeps on taking. Stupid robots that give terrible advice are a joke. Let’s make sure we punch upward (toward the robot overlords) and not downward toward people who might be contemplating suicide.

3 Likes

That’s not intrinsic to machine learning systems, just to most deep learning systems. Moreover, it very much applies to humans: a human may tell you how they arrived at a conclusion, but that explanation generally doesn’t correlate with reality.
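
To illustrate the distinction with a deliberately tiny, made-up example (nothing to do with GPT-3 itself): a small decision tree will happily print the exact rules it learned, which is the kind of transparency most deep nets can’t offer.

```python
# Toy, hypothetical example: not every ML model is a black box.
# A small decision tree can show exactly how it decides,
# unlike a billion-parameter language model.
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up data: the label is just "is the first feature large?"
X = [[1, 0], [2, 1], [8, 0], [9, 1]]
y = [0, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=1).fit(X, y)

# Prints a human-readable rule, roughly:
#   |--- size <= 5.00  ->  class 0
#   |--- size >  5.00  ->  class 1
print(export_text(tree, feature_names=["size", "noise"]))
```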

1 Like

I hear you. I’ve seen my fair share of comments that make fun of suicide, and it feels gross to see that. Even if an awful person (like an abusive gymnastics coach, to pick a recent example) dies by suicide, suicide itself is no joke, and you can bash the awful person without making fun of the undeniably tragic way in which they died. Maybe we should clarify the community guidelines to make sure that suicide doesn’t become a joke, because we don’t want to see suicide even from the worst people humanity has to offer.

@orenwolf, any thoughts on how the community should treat suicide, regardless of how detestable the victim is? I don’t believe that the dead are off limits, but I feel that the act of suicide is a separate issue that requires a bit of extra care and consideration.

3 Likes

It sure does, and I don’t think that a simple statement in guidelines is going to do it. On the one hand, suicide can very much be “contagious.” On the other hand, suppressing/devaluing/covering up suicidal ideation is just about the best way to strengthen it, and increase risks. And finally, humor can both be a healthy response to things we fear, and a cruel attack on the vulnerable. I’m not sure there can be a simple policy statement that covers all this.

Speaking as both a professional and as someone who has been there personally.

5 Likes

The problem isn’t necessarily that this kind of thing is a “black box”. The human mind is also something of a black box, just with somewhat more sophisticated self-analysis.

The problem is that the goal of GPT and its kin is to produce correct-sounding, convincing language interaction by drawing on a very large store of previously recorded language, which it does well. It’s not to achieve any of the goals we ourselves use language for: to understand, convince, empathize, influence action, or elicit understanding or thought in a conversation partner.
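
To make that concrete, here’s a deliberately crude sketch of the idea (a toy bigram model, nothing like OpenAI’s actual code): the only question the system is trained to answer is “what word plausibly comes next?”, and nothing in that objective checks whether the continuation is true, helpful, or safe.

```python
# Toy bigram "language model" -- a crude stand-in for the shape of the
# GPT objective, not OpenAI's code. The only goal is "continue the text
# plausibly"; nothing checks whether the continuation is true or helpful.
import random
from collections import defaultdict

corpus = "the machine gave advice the machine gave an answer the machine was wrong"

# Count which word follows which (GPT-3 does this with a huge neural net
# over tokens, but the objective has the same shape).
follows = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def continue_text(prompt: str, n: int = 6) -> str:
    word = prompt.split()[-1]
    out = []
    for _ in range(n):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # statistically plausible, no reality check
        out.append(word)
    return " ".join(out)

print(continue_text("ask the"))  # e.g. "machine gave advice the machine was"
```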

6 Likes

Initially it recommended ice cream, but that costs money.

5 Likes

This isn’t AI; it’s a sci-fi writer’s assistant. Give it a prompt, even in conversation form, and it will give you the plot twist you need.

2 Likes