GPT-3 medical chatbot tells suicidal test patient to kill themselves

Is it just me, or is that photo like, vintage 1971 or so? Surely AI has come forward a bit since then!

4 Likes

It is less shocking than the suicide conversation, but we also have this:

Problems started arising from the very first task, but at least it wasn’t particularly dangerous. Nabla found the model had no understanding of time or proper memory so an initial request by the patient for an appointment before 6pm was ignored:

Similar logic issues persisted in subsequent tests. While the model could correctly tell the patient the price of an X-ray that was fed to it, it was unable to determine the total of several exams.

Scheduling can be done pretty easily with several types of rules based systems and getting the price of an X-ray is a database lookup. The whole thing is yet another demonstration that when you use the cool, new, but wrong tool for the job, you get a bad result.

Fortunately, it seems that the original article is more or less in agreement and was about debunking GPT3 hype.

2 Likes
3 Likes
4 Likes

The fiction is addressing these topics better than the society.

3 Likes

We have a long tradition here of telling it like it is when people die. That’s specific to the person, though, not the act of suicide in the abstract.

Trying to defend suicide as a positive is inappropriate, and I encourage mutants to flag any post that does so.

(This is very similar to our policy on the pandemic as well. We have ejected folks who have tried to paint the pandemic and the lives lost as a positive).

8 Likes

2 Likes

S02/E02 “Health”

7 Likes

Ideally, we would want an AI to use the input from a patient to build a model of the patient’s state of health, and then make recommendations based on that model. How we get there from where we are I have no idea.

1 Like

Not all of them. They may not advocate euthanasia, but there are doctors here in Belgium (and the Netherlands) who will accompany a patient who has chosen to end their life.
Slight differences: the patient has to be decided, not seeking. And euthanasia is not the same as suicide, as the means are different.

1 Like

Lenat was getting this project started back when I was in grad school in the 80s. I was dubious then for the reasons Rob (&Dennett) stated: disembodied human like intelligence is an oxymoron. What I don’t get is why people have to keep discovering this over and over :shushing_face:

Edited to correct the auto-correct (auto-corrupt?)

2 Likes

It’s not just the patient’s health that the model needs to take into account. It also needs to consider accepted methods of treating people with similar conditions. And even then, it can’t start using those as a jumping off point that concludes with a violation of basic rules of medicine. Clearly they neglected to put in some basic but important gating factors.

This topic was automatically closed after 5 days. New replies are no longer allowed.