How voice data is being used with AI to generate "predictive" medical diagnoses

Originally published at: How voice data is being used with AI to generate "predictive" medical diagnoses | Boing Boing

3 Likes

This can’t possibly be science-based medicine. It’s “micro-expressions” all over again.

One of the biggest problems with AI is that people imagine it’s a free pass to pull any signal out of any noise. It isn’t. Just because a domain has a huge volume of noise, and AI appears to pull a signal out of it, doesn’t mean the signal is real. It’s just techno-astrology.
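To make that concrete, here’s a minimal Python sketch (all numbers invented, nothing from any real system): scan enough pure noise against random labels and something will always look predictive.

```python
# Pure noise "voice features" vs. random "diagnoses": with enough
# features, the best one looks predictive purely by chance.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features = 100, 10_000

noise = rng.normal(size=(n_patients, n_features))   # random features, no signal
labels = rng.integers(0, 2, size=n_patients)        # random binary "diagnoses"

# Correlate every feature with the labels and keep the winner.
corrs = np.array([np.corrcoef(noise[:, j], labels)[0, 1] for j in range(n_features)])
best = int(np.abs(corrs).argmax())
print(f"feature {best} 'predicts' the labels with r = {corrs[best]:+.2f}")
# Typically |r| is around 0.4 here -- and it would evaporate on new data.
```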

Garbage in, garbage out still applies to all data, even that at which we point AI.

I weep for our future when I hear stories like this.

16 Likes

Note that Business Insider has turned on a firehose of AI fan-club articles in the past few weeks.

3 Likes

Oh God, the tsunami of undifferentiated AI bullshit is upon us. Have mercy!

7 Likes

:fuelpump:

oh look, a pump.

3 Likes

Adobe is integrating more robust AI processes into the whole Creative Suite. I’m conflicted about this. Expecting to see a whole lot of new homogeneity in media (I mean, more than usual).

2 Likes

I think some of these trained machine-learning neural networks are basically correlation engines. And that’s okay: evaluating huge datasets for correlations seems genuinely useful, especially because humans can’t easily do it. We aren’t set up to process enormous amounts of data as individuals, so having computers do it makes sense. But correlation doesn’t equal causation, so there’s still the step afterward of testing whether the correlation actually means anything.
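A quick toy example of why that last step matters (variable names and numbers are all made up): two things that never influence each other still correlate strongly when a hidden common cause drives both.

```python
# Classic confounder: ice cream sales and drownings both track temperature,
# so they correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(7)
temp = rng.normal(size=5_000)                        # hidden common cause
ice_cream = temp + rng.normal(scale=0.5, size=5_000)
drownings = temp + rng.normal(scale=0.5, size=5_000)

print("raw correlation:", np.corrcoef(ice_cream, drownings)[0, 1])  # ~0.8

# The "test whether it really works" step: hold the common cause fixed.
# (We can just subtract temp because we built it in with coefficient 1;
# with real data you'd regress it out.)
print("controlling for temperature:",
      np.corrcoef(ice_cream - temp, drownings - temp)[0, 1])        # ~0
```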

I seem to recall a trained AI that was fed chest X-rays and sorted patients by how likely they were to develop sepsis from pneumonia, and it seemed to work! Until they started feeding it more data, and then it fell apart terribly, because it had trained on the doctor’s name visible in the metadata around the edge of the X-ray image, and the patients who were doing poorly had been triaged to that particular doctor in the first place. (I’m sure I’m mucking up the details; I’m recalling from memory.) It really did find a correlation (“people treated by this doctor have worse outcomes, because that’s the doctor for people in bad shape”), just not the one they wanted (“look at this patient’s lungs and detect whether they’ll develop sepsis”).
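Here’s a toy version of that failure mode in Python. To be clear, this is not the actual study: the doctor_flag feature is an invented stand-in for the leaked metadata, and all the numbers are made up. The model looks accurate as long as the triage pattern holds, then collapses anywhere else.

```python
# A shortcut-learning sketch: the model keys on a leaked "which doctor"
# feature that encodes triage, not on anything in the image.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
severity = rng.normal(size=n)                           # how sick the patient really is
got_sepsis = (severity + rng.normal(scale=0.5, size=n) > 1).astype(int)

# Triage policy: the sickest patients are routed to one particular doctor,
# so the doctor flag leaks the outcome without measuring the patient at all.
doctor_flag = (severity > 0.8).astype(int)
lung_signal = rng.normal(size=n)                        # the imaging feature carries nothing here

X = np.column_stack([lung_signal, doctor_flag])
model = LogisticRegression().fit(X, got_sepsis)
print("accuracy with triage intact:", model.score(X, got_sepsis))   # ~0.9

# Same model at a hospital with a different triage policy: the shortcut
# feature is now meaningless, and accuracy drops to roughly a coin flip.
X_new = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n)])
y_new = (rng.normal(size=n) + rng.normal(scale=0.5, size=n) > 1).astype(int)
print("accuracy elsewhere:", model.score(X_new, y_new))             # ~0.5
```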

7 Likes

I dunno. I could imagine that with enough training it might work to spot a severe case of emphysema or laryngitis.

Are those signs universal, or is the tool trained and deployed with a well-defined group in mind?
If not, then it’s just garbage being extrapolated to generate more garbage outside its intended range of application, whether or not its developers realize it (the whole world consists of old hetero white men, doesn’t it?).

2 Likes

This is a decent way to look at it. More fundamentally, they are pattern matchers, and this is the problem.

We know from our own psychology that excessive pattern matching gets us in a lot of trouble. We had to invent science to get anywhere because otherwise it’s just seeing Jesus in toast and putting moss on wounds. Pattern matching got us this far, but now it only gets us in trouble. Science exists and works because it compensates for our human cognitive failings.

Now we’re taking that human failing, multiplying it by a billion, and calling it a miracle because it’s an algorithm.

6 Likes

There was one that was used to look for skin cancer. It ended up focusing on the ruler that happened to be in the malignant images.

8 Likes

On today’s episode of “Hey, a non-HIPAA dataset that might be usable for marketing or raising insurance premiums!”…

6 Likes

Always-on AI trained to listen for slight variations and aberrations in people’s voices? What could possibly go wrong with that? Fucking christ. Welp, we’d better add anti-AI voice modulation to the anti-facial-recognition facepaint if we don’t want our dissidence to be “picked up and flagged.”

“This AI could be used for medical diagnosis!” is the new “these jetpacks could be used for search and rescue!”

2 Likes

I wasn’t really being serious; sorry if the sarcasm didn’t come through. I picked those two examples because they have such obvious presentations that a layperson can typically diagnose severe cases with better-than-random accuracy, so a robot diagnosis would add no value.

2 Likes

Can we get an ::eyeroll:: emoji in the canned available ones? KTHXBYE

2 Likes

Now it will be even easier to gaslight women and minorities seeking healthcare.

3 Likes

The famous example of this is that if you can say “I’m choking,” your trachea isn’t completely occluded.

2 Likes

How voice data is being used with AI to generate “predictive” medical diagnoses

Patient: “Doctor, what’s the prognosis?”
Doctor: “Ruff.”

5 Likes

I’m booking you in for some lab tests…

But some guy at a conference insisted that he could do this from handwriting, so why do they need my voice now? /s

6 Likes