Watson for Oncology isn't an AI that fights cancer. It's an unproven Mechanical Turk that represents the guesses of a small group of doctors


Originally published at: https://boingboing.net/2017/11/13/little-man-behind-the-curtain.html


The collective sigh from cancer survivors can be heard round the globe.


When I saw the IBM commercials during the World Series, I noted how vague the advertising was. Talking about how “this is not the cloud,” etc., and I thought: “huh?”

AI for medical diagnosis sounds wonderful. A tool for differential diagnosis could be useful. But doctors already have this. Most of the major conditions have a clinically developed cheat card that lets doctors zero in on the right dx with only a few questions. Google “diagnostic cheat card” for hundreds of examples.

Operationalizing this into an “AI” needs to deliver a significant gain over current practice. It needs to be faster, easier to use, and more accurate than whipping out a laminated card to jog the memory (if you don’t already have the questions memorized), mentally tabulating the patient’s responses, thinking for a minute about other factors, and then making a determination. Can an AI be even better than that? How so?
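To make the bar concrete: the laminated card is already trivially “operationalizable.” Here’s a minimal sketch of a cheat-card workflow as code; the questions, point values, and threshold are invented for illustration and are not real clinical criteria.

```python
# A hypothetical "laminated card" encoded as a scoring rule: the bar
# any diagnostic AI has to clear. All questions and weights here are
# made up for illustration, not clinical guidance.

def cheat_card_score(answers):
    """Tally yes/no answers against a simple point scheme."""
    points = {
        "fever": 2,
        "night_sweats": 2,
        "weight_loss": 3,
        "fatigue": 1,
    }
    return sum(points[q] for q, yes in answers.items() if yes)

def suggest_workup(answers, threshold=4):
    """Flag for further workup if the score clears the threshold."""
    if cheat_card_score(answers) >= threshold:
        return "refer for workup"
    return "routine follow-up"

print(suggest_workup({"fever": True, "night_sweats": True,
                      "weight_loss": False, "fatigue": True}))
# → refer for workup
```

An “AI” replacement has to beat this ten-line rule on speed, ease of use, and accuracy, which is a higher hurdle than the marketing suggests.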


I don’t normally comment, but I’m currently being treated at MSK. I’m unemployed, on Medicaid, and probably one of those people you say they don’t treat. Most of the people I’ve seen being treated have been lower to middle class. So I think you may want to reconsider that assumption.


“Sloan Kettering” - an institution endowed by and named after two of the great corporate plutocrat criminals of the 20th Century.

Sloan and Kettering made the blitzkrieg possible and poisoned generations of children. Kettering is also responsible for Freon®, although that’s a little more forgivable than his other crimes; he honestly did not know how damaging it would be to the environment that humans need to survive, and men of that generation believed that nature only existed to be consumed and destroyed.


1980s buzzword wants “AI” to get off its lawn.


I recall IBM’s “Deep Blue” AI having the same problem; that is to say, it was 80% fake.

This is becoming a pattern.


For research purposes, compared to people in marginal regions you may as well be a CEO at the Mayo Clinic. WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic. It means there are numerous factors outside of the experiment which could easily be found to preclude it from being applicable to the vast majority of the rest of the world’s population. As IBM is probably pushing this system as a “replacement” for proper doctors in developing countries, that’s a huge problem.


Source?


My ass, hence the word probably. Extrapolated from all the other tech “gurus” who think the answer to all the world’s problems is throwing technology at them in the cheapest manner possible.


Yeah, I really wish people would stop calling every expert system or semi-autonomous program AI. Few of them use neural networks at all, so why are they called AI? It just seems a whole field of research got taken over by the buzzword kooks again.


‘Probably’ requires something a bit more substantial than your ass. Throwing technology at a problem is how humanity got to where it is today, so sounds like a pretty good strategy tbqh.


IBM also announced the latest updates on adoption of Watson oncology offerings, which are now live or being implemented at dozens of hospitals and health organizations in Australia, Bangladesh, Brazil, China, India, Korea, Mexico, Slovakia, Spain, Taiwan, Thailand and the U.S. And Watson for Oncology was released to support physicians’ treatment of prostate cancer.

Watson: “All these patients are skinny and underweight. I prescribe cheeseburgers and fries.”


Well, if we’re being that pedantic I suppose I have to clarify the phrase “throwing technology at something in the cheapest possible manner” to mean trying to solve a problem that already has a solution (more doctors) by imagining we can solve that problem in another way that is cheaper (pretending an AI software trained on rich white people in the US is useful for anyone in the world).


AI software is a pretty generalisable technology (assuming this is some kind of neural network): if it turns out to work for rich Americans when trained on data from rich Americans, then it should work just as well when trained on a much wider data set. Also, just because it’s trained on rich-American data doesn’t mean it can’t work at all for anyone else. Cancer is complicated, but heritable susceptibility to random gene mutations is a human universal, and socioeconomic status doesn’t really come into play all that much for a whole host of cancers; WEIRD being a problem in psych research is a poor analogy for biophysics. I don’t see any evidence this is some kind of conspiracy to undercut the employment of doctors in third-world countries; that sounds a lot like a paranoid delusion to me.


We’re doing away with that “training people to be better” nonsense in this here New Nighted States. It was just a commie plot to enfranchise pickaninnies and queers, you know? Saints Reagan and Rand (pbuh) teach us that hardship and deprivation are what enables human beings to achieve the triumph of the will, so we’re eliminating all social spending on the poors so they can shine. That means no training, of course, things like training and healthcare and fair housing are just coddling the weak and preventing Darwin’s invisible hand from functioning. We need to be strong if we’re going to Make America Great Again!

OK, it’s somebody’s turn to be @Modusoperandi now, I did my shift.


This is my area of research: I likely know people on that MSK panel, and I’m pretty familiar with the current research in “predictive analytics” in cancer. The state of the art is pretty poor in this space; the cutting edge of this research is really about sequencing technology, i.e., uncovering the underlying mutations in your tumor. Once you have that basic data (e.g., “you have a BRAF mutation”), the rest is pretty simple (“take this BRAF inhibitor”).
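The “rest is pretty simple” step really is a lookup, which is worth spelling out because it shows how little “AI” is left once sequencing has done the hard part. A toy sketch (the mutation-to-drug pairings below are well-known examples, but this table is illustrative only, not treatment guidance):

```python
# Toy sketch of the mutation-to-targeted-therapy step described above.
# Pairings are illustrative examples only, not treatment guidance.

ACTIONABLE = {
    "BRAF V600E": "BRAF inhibitor (e.g. vemurafenib)",
    "EGFR L858R": "EGFR inhibitor (e.g. erlotinib)",
    "ALK fusion": "ALK inhibitor (e.g. crizotinib)",
}

def match_therapy(mutation):
    """Map a sequenced, actionable mutation to its targeted therapy."""
    return ACTIONABLE.get(mutation,
                          "no targeted therapy; consider standard of care")

print(match_therapy("BRAF V600E"))
# → BRAF inhibitor (e.g. vemurafenib)
```

The value is all in producing the key (the sequencing), not in consulting the table.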

We’re really not at the point where we’re able to take a complex integration of variables from your life history, like socioeconomic class, diet, etc., and produce any sort of better prediction about what your outcomes will be on various therapies. Mostly this is because we don’t have anything resembling a well-curated, comprehensive dataset covering these sorts of variables (imagine the nightmare in terms of coordination and medical privacy issues), but also because our understanding of cancer biology is still so piss-poor that the idea that we might be able to integrate additional information is a joke.

This story reminds me of the very similar moment in the late 90s when Silicon Graphics (SGI) claimed that they were going to solve protein folding using their advanced supercomputers. Nearly two decades later, we’re still no closer to solving protein folding, and SGI is defunct.


Human medical diagnostics are incredibly error-prone (especially when it comes to studying biopsy slides and similar visual tests), so any tools we can come up with to improve accuracy there should be encouraged. I’ve not looked into this exact product enough to know whether it’s any good (though from a cursory glance it doesn’t seem to use image recognition; it’s more about prediction based on patient-history data), but I certainly wouldn’t take @doctorow’s word that it’s a terrible thing (he has a poor track record here), even if there might be a hint of overselling from their marketing department.


SGI is defunct, but the protein-folding part of this is not true. I ran Rosetta++ around 2008 and folded pretty complex ~500 aa (amino acid) proteins: I picked the lowest-energy version from 10,000 reps, and it was a damn good approximation of the X-ray crystallographic structures in the PDB. Now, 500 aa proteins are old news.


I agree fully, but the implication I took from the article was that this is precisely what they have not done: i.e., fifty hospitals around the world are using this thing that was trained by some medics who only have experience of WEIRD patients.