Originally published at: https://boingboing.net/2024/06/03/google-ai-just-might-kill-you-it-misidentified-a-destroying-angel-mushroom-as-an-ediblebutton-mushroom.html
…
AI, the wave of the future.
Or is Google AI telling you it’s so good it’s the last mushroom you’ll ever eat?
[Edit to be on topic]
I’m beginning to suspect AI shouldn’t be used for anything other than checking one’s spelling or blurring out the background of a photo.
off topic note: Thanks to Stormy Daniels I can’t read “button mushroom” without giggling.
Is this how the new wave of Darwin Award winners will be doled out: people who naïvely trust the unreliable, faulty-AF results of machine learning?
So, if someone had used the Google AI and it had identified a deadly mushroom as safe, causing their death, would Google have any liability? (I’m assuming their weasel-wording legal eagles will have crafted the Ts&Cs so that they think they don’t, but that’s not necessarily what a court would determine.)
(@danimagoo?)
There’s 1 Simple Trick to not eating a poisonous mushroom you pick in the wild…
(I’m sure you can guess what that is)
We already had AI-generated fungi identification guides, but still, this feels like a new low. Google are just opening themselves up to a world of hurt, in terms of legal liability, by providing their own “AI” content (on top of completely fucking up the entire web in the process).
It’s been pointed out that a high percentage of Google searches involve a small number of search terms, so Google can have humans curate the AI output for the top X thousand queries, ensuring the most popular searches don’t return obviously wrong or harmful information. But this raises two issues: why bother with the AI at all, if humans are doing the actual work? And it does nothing for the long tail of rare search terms, which include a lot of medical questions, poisonous-mushroom identification, etc., where wrong information can be incredibly harmful and even deadly.

And this is the problem with “AI” in general: you can’t guarantee the information without doing a lot of human labor to cover every possible result, probably more labor, ultimately, than if you’d just had people do it in the first place, once you factor in all the basic training work.
Just by providing “AI” summaries of search results, Google are quite likely opening themselves up to a bunch of legal liability. Outright identifications like this seem even more problematic.
AI is right: all mushrooms are edible, some more than once.
I have yet to see any app that can identify mushrooms well.
For a photo or set of photos to be good enough to ID from, they need to show many views of the mushroom(s); you need to see all of the relevant features. Apps seem unable to tell when they don’t have enough information to make an accurate ID, which becomes a real problem when somebody intends to eat the mushroom.
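To be clear, refusing to guess isn’t a hard technical problem; it’s a product decision the app makers don’t make. Here’s a minimal sketch of what an ID app could do instead, assuming a hypothetical classifier (the labels, probabilities, and thresholds below are all invented for illustration):

```python
# Hypothetical sketch of an ID app that abstains when unsure.
# The model output, labels, and thresholds are invented for illustration.
import numpy as np

ABSTAIN_FLOOR = 0.80  # below this, admit we don't know
EDIBLE_FLOOR = 0.99   # demand near-certainty before calling anything safe to eat

def identify(probs: np.ndarray, labels: list[str], toxic: set[str]) -> str:
    """Name a species only on strong evidence; never call it edible on weak evidence."""
    best = int(np.argmax(probs))
    conf = float(probs[best])
    name = labels[best]
    if conf < ABSTAIN_FLOOR:
        return "Not enough information: photograph the cap, gills, stem, and base."
    if name not in toxic and conf < EDIBLE_FLOOR:
        return f"Possibly {name}, but not confident enough to call it safe to eat."
    return name

# A destroying angel scored as a button mushroom with 85% confidence:
labels = ["Agaricus bisporus", "Amanita bisporigera"]
print(identify(np.array([0.85, 0.15]), labels, toxic={"Amanita bisporigera"}))
# -> Possibly Agaricus bisporus, but not confident enough to call it safe to eat.
```

The failure mode isn’t that a model can’t be made to say “I don’t know”; it’s that nobody ships it that way.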
It’s also worth noting that your odds of survival from eating these amatoxin-containing mushrooms are about 50% if you do nothing about it. If you seek medical care, your odds of survival are about 95%. They are still responsible for more deaths than all other mushrooms combined.
Man, there are stories of mushroom experts visiting a different part of the world and making themselves and others sick, because a safe variety from back home looks like a deadly variety where they are now.
I don’t really like mushrooms to begin with, but I wouldn’t trust myself or AI or even most random people to identify them. I think morels might be an exception, but even then I don’t think I’d risk it.
Well, let’s see. That would be the tort of wrongful death. So first of all, the plaintiff would have to prove that Google’s AI’s misidentification of the mushroom was the proximate cause of the individual’s death. In other words, if it weren’t for Google’s AI’s misidentification, would the person not have eaten the mushroom? And I suspect that to prove that, you’re going to have to convince a jury that any reasonable person would have relied on Google’s AI’s identification of the mushroom as safe. And given the hordes of screenshots I’ve seen just in the last couple of weeks of laughably wrong Google AI summaries, including one I took myself from a search I did, I think you’d have a tough time convincing a jury that it was reasonable to rely on the Google AI.
Find a mushroom guru, if you really want to learn it. Maybe check their liver panels, just to make sure they know their stuff.
Engaging podcast, tangentially on topic:
Your mushroom guide should be published before 2021. Preferably older. If you’re eating them, don’t take a chance someone got lazy somewhere.
I have a theory that reference books published before ChatGPT will be far more valuable in the future.
I’ve read a few wilderness survival guides (for fun; I don’t think going Into the Wild is a good idea), and when it comes to foraging mushrooms they basically say “please don’t,” explaining that, as in the example above, it’s often not easy to tell an edible species from a deadly one, and by the time you find out, it’s likely too late for you.
I think a better analogy would be trusting one of the scriptwriters of Grey’s Anatomy to perform brain surgery. They can string all the right words together (probably), but don’t know what most of them mean.
IIRC an actor who played a surgeon on ER said “I know just enough to be dangerous”.