Originally published at: AI-generated article hallucinates Christmas Day murder in small New Jersey town - Boing Boing
…
I guess if “Androids Can Dream of Electric Sheep,” then LLMs can hallucinate.
I’m even less likely to trust tech that’s ‘tripping balls’ than I am most humans.
AI can dream of electric sheep, sure.
Only they’ve got too many legs and they’re not always in the right places.
Another way of looking at it:
THEY’RE ALL HALLUCINATIONS; THE AI DOESN’T, AND CAN’T, KNOW THE DIFFERENCE.
Nothing, at all, ever, that a generative engine produces can be relied upon to be “true”. It is repackaging what it has seen into a different form through opaque algorithms. There is no “understanding”, there is no “analysis”, and there is no “critical thought”.
It is not a “summary”, or an “analysis”, or a “report”; it is a semi-random squeezing of mechanically recovered content paste that looks like one. If you ask it for a story, it will invent something that looks like a story, and if you ask it for sources and citations, it will cheerfully invent those too.
The entire and only possible output of these engines is plausible lies. If those lies happen to be congruent with facts, then that’s sheer bloody coincidence.
Um… okay, then.
It was not clear to me that you were joking.
I don’t like this constant push on all fronts towards tech that seems unreliable a/f, and that’s where my ire lies.
Have a good evening.
In 1994, Intel’s Pentium processor had a limited-scope floating-point division bug that cost the company almost $500MM… but today, we’re asked to accept LLM output that’s wrong more often than it’s right. Ludicrous. But, like NFTs and every other flash-in-the-pan-hoping-to-be-DOTCOM thing, hopefully we’ll only have to deal with it for a year or so.
Fingers crossed.
It’s already eating itself alive, so unlike most things these days…I’m hopeful
… before anybody else does an image search for “AI sheep” — first consider, as I should have, that “AI” also stands for “artificial insemination”
Well, I guess NewsBreak has completely immolated whatever reputation they had, and can no longer be considered a reputable source for anything. It’s not just that someone was so desperate to create “content” that they used an LLM to generate a hallucination and submit it as a story, it’s that the conditions that allowed this to happen even existed in the first place. This is the fate of all who make use of LLMs for presenting supposedly factual text.
And right there lies one of the problems; those fuckers pushed this term so hard, knowing full well it’s not what it implies, and now almost everyone uses it, despite it being completely wrong.
I remember a few years ago Cory Doctorow was asked if he was worried AI would take over. He replied something to the effect of “No, I worry it will continue to suck but people will also continue to trust it.” Have never found a better way to say it.
BB already did with that Midjourney creation from the AI article seed.
The AI seed you say?
I think of it as “Artificial Imbecility.”
Horror movie prompt:
A paper uses AI to write blurbs on the top news stories, only it keeps spitting out articles about events that didn’t happen.
Until several days later, the events do happen, exactly as the AI’s articles said. It starts off innocuously enough, with stories about a cat rescued from a tree and an old man donating a large sum to charity. But then things get dark… natural disasters, scandals, and even… murder!
Is the AI predicting the future, or causing it?