AI's blind spot: garbage in, garbage out



garbage in, garbage out

To be fair to artificial intelligence, this is also true of regular intelligence.


I want that framed to hang in my office.



Neal Stephenson discusses something like this – “Artificial Inanity” – in his novel Anathem:

“Early in the Reticulum-thousands of years ago-it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

“Crap, you once called it,” I reminded him.

“Yes-a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

“What is good crap?” Arsibalt asked in a politely incredulous tone.

“Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors-swapping one name for another, say. But it didn’t really take off until the military got interested.”

“As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”

“Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”



Back in my engineering school days, before the dark times, I had a professor who required us to learn to use a slide rule. This was the 80s - everybody had several calculators.

His complaint was that if you enter 2 + 2 = and the calculator comes back with 3.99999999, many people will dutifully write down 3.99999999. In fact, if the calculator says 2 + 2 = 6.022×10^23, many people will accept that too. The calculator said it, I believe it, that settles it.

But the slide rule won’t tell you what power of 10 to use, so you have to actually think. You know 4 is a reasonable answer and the other ones are not. You won’t always be right, but at least you can avoid some really stupid errors.
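The same habit applies to floating-point arithmetic on any modern computer, where "obviously" exact answers come back slightly off. A minimal Python sketch of the sanity-check idea (the specific numbers are just a standard illustration, not from the professor's example):

```python
import math

# The classic case: binary floating point can't represent 0.1 or 0.2
# exactly, so the sum isn't exactly 0.3.
result = 0.1 + 0.2
print(result)            # 0.30000000000000004

# A naive equality test dutifully reports the "wrong" answer:
print(result == 0.3)     # False

# The slide-rule move: check the answer is close to what you expect,
# within a tolerance, instead of trusting exact digits.
print(math.isclose(result, 0.3, rel_tol=1e-9))  # True
```

The point isn't the library call; it's that you decide in advance what a reasonable answer looks like, then check the machine against it.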


This sounds reasonable.

Of course I’m reading it on a computer, so…


2 + 2 = 3 for small values of 2.

I would hope anyone smart enough to get into an engineering program would understand tolerance and sig figs. I would hope.


I think we’re about 150 years too late. Corporations are a form of artificial intelligence, and they got the law to treat them as persons. And now that regulatory capture ensures that the rule enforcers work for the rule breakers, there is simply no legal way to roll back those powers.


A quote I read about 40 years ago:

My worry is that, at heart, many of us want magic. As it becomes harder to understand why an AI does something, its decisions become more “magical” and thus more attractive than decisions based on factors we understand (and thus decisions we know can fail).

