AI is like a magic trick: amazing until it goes wrong, then revealed as a cheap and brittle effect

Originally published at:


Science fiction author Arthur C. Clarke figured this out a long time ago. His third law (1973):

 Any sufficiently advanced technology is indistinguishable from magic.

That being said, a lot of people like magic, and will embrace things that look like magic.


We’re still a long way off from true artificial intelligence. What passes for AI today is certainly artificial, but it’s not intelligent. It should not be trusted to make good decisions - there’s no one there, there’s no actual thinking going on.


This quote might be more accurate for our current technological and political climate.


A couple of years ago, I spent several months evaluating various captioning services. The big buzz was around the AI entry, IBM’s Watson. Watson (and, to be fair, similar AI products) is intolerably awful at understanding human speech and cannot generate captions that are worth a damn. Even at its best, the resulting caption files required hours of human correction.

Since this project, I have referred to AI as “manufactured stupidity”.


Crazy dystopian idea:

What about replacing artificial intelligence with real intelligence?

Terminal patients being given the option to implant their healthy brains into consumer products such as cars, trains, etc. A second lease on life.

Talk about the possibilities for sci-fi. The very wealthy being able to pay for a luxury life support system to spend their days watching movies, while the poorer folks continue their lives as indentured taxi drivers. Your self-driving car showing up late because it was tired and watching cat videos instead of working.


The metaphor of AI as a “magic trick” in this article is weak; it’s like debunking the public’s illusion in 1908 that a Model T is better than a horse. A better analogy is a two-year-old seeing a giraffe and calling it a horse. That a child and a machine can make such a mistake is remarkable, even magical. And the child and the machine are only going to get better and better. Rather than debunking the public’s illusions about AI (or rather, someone’s projection of the public’s illusions), our efforts are better spent preparing for the certain eventuality of smarter AI, and at least trying to make sure that the effects are beneficial.

IMO, the present danger of AI is that owners are using AI to replace human jobs so they can make more money. We already have machines doing customer service and deciding whether you are worthy of a loan, using all the judgment and competence of two-year-olds, not adult humans. But this situation is not caused by misapprehension on the part of the public or of the field of AI.

I do agree that self-driving cars in the next few years are an empty fantasy, and that we would be better off knowing the failure rates of the algorithms we are subject to. The root problem here is not the public’s gullibility or malfunctioning AI, but the Corporatocracy. In any case, though, those failure rates will inevitably go down, cars will self-drive, pixels will not cause catastrophe. What kind of world will we want to live in then?

A few comments on the Forbes article:

  • Practical solutions are NOT “formed by grouping algorithms into pipelines”. Most machine learning (ML) networks are designed and trained as one piece.

  • Machine learning development is VERY dissimilar to how code has been written traditionally (not “since the beginning of time”). The algorithm itself writes what we traditionally call code; the developer will wrangle data, design and test architectures, and verify results.

  • Machine learning algorithms are not “piles of code that do little more than record simple lists of statistical correlations”. This statement is entirely untrue. Rather, they definitively surpass statistical tools in their capacity to detect subtle patterns and generalize effectively from them.

  • "this way of thinking leads us to be less cautious in thinking about the limitations of our code, subconsciously assuming that somehow it will “learn” its way around those limitations on its own without us needing to curate its training data or tweak its algorithms ourselves."
    No machine learning practitioner believes this - it’s ridiculous. Curating data and tweaking algorithms is ML’s bread-and-butter. Perhaps he means that corporate management wants to believe this way of thinking.
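That last point is worth making concrete: in ML development, the training loop writes the decision rule, while the human curates the data, picks the architecture, and verifies the results. A toy sketch of that division of labor (a single perceptron learning AND; the data, learning rate, and epoch count here are all made up for illustration, not anything from the article):

```python
# Toy illustration: the "code" that decides (the weights) is written by
# the training loop, not by the developer. The developer does the rest.

# Developer work #1: curate training data (here, the AND function).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Developer work #2: choose an "architecture" (a single perceptron)
# and a learning rate.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    # The learned decision rule: a weighted sum with a threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The learning algorithm "writes the code": it adjusts w and b from data.
for _ in range(20):
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

# Developer work #3: verify the learned behavior against the data.
assert all(predict(x) == target for x, target in data)
```

No human ever writes the final values of `w` and `b`; they emerge from data. That is why curating the data and tweaking the training setup are the practitioner’s bread-and-butter, exactly as the comment above says.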


Would you like some book recommendations?


Hit me.  

10 Computer invented.
20 Someone thinks “hey if we can just make one of these that thinks like a person does then we can start replacing menial human tasks.”
30 Someone else realizes that’s really goddamn hard barring some insane breakthrough in computing power.
40 Human imagination briefly moves on
50 GOTO 20


I’m just curious. That AI by Google DeepMind that beat expert StarCraft players, that taught itself from scratch (there was no training set at all, it just started with the rules), and that found new StarCraft strategies the experts learned from: is that stage magic also? I’m just askin’, really I don’t know.


“The Ship Who Sang” multi-book series by McCaffrey
“The Fifth Head of Cerberus” by Wolfe (although Mr. Million’s brain is destroyed)
“The Forever Man” by Dickson

I’ve spent all the time since you asked trying to find a certain Zelazny story that is apropos, because, well, Zelazny. But I failed… “The Engine at Heartspring’s Center” is close, though.

Edit: How did I forget “Becalmed in Hell” by Niven?


This is surely debated within the AI gamer world, but the AI could exploit things that human gamers cannot: incredible rates of precise clicking and the ability to monitor the entire map without needing to move the camera. The article explains it a bit, and once the AI team made adjustments, the AI lost.

I believe AI will surely be amazing for a lot of applications, but meanwhile it will produce new problems that humans will have to manage. I know I’m biased here, but it seems like the way all technology works: my job is to fix things when automated systems fail. The automated system replaced a lot of people, for sure, but now it needs a team to manage the screw-ups.

Is AI going to be flawless? That’s just not how anything in the known universe works.


And then there is Pratchett’s corollary, “Any sufficiently advanced magic is indistinguishable from technology.”


K.W. Jeter did this years ago in Noir. Asp-heads (a backwards formation from ASCAP) are able to punish copyright infringers by ripping out enough of one’s spine and brain that consciousness is preserved, then implanting it into, say, a toaster for a very long and deadly dull existence.


Human intelligence is like a magic trick: amazing until it goes wrong, then revealed as a cheap and brittle effect.


For all of these same reasons I suggest we ban statistics.

Oh wait, they are simply tools that tend to deliver results in the form of probabilities? And not actually artificial minds? Who knew!

It’s funny, if we lived in a more egalitarian world, this would be seen as suuuuch a nice feature, instead of a bug. Alas, economic policy wielded as a means of ensuring power by a very small number of humans over most of the rest, is what makes the above a problem.


This topic was automatically closed after 5 days. New replies are no longer allowed.