Deflationary Intelligence: in 2017, everything is "AI"


#1

Originally published at: http://boingboing.net/2017/03/06/chatbots-rebranded.html


#2

This is one reason I use the term machine learning far more than artificial intelligence; AI is a poorly defined term and always has been. For most of the things people bandy the term artificial intelligence about to describe, I instead use algorithmic intelligence or algorithmic automation: the programmed discovery, correlation, and rudimentary analysis of data, as in search algorithms. Even semi-spontaneous pattern generation isn’t really intelligence in the sense humans usually use the term. Charles Isbell should add a third criterion: the ability to interpret spontaneously generated or discovered patterns. That, more than creativity, is the heart of intelligence, atop which true self-directed and self-redirected creativity is built.


#3

What does Artificial Intelligence look like?


#4

To paraphrase the president, “Nobody knew artificial intelligence could be so complicated.” Which is to say that we have learned enough about the problem to have at least an inkling of how complicated it is. I can’t count how many science fiction stories I have read in which a fully conscious AI just “emerged” once computers got big and complicated enough. Now we realize that the tasks that seem simplest to us are often very difficult precisely because we are hard-wired to do them, so they require no conscious thought. So everything that was handwaved in science fiction has to be actually thought through, and even simple tasks are called “AI” because they are baby steps along the path. And we still have NO conception of how to create some sort of “consciousness.”


#5

“Deflationary” - glad to learn a new term! It is the same throughout a lot of tech… RC devices are “drones” and “robots”, “algorithms” are infallible and inherently neutral, and “dynamic modeling” is a perfect predictor of future outcomes.


#6

Steal a term from Mass Effect. Call it VI.


#7

What do you mean it “has become meaningless”? People have complained of its nebulous quality for decades, ever since the name originated.


#8

Sitting in on a graduate course on Heidegger was when I first came across an old philosophical conundrum: when is a chair a chair (or a hammer a hammer, or whatever)? At the time (mid-’70s), this was a tricky concept to grasp (for me at least). Douglas Hofstadter, however, made it very easy to grasp by re-framing it as: how do you get a computer to recognize when something is, say, the letter ‘a’? You have to be able to do this in general. Solve this problem and you are on your way to AI. Otherwise, forget it. “Fluid Concepts and Creative Analogies” is a great collection of essays on this subject by DH. [bonus trivia: this was, oddly enough, the first book sold by Amazon]
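To see why “in general” is the hard part, here’s a deliberately naive sketch (my own toy example, not Hofstadter’s): exact template matching “recognizes” one rendering of ‘a’ and fails the instant the same glyph shifts by a single pixel.

```python
# Deliberately naive "letter recognition": exact comparison against a
# single stored 5x5 bitmap of the letter 'a'. It succeeds on that one
# rendering and fails on a one-pixel shift of the same shape -- which
# is the whole point: recognizing 'a' *in general* is where AI starts.

TEMPLATE_A = (" ##  "
              "   # "
              " ### "
              "#  # "
              " ### ")

def looks_like_a(bitmap):
    """True only for an exact match with the one stored template."""
    return bitmap == TEMPLATE_A

# The same glyph shifted one column to the right: a human reads it
# instantly as 'a'; the exact matcher rejects it.
SHIFTED_A = ("  ## "
             "    #"
             "  ###"
             " #  #"
             "  ###")

print(looks_like_a(TEMPLATE_A))  # True
print(looks_like_a(SHIFTED_A))   # False
```

Every real solution (tolerance to shift, scale, font, noise) is another step toward the generality Hofstadter was talking about.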


#9

These kinds of philosophical questions have been floating around in my semi-sleep-deprived head in the context of my two-year-old daughter as she learns to name the things around her. It always gives me pause when she points to a poorly drawn hippo in some shitty book we got from the library and calls it a rhino. Do I correct her? Do I congratulate the attempt? Do I tell her she’s close? Am I misleading her? Why is this illustration so shitty? What is the true difference between a rhino and a hippo? In this context does it matter? Do I explain that in some phenotypical ways they are related? Could I make a living being a children’s book illustrator? I draw hippos and rhinos WAY better than this… 5am is a strange time.


#10

See, this is likely why I shouldn’t have kids. “No, no, rhinos are awesome and hippos are stupid water cows. Try again, and draw the hippo stupider. And they aren’t purple, they are more… Peppermint.”


#11

I thought rhinos were more spearmint than peppermint.


#12

Every AI Winter follows a Tulip Bulb Spring.

Or something like that.

It’s not that the research doesn’t pan out; it’s just impossible to live up to the AI hype that starry-eyed, headline-seeking non-techies come up with.

“Machine Learning” is a smart side-step.


#13

Have we got a picture yet?


#14

I think I’d call all those inflationary, like the “title inflation” we run across all the time in the corporate world - think of all the support “engineers” who haven’t a BEng to save themselves. All these terms don’t really deflate the language so much as depreciate it through puffery. How should we say it? The term has more currency and less value than before, hence it’s undergoing inflation. :wink:

Hippos are the smartest, most ornery, most dangerous stupid water cows you ever did see. If you see some relaxed lions nearby, you’re probably safe enough with a bit of common sense. If you see any hippos nearby, start opening up some distance.


#15

They’re stupid water horses. It’s right there in the name!


#16

dude that’s the best part of being a parent…
sadly my kid was usually on to my attempts to fill his head with crazy ideas. not so much for his friends though.


#17

I call it moving the goalposts of AI. Young and eager programmers think “how hard could this be?”, only to founder on the harsh rocks of the unpleasant reality that this completely intuitive thinking shit we biological creatures do is very, very hard to figure out from a programming perspective. So, unwilling to admit total defeat, they redefine their goal (a computer that can learn to play games) to something they can sort of figure out how to do (a computer that can beat a human player by brute-forcing all possible moves and choosing the winning sequence). And then they declare victory as an advancement in AI research when actually it is nothing of the kind.

And this constant lowering of the bar has been happening in the computer industry for, oh at least 30 years. All that’s new, it seems, is that the bar has been lowered so far now that any clever programmer can claim to have hurdled it with their completely pedestrian software.
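A minimal sketch of what that brute-force “victory” looks like (my own toy example: tic-tac-toe solved by plain minimax over every possible continuation; there is no learning anywhere in it):

```python
# "AI" by exhaustive search: a tic-tac-toe player that brute-forces
# the entire game tree (minimax) rather than learning or intuiting
# anything. Boards are 9-character strings of 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation.

    Returns (score, move) from X's point of view:
    +1 = X wins, 0 = draw, -1 = O wins.
    """
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # X maximizes the score, O minimizes it.
    return max(results) if player == 'X' else min(results)

if __name__ == '__main__':
    score, _ = minimax(' ' * 9, 'X')
    print(score)  # perfect play from the empty board is a draw: 0
```

The program “beats” weak opponents not by understanding the game but by enumerating it, which is exactly the redefined goal described above.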


#18

Hippos look cute, but are actually the exact opposite. Whereas rhinos look mean, but generally aren’t. False advertising at work!


#19

This problem was first noted in the late 1960s at the MIT AI Lab. Basically, whenever AI researchers figured out how some seemingly intelligent behavior worked and implemented it in an algorithm, it was suddenly no longer AI. It became machine vision, proof search, case-based reasoning, automatic learning and so on.

This drove the AI summer / AI winter cycle. AI researchers would study all sorts of intelligent seeming behavior. They’d make a few breakthroughs and areas of application. There would then be an AI summer with lots of funding, lots of hype and so on. Then those breakthroughs would form the heart of a new field of study, but one without the mysterious cachet of AI. Disappointment would set in. The magician just hides the coin in the palm of his hand and makes us think he found it in our ear. AI winter would set in. Funding and hype would vanish as AI researchers re-branded their efforts or found refuge in an alternate field of endeavor. Eventually, there would be a new breakthrough or new area of application and spring would come.

I think Noam Chomsky described it when he said “Colorless green ideas sleep furiously”.


#20

If you look like enough of a badass, you don’t actually have to get into many fights. If you look like the overweight kid picked last in gym class however…