Distinguished scientist on the mistakes pundits make when they predict the future of AI

Originally published at: https://boingboing.net/2017/10/09/clarkes-first-law.html


The trolley problem is irritating because it assumes a perfect set of information for making instant value judgements.

I would much rather the cars work off a set of predictable rules.

Also obligatory XKCD:


Open the pod bay doors, HAL.


Fortunately, we anticipate AI replacing all futurists within the next 10-20 years, leading to much more accurate predictions.


I wonder what portion of people making outrageous claims for AI are AI researchers?

26.3 to 27.9 %.


You do not need full-blown A.I. for job automation to decimate the middle class. The transition to self-driving cars can happen very fast. What if the switchover is as quick as the span from when the Model T rolled off the assembly line to when horses were essentially replaced in major cities? From about 1910 to 1925 we saw a steep decline of the horse and buggy in major cities. That’s a 15-year span! Rapid technological change is nothing new for the people of the 20th century, and we should expect more rapid technological and social change in the 21st century.

You’re basically acting like an idiot if you state that A.I., self-driving cars, or automation is not going to happen quickly and alter society in profound ways.


Just so we’re on the same page concerning the terminology:

Kind regards, an idiot.

Cars had been in commercial production since Benz launched his in 1885. The change you are describing took place over 40 years, not 15. And that clock hasn’t even started for fully autonomous cars yet: none are commercially available.

As the author points out, that pace of change is often over-estimated.


“Prediction is very difficult, especially if it’s about the future.” Niels Bohr

Yeah, but nobody but the super wealthy had cars then. Benz was then, as now, a luxury brand. Henry Ford may have been a terrible person, but he did make cars into a mass produced commodity affordable to most people. The Wind in the Willows, albeit British, shows what people thought about cars in the early 20th century – as impractical toys of the rich like Mr. Toad.

I agree. All technologies have an early adoption phase, whether it’s cars, PCs, or mobile phones. All of these products existed in the marketplace for years, even decades, before mass adoption.

Many predictions about driverless cars seem to ignore this phase and assume that mass adoption will happen as soon as the technology is available. It almost never works that way. It’s one of the reasons people over-estimate the impact a technology will have in the short run.

Like all technologies, driverless vehicles will probably spend years in niche markets before they go mainstream.

While it sounds like an interesting book, I think you have all the toolkit you need if you just assume everything you read about The Coming AI Menace is pure swivel-eyed fantasy.

What normal people imagine as “AI” (i.e. HAL 9000) has been getting steadily further away, not closer, since it was first conceived. The more it’s investigated, the harder and less commercially desirable it looks.

Perhaps tech cheerleaders want us to worry about machines turning into people, because it distracts from the more immediate threat of the exact opposite happening. I mean, if you look at supermarket self-checkouts, would you say machines have become menacingly smart and flexible? Or have we become so good at following on-screen instructions that we can now buy groceries from a 1970s-level robot?


Fuckit. We know what you are up to. Soylent Green is people!

This was a nice, thought-out piece that is better than my oft-repeated statement that “Most of the progress towards the goal of AI has been through moving the goal posts.” We are only just beginning to get over our Dunning-Kruger phase in AI, where we just wave our hands and talk about thinking machines.
Rather than saying that we overestimate short-term progress and underestimate long-term progress, you can also say that we overestimate quantitative changes and underestimate qualitative changes. So we talk about AI eliminating jobs rather than how it will change jobs. Long before self-driving vehicles are ready for the hustle and bustle of busy city streets, they will be able to handle the much more limited task of interstate driving. So one can imagine truck fleets being autonomous on the highway and still having drivers for city streets. Or even being driven remotely on city streets.

Not unrelated: horses live in adulthood for about 20 years.


I keep reading Al instead of AI. I can’t help it. It doesn’t really ring a menacing bell. It’s more like a saxophone.

A car drives down the street… …says why am I short of attention? Gotta short little span of attention? Whoa, my nights are so long. Where’s my wife and my family?
What if I die here? Who’ll be my role model, now that my role model is…

Gone. Gone. It ducked down back the alley.

(Leaving now for a ridiculously underpaid job in a line of work an AI could do as well, but my prediction is people would still prefer a human being doing it. Most of them, anyway.)


I’m not a pundit, but a big part of my job is making predictions about future technology impact (in my case in materials rather than AI). And I agree that all of these are very common failure modes. I know my colleagues and I try to make it very clear to our clients that whenever we publish a prediction, the specific number is almost certainly wrong; we explain our assumptions, and end up having a lot of conversations that start with “How do you think X would change if…”

Doing things that way, we’ve usually managed to mostly avoid the “short term overestimate” problem - the company I work for has published a lot of forecasts, and we’re almost always more conservative than our competitors, and usually closer to what ends up happening. As for the “long term underestimate” problem, I think this is a combination of black swan breakthroughs/events and second-order effects/impacts. I try to guess what some of those might be (sometimes possible, usually not feasible), and carefully label them “X could maybe, possibly lead to Y if Z, Z’, Z’’, etc.” or something similar. Then my clients mostly ignore it and reporters focus far too much on it.

There are probably no unbounded exponential growth curves. But 1: stacking lots of S-curves from different sources of growth can extend the apparently-exponential phase for a good long while. And 2: if you don’t know in advance where the leveling-off will occur and why specifically there, then it’s a really good idea to explore scenarios ranging from “What happens if there are no more major advances?” all the way to “What happens if it continues being exponential for a long time (at a minimum, until after I retire)?”
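The stacked-S-curve point can be sketched numerically. This is a toy model with made-up midpoints, rates, and ceilings, not a forecast of anything: a single logistic curve saturates, while a stack of staggered ones keeps climbing and looks roughly exponential until the last curve levels off.

```python
import math

def logistic(t, midpoint, rate=1.0, ceiling=1.0):
    """A single S-curve: growth that levels off at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked(t, curves):
    """Sum of several S-curves with staggered midpoints and ceilings."""
    return sum(logistic(t, m, r, c) for (m, r, c) in curves)

# Three hypothetical growth sources, each kicking in later than the last.
curves = [(5, 1.0, 1.0), (10, 1.0, 2.0), (15, 1.0, 4.0)]

# Growth between t=15 and t=20:
single_growth = logistic(20, 5) - logistic(15, 5)        # a lone curve has flattened out
stack_growth = stacked(20, curves) - stacked(15, curves)  # the stack is still climbing
```

The practical upshot matches the comment: from inside the apparently-exponential phase you can’t tell which curve you’re on, so it pays to run scenarios for both the flat and the still-climbing cases.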


And you may ask yourself, “How do I work this?”
And you may ask yourself, “Where is that large automobile?”
