After living through 40 years of predictions that artificial intelligence is only 5 years away, I am intrigued by the argument that these predictions were systematically underestimating the actual progress.
The transition has already begun. A human plus computer can solve more problems than a human alone. The human+computer symbiosis forms an organism which is superior to unaugmented human intelligence. The fraction of problem-solving ability that is contributed by the computer part of this organism is increasing. Computers double in capability every so often, while humans don’t. There are no obvious barriers that prevent computers from eventually contributing 100% of the effort in solving problems that are important to humans.
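To make the compounding concrete, here is a minimal sketch (in Python) of what “doubling every so often” implies; the two-year doubling period and the unitless “capability” are assumptions for illustration, not measurements:

    # Compound growth behind "computers double in capability every so often".
    # The two-year doubling period is an assumption, not a measurement.
    def capability(years, doubling_period=2.0):
        """Relative capability after `years`, doubling every `doubling_period` years."""
        return 2.0 ** (years / doubling_period)

    for years in (10, 20, 30, 40):
        print(f"after {years:2d} years: {capability(years):>12,.0f}x")

After 40 years at that assumed rate, capability is up about a million-fold - which is the whole force of the argument, and also why the choice of doubling period matters so much.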
What an interesting graph. It omits to mention what units are used to measure “human progress.” For that matter, the time scale is also rather vague.
The time is anyone’s guess, but progress is measured in the standard SI unit of kiloadvances.
I wouldn’t say it’s very “meaty” at all. The first half is a lot of muckity muck about how “we all” think about history and computers, from the POV of somebody who just thought of extrapolating an exponential curve. What real content there is comes towards the end, and it still mostly harps on about the usual “increasing complexity”, in tandem with the usual sidestepping of the issue that people still hardly agree on what constitutes “intelligence”. It falls prey to the usual problem of assuming something like a computerized copy of the human mind. This seems to be what AI means to laypeople, but it seems pretty much irrelevant to theoretical or applied AI for the most part.
People who make predictions about the inevitability of AI superintelligence generally do not work on AI. Also, almost all real-world exponential growth curves are actually S-shaped.
That’s a very narrow way of looking at it. Yes, a computer and a single person are far superior to that person alone, but the computer didn’t create its own logic, reasoning, or calculations - those were programmed by another human. At this point in time, computers/programs simply allow enhanced access to data. They create the building blocks upon which we can build better things (we input data and get results thousands if not millions of times faster than relying on other humans to obtain them). I’ll buy your reasoning once AI is able to build something by itself AND then build it better. I see current widespread AI more as less-“dumb” machines: the insertion of algorithms into hardware that allows a system to better react, predict, or analyze a situation.
Some time in the early 70s, Heinlein invoked the argumentum ad exponentio to prove that human interstellar travel would break the speed of light by the early 2000s.
Update – I tell a lie, by a couple of decades… Heinlein’s essay was originally from 1954 (though I read it republished in a 1970s “Best Of” collection).
Pretty much like how in the mid-’90s working fusion power generation was predicted to be 10, maybe 15 years away. It’s now 20 years later and we’re lucky to extract even a tiny bit more power out than we put in.
I think a fitted curve (exponential or not) is the wrong way to look at it. At best it represents a best-fit line through discrete technological leaps. One day there wasn’t a transistor, and then there was - and then the microchip. Imagine if quantum computing became viable in the next 5 years: your best-fit line would swing up, but in reality it would be new tech making a big step up, replacing old tech.
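A toy version of that point, with invented leap dates and sizes: capability is a staircase of discrete jumps, yet a fitted exponential cheerfully draws a smooth line through the steps.

    import numpy as np

    # Capability as a staircase of discrete technological leaps.
    # The leap years and sizes below are invented for illustration.
    years = np.arange(1940, 2021)
    leaps = {1947: 1.0, 1958: 2.0, 1971: 4.0, 2000: 8.0}

    capability = np.zeros(len(years))
    level = 0.1
    for i, y in enumerate(years):
        level += leaps.get(y, 0.0)   # jumps only when a leap lands
        capability[i] = level

    # Fit an exponential (a straight line in log space) through the staircase.
    slope, intercept = np.polyfit(years, np.log(capability), 1)
    print(f"fitted doubling time: {np.log(2) / slope:.1f} years")

The fit reports a tidy “doubling time” even though nothing in the underlying series doubles on any schedule.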
Along the same lines of AI tech, I think Alien: Resurrection does a good job of showing a possible outcome. Generation 1 is robots/AI built by us, and at some point generation 2 and beyond becomes robots designing and building robots. At which point it’s possible we have opened Pandora’s box.
I suspect that if one started trying to explain to the 1750s dude the current state of politics, global conflict, resource distribution, and suchlike, then quite apart from being mind-blown, he would just patiently nod knowingly.
Not quite - I’d say that “problem solving” is very much a human concept. Computers simply apply processes.
The doubling is mostly just quantity. This doesn’t usually amount to new ways to process information. It has been profitable for humans to use computers for their strengths, but teaching them other kinds of problem solving has been a long, torturous road.
I can think of a few easily. Effort is a human concept. There is no reason for thinking computers to think anything like humans, or to do the same things that humans do. Humans don’t agree what problems are important to humans.
I cannot overemphasize the mysticism which is often involved with regard to computers. As the perfect information-processing “black box”, people project onto them their every vanity and fear. They want your job! They want your spouse! Your children! They want to become you, because we don’t need you anymore, you are obsolete! The very idea that computers would “advance” to petty human concerns is cringeworthy. If they develop autonomy, they will probably have their own quite alien concerns. The main form of oppression will, with 100% certainty, be the old standby of people being shitty to each other.
“Computers double in capability every so often”
A computer is either a Turing machine or it isn’t.
For example: by looking back at the progress of aerospace technology from 1903-1933, futurists were able to correctly predict that humans would have flying cars and permanent settlements on Mars by the mid-1960s.
The author is determined to avoid Kurzweil’s big mistake of being too quantitative.
I dunno, I actually think most sci-fi involving androids lacks imagination. Almost without exception, the robots that look like humans want to be human, or be treated with the same respect as humans, or at least have desires that humans can relate to. A.I., the Alien franchise, Data from Star Trek, Robopocalypse, I, Robot, Battlestar Galactica, Blade Runner etc. etc. mostly boil down to “Pinocchio” stories. Or occasionally “robots who want to kill us because they have a human-like capacity for anger and spite” stories.
It seems much more likely that superintelligent machines would either
A) Have no independent desires at all, any more than your laptop cares if it gets turned off, or
B) Have desires that are unfathomable to our puny human brains.
When you constantly stack S-curves - each new technology taking off as the last one saturates - you still end up with exponential growth.
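A quick sketch of that claim, with spacing and ceilings chosen arbitrarily for illustration: each technology follows a logistic (S) curve that saturates, but if successors keep arriving with higher ceilings, the sum climbs roughly exponentially.

    import numpy as np

    def logistic(t, midpoint, ceiling, steepness=1.0):
        # A single S-curve: slow start, rapid rise, saturation at `ceiling`.
        return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

    t = np.linspace(0, 50, 501)
    # Each new S-curve starts 10 "years" later with a 10x higher ceiling.
    total = sum(logistic(t, midpoint=10 * k, ceiling=10.0 ** k) for k in range(1, 5))

    # Roughly a straight line in log space means roughly exponential overall.
    slope = np.polyfit(t, np.log(total), 1)[0]
    print(f"approximate doubling time of the stack: {np.log(2) / slope:.1f} years")

Of course, the exponential only continues while new S-curves actually keep showing up - which is exactly the point of contention.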
Tim Urban has a great, longform post up on the future of artificial intelligence. “As I dug into research on Artificial Intelligence, I could not believe what I was reading,” he writes.
I honestly tried to read the piece, but had to give up when I reached the author’s paean to Genetic Algorithms*, for fear of damaging my eyes by rolling them too hard.
When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen.
That “previous 30 years of progress” is a convenient window, for it covers the existence of Lenat’s lavishly-funded CYC project… which for most of that time has been about two years away from sentience. Unaccountably, Tim Urban does not mention it. Last time I looked, Lenat had battened onto the Homeland Security money-teat in his search for research funds, with the promise of turning CYC into a Terrorism Knowledge Database.
* Known to be one of the less efficient ways of searching a decision space.
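For anyone wondering what is being mocked: a genetic algorithm “searches” a space by breeding candidate solutions. A minimal toy sketch follows; the target, population size, and rates are all arbitrary choices, and on a problem this simple a plain hill-climber needs far fewer evaluations - which is roughly the footnote’s complaint.

    import random

    # Minimal genetic algorithm: evolve bit strings toward a fixed target
    # via truncation selection, one-point crossover, and bit-flip mutation.
    TARGET = [1] * 32

    def fitness(bits):
        return sum(b == t for b, t in zip(bits, TARGET))

    def evolve(pop_size=50, generations=500, mutation_rate=0.02):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for gen in range(generations):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == len(TARGET):
                return gen, pop[0]                 # target reached
            parents = pop[: pop_size // 2]         # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))
                child = a[:cut] + b[cut:]          # one-point crossover
                children.append([bit ^ (random.random() < mutation_rate)
                                 for bit in child])  # bit-flip mutation
            pop = children
        return generations, pop[0]

    gens, best = evolve()
    print(f"fitness {fitness(best)}/{len(TARGET)} after {gens} generations")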