Critical perspectives on the Singularity from eminent computer scientist Ed Felten

Originally published at: https://boingboing.net/2018/01/04/exponential-growth-linear-prog.html

I would only take issue with one idea. Not only does nothing in nature grow “super-exponentially”; nothing even grows exponentially. It may appear that two rabbits become three, become five, but the number of rabbits can’t grow without bound. Eventually you run out of rabbit chow and they turn cannibal. Even the universe of paper clips was only a very large, but finite, number.
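For what it’s worth, the distinction is easy to see numerically. Here’s a minimal sketch (with made-up growth rate and carrying capacity, not actual rabbit biology) contrasting unbounded exponential growth with the logistic, resource-limited growth that real populations follow:

```python
# Minimal sketch: exponential growth vs. logistic growth under a carrying
# capacity K (the "rabbit chow" limit). All constants are illustrative.

def step_exponential(p, r, dt):
    """Unbounded growth: dP/dt = r * P."""
    return p + r * p * dt

def step_logistic(p, r, k, dt):
    """Resource-limited growth: dP/dt = r * P * (1 - P/K)."""
    return p + r * p * (1 - p / k) * dt

p_exp = p_log = 2.0            # start with two rabbits
r, k, dt = 0.5, 1000.0, 1.0    # growth rate, carrying capacity, time step

for _ in range(40):
    p_exp = step_exponential(p_exp, r, dt)
    p_log = step_logistic(p_log, r, k, dt)

print(f"exponential: {p_exp:,.0f} rabbits")   # ~22 million and climbing
print(f"logistic:    {p_log:,.0f} rabbits")   # saturates near K = 1,000
```

The exponential curve never hears about the chow running out; the logistic curve flattens as it approaches the carrying capacity, which is exactly the shape being described above.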

So Moore’s Law must break down. Even if every person on Earth were enslaved to making transistors 20 hours a day, there would still be a hard limit on the number of transistors our robot overlords will receive. Personally, I doubt they will ever simulate even one human brain, let alone the entire graduating class of MIT, but I can’t prove that part.

I note that Stockfish seems to be the highest-rated player on the graph provided. Where would Google’s AlphaZero be?

Another singularity? I already have dozens of them!

…runaway advances in computer science coupled with AI creates a kind of shear with humanity’s ability to cope with technological change, ushering in an era in which computers transcend humanity’s ability to control or understand them, and humans and computers begin to merge

That’s a rather cynical framing which appears loaded with essentialist baggage. Why is some sort of incomprehensible complexity the only reason why such a convergence might occur? It sounds as if, in such a scenario, humans don’t know any better, and are fooled into giving up some unique quality. I approach such tech from a more McLuhanesque perspective: that humans’ tools have always literally been extensions of the human mind/culture/organism. If a computer functions as an exteriorization of your nervous system, then integrating it back into you, or you into it, does not change anything essential.

I agree that the subject of the technological singularity does tend to attract a lot of woolly thinking, so more critical thinking is generally a favorable development. But I think that what I have read of Felten depends too strongly upon starting from a very skewed definition, and analysing only the implications of that. So it is subject to a lot of the same pitfalls as articles “debunking AI” with simple charts of CPU speeds or other simple empirical benchmarks. There simply is not much consensus about what constitutes “intelligence” or “complexity” here.

Speculation about such convergence is almost always technologically superficial, describing the ways that a computer algorithm may resemble human interaction, or a human with prosthetics may resemble a robot (another value-laden term in need of unpacking). It almost invariably fails to consider contemporary developments which are less commercial, such as molecular computing, bacterial computing, biological storage of digital data, and other tech which at a much lower level blurs the distinction between what is biological and what is artifice. Also, contemporary microbiome and epigenetic work suggests that organisms function more as networks than as monolithic entities in the first place.

Such a convergence might well not resemble a Moore’s-law-style curve at all, where you can digitize a circa-2000 human brain and run it on a circa-2000 silicon computer. The point is that neither of those would quite exist in a sense that someone from that time would recognize. It’s the same extrapolative failure as trying to evaluate modern AI work from the perspective of the “electronic brains” of 1960s sci-fi. Or the 80s Terminator/Electric Dreams perspective of technology possessing its own motivations which happen to be analogs of (stereotyped) human motivations, because they hate us, rather than because they already are us. It’s a naive critique of a naive position.

I wrote about the Singularity as a kind of metaphor and spiritual belief system, the so-called Rapture of the Nerds.

I suspect that by “spiritual belief system”, what you really mean is some kind of drive towards metaphysical transcendence of the biological, which is not quite the same thing. Is my genome “alive”? It can be argued that life is already, at the very least, meta-biological.

Yes, but there is a lot of silicon in the universe. Computers could theoretically reproduce much faster and for longer than humans because they don’t need to build habitats.

Dude, your infinity keeps getting smaller. Soon the singularity will be “even more computers than Wal*Mart.”

Moore’s Law already broke down some years ago, thanks to heat-dissipation issues. (They could keep shrinking transistors, but couldn’t pack twice as many into a given space and still cool them.) On top of which, we’ve just about hit the physical limits of transistors anyway. That’s not to say that someone might not invent a completely different computing technology that allows for further Moore’s-Law-style progression, but I wouldn’t count on it. So while we once saw linear improvements to software despite exponential increases in computing power, we no longer see exponential increases in computing power. It’s not 1998, so I’m not sure why anyone would still be talking about the singularity (and everything else dependent on Moore’s Law) as if it were.

Transistors and silicon are not contemporary media for computation. That’s state-of-the-art 60s-70s tech. It was never going to scale very far, but was exploited because of sunk costs in development.

I am skeptical that there is even much consensus as to what activity we can agree constitutes “computation”. Are you not a nanotech computer? Does your chair not symbolize and decode information effectively enough to maintain its structure as a chair? Everything is “computation”; it’s so ubiquitous as to resist meaning.

There are no limits! In Iain Banks’ stories, the Minds actually keep most of their hardware in an alternate dimension. I read it in a book, it must be true!

I haven’t read this article yet, but I have been thinking about this graph and what it means. Most of the time it seems to be indicating that exponential growth of processing speed is required to make incremental gains in computer chess ability. But there are two periods of rather dramatic gains on the graph. And there has just been another dramatic gain, as chess has (most probably) been “solved” by AlphaZero. Stockfish can do no better than a draw against it, and loses half the time.

Processing speed increases at a rate faster than humans can take advantage of properly. A novel approach to solving a problem can result in rapid gains in computer ability. What was needed to solve Go and Chess was a program that could play games against itself and “learn”. This could have been done before now, just not as fast, if someone had thought to use this approach.
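The self-play loop in question is structurally simple, too. Here’s a schematic sketch of the idea; `game` and `model` and all of their methods are hypothetical placeholders standing in for a real game engine and learner (the actual AlphaZero recipe adds tree search and a deep network on top of this skeleton):

```python
import random

def self_play_training(game, model, num_games=1000, explore=0.1):
    """Schematic self-play: the model generates its own training data by
    playing both sides, then learns from the outcomes. `game` and `model`
    are hypothetical placeholder objects, not a real library API."""
    for _ in range(num_games):
        state = game.initial_state()
        history = []
        while not game.is_terminal(state):
            moves = game.legal_moves(state)
            if random.random() < explore:
                # Occasional random move keeps the training games varied.
                move = random.choice(moves)
            else:
                # Pick the move whose resulting position the model likes best.
                move = max(moves, key=lambda m: model.evaluate(game.apply(state, m)))
            history.append((state, move))
            state = game.apply(state, move)
        # Label every position with the final result and nudge the model
        # toward predicting it; no human games required.
        model.update(history, game.result(state))
    return model
```

The “learning” is just the model being repeatedly corrected against outcomes it produced itself, which is why more compute translates so directly into more training data.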

I’m not losing any sleep over the coming AI apocalypse, but we may already have ample processing power for an effective AI, we just haven’t figured out the right way to do it yet. Modeling the human brain is probably not the best way to go about it.

It’s really just the big step up to Deep Blue, and that was when IBM got serious about chess software. Looks like it took a decade for everybody else to catch up. If you ignore the catch-up, the second step around 2005 doesn’t look nearly as impressive.

Cory, you’re letting your hatred of the ideology of many Singularity believers cloud your critical thinking here.

Ed’s point about chess ratings only shows a linear improvement if chess ratings are themselves a linear measure of algorithmic complexity for computers. This is almost certainly not the case.

First off, most human competitive rating scales are obviously non-linear, at least by one important measure: the number of people in the world at that rating. Take rock climbing ratings. The Yosemite Decimal System rates technical rock climbs on a seemingly linear scale of 5.0 to (currently) 5.15, with 5.10 through 5.15 being divided into sub-grades of a/b/c/d. In my experience as a 5.10b/c climber, almost any reasonably fit person can thrash their way up a 5.5/5.6, but climbing 5.10 requires either exceptional physical fitness or lots of training, the gap between 5.10 and 5.11 is just as big, and so on. There are tens of millions of active climbers worldwide, but only low thousands of 5.14 climbers, and probably fewer than a hundred 5.15 climbers. So: exponential difficulty, as measured by the number of people at each level. Also, if climbing ratings are taken as a measure of TRAINING EFFORT AT A GIVEN LEVEL OF INNATE ATHLETICISM (all caps because this is important for machine learning; see below), climbing grades are more or less an exponential scale: improving from 5.10->5.11 is twice as hard as 5.9->5.10.

Based on my limited knowledge of chess, chess ratings have a similar exponential “number of people at rating” or “training effort” quality. So Felten’s graph may look linear, but it’s actually exponential. And that’s just for humans.
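The non-linearity can be made concrete with the standard Elo expected-score formula, which is logistic in the rating difference: any fixed gap corresponds to the same odds, no matter where it sits on the scale.

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score for player A against player B:
    1 / (1 + 10 ** ((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

for gap in (0, 100, 200, 400, 800):
    print(f"rating gap {gap:4d}: expected score {expected_score(gap, 0):.3f}")
# rating gap    0: expected score 0.500
# rating gap  100: expected score 0.640
# rating gap  200: expected score 0.760
# rating gap  400: expected score 0.909
# rating gap  800: expected score 0.990
```

A 100-point gap always means roughly 64/36 odds, whether it’s 1200 vs. 1100 or 2800 vs. 2700; what grows as you climb the scale is the training effort needed to gain those same 100 points.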

Since we’re talking about computers, however, we can’t just reference difficulty for humans. We need to address computational complexity. If you assume chess computers used the same algorithm and just depended on more HW horsepower, the graph would show exponential improvement (see above). But you can’t assume chess is as easy for computers as it is for humans, since they have wildly different architectures.

In fact, this simplistic view of Moore’s Law vs. computer chess ratings is all a massive red herring, since the breakthroughs in game-playing computers, especially since 2012 (when HW capacity effectively “unlocked” practical DNNs by crossing an unexpected scale threshold for key problems), have been via a new class of SW algorithms (deep neural network-based machine learning or DNN-based ML). So Moore’s Law isn’t playing out only in HW. It’s mostly playing out in SW, a point which Ray Kurzweil makes in his seminal works on the Singularity.

Once you are using DNN-based ML, training data sets also become important. The amount and quality of training data becomes the determining factor, opening up another battleground for Moore’s law exponentialism with lots of OOCP solutions like “make everyone in the world do something” and “have the AI play itself”. These training solutions mirror those available to humans (pick the best teachers from around the world, sports improve faster when they’re popular, spend more time competing against the best and you will improve faster), but computers can take advantage of them more easily.

And don’t get me started on how computer chess is already a machine/human cyborgization, just without the wetware. What’s the difference between chess board/chess rules and computer+chess board/chess rules? It’s just a tooling evolution. So, conceptually, they can be seen as the same class of “game”. AI chess just sticks some of the metagame implementation (rules and state evaluation) in a box.

I’m surprised you’re posting this with such a simplistic take, given I just finished Walkaway and I know you get this stuff. Time to separate the people who you believe have misinterpreted the meaning of the Singularity from the actual thinking on which the Singularity concept is based. There are lots of key principles in Singularity theory, like double exponential feedback systems, that are true and broadly useful when applied to short-term strategy analysis. Singularity theory mirrors other important thinking like Nassim Nicholas Taleb’s theory of anti-fragility, risk management and black swans. And like Singularitarians, Taleb takes his theory a bit too far, applying it somewhat bone-headedly to things outside its core competency.

Again, I know you get this stuff. Walkaway is about people dealing with technological singularity, after all. It doesn’t matter whether rogue AIs rule or not, the underlying concepts are valuable. Reach out to preach out, dude!

It may be that chess ratings are mathematically, tautologically exponential by number of players at a given rating, because of how players get a new rating. But that doesn’t undermine the qualitative analysis here.

I’d argue that the second leap is even more impressive, as improvement becomes more difficult the higher the rating goes. I haven’t followed the history of chess playing computers, but I imagine that jump was a pretty big deal when it happened.

I have a friend (also a software engineer) who plays Go. He marvels at recent developments in game-playing software. He says that the newer systems can not only teach themselves to play a specific game; they are generic game-learning and game-playing systems.

This makes me think that the software can now invent and follow brand-new game-playing strategies, if you consider a strategy in (say) chess as a sort of sub-game.

No, I’m pretty sure I’m a steam engine. Either way my brain’s definitely not subject to Moore’s Law-style scaling.

Simulating a human brain always seemed like an unlikely way of achieving artificial intelligence to me. What good is a human brain without a lifetime of sensory input to help shape it into a human mind? Next thing you know you’re stuck trying to simulate a whole universe.

But then you can make an apple pie from scratch!

I have found generally that experts in a field underestimate the rate of progress when they predict. What Mr. Felten fails to realize is that the Singularity Feedback Loop guarantees faster progress, although it may not be a straight exponential curve, and instead goes in spurts. Intelligence creates technology, and technology improves intelligence. Inevitably that increased intelligence will dramatically improve the area of technology concerned (like chess playing), although in chess there isn’t that much room for improvement at the very top. DeepMind has made dramatic improvements in chess and Go playing in recent months; I don’t know if that is reflected in this article, since it is so new.
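As a toy illustration of that feedback loop (not a model of anything real, and with arbitrary coupling constants): if intelligence grows in proportion to technology and vice versa, the pair grows exponentially even though neither equation references itself.

```python
# Toy mutual-feedback model: dI/dt = k*T, dT/dt = c*I.
# Constants k and c are arbitrary; the point is the shape of the curve.
I, T = 1.0, 1.0
k, c, dt = 0.05, 0.05, 1.0

for year in range(101):
    if year % 20 == 0:
        # Roughly doubles every ~14 steps: a straight line on a log plot.
        print(f"year {year:3d}: intelligence {I:8.2f}, technology {T:8.2f}")
    I, T = I + k * T * dt, T + c * I * dt
```

Whether real progress actually behaves like this, rather than in the spurts mentioned above, is of course the whole debate.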
