“The Road to Superintelligence,” a meaty long-read on the future of artificial intelligence

Per unit of cost, rats (or ants) will beat silicon on computation for a few decades to come. It’s the fault tolerance that makes meat-brains so theoretically inefficient. Self-organisation, too.

If you scale (say) an Intel i5 CPU from its cubic millimeter (I think I am being generous) of computational substrate up to the volume of an apple, for the cost of growing an apple, you still win grossly, efficiency-wise.
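As a back-of-envelope illustration (both volumes are rough assumptions: an apple at around 300 cm³, the active silicon at around 1 mm³), the scale-up factor alone spans more than five orders of magnitude:

```python
# Rough volume comparison; both figures are illustrative assumptions, not specs.
apple_volume_mm3 = 300 * 1000   # ~300 cm^3 apple, expressed in cubic millimetres
chip_volume_mm3 = 1             # ~1 mm^3 of active computational substrate

scale_factor = apple_volume_mm3 / chip_volume_mm3
print(f"apple-sized chip = {scale_factor:,.0f}x the substrate volume")  # 300,000x
```

That factor is what lets you “write off several orders of magnitude” of per-unit inefficiency and still come out ahead.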

You can comfortably write off several orders of magnitude if you get even more back in return.

A rat has the problem of being difficult to wire into the circuit on a stable, functional basis. Sometimes it self-installs, but then the assembly usually refuses to work (or at least starts to smell after a while). Similarly with ants.

An apple-sized volume of chip? Really? Let’s come back to this conversation 20 years from now. If we can, of course; chances are the exponential takeoff was an illusion. I’m happy to wait through the singularity if I’m wrong. Drunk, if necessary :slight_smile:

Why not? Think some organic goo, not necessarily silicon. Brains typically have such dimensions, or even bigger.

Chances are they aren’t. Let’s see.

I cannot wait. Especially if it would be possible to merge with the machines.

…and to see the faces of various philosophers, just before their heads explode…

Good idea even for now! :smiley: Will make the waiting more bearable, if nothing else.


There are no examples of never-ending upward growth curves. None. At all. They all plateau or crash.

You have no theoretical arguments in support of their existence either. You have only vague assertions that in some unknown way technology will provide. Well, it might not; probably it won’t. It won’t provide perpetual motion, and it likely won’t fix resource constraints either.
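The point about plateauing curves can be made concrete with a logistic curve, which is indistinguishable from an exponential early on but saturates at a carrying capacity. A minimal sketch (the growth rate and capacity here are arbitrary illustrations, not measurements of any real trend):

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: looks exponential early on, plateaus at capacity K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable...
print(exponential(2), logistic(2))   # both ~2.7
# ...but only the logistic curve plateaus.
print(exponential(30))               # ~3.3 million and still climbing
print(logistic(30))                  # ~100, flat against the ceiling
```

This is why extrapolating from the early part of a curve tells you nothing about whether it is an exponential or the front half of an S-curve.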

Essentially, your argument is “This time it’s different,” your evidence is faith, and that in the end puts you in the same ashcan as James “Dow 36,000” Glassman:

He was full of shit too. I find reliance on faith over facts disturbing. Not only that, it is dangerous, because in the end nature doesn’t care about your faith. Refusing to live in a fact-based reality leads to things like exploding space shuttles and a trashed climate.

Yes. What I proposed is a long sequence of such curves, with the plateau far enough away that we need not worry about it anytime soon.

Expect the unexpected. Usually that works. We live in an age of wonders, of inventions that many claimed were impossible, until somebody came along and did them.

I proposed possibilities. That beats saying it can’t be done, by a wide margin, doesn’t it?

Except that this time it is all the same. We have been through these cycles of technologies emerging, developing, saturating, and being replaced by new ones so many times, and they happened so gradually and so far out of sight, that you do not notice them. That doesn’t mean they aren’t there.

Except when backed by facts. Look at the history again.

Similarly, inventors and developers do not care about the naysayers. If they did, they would sit on their asses and become economists or lawyers.

And nature also doesn’t care about your skepticism. So far the inventors are winning over the skeptics, and it looks like that will last for quite some time yet.

Refusing to live in what you call a “fact based reality” leads to things like space shuttles.

And such things blow up. When that happens, find out why and build them anew. (Or find that it was a bad idea and design something else.)

For everybody who does something deemed impossible, there are myriads who insist on its impossibility. Only a few things are actually impossible.


But…my body is a temple man, ya know?

Then it hasn’t been “erased,” since it hasn’t interacted with anything. Related to this is the no-cloning theorem: you can’t copy a quantum bit either.

Notably, all pure quantum algorithms (meaning actual qubits, not ensemble approximations) have to be reversible. Quantum logic gates are unitary matrices, which automatically have inverses. Universal classical reversible gates also exist, the Toffoli gate for instance.
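A minimal sketch of that reversibility, using plain Python with no quantum library assumed: the Toffoli (CCNOT) gate written out as an 8×8 permutation matrix is unitary and, being an involution, is its own inverse.

```python
# The Toffoli (CCNOT) gate as an 8x8 permutation matrix over the 3-bit basis.
# It flips the target bit only when both control bits are 1, so it swaps the
# basis states |110> and |111> and leaves the other six fixed.

def toffoli_matrix():
    n = 8
    m = [[0] * n for _ in range(n)]
    for state in range(n):
        c1, c2 = (state >> 2) & 1, (state >> 1) & 1
        out = state ^ 1 if (c1 and c2) else state  # flip target iff both controls set
        m[out][state] = 1
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

T = toffoli_matrix()
identity = [[int(i == j) for j in range(8)] for i in range(8)]

# A real permutation matrix is unitary; the Toffoli is moreover self-inverse,
# so applying it twice undoes it, the hallmark of a reversible gate.
assert matmul(T, T) == identity
print("Toffoli is self-inverse, hence reversible")
```

The same check, transpose-conjugate times itself equals the identity, is what unitarity means for any quantum gate; for a real permutation matrix like this one, the transpose is the inverse.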


Yeah, we should have the whole ‘physics’ thing all wrapped up in, what, 20 years?

Much of this discussion has been dwelling on the same ideas that I found annoying about the article, such as using Moore’s law to rationalize AI as some inevitable emergence. The article revealed itself to be unabashedly eschatological on this point. Regardless of how many transistors or gates fit on an IC, their doubling has never been a sufficient condition for intelligence. I find this all more interesting from an artificial-life perspective: self-modifying code demonstrates its own sort of intelligence.

The sort of AI most people seem to dwell upon, making computers act like people, is, I think, a mere UI problem. The only real reason to make computers like people is that they are easier for people to interact with; hence initiatives such as natural language processing, which has always had a high profile in AI because of its “wow” factor. Even the Turing test is based on the pretense that this is what AI is and what is worth striving for.

Despite the difficulty of duplicating in silicon the raw connective power of a human brain, it’s debatable to what extent humans have mastered duplicating the thought processes of even smaller, less sophisticated brains. Whereas at the dawn of programmable computers they were often described as “electronic brains,” we now know enough about both to define brains as essentially “molecular computers,” and what these computers are optimized for is keeping a body alive, something an electronic device has absolutely no use for. Whatever intelligence or sentience humans feel they have appears to have developed mostly as a side effect of what the brain does.

So instead of worrying about raw power, the more interesting developments focus on efficiency. Is it efficient to model a giant, complex analog blob just to exploit an interesting information-processing side effect? The work that I think really applies the advantages of organisms to electronics is the move from steady-state computation to more adaptive models. Swarms of autonomous agents are developing types of intelligence just fine, by playing to the strengths of software instead of mimicking biology and human standards of intelligence.


You won’t get quite there (as each answer usually brings more questions), but if it puts basic research on an accelerated track, with applied research just behind (with stress on tools, since lab tools further accelerate research progress in many fields), it’s worth the effort.

Yuuuup.

I’m sorry, but

The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading.

was not an auspicious way to begin a “meaty” article on AI.


Sorry. I shoulda /s

That’s a bit premature.

This is my theory behind Iain M. Banks’ The Culture series.


Makes sense to me. It’s pretty clear that the Minds pretty well run things, they just occasionally ask for human input or help in the same way we might climb a mountain instead of just helicoptering up there: sometimes it’s more interesting to do things the long and complicated way, and you get bragging rights. (“Oh yeah? Well I toppled an alien society just by sending one slacker gamer and a drone!”)


I think it’s a variation on the alien problem… it’s hard to portray a truly alien way of thought, and even harder to do it in a way that makes them compelling and appealing, so most aliens have weird biologies but rather human-like personalities, maybe with a few tweaks. Not impossible, just hard.

But there are authors out there doing interesting things with AI. Karl Schroeder has written a number of novels, not all focused on AI, but many with interesting models. His debut novel Ventus has a world full of AIs that operate on ‘thalience’ rather than ‘sentience’: the idea of giving nature a consciousness that is not just a reflection of what humanity wants but self-evolves. In his Virga series he explores consciousnesses created for disposable purposes, and AIs that define themselves by an object outside themselves, like a swarm of bots around a random brick that all believe they ARE the brick and defend the brick’s interests. Charles Stross, in his singularity-era books, does unfathomable AIs well; humans tend to exist around the margins of their existence and still have levels of technology far beyond ours.

Btw, that Cavil moment you quoted later in the thread is possibly my favorite moment in the whole series.


I agree!

I don’t think it’s as difficult as many make it out to be.

^ This is the real problem. To make what they hope will be a popular, engaging narrative, most people don’t bother imagining anything significantly different from human. And unfortunately this is also why most sci-fi media has practically nothing to do with science of any kind.

This topic was automatically closed after 5 days. New replies are no longer allowed.