“The Road to Superintelligence,” meaty long-read on future of artificial intelligence

Except you have exactly zero evidence for this, either empirical or theoretical.

With the exception of the world we live in.

Nice non sequitur. The world we live in shows no such thing. It shows boundaries to growth imposed by resource constraints. It shows no examples whatsoever of ever increasing growth rates.

RISC, VLIW, etc.

Things that were thought to be revolutionary increases in computing power live on in niche applications. How is that evidence of what you claim? Niche applications are not revolutionary advances. That’s what makes them niches.

How do you know you can compute with molecules or photons?

By trying both and comparing?

How do you know either can even be tried? How do you know they will work? Molecules are seriously complicated by thermal effects, photons by dispersion, and both by quantum spreading. If you can’t tell us how those problems can be solved, then you are advocating a position based on faith. You have a religious perspective.

There are huge technical and theoretical obstacles in the way.

Yes. That’s why it is called “research”.

As someone who actually does research, I can reliably inform you that quite a lot of the output of research involves specifying what you CANNOT do rather than what you can. Research does not mean “I have an idea; research will tell me how to make it feasible.” Research does not inevitably tell you how to solve a problem. Sometimes it tells you “you can’t solve that problem.”

I think all predictions of eternal exponential growth are bullshit because
they always have been bullshit. And they always have been bullshit
because eventually you will hit a resource constraint.

And hitting the constraint is the opportunity for paradigm change.

Every instance so far shows that it is the opportunity for either plateau or death.

And computing has simply not erased that. It still depends on resources.

True, in principle. But I see the hard stop somewhere at the level of
the Dyson Sphere. Until then it’s a merry ride. And even then, with the
harnessed power of an entire star, who knows if there aren’t spacetime
cheats to make interstellar travel possible.

Now you’re treading into my territory (relativity theorist here). There are no spacetime cheats. All possibilities depend on violating energy conditions that don’t appear to be violated. And if you think Dyson spheres are the limit, you obviously haven’t been paying attention to climate change and the Fermi paradox.

It is touching, and to some extent charming, how you have faith that technological development will conquer all and that all ideas will somehow pan out. But they don’t. Bacteria can’t innovate their way out of the constraints of the petri dish. We can’t innovate our way out of the resource constraints we face either. We’ve essentially run out of oil. If you have to drill more than a mile under the ocean, you’ve approached a limit.

What will you use to replace it? Nuclear? Wind? Solar? Nuclear has waste problems. Wind and solar, as the Max Planck Institute has shown, will themselves cause climate change if used on a broad enough scale. Since we are nowhere near building a Dyson sphere, how will you bridge the gap?

Fisheries are collapsing. Topsoil is vanishing, as is groundwater. This is evidence that the carrying capacity of the Earth has been exceeded. There are far too many people chasing limited resources and they all want more.

When someone was discussing the possibility of extraterrestrial life with Enrico Fermi, he eventually said “Where are they?” That was a cogent point. We have been broadcasting radio for about a century. Anyone within about 100 light years of us therefore knows we are here due to unnatural radio emissions. But despite 50 years of looking, we have never detected an unnatural radio emission from anywhere else in the universe.

Until recently, you could argue that maybe this is because planets, and therefore life, are relatively rare. No more. We know of about 2000 planets around other stars, and we’ve only just become able to discover them. Planets seem to be the rule, not the exception. Presumably, where there are planets there can be life, so life is probably also the rule, not the exception.

So again Fermi: Where are they? Some civilizations should surely be older than ours. We should detect their emissions, even if radio is only a transient phase in their development. But there’s nothing. Why?

Natural selection by and large produces species that compete for resources and, when resources are available, will exploit them to the maximum extent possible. That’s certainly what mostly happens on Earth. Natural selection is unlikely to be a scientific principle restricted to Earth. Likely, it is universal. So everywhere is much like here.

And here, we are killing ourselves off by trying to exploit our resources before someone else does. I don’t have evidence, but I have a suspicion that we’ve already passed a tipping point on climate change. Change is occurring faster and to a larger extent than even the most pessimistic models. The West Antarctic Ice Sheet is doomed to melt and there is nothing that can be done to stop it. That’s a big deal.

Once our civilization has collapsed due to massive climate-induced dislocations, there will be no second civilization. The reason is that all the easily available energy resources are gone. Remember resource constraints?

I’ve come to the conclusion that intelligence is not a long term survival trait for a species. It leads you to destroy your own environment. And that accounts for the Fermi paradox. No doubt there were many intelligent species in the universe. They all killed themselves off through environmental destruction. I don’t see any other plausible explanations.

What does this have to do with computing? Well, quite a lot. We’re the ones who are going to build the damn things. It doesn’t look to me like we will do so before we kill ourselves off. Your judgment of that depends on your judgment of how close to AI we actually are, but AI is historically marked by over-, not under-, estimation. It has been 20 years off for 70 years now. I also know a bit about cognitive science and realize that the problem is not nearly so simple as just throwing more computing power at it.

But suppose we could beat the clock. Suppose we create AI before we kill ourselves off. All that does is reproduce the same problem in a different medium. Natural selection doesn’t give a shit about the substrate of the organism. It only cares about competition for resources. And resources are always limited.

There are, by the way, no Dyson spheres. They’d be pretty easy to detect with infrared cameras. So nobody else solved it either.

I want you to notice that everything I wrote, except certain elements of speculation about the Fermi Paradox (and not all elements), was based on known fact. You rely on faith. You’re confident, for no particular reason, that problems will always be solved. I invite you, then, to create a perpetual motion machine.

Often research tells us “Sorry, can’t do that.”

2 Likes

Nope, this is wrong. And I don’t mean partially wrong, but completely wrong. In the end, there is the heat death of the universe, at least. Or the like.

Since Tim Urban did not see fit to cite Heinlein as the source of his exponential-growth extrapolation diagram, allow me to do so.

No. But it shows stacking S-curves.

Desktops are a niche. Tablets too. Smartphones as well. Supercomputers too. There is no mainstream anymore, the applications are too diverse.

The revolutionary progress happened under our feet in such a sneaky way that you completely missed it. Or you have an impractically high threshold for “revolutionary”.

How do you know it can’t? How do you know it won’t?

I don’t expect them ALL to work. I expect SOME to work.

I heard similar arguments when photolithography was reaching the diffraction limits. Then inverse lithography and phase-shift masks cheated around it.

Or, again the diffraction limit, in optical microscopy. Now there are near-field superresolution ones.

There are always limits in tech. Then they are reached and either they are worked around, or a new tech is deployed.

A specific problem can stay unsolvable. But don’t let that cloud your perspective so much that you never step back and find a way around it (e.g. a different, solvable problem).

Yes. That’s expected. But you won’t find what truly cannot be done by not at least trying.

Yes. That’s why there are multiple simultaneous directions of research. Ninety-nine won’t find anything; the hundredth will be a breakthrough. Of course you can’t see in advance which one it is.

You keep seeing the S-curves and refuse to see the jumps to new ones.
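That “stacking S-curves” picture is easy to sketch numerically: sum successive logistic curves, each new “paradigm” kicking in with a higher ceiling roughly as the previous one saturates, and the envelope keeps climbing roughly exponentially. A toy illustration only; the midpoints and ceilings here are made-up numbers, not a model of any real technology:

```python
import math

def logistic(t, midpoint, ceiling):
    """A single S-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-(t - midpoint)))

def stacked(t, n_curves=5):
    """Sum of successive S-curves; each new 'paradigm' has a 10x higher
    ceiling and kicks in roughly when the previous one flattens out."""
    return sum(logistic(t, midpoint=10 * k, ceiling=10 ** k)
               for k in range(n_curves))

# Each individual curve plateaus, but the envelope keeps growing ~10x
# per 10 time steps:
for t in (5, 15, 25, 35):
    print(t, round(stacked(t), 1))
```

Any one curve flattens out, but as long as new curves keep arriving, the sum looks exponential from a distance.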

That you cannot see it does not mean it cannot exist. There is quite a bit of weird noise about space-curving methods for faster-than-light travel. So far crazy and impossible with today’s tech, but fitting within current theories.

One of the possibilities, and I assume not the last one, is the Alcubierre drive.

ORLLY?

We don’t need no stinkin’ oil. There is a lot of ocean surface, enough to grow pontoon after pontoon of algae-based fuel. These ship arrays can be mostly self-replicating if the “mothership” converts the green crude to plastic and prints the other ships.

Wind and solar in a supporting role, and nuclear via thorium-cycle reactors. Molten salts seem to be safe (though there are corrosion issues with materials; certain composites may be helpful here). Or you can sink the whole reactor plant undersea; the high outside pressure can keep equilibrium with the internal pressure.

Then there is the chance of subcritical reactors, which can burn already burned-up fuel. The show-stopper here is the high-power neutron generator. Again, nothing that any theory says is impossible.

It all translates to energy problems. You can grow anything for an arbitrary number of people as long as you have enough energy. (And as long as the people are willing to live on reformulated algae. Which, given what the contemporary flavor industry can do with even cardboard, does not sound as unpalatable as it may seem.)

…and then you have the issue of developed nations with declining birth rates. Bring higher standards of living to the developing ones and the overpopulation will solve itself.

Are we sure we are looking for the right markers? At the right angles? Why do we assume their tech development went the same direction?

It may finally be enough to force us to accept some risk with untried technologies and try out some new toys. The Manhattan Project got us from theory to a functional fission weapon in about three years. At the beginning they barely knew what they were doing; at the end, instasun.

Define “civilization”. The sun keeps shining. The plants keep growing. The knowledge is in so many copies it is not likely to be entirely lost.

So what to do? Throw in the towel and die off peacefully?

What about TRYING? Either we succeed, or not. If we don’t try, we certainly won’t succeed.

I am aware of the problems. There are a number of theories, and new ones are emerging as the old ones die. This is not my area of expertise, so I can just watch from the peanut gallery and cheer in a general direction. But cheering I do. And yes, there are tough problems. But we’ve solved many other tough problems already.

And buy some hundreds of millennia of additional time.

Yes. And there are always some new approaches waiting to be discovered. Or stumbled over, or sat on while looking for something entirely different.

Which can be read as “Try another way”. Of course if you decide to read it as “Give up”, you’ll give up and I’ll pity you.

Then it is time to step back, reevaluate what we want to achieve, find that we overspecified the problem, and restate it and try again. In this case, we cannot make a perpetual motion machine, but what did we want it for? Tabletop curiosity? Making energy for cheap? Publishing a paper? Scamming investors? Fun weekend project?

2 Likes

Fact is great at drilling down to the truth. However, belief in the impossible is what fuels innovation.

1 Like

Isn’t this the same basic idea Alvin Toffler had with his book “Future Shock”? Like, it’s not wrong, but as with any prediction of the future, it’s not right either.

If I had a time machine, I would use it to go and ask Pointed Questions to those prognosticators as to the whereabouts of all these spiffy spaceships they promised me. Wait, no…

I read an excellent little bit of swashbuckling SF a while back, that I sadly forget the name of, which had AIs that they used very briefly, as in after 5 minutes they were too dangerous to be around, and they were regulated like atomic weapons. It was good. It had lots of fighting and spaceships, too.

Yeah, there are some years remaining of possible improvements, but the previous era of Moore’s Law style doubling seems to already be over. And we can’t say if any successor technologies will be amenable to the same exponential improvements. Probably not, given the particular issues that made it possible for chips to be shrunk, but who knows at this point.

Probably yes, because we have the entire third dimension to take advantage of. The chips of today are all planar; everything happens in 2D, in a thin layer on the surface. The chip-stacking techniques in use are just gluing several wafers on top of each other and connecting them with through-silicon vias.

There is a lot of potential in the undeveloped.


1 Like

The linked article was rife with bad statistical evidence, but who cares? It is easy enough to dismiss the article and enjoy the work which is and has been advancing computers and computation.

For somebody who just went off pooh-poohing a lot of ideas as not being based upon evidence, you basically followed up with stuff that has even less evidence. One can at least visit a photonics or molecular computing lab to talk about what they are doing now and how far they’ve come. Your outlook sounds a bit populist, as if these advances are only “revolutionary” if everybody has heard of them and uses them. The achievements are what they are, and how many or few use them does not change that.

How about energy from hydrogen? It’s only the most plentiful molecule in the universe… Good luck running out of it.

Then I’d say you’ve got a quite dysfunctional definition of intelligence. If you haven’t noticed, humans have not been applying their intellect toward the problem of how to squander their resources. What has been happening is that intellect has been a negligible factor, while civilization has been driven by instinct. An instinctual drive that appears to support individual survival can be at odds with survival in broader terms, when somebody can change their environment with no larger, more objective perspective. Hoarding, competition, and war are hardly intellectual pursuits, yet these are what have shaped the current crisis.

Perhaps instilling intelligence in something without self-destructive instincts will allow something to survive.

2 Likes

I’m reading and posting to this thread from my phone.

15 years ago I was still using an amber screen IBM clone (the next year I “upgraded” to Windows ME).

This phone is more powerful than either of those machines by several orders of magnitude. Also, it has a better digital camera than the one I got about the same time. (Although the lens is somewhat crackerjack)

2 Likes

no limit but the Landauer limit.
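For scale: the Landauer bound, the minimum energy to irreversibly erase one bit of information, is kT ln 2, which works out to a few zeptojoules at room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T_ROOM = 300.0      # room temperature, kelvin

# Minimum energy dissipated per irreversibly erased bit:
landauer_j = K_B * T_ROOM * math.log(2)
print(f"{landauer_j:.3e} J per bit")  # ~2.9e-21 J
```

Today’s hardware dissipates many orders of magnitude more than this per bit operation, so the ceiling is still a long way off.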

2 Likes

And even that can be worked around by reversible computing.
(Thanks, wasn’t aware of this one.)

What if its state vector doesn’t collapse for you, and the bit remains both true and false simultaneously?

1 Like

Well, fuck.

2 Likes

The S-curves you’re stacking may be (are probably) general growth curves, not necessarily to do with AI. The last major leap in computing/AI will be with 3D structures. Then computing becomes another linear commodity, like steel production. And AI/the_singularity will happen, or it won’t, within that framework.

That’s not the last. Self-assembly of these structures from cheap, mass-produced parts (perhaps something akin to virions?) will follow, together with fault-tolerant architectures. It’s one thing to make chips in an expensive foundry from superclean materials; it’s another to grow them in vats as if making beer or tofu.

Then you may not be able to scale the size, but you’ll scale the cost per unit. Take the cubic micrometers of actual volume where computation (or storage) occurs in today’s chips; all else is “dead matter”. Now assume that after some time we get the same volume per unit of computing power in a self-assembled structure, or even 1000 times less. Scale to something sanely small, e.g. a matchbox. Need 8 times more power? Double each matchbox’s linear size. We have plenty of physical space; we used to dedicate entire buildings to early vac-tube computers, we do now with datacenters, and we will do so with later architectures.
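The matchbox arithmetic checks out: at a fixed volumetric density, compute capacity grows with the cube of the linear size, so doubling each edge gives 8 times the capacity. A toy calculation (the sizes and density are arbitrary placeholders, and it ignores interconnect and cooling overhead):

```python
def capacity(edge_mm: float, power_per_mm3: float = 1.0) -> float:
    """Compute capacity of a cubic module, assuming capacity scales
    linearly with volume (edge**3) at a fixed volumetric density."""
    return power_per_mm3 * edge_mm ** 3

base = capacity(50.0)      # a roughly matchbox-sized cube
doubled = capacity(100.0)  # double each edge
print(doubled / base)      # -> 8.0
```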

And then there are further possible improvement steps hidden from our current imagination. That you cannot imagine something does not mean it cannot exist/happen.

The method of assembly, though, just accounts for a few orders of magnitude after the cubization.

Fault tolerance actually reduces raw capability; it probably won’t cancel out the other improvements, but it contributes to the regression to linear growth.

In what metrics? Per volume, yes. Per cost, due to vastly increased yields, HELL NO! :smiley:
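The yield intuition can be made concrete with the classic Poisson defect model, Y = exp(−D·A): the fraction of defect-free dies falls off exponentially with die area, which is why a fault-tolerant design that routes around defects, instead of discarding whole dies, can slash the cost per good unit. Illustrative numbers only, not real process data:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: the fraction of dies that land
    with zero defects, Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Same defect density, two die sizes:
small = poisson_yield(0.5, 0.5)  # ~78% of small dies are defect-free
large = poisson_yield(0.5, 4.0)  # ~14% of large dies are defect-free
print(small, large)
```

A design that tolerates a few defects effectively moves its yield toward the small-die curve even at large-die area, which is exactly the “per cost” win being claimed.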