Even if Moore's Law is "running out," there's still plenty of room at the bottom

[Read the post]

Not speed. Number of components.

8 Likes

An FPGA, eh? I’m currently working on compiling a big FPGA design for a spectrometer. I was handed a reference design, including board support circuitry, to start with; it contains all sorts of stuff that I haven’t vetted for trustworthiness. It could have an NSA backdoor in it, for all I know.

3 Likes

Moore’s law has several components. Depending on what component you look at, it’s either still going strong, is on its last legs, or it ended a long time ago.

In its basic form, it says that every (time period), the number of transistors per unit area doubles, or the cost per transistor halves. The time period in question has been revised a few times, from one year, to two, and now it looks like it needs to be changed to every three or four years. The number of transistors per area is still doubling just fine, just more slowly.
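In code, the basic form is just exponential growth with a tunable doubling period. A back-of-envelope sketch (round figures, not industry data):

```python
# Back-of-envelope model of Moore's law: transistor count doubles
# every `doubling_period` years. Starting figures are round numbers
# for illustration, not industry data.
def transistors(years_elapsed, start_count=2_300, doubling_period=2.0):
    """Project transistor count after `years_elapsed` years."""
    return start_count * 2 ** (years_elapsed / doubling_period)

# The Intel 4004 (1971) had ~2,300 transistors. With a 2-year
# doubling period, 40 years of scaling predicts ~2.4 billion:
print(f"{transistors(40):.2e}")   # ~2.41e+09

# Stretch the doubling period to 4 years and the same 40 years
# only buys a thousandfold increase instead of a millionfold:
print(f"{transistors(40, doubling_period=4.0):.2e}")  # ~2.36e+06
```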

The cost per transistor, however, has stopped falling – either 32 or 28 nm is the last node that was definitely cheaper than the previous node for the same chips. The cost of each new node since then has been going up, with the number of companies that can afford to build new fabs on the latest node dwindling to a tiny handful of giant firms. CPUs, GPUs, and NAND are really the only chips still being made on the very latest process node available. Double patterning is just too expensive. And with NAND, those who have the capability have been backing off the latest nodes and going with layered 3D NAND at 28 nm instead.

Meanwhile, in terms of processing power, Moore’s law ceased to work over a decade ago, when CPU makers discovered that power and heat requirements were ballooning out of control as they got into the 3 GHz range. Since then we’ve had a shift to multi-core CPUs, but the performance per core has only increased arithmetically for the past decade, after growing exponentially in all the years previous.
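The mechanism is the standard dynamic-power relation, roughly P ≈ C·V²·f. A sketch with made-up round numbers, just to show the shape of the problem:

```python
# Dynamic (switching) power of CMOS logic scales as P ~ C * V^2 * f.
# The constants below are illustrative round numbers, not real chip data.
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    return capacitance_f * voltage_v ** 2 * frequency_hz

base = dynamic_power(1e-9, 1.2, 3e9)      # a hypothetical 3 GHz core
doubled = dynamic_power(1e-9, 1.2, 6e9)   # same core pushed to 6 GHz

# Doubling frequency alone doubles power; in practice higher clocks
# also demanded higher voltage, which enters squared, so power (and
# heat) ballooned much faster than performance did.
print(doubled / base)  # 2.0
```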

11 Likes

In spite of different high-level functionality, at the lowest level, FPGAs are typically made of the same bits and pieces that most chips are. Accordingly, they are manufactured using the same process technologies, and are also subject to the “end of Moore’s Law” in most of the ways that non-reconfigurable chips are.

2 Likes

Moore’s Law was a fiction from the start, a way to express how low-hanging the fruit was. Each transistor in Moore’s time used enough silicon area to contain a few megabytes of DRAM in the modern age.

Now we are seeing the laws of physics and optics getting in the way of further density increases. It’s not surprising that it took 50 years to get there. Engineers can only halve the size of a thing in each iteration.

A radical change in the structure of computing machinery is the most likely outcome. I would love to see the mysteries of the human brain plumbed in CPU architecture.

8 Likes

Although FPGAs are great for classical computing, I’m throwing my time and money at neuromorphic (imitating the brain) hardware. I write about it here, if you’re curious.

With respect to parallelism, it’s worth mentioning “Dark Silicon”.

Before it was clear that device dimension scaling was going to stop, voltage scaling plateaued. This happened about 10 years ago. About 5 years ago, people started to get worried about it. Without shrinking voltages to accompany shrinking device dimensions, power density started to increase.

You can only pull so much heat out of the chip, so the implication is that even if the new process technologies let you have a lot more transistors in the same area, you can’t use all of them at the same time without making the chip too hot. The “Dark Silicon” is the fraction of the chip that has to be off at any given time to prevent this.
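A rough way to see the size of the effect, with purely illustrative scaling factors: if each node doubles transistor density but per-transistor switching power falls by less than half, the dark fraction grows every generation.

```python
# Illustrative dark-silicon arithmetic, not real process data.
# Each node shrink doubles transistor density (density_gain = 2.0),
# but with voltage no longer scaling, per-transistor power improves
# by a smaller factor (power_gain < 2.0).
def dark_fraction(generations, density_gain=2.0, power_gain=1.4):
    """Fraction of transistors that must be off to hold power density flat."""
    usable = (power_gain / density_gain) ** generations
    return 1.0 - usable

for gen in range(1, 5):
    print(gen, f"{dark_fraction(gen):.0%}")
# 1 30%, 2 51%, 3 66%, 4 76% -- more of each new chip goes dark.
```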

Here’s where parallelism comes in: even if I could break my program up into as many pieces as I desired and I had massively parallel, massively dense hardware courtesy of Mr. Moore, I wouldn’t be able to fire up all those independent threads simultaneously because I would melt the chip.

Obviously, parallelism is great, but there’s a limit to how much computation is possible in a given amount of space, and that limit isn’t changing much. To get the most out of the available computational fabric, we need to do something more than just add cores, even if you can actually find new threads to run on them.
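Amdahl’s law makes the same point from the software side: the serial fraction of a program caps the speedup no matter how many cores you add. A quick sketch (the 5% serial fraction is just an example):

```python
# Amdahl's law: speedup from N cores when a fraction `serial`
# of the work cannot be parallelized. The 5% figure is an example.
def amdahl_speedup(cores, serial=0.05):
    return 1.0 / (serial + (1.0 - serial) / cores)

for n in (2, 8, 64, 1024):
    print(n, f"{amdahl_speedup(n):.1f}x")
# 2 -> 1.9x, 8 -> 5.9x, 64 -> 15.4x, 1024 -> 19.6x; past a point,
# adding cores buys almost nothing, even before thermal limits bite.
```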

4 Likes

Moore’s Law is neither very accurate, nor really running out. What has happened is that the tech industries have become satisfied with the price/performance ratio and have been optimizing other areas, such as power consumption. As somebody who was trying to run Pentium-powered wearable computers in the late 90s, I can say today’s low-power multi-core processors would have seemed like a wet dream. In the 90s, there was no market push towards reducing wattage.

Even by the late 80s to early 90s, the fastest and densest CPUs were made on GaAs rather than silicon. Why didn’t that take off? Perhaps the initial investment was too large. Or the materials are not abundant enough. Or the tech doesn’t scale well. Today, the best size, density and power going is probably carbon. But I haven’t heard of actual production hardware being made using it yet. The industry is continuing to push old tech probably because it is profitable, and they are stodgy. It really has nothing to do with Moore’s Law.

5 Likes

What it really comes down to is that a silicon atom is around 0.2 nm across. With today’s 10 nm lithography processes, you can’t get much smaller. Today’s chip designers are already struggling with quantum-mechanics problems at this size.
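The arithmetic is stark. A minimal sketch using the ~0.2 nm figure above (and treating node names as literal feature widths, which is a simplification):

```python
# Back-of-envelope: how many ~0.2 nm silicon atoms fit across a
# lithography feature. Node sizes are treated as literal feature
# widths here, which is a simplification.
ATOM_NM = 0.2

for node_nm in (28, 10, 5, 2):
    print(f"{node_nm} nm ~= {node_nm / ATOM_NM:.0f} atoms wide")
# 28 nm -> 140 atoms, 10 nm -> 50, 5 nm -> 25, 2 nm -> 10:
# only a few halvings left before features are countable in atoms.
```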

Parallelization is great, but it’s incredibly easy to get wrong. Best case, your code runs slower, worst case you introduce all kinds of new and exciting (not to mention hard to reproduce) bugs.
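A classic example of how it goes wrong, as a sketch: two threads incrementing a shared counter without a lock, which can drop updates nondeterministically.

```python
# A classic parallelism bug: two threads increment a shared counter
# without a lock, so read-modify-write updates can be lost. The final
# count varies from run to run -- exactly the kind of hard-to-reproduce
# bug described above.
import threading

counter = 0

def work(n=100_000):
    global counter
    for _ in range(n):
        counter += 1  # not atomic: load, add, store can interleave

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 200000, but may print less depending on timing
```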

3 Likes

I just want to know why ARM chips still aren’t first-class citizens of the Linux world.

2 Likes

Just like humans, though, any genius AI of the future will still need to make use of conventional computers to do its grunt-work.

2 Likes

Are you an electrical engineer? I ask because electrical engineers I know state that you are wrong, as does this article.

2 Likes

What’s the name of the law that says industrial output of every kind must increase exponentially forever? Is Moore’s law related?

4 Likes

I do some electronics engineering, but it is an activity, not an identity.

It seems unlikely that they would know me, or my position on this. I create models of the world, but I have no reason to identify with those either. I can devise two mutually-exclusive models of things - which one is me?

No, the article does not. At least, it does not unpack any implicits which would indicate this. What I did was place a distinction between what is possible with the engineering, and the conventions of the marketplace. The convention has been to still make CPU dies from silicon, when this tech was long ago surpassed. There might be legitimate reasons why more recent techniques aren’t used, but as I said, I am not privy to the decision-making process.

The article did point out that measurement of “speed” depends upon how the architecture and tasks are organized. Unless one is performing the same task in the same way, no benchmark for speed is being used apart from the density of transistors. Which also leads to the question of “what precisely is a transistor?” There are many kinds of semiconduction which have not yet been exploited in integrated circuits.

Like I said before, most of these speculations about Moore’s Law are based upon the use of silicon and CMOS logic, which are economic rather than engineering conventions. My point was that there are other technologies which do not suffer from the same limitations (though they trade off with others), but pressing on with silicon has been “good enough” to make economic sense for the computer industry, rather than pursuing something more modern which would require an investment in radically changing how they do things.

For example:

1 Like

Cheers.

9 Likes

Capitalism?

9 Likes

The next frontier is figuring out how to express your algorithm as a directed acyclic graph, or some other way that lets you take advantage of parallelization.
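A minimal sketch of that idea, using only the standard library (the task names and the graph itself are hypothetical examples): repeatedly find every task whose dependencies are all finished and run that batch in parallel.

```python
# Minimal DAG-scheduling sketch: tasks run as soon as all their
# dependencies have finished, so independent branches execute in
# parallel. The task names and graph are hypothetical examples.
from concurrent.futures import ThreadPoolExecutor

deps = {                     # task -> set of prerequisite tasks
    "load": set(),
    "parse": {"load"},
    "stats": {"parse"},      # "stats" and "plot" are independent,
    "plot": {"parse"},       # so they can run at the same time
    "report": {"stats", "plot"},
}

def run(task):
    print(f"running {task}")
    return task

with ThreadPoolExecutor() as pool:
    remaining = {t: set(d) for t, d in deps.items()}
    while remaining:
        # every task with no unmet dependencies can run now
        ready = [t for t, d in remaining.items() if not d]
        for finished in pool.map(run, ready):  # batch runs in parallel
            del remaining[finished]
            for d in remaining.values():
                d.discard(finished)
```

Real schedulers launch each task the moment its last dependency finishes rather than in level-sized batches, but the principle is the same.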

1 Like

There has been about a trillion dollars invested in this technology so far, so of course these are economic conventions. Technology will only get investment at this level if there’s guaranteed profit. CMOS is it.

By the way, I found the proceedings of a 1965 symposium at the local science library that discussed ways to make electronics move forward. Amidst all the tinker-toy packaging ideas, one prescient paper actually said that CMOS in silicon was the future, because of the low cost and small size of the process. It was right then, and it’s still right.

8 Likes