The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Originally published at: https://boingboing.net/2020/01/13/serial-killers.html

Just a nitpick, but Moore’s law is about doubling transistor count, not speed. The doubling of speed hasn’t been a thing since the end of the clock-speed wars of the 1990s and early 2000s. We’re reaching the limits of Moore’s law in terms of doubling transistor count, for sure, but speed stopped doubling long ago.

13 Likes

Hey Cory,

Are you ever going to write a post praising machine learning? It’s done some awesome things: predicting flu and other contagious disease outbreaks, early earthquake and avalanche detection, improving worker safety at factories and worksites …

It’s a tool, quit trying to ascribe virtues to it.

4 Likes

Part of that is because the pricey processors (incl. GPUs) paid off the capital expenses for the last-generation fabs, which makes the cheap processors feasible to run on old, already-paid-for processes. Not too long ago, when I was still in the business, the highest-volume process was still 180 nm, even as the money had all gone to 45 nm and below. Too many applications were just fine with 180, and moving to anything smaller would have liabilities besides cost (notably supply voltages that don’t line up well with batteries, and increasing leakage currents).

Meanwhile the yields of the processes that lend themselves to applications like appliance and automotive controls (much less phones and toys!) are still getting better. The newer processors’ installed cost is benefitting from things like progressively cheaper power management devices so batteries are less of a problem, leakage is easier to manage thanks to cell phones, etc.

So the cost of cheap processors goes down.

9 Likes

checks the url slug

nice

It has been estimated that electricity prices make up 60 percent of the cost of Bitcoin mining. If “cheap processors continue to get cheaper” that percentage will increase. We need a Moore’s law about the doubling time of processing power per unit energy.
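To make that concrete, here’s a quick sketch (the 60 percent figure is the estimate above; the hardware share and the halving rate are made-up illustrative numbers): if the electricity bill per unit of mining stays put while the hardware share keeps shrinking, electricity’s slice of the total can only grow.

```python
# Illustrative sketch only: assume the electricity bill per unit of mining stays fixed
# while the hardware share of the cost keeps falling ("cheap processors get cheaper").
electricity = 0.60   # share of total mining cost today (the estimate quoted above)
hardware = 0.40      # everything else, lumped together (assumed)

for year in range(0, 9, 2):
    hw = hardware * 0.5 ** (year / 2)          # assume the hardware share halves every 2 years
    share = electricity / (electricity + hw)   # electricity's share of the new, smaller total
    print(f"year {year}: electricity is {share:.0%} of mining cost")
```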

2 Likes

Not just “reaching” it; we have reached it. It happened a few years back, from what I’ve read. (Not because transistors couldn’t be physically shrunk any further, a point we’re only hitting now, but because fundamental issues around heat dissipation kept clock speeds from climbing.) Since then it’s all been about changing architecture and adding more cores. But software doesn’t necessarily take advantage of the extra cores the way it should (or at all), so speed has stalled, unevenly. So this whole issue is a past/current one, not a future one.

4 Likes

[image]

4 Likes

This was written in 1999:

The Effects of Moore’s Law and Slacking on Large Computations
Chris Gottbrath, Jeremy Bailin, Casey Meakin, Todd Thompson, J.J. Charfman
Steward Observatory, University of Arizona

Abstract
We show that, in the context of Moore’s Law, overall productivity can be increased for large enough computations by ‘slacking’ or waiting for some period of time before purchasing a computer and beginning the calculation.
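A rough way to see why slacking can win (this is my own back-of-the-envelope reconstruction, not the paper’s derivation): if a job needs R months on today’s hardware and speed doubles every 18 months, waiting w months gives a total time of w + R·2^(−w/18), which is minimised at a positive w whenever R is longer than 18/ln 2 ≈ 26 months.

```python
import math

DOUBLING = 18.0  # months per doubling, the folk-Moore's-Law figure the paper assumes

def total_time(runtime_now, wait):
    """Months until results arrive if you slack for `wait` months before starting."""
    return wait + runtime_now * 2 ** (-wait / DOUBLING)

def optimal_wait(runtime_now):
    """Wait that minimises total time; zero when the job is short enough to just start."""
    w = DOUBLING * math.log2(runtime_now * math.log(2) / DOUBLING)
    return max(w, 0.0)

for months in (12, 26, 60, 120):   # hypothetical job lengths on today's hardware
    w = optimal_wait(months)
    print(f"{months}-month job: slack {w:5.1f} months, "
          f"finish after {total_time(months, w):.1f} months instead of {months}")
```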

5 Likes

Aren’t there plenty of people singing its praises? Does it really hurt to have someone saying let’s be careful, too? I ask

1 Like

At a local scale – that is, writing the code that runs on a particular computer – programmers are not at all attracted to parallelism. The typical view is that you write single-threaded code and then, if there is a particular, expensive task that could benefit from being parallelised, you might tackle that as a subsequent, painful project in its own right. At a fine-grained level, parallelism is a headache – probably intrinsically, and certainly with the tools developers currently have.
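For what it’s worth, that pattern looks something like this minimal Python sketch (the workload and names are hypothetical): you write the plain loop first, and only later, if one task turns out to be expensive, bolt a pool of workers onto that one task and leave everything else single-threaded.

```python
from concurrent.futures import ProcessPoolExecutor

def expensive(n):
    """Stand-in for the one hot spot worth parallelising (hypothetical workload)."""
    return sum(i * i for i in range(n))

def run_serial(items):
    # How the code starts life: a plain single-threaded loop.
    return [expensive(n) for n in items]

def run_parallel(items):
    # The later, bolted-on parallel version of just that one task;
    # everything around it stays single-threaded.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(expensive, items))

if __name__ == "__main__":   # guard needed where worker processes are spawned
    items = [200_000] * 8
    assert run_serial(items) == run_parallel(items)
```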

Modern computers do benefit from having a few cores, because even if every process is single-threaded, there are several processes doing work at the same time. But the nineties Danny Hillis type of vision, where computers have thousands of relatively feeble processors, currently looks like a dead end; we just can’t make software that could exploit that kind of system. (With one significant but narrow exception).

Still, I don’t have the sense that anyone’s hitting a ceiling in terms of processing power. For most purposes, single-core performance has long been so excessive that we’re still working out how to waste that much power. Like, if you’d told me in 1996 that there would be 3D design apps written in JavaScript running inside a web browser on top of a bytecode interpreter, that would (a) sound totally insane and (b) make me not at all worried about the future of scalar processing performance.

11 Likes

We have one: Koomey’s law.

4 Likes

Ray Kurzweil’s visions of the future notwithstanding, there are a LOT of ways we can increase computing power, & most of those will increase computing power way more than the doubling of Moore’s Law.

The trouble is that these will happen once, & there’s a finite set of them.

Ways we can increase computing power (in terms of effective productivity per second, as not all of these correspond neatly to operations per second):

  • Optical computing
  • Quantum Computing
  • DNA computing
  • Memristor-transistor logic
  • Coprocessors dedicated to specific tasks like
    – GPUs (which we already have)
    – Encryption
    – Garbage Collection
    – Security
    – Uh… database operations?

As you can see, things tend to taper off at the end. Each of these would speed up certain functions by several orders of magnitude, but would do nothing for most other functions, & the problem is that once those gains are realized, there’s nothing more those designs can improve in the future, apart from some synergistic improvements when the next new method comes along. & there are only so many of these changes that are useful.
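That’s basically Amdahl’s law. A quick sketch with made-up workload fractions shows how little a dedicated coprocessor moves the overall number:

```python
def overall_speedup(fraction_accelerated, speedup):
    """Amdahl's law: only `fraction_accelerated` of the runtime gets faster."""
    return 1 / ((1 - fraction_accelerated) + fraction_accelerated / speedup)

# Hypothetical coprocessor that makes 20% of the workload 1000x faster:
print(f"{overall_speedup(0.20, 1000):.2f}x overall")   # ~1.25x
# Even an infinite speedup on that 20% caps out at 1 / 0.8 = 1.25x overall.
```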

We’ll have to go back to actually optimizing code sooner than we think.

2 Likes

I was going to suggest that the only practical application the casual user would have for that kind of processing power is souped-up video games, but then, it seems clear these days that you don’t necessarily need to squeeze all the processing power out of a machine to give people an entertaining experience. And developers probably wouldn’t make the effort to write something for an enormously powerful machine that most people can’t afford anyway when there are cheaper thrills to be had.

Not a CPU but… brain performance is estimated at 10^25 flops down to 10^18 (https://aiimpacts.org/brain-performance-in-flops/)

Current consumer-grade GPUs - $1000 buys you somewhere north of 11.3×10^12 flops (https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude and https://www.techpowerup.com/gpu-specs/asus-rog-strix-rtx-2080-ti-gaming.b6107).

If we assume Moore’s Law continues to hold, and ignore all the other problems (heat, interconnects, material advances… etc.), then we get to a human-brain amount of computing in about 16 or 17 Moore cycles - roughly 25 years - at the lower end of the estimated brain-power range, and somewhere around 40 cycles (about 60 years) at the upper end.
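Showing the working (my own arithmetic, using only the figures quoted above):

```python
import math

gpu_flops = 11.3e12                   # ~$1000 of consumer GPU today (figure above)
brain_low, brain_high = 1e18, 1e25    # estimated brain range from the link above
CYCLE_YEARS = 1.5                     # one folk-Moore's-Law doubling

for target in (brain_low, brain_high):
    doublings = math.log2(target / gpu_flops)
    print(f"{target:.0e} flops: {doublings:.1f} doublings, "
          f"about {doublings * CYCLE_YEARS:.0f} years")
# ~16.4 doublings (~25 years) for the low estimate, ~39.7 (~60 years) for the high one.
```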

But even animals with much less brain power are capable of amazing things, so it won’t be surprising to see truly smart systems coming online in the next ten years. Imagine a virtual dog which wants to seek out information for you; a virtual rat looking for virtual landmines which have been planted in your computing environment; or a simulation of fetal brain development from the moment of the first neuron until 5 months, to give us hints about abortion. (Caveat - I haven’t done the calculation to estimate the computing power needed for any of these examples, but my gut feeling is that they are likely to be within the feasible range from a pure computing-power POV.)

Obviously all of these need other tech advances, and more importantly there are massive ethical issues involved in systems with awareness-level intelligence; once you have an info dog, is it okay to switch it off? What about killing the virtual baby? What about pain receptors? Do we subject these things to virtual pain?

Ok, so the premises are (1) that there’s a mysterious base cost to a single-core CPU of $500 or so which prevented low-cost accumulations of cores, either in multicore chips or in clusters; and (2) that the folk version of Moore’s Law (“speeds double every 18mo”) continued its exponential way. Yes, I know that’s not the real Moore’s Law, and that actually Frankenstein was the scientist not the monster.

So, first let’s figure out how fast they’d be. For convenience, let’s pick 2002 as the date that the world started faking it with parallelism. There have been twelve 18mo periods between 2002 and 2020, so CPUs would have doubled in speed twelve times. That means CPUs are now 4096x 2002 single-core speeds, which this page tells me was 2.4GHz.

So, welcome to Single Core 2020, where $500 buys you a roughly 10THz Intel Pentium AD.
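Showing the working for that figure (my arithmetic, under the folk-Moore’s-Law premise above):

```python
base_clock_ghz = 2.4                  # typical 2002 single-core clock, per the page linked above
doublings = (2020 - 2002) / 1.5       # folk Moore's Law: speed doubles every 18 months
clock_ghz = base_clock_ghz * 2 ** doublings
print(f"{doublings:.0f} doublings -> {clock_ghz / 1000:.1f} THz")   # 12 doublings -> ~9.8 THz
```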

Off the top of my head, I can’t think of any changes to OS or client-side software that have been enabled specifically by multicore CPUs. (Think of multicore as basically “let’s duct-tape two/four/eight CPUs together and sell it as a single CPU”. This makes your reader from the chip industry wince, but it’s a roughly useful mental model.) Windows got better because of Apple and because MSFT failed a couple dozen times. Another reason is possibly that rising interest in cloud computing took the company’s focus from the legacy retail OS business (Windows) to the sexy cloud business (Azure). Perhaps you could argue that Windows still sucks in our Single Core 2020, but it would suck FAST.

Cloud could go either way. You might argue that because CPUs are still expensive, the first large-scale compute farms built by Google, Twitter, Amazon, etc. would be limited in size, because cheap hardware doesn’t exist. But those compute farms were never built on the cheapest chips around; the building blocks were whatever the current commodity mass-produced CPU was. (They pretty soon moved to custom motherboards, then custom everything, as there was a Cambrian explosion of creativity in how you solve data centre problems like access for repair, or cooling.)

So maybe our first premise, the base cost, means that there’s never a spare installed base of hardware in these centres which companies can then sell on to the public to start the consumer cloud (Amazon Web Services) play. Only the big guys have the ability to go parallel. This means Google still exists and is dominant, because web search (indeed, all search) is a fundamentally parallel problem (the number of pages on the web doubles faster than CPU speed, so the problem grows faster than any single-core system could deal with) and the high cost of parallel systems means nobody else can enter.

You could hypothesise some kind of socialist computing resource, where the government funded an on-demand cloud and allocated time on it – this could be USSR-esque in its pessimal incarnation, or like the USA highway system at its best (“we all need them, so we all will pay for them, heavy users pay more”). Under this system, parallel compute-driven resources might well have stayed university projects (think of Yahoo! and Google fighting it out as rival research projects), and struggled to pay for the resources consumed by their popularity. Can you imagine Google’s advertising model making it past ethics approval? But such a hypothesis goes beyond the premises we established at the beginning.

That does, however, raise the question of what the Web industry would look like in Single Core 2020: if cloud clusters are insanely expensive, that removes horizontal scaling (lots of computers) as a solution to “I’ve just started Twitter and now millions of people want to use it”. In fact, Twitter itself is a fundamentally parallel service: when Lady Gaga tweets, 80.7M timelines need to update. You might be able to serve a small town’s Twitter from a single 10THz core, but nothing national.
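To put a rough number on that fan-out (the per-update cost below is an invented illustration, not a measured figure):

```python
followers = 80_700_000     # the Lady Gaga follower count quoted above
cost_per_update_s = 1e-6   # assumed: one microsecond to push a tweet into one timeline

print(f"{followers * cost_per_update_s:,.0f} seconds of fan-out work per celebrity tweet")
# ~81 core-seconds for a single tweet; real Twitter spreads that across thousands of
# cheap machines, while Single Core 2020 has to eat it on one chip.
```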

If we didn’t have GPUs, we wouldn’t have deep learning’s renaissance. We’d still be in the AI Winter. So goodbye facial recognition (unlock your phone with your face? Only if you type your passcode with your nose), deep fakes, speech recognition (“hey Siri” doesn’t exist in this world), and “people who bought also bought …”.

In our Single Core 2020, gaming still feels more like Space Invaders than World of Warcraft – GPUs are still ridiculously expensive because they’re inherently multicore. Client-side parallelism started with GPUs and consoles are life support systems for GPUs. Xbox preceded our magical 2002 date by a bit, so in our world if there’s gaming or VR hardware then it’s still insanely expensive and the realm of the rich and the military.

But to your question … new apps? I don’t know of fields whose progress has been hampered by lack of growth in single-core CPU speeds. Perhaps this guy would have an opinion?

2 Likes

The fuck it does…

17 Likes

Talk about empiricism-washing

3 Likes

Broken link. Were you looking for https://scholar.harvard.edu/files/mickens/files/theslowwinter.pdf ?

2 Likes

Thanks for that link.

There are probably some things that machine learning would be good for; making public policy on bodily autonomy is emphatically not one of them.

10 Likes