The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Let’s say we started to run into a wall in 2005, when AMD sold the first x86 multi-core CPU. GPUs had been available for purchase for about 10 years by that point (Voodoo Graphics). That was a world without the pressure of Moore’s law, and even then we chose more parallelization because it made things possible that Moore’s law alone couldn’t account for. Machine learning, cryptocurrency, and VR all take advantage of the gains we made with GPUs (specialized parallel co-processors). In a world where CPUs had kept on course with Moore’s law we still would have created GPUs, so things wouldn’t look any different.


Voodoo didn’t even handle T&L in hardware, let alone programmable shaders.


How is VR considered to be an example of a parallelizable computing task? Yes, individual GPUs themselves have become highly parallelized internally, but I am not familiar with VR applications that can readily take advantage of multiple discrete processors utilized in a parallel fashion. It’s all about having a pretty decent CPU and the best single GPU you can get. Heck, most VR setups can’t take advantage of multiple GPUs at all, due to timing issues, or so I thought?

I guess maybe we chalk this up as another not-fully-informed Cory Doctorow “tech is horrible but I’ve made my entire career of it” rant? :wink:


Thinking of computers in terms of “artificial brains” just because they both process information is about as useful a metaphor as thinking of helicopters in terms of “artificial birds” or thinking of wheels in terms of “artificial feet.”

Beyond a few superficial similarities the comparisons break down pretty quickly. With all due respect for Kurzweil, I don’t buy the idea we’re on the cusp of bridging the two.


The idea that human-like intelligence is bound up with the fact that the brain is emplaced in and connected to a human body goes back to Kant (Critique of Pure Reason).


Makes sense given that a human mind is largely the product of a lifetime of interaction with sensory input. If you were able to grow a human being into adulthood while immobilized inside a sensory deprivation chamber you probably wouldn’t end up with a healthy human mind.


Nobody has a real answer so far.

My outlook on the question of what you do with faster speed but less parallelism is to translate the question: what do you do with less latency on a singular problem? The gamer in me says you get better emulation, because most older hardware was inherently single-threaded, with specialized hardware for its particular application, so a faster single thread gets you there in (hopefully) the right amount of time. Some hardware had its own multitasking by having different chips run different tasks in parallel, e.g. the PPU and APU in the SNES running the graphics and audio as two independent, simultaneous tasks.

But what else does that translate to? I’d say, since (in this hypothetical future) we have fewer server-grade CPUs (slower but more cores), computing would have to be more individualized and less cloud-based. End-user operating systems and distributed systems would be more important; if you don’t have local parallelism, you use remote parallelism. So what do those classes of problems look like? Now you’re depending on network latency, so they would have to be collaborative problems that don’t require (near-)instant results. I’m reminded of the SETI@home project from years past, where you ran a client and the results were sent over the network for validation and analysis.

So in some ways it would mean more community-oriented projects: where do you want to use your CPU cycles? What projects benefit from that, or who can you rent your computing power out to? So I think the user has a little more power in this hypothetical future.
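The SETI@home-style setup described above can be sketched in a few lines. This is a toy local simulation, not the real BOINC protocol: `process_unit` is a hypothetical stand-in for whatever analysis a volunteer client would run, and a thread pool stands in for the remote volunteers.

```python
from concurrent.futures import ThreadPoolExecutor

def process_unit(unit):
    # Hypothetical stand-in for the analysis a volunteer client would run.
    return sum(unit)

# Coordinator side: split one big problem into independent work units.
work_units = [list(range(i, i + 100)) for i in range(0, 1000, 100)]

# Each "volunteer" processes its unit independently; order doesn't matter.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_unit, work_units))

# Validation step: re-run a sampled unit locally and compare
# (a crude version of the redundancy checks volunteer projects use).
assert process_unit(work_units[3]) == results[3]

combined = sum(results)
print(combined)  # the combined answer, assembled from the returned results
```

The key property is that the units are independent, so network latency only delays the final assembly, not each unit’s computation.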



I read the wall of text and found a sincere, honest, and sensible answer to Cory’s question. I was not expecting that, thanks! :hugs:


Have you come across the work of Hameroff and Penrose and their idea of Orch OR? Some pretty interesting stuff, and yeah, if it even hints at truth, Kurzweil is way, way off. Which seems kind of obvious in so many ways, really…

For business use, COBOL could surely benefit, since it was originally developed on single-CPU IBM mainframe machines (and other vendors’, but I’m more familiar with IBM’s), and most COBOL is single-threaded. Mainframe OSes handled dispatching various levels of tasks (operating system, transaction system, application program, to simplify) on the single CPU with a sophisticated dispatcher (well, sophisticated for the time… still sophisticated, but I’m not familiar with all of the other CPU-scheduling dispatchers out there).


We already know a lot about abortion but most of this knowledge is simply ignored.

How about a simulation of societies free from the laws and prejudices that suppress women and minorities, to give us a hint at the fairness, happiness, and other surprising benefits we’re all missing out on?


Why are you using a picture of Icarus Spartan-6 FPGA miners when you don’t talk about them?

Were you looking for … ?

Yes, thanks for catching that!

Speed vs. parallelism reminds me of the contest for market dominance between the Concorde and the 747.


Financial transactions tend to be single-threaded. Maybe we would finally be seeing the microtransactions that were predicted long ago if single-threaded computing were dominant. You know, like paying half a cent to read a boingboing article instead of maxing out all the parallel computing power on my device to stream ad videos that nobody watches.


I spent a decade or so being laughed at, or otherwise contradicted with irritation, over a 1996 editorial for the Calgary Unix User’s Group on the coming end of Moore’s Law, which, based on Scientific American articles, looked to be coming around 2005.

The cores-and-architecture dodges did keep faster computers coming for some years after that, but the i7 I got in 2013 is only a bit bettered now, with faster memory buses and a few more cores; still not the jump in practical speed, for most things, that you used to get in the early ’90s from a two-year gap between purchases.

The parallel “bubble” is kind of like, when you get a parallel hammer, everything looks like a hundred nails.

“The End of Moore’s Law - Thank God”

The title isn’t about looking forward to more elegant programming; it’s that changes to OSes and programs would slow down too, for lack of new possibilities (like graphical interfaces that get ever more elaborate), so that one could write software that wouldn’t be obsoleted in a few years, and thus put more effort into the software because of the longer payback time.

So there’s that.


Not sure where you were 25 years ago, but I can assure you that Moore’s Law has been “applying pressure” to the entire industry ever since it was first uttered.

I remember the newsgroups full of chatter with people talking about the various laws of physics that were going to bring Moore’s Law to a halt back in the 80’s (optical mask resolution and the wavelengths of light), then the 90’s (EMF crosstalk at 133MHz), etc. And every year the foundries would announce a new amazing breakthrough in some aspect of chip performance that bypassed last year’s restrictions.

And they just keep shrinking those features to even more amazing levels. This year they’re touting that the next Zen 4 Ryzen may be a 5 nm chip, which they claim could boost transistor density by 80% over the current 7 nm Ryzen (which already boasts 4.8 billion transistors!).
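For scale, here’s the back-of-the-envelope arithmetic on that claim (assuming, purely for illustration, that die area stays constant so the density gain translates directly into transistor count):

```python
current_transistors = 4.8e9  # current 7 nm Ryzen, per the figure above
density_gain = 0.80          # claimed 80% density boost at 5 nm

# Projected count if die area stayed the same (an assumption, not a spec).
projected = current_transistors * (1 + density_gain)
print(f"~{projected / 1e9:.2f} billion transistors")  # ~8.64 billion
```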

Moore’s Law has never stopped squeezing the almighty transistor. There has always been tremendous pressure on this industry.

If you’re talking about Google Flu Trends: no, it didn’t.

We’re right in the middle of the ML hype cycle, so it is hard to see the useful applications amid the snake oil. And we should be really cynical about ML, because it is practically impossible to explain how such a complex system came to the conclusion it did. When ML is used to change people’s lives, not being able to explain its decision-making is a disaster waiting to happen.


No, it’s not.

It’s impossible to explain how a system with the complexity of ML came to the conclusion it did, but that’s not necessarily true of all similarly complex systems. ML develops along a branching path over millions of generations. To explain the system, you have to explain the complexity of the end state, but also the complexity of each prior generation. It’s not just that this is complicated; it’s that it would take longer than the lifespan of a human being just to recite the complexity of the system itself.
