That’s a gross oversimplification. You will not be able to get much more throughput out of a word processor by making it run on 384 cores, as opposed to one or two. And yes, in fact, if those 384 cores are slower, your word processor will be slower for most tasks on the 384-core machine.
SOME tasks strongly benefit from massive parallelization, of course (a short sketch of this kind of work follows the list):
- En/decoding media, especially video.
- Displaying complex graphics. Every modern video card, whether integrated or discrete, is a massively parallel, very small “supercomputer”. Most of Humanity’s raw computing power, in fact, is in GPUs ^^’.
- Almost any complex modeling/simulation task: atmospheric models, fluid dynamics, stock market sims…
- Cryptography/cryptanalysis.
- Serving massive userbases, where many users can be consuming resources at once (which is how most supercomputers are actually shared).
- And so on.
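To make the contrast concrete, here's a minimal Python sketch of what these tasks have in common: the work splits into independent chunks, so extra cores help almost linearly. The `brighten` function and the fake "frame" data are invented purely for illustration.

```python
# Sketch of an "embarrassingly parallel" workload: every chunk can be
# processed independently, so adding cores scales throughput almost linearly.
# (The "image" data and the brighten() transform are illustrative only.)
from multiprocessing import Pool

def brighten(chunk):
    # One tile of a fake frame: every pixel is independent of the others.
    return [min(255, px + 20) for px in chunk]

if __name__ == "__main__":
    frame = [i % 256 for i in range(1_000_000)]                    # fake pixels
    chunks = [frame[i:i + 10_000] for i in range(0, len(frame), 10_000)]

    with Pool() as pool:                       # one worker process per CPU core
        result = pool.map(brighten, chunks)    # chunks are processed in parallel

    print(f"processed {len(result)} chunks")
```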
Once again, however, for most everyday computing – anything that isn’t one of the tasks above – single-core speed matters significantly more than core count. Pointing to hypothetical “better programming practices” of the future simply doesn’t help with the software that is in use right now.
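If you want numbers behind that claim, Amdahl's law makes it concrete. Here's a rough Python sketch; the parallel fractions below are assumptions chosen for illustration, not measurements of any real program:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that parallelizes and n is the core count.
# The values of p below are illustrative assumptions, not measurements.

def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.50, 0.95):                    # 50% vs 95% of the work parallelizes
    for cores in (2, 8, 384):
        print(f"p={p:.2f}, cores={cores:>3}: {amdahl_speedup(p, cores):.2f}x")

# With p = 0.50, even 384 cores top out just under 2x -- a CPU with twice the
# single-core speed would beat them. Only near-fully-parallel work keeps scaling.
```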
GPUs are an excellent example of MANY processing units running at slower-than-PC clock speeds: while there are several useful programming languages for running programs on graphics cards – OpenCL, CUDA, and Halide, for example – we simply don’t run our general-purpose PCs that way. First, it’s actually significantly less efficient to run a strongly linear task on a massively parallel system. Second, programming massively parallel systems is much more difficult (and thus more expensive) than programming more traditional ones.
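For a feel of what “strongly linear” means here, consider a computation where every step needs the previous step’s result. This little sketch (an arbitrary iterated function, picked only for illustration) cannot be split across cores at all:

```python
# A strongly linear task: each iteration depends on the previous result, so
# there is nothing independent to hand out to other cores -- 1 core or 384,
# the chain has to be walked one step at a time. Illustrative only.

def iterate(seed, steps):
    x = seed
    for _ in range(steps):
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64  # LCG step
    return x

print(hex(iterate(42, 1_000_000)))
```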
tl;dr? My NVIDIA RTX 2060 graphics card has WAY more processors than my CPU, yet would run Word absolutely atrociously, except possibly when importing images. Core counts mean little for most tasks.