Apple to switch Mac lineup to its own chips

That’s a gross oversimplification. You will not be able to get much more throughput out of a word processor by making it run on 384 cores, as opposed to one or two. And yes, in fact, if those 384 cores are slower, your word processor will be slower for most tasks on the 384-core machine.

SOME tasks strongly benefit from massive parallelization, of course:

  • En/decoding media, especially video.
  • Displaying complex graphics. Every modern video card, whether integrated or discrete, is a massively parallel, very small “supercomputer”. Most of Humanity’s raw computing power, in fact, is in GPUs ^^’.
  • Almost any complex modeling/simulation task: atmospheric models, fluid dynamics, stock market sims…
  • Cryptography/cryptanalysis.
  • Massive userbases, where many users can be hitting resources at once (as on most supercomputers).
  • And so on.

Once again, however, for most common computing – anything that isn’t one of the tasks above, of course – single-core speed is significantly more important than core count. Pointing to hypothetical “better programming practices” of the future simply isn’t useful for the software in use right now.
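To put rough numbers on that, here’s a minimal Amdahl’s-law sketch in Python – the parallel fractions and core counts are illustrative assumptions, not measurements of any real workload:

```python
# Minimal Amdahl's-law sketch: overall speedup from n cores when only a
# fraction p of the work can actually run in parallel.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with n cores if fraction p of the task parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.10, 0.50, 0.95):   # assumed parallelizable fraction
    for n in (2, 8, 384):      # core counts
        print(f"p={p:.2f}, cores={n:3d} -> {amdahl_speedup(p, n):6.2f}x")
```

A workload that’s only 10% parallel tops out around 1.1x no matter how many of those 384 cores you throw at it; the 95%-parallel case (think video encoding from the list above) is where big core counts actually pay off.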

GPUs are an excellent example of MANY processing units running at slower-than-PC clock speeds: while there are several useful programming languages for writing general programs on graphics cards – OpenCL, CUDA, and Halide, for example – we simply don’t run our general-purpose PCs that way. First, it’s actually significantly less efficient to run a strongly linear task on a massively parallel system. Second, programming massively parallel systems is much more difficult (and thus more expensive) than programming more traditional ones.
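Here’s a toy contrast of those two worlds in plain Python – a sketch, not a benchmark, and the function names and iteration counts are made up for the example:

```python
# A dependent chain is inherently serial: each step needs the previous
# result, so extra cores cannot help. Independent chains, by contrast,
# can be farmed out to worker processes.
from multiprocessing import Pool

def dependent_chain(x: int, steps: int) -> int:
    for _ in range(steps):
        x = (x * 31 + 7) % 1_000_003   # each step depends on the last x
    return x

def independent_chain(seed: int) -> int:
    # Chains with different seeds share nothing, so they parallelize freely.
    return dependent_chain(seed, 100_000)

if __name__ == "__main__":
    print(dependent_chain(1, 800_000))             # serial: one long chain
    with Pool() as pool:                           # parallel: 8 separate chains
        print(pool.map(independent_chain, range(8)))
```

No number of cores speeds up the first call, while the second scales more or less with however many workers the Pool gets.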

tl;dr? My Nvidia RTX 2060 graphics card has WAY more processors, yet would run Word absolutely atrociously, except possibly when importing images. Core counts mean little for most tasks.

Good thing Apple’s A12Z has single-core performance that’s on par with relatively recent Intel chips, then. The chip going out in the dev transition units is the same one they use in the latest iPad Pro, and benchmarks comparing its performance to the rest of the Mac product line aren’t hard to come by. I wouldn’t be surprised if they take advantage of the larger power and thermal envelope of a laptop or desktop to really crank up what the A13 series is capable of in the Mac lineup.


Certainly! Just don’t bother going on about having a gazillion cores =). That’s mostly marketing: manufacturers push clock speeds to the absolute limit, but it’s STILL hard to break 5 GHz with conventional cooling (certainly not impossible, just hard), so piling on cores becomes the way to sell the “next gen” chip ^^’. Having all those cores won’t make most tasks even one tiny bit faster.

Considering their significantly better ratio of performance to watts consumed (and heat emitted), I have little doubt server farms will go ARM pretty fast =o.

You are correct that single-core performance is king for games. I think you are misjudging the audience if you believe “most gamers” are running the fastest possible chips and buying new rigs every year, though. That is not a cheap way to go for anyone but enthusiasts.

18 months ago(!), benchmarks popped up showing that the iPad Pro’s ARM chip performed very close to the then-current MBP:

And that’s taking into account the very different thermal constraints of an iPad versus, say, an ARM-based MBP, Mac mini, or iMac.

The idea that “ARM is slow in single-core performance” doesn’t hold up against the evidence. More importantly, if these chips reach even the low-end MacBooks and the like, it is very likely that the lower-end Mac portables will compete well with midrange Windows laptops – and that gap becomes a massive chasm when you compare laptops with Intel integrated graphics against the integrated graphics on Apple’s SoCs.


In every benchmark I’ve seen to date, yes, ARM IS slower at the same clock speeds – until very recently, at least; that’s kind of the point of the article above (which is, btw, fascinating), right? I also made no assertions of any other sort; you are arguing something I didn’t say ^^’. What ARM is famous for is performance relative to power used and heat emitted (and therefore, overall cost and efficiency). Raw speed is a very new thing for them.

Once again, however, drop the “384 cores” line, folks. It won’t help. My graphics card has more, again, and would run Word terribly (yes, you actually can run Word via Linux, if you’re a committed masochist).

Edit -> Some excellent explanations on this Quora thread, following my GPU example:
https://www.quora.com/Are-you-able-to-use-a-GPU-as-a-computer-without-a-CPU

Not to mention that even with a blazing fast CPU and lots of cores, you’re still stuck with comparatively slow I/O from disk, network, and even RAM. You can improve on many of these things the more money you’re willing to spend – but only so much.
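A quick back-of-envelope sketch of that, with rough, assumed ballpark throughputs (not real specs for any particular machine):

```python
# End-to-end time for pushing 1 GB through a simple pipeline is set by
# the slowest stage. The throughput figures below are assumptions.

GB = 1.0  # data moved through the pipeline, in gigabytes

stages = {                  # assumed throughput per stage, in GB/s
    "SATA SSD read":   0.5,
    "RAM copy":       20.0,
    "CPU processing":  5.0,
}

total = sum(GB / rate for rate in stages.values())
for name, rate in stages.items():
    seconds = GB / rate
    print(f"{name:15s} {seconds * 1000:7.1f} ms  ({seconds / total:5.1%})")
# Doubling the CPU's speed only shrinks its ~9% slice; the disk dominates.
```

With those numbers, the disk accounts for roughly 89% of the total – which is exactly why throwing a faster CPU (or more cores) at it barely moves the needle.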


Well… there the ARM SoC often has an advantage, considering much of that is on-die with the cores. Of course, both Intel and AMD started their own versions of the same thing long ago, in different ways, but neither is as well integrated as ARM’s – although AMD has done way better there than Intel, imo.

But yes, you’re absolutely correct! It’s like when my buddies build supposed “gaming PCs” with a great CPU, cooler, graphics card…and crap RAM; facedesk d’oh! You can only go as fast as your worst component will let you, of course.

Everything will be just fine:
