Apple to switch Mac lineup to its own chips

For reference, Big Sur supports Macs going back to mid-2013, and Apple generally has a 7-year support window for its Mac product line. Even with yesterday’s announcements, an Intel Mac that you bought last week will almost certainly be supported with OS updates through 2027, and if they continue to sell Intel Macs in some capacity through the next 2 years of the planned transition period, they’ll likely be supporting Intel through 2029.

I haven’t yet upgraded to Catalina because I have a handful of apps that would fall afoul of the 32-bit apocalypse, and my MacBook Pro has officially aged out of software updates as of Big Sur since it’s from mid-2012. I’ll probably hang onto it for older apps and look to replace it with a new machine at some point in the near future.

PowerPC Macs used Open Firmware for their boot environment, which is Forth-based.

: state-truth ( -- )
  ." You are correct." cr
;

ok> state-truth

Yeah, Catalina kind of screwed all that up for me anyway by breaking a ton of games. :pensive: At least I can dual boot into Windows.

“Since when has Apple ever given a flying chip about backward compatibility?”
Yeah, you tell ’em. That’s why my 8 y.o. iMac, 6 y.o. Mini, 9 y.o. iPhone, and half a dozen other older Apple things all stopped working. Oh, wait.
Rosetta 2 will handle pretty much any old x86 software. Anything even faintly sensible will be a simple recompile. Really only code generators (something I make a living from) will have any problems, and most of us have been doing ARM for ages - 35 years in my case.
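To make the “simple recompile” point concrete, here is a minimal, hypothetical sketch (the file and function names are invented, and Swift is just one convenient choice) of the rare case where code even needs to know which architecture it was compiled for; everything else genuinely is just a rebuild:

// arch-check.swift - hypothetical example; most app code never needs even
// this much awareness of the CPU it was compiled for.

func describeBuildArchitecture() -> String {
    #if arch(arm64)
    return "built for arm64 (Apple silicon / ARM)"
    #elseif arch(x86_64)
    return "built for x86_64 (Intel, or running under Rosetta 2 on an ARM host)"
    #else
    return "built for some other architecture"
    #endif
}

print("This binary was \(describeBuildArchitecture()).")

Whether it’s built natively for arm64 or for x86_64 (even when that x86_64 build ends up running under Rosetta 2), the same source compiles and behaves the same way.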

That would be funny if it were actually, y’know, funny.

And then someone will do a clever hack to allow a Raspberry Pi 4 8GB to run Mac ARM stuff natively.

I hear you, my friend! As a graphic designer and fledgling app designer with occasional video projects, I need more grunt than most current-day prosumer ARM processors/devices provide. I mean, I love my 3-year-old 12.9″ iPad Pro (never stutters, never slow, only gets warm playing World of Tanks on gonzo quality). But games are already getting fewer and farther between for macOS… OTOH I played the iPad demo of Civ VI and was surprised at how well the whole thing ran (but the UI wasn’t as fun and I’m already tooled up on Mac Civ on Steam). It’ll be a rocky transition, but maybe parallel Intel/ARM Macs will be the best of both worlds? Maybe light a fire under Intel’s posterior…

Where I see this taking things to a whole new level is the iPhone becoming your Mac when you come home and plug it into your giant screen (which may have a slot for a co-processor?). Joe/Jane Shmoe’s average workload is surfing the web, email, and some light video editing or social blah blah. I noticed the dev kit did NOT have Thunderbolt 3 or USB 4 listed… but think what a monster of a product that would make: a phablet that you just plug into your 4K or 8K giant-format USB 4 screen/hub and it all just works.

Not sure wireless could compare to TB3, but some functions could be wirelessly enabled (i.e., you come home, NFC unlocks your door, BT pings your screen to wake up, connect to the phone via Wi-Fi, and start handshaking). Your iPad is a phone and Mac and pen-enabled tablet… throw in Apple TV and make it just one more HomePod node. I’m getting tingly.

When Apple switched to Intel, and to OS X, we lost a lot of great software. But the computers and the OS were so much better that it was almost worth it in a lot of cases. (The bigger loss was the change in the general aesthetic and culture of independently developed Mac software. But that kind of change is also inevitable over time regardless.) Will we see that here? Will there eventually be compatibility tools? Will we be stuck settling for a lot of ported iOS apps on Macs? Will it not matter because so much stuff will be browser/HTML-based?

You need more performance than you think the CPU in a modern iPad can provide? No problem.
Look up Graviton2 and ThunderX3. Note that Fujitsu just took the top supercomputer slot with an ARM machine.
Consider that, sure, maybe 4 cores of A12 isn’t enough, but how about 384 cores of ThunderX3? Or, perhaps more realistically, 32 cores at 3 GHz for less energy cost than a 4-core i9.
Performance is not a problem.

Core counts are pretty much useless when discussing most computing, frankly. Unless you are running VERY intense, very “atomizable” problems (atmospheric models, say), massive parallelism can only do you so much good. No, your word processor isn’t likely to run better with 384 slower cores, sadly, rather than 4 significantly faster cores; that’s not how it works. While multiple cores do have some real benefit, for most common tasks this plateaus very quickly past 2 cores.
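That plateau is essentially Amdahl’s law: if only a fraction p of a job can run in parallel, n cores can never speed it up by more than 1 / ((1 - p) + p / n). A rough sketch, with invented parallel fractions purely for illustration:

// amdahl.swift - a rough illustration of why core counts plateau.
// The parallel fractions below are made-up examples, not measurements.
import Foundation

// Amdahl's law: upper bound on speedup when a fraction p of the work
// parallelizes perfectly across n cores.
func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
    return 1.0 / ((1.0 - p) + p / n)
}

for p in [0.5, 0.9, 0.99] {
    for cores in [2.0, 4.0, 384.0] {
        let s = amdahlSpeedup(parallelFraction: p, cores: cores)
        print(String(format: "p = %.2f, %3.0f cores -> at most %.1fx", p, cores, s))
    }
}

// Even 99% parallel work tops out around 79x on 384 cores, measured against
// the single-core speed of those same (slower) cores; at 50% parallel work,
// no number of cores gets you past 2x.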

Gaming, especially, is where any given ARM SoC may or may not be sufficient; most modern games are pretty heavily dependent on single-core performance. That – along with better thermal “headroom” – is exactly why (for x86 machines) a fast 4-core i5 is quite often a much better gaming CPU than a slower i7 with 6 or more cores.

Write better software. That’s almost always the answer and almost always the approach completely ignored by the wider software industry.
Funnily enough, after 40 years of professional software & hardware engineering I’ve learnt a bit about this.

That’s a gross oversimplification. You will not be able to get much more throughput out of a word processor by making it run on 384 cores, as opposed to one or two. And yes, in fact, if those 384 cores are slower, your word processor will be slower for most tasks on the 384-core machine.

SOME tasks strongly benefit from massive parallelization, of course (see the sketch just after this list):

  • En/decoding media, especially video.
  • Displaying complex graphics. Every modern video card, whether integrated or discrete, is a massively parallel, very small “supercomputer”. Most of Humanity’s raw computing power, in fact, is in GPUs ^^’.
  • Almost any complex modeling/simulation task, whether atmospheric, fluid dynamics, stock market sims…
  • Cryptography/cryptanalysis.
  • Serving massive userbases, where many users may be hitting resources at once (as on most supercomputers).
  • And so on.
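To make “atomizable” concrete, here is a minimal, hypothetical sketch of the kind of job in that list: a workload chopped into independent chunks, with Dispatch’s concurrentPerform spreading them across whatever cores the machine has (the per-chunk “work” here is fake):

// chunks.swift - hypothetical example; the per-chunk "work" is fake.
// The point: each chunk is independent, so extra cores genuinely help here.
import Dispatch
import Foundation

let chunkCount = 64
var results = [Double](repeating: 0, count: chunkCount)
let lock = NSLock()

// concurrentPerform fans the iterations out across the available cores
// and returns only when every iteration has finished.
DispatchQueue.concurrentPerform(iterations: chunkCount) { i in
    // Stand-in for real work on one chunk (a video frame, a tile, a block of data).
    var acc = 0.0
    for x in 0..<100_000 { acc += sin(Double(x + i)) }

    lock.lock()            // serialize only the tiny shared write
    results[i] = acc
    lock.unlock()
}

print("Processed \(chunkCount) independent chunks; checksum = \(results.reduce(0, +))")

A word processor, by contrast, spends most of its time in inherently serial steps (parsing input, updating a single document model, laying out one page after the next), which is exactly why piling on cores does so little for it.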

Once again, however, for most common computing – that isn’t one of the above tasks, of course – single-core speed is significantly more important than number of cores. Referring to hypothetical “better programming practices” of the future simply is not at all useful for what is in use right now.

GPUs are an excellent example of MANY processing units running at slower-than-PC clock speeds: while there are several useful programming languages for running programs on graphics cards – OpenCL, CUDA, and Halide, for example – we simply don’t run our general-purpose PCs that way. First, it’s actually significantly less efficient to do a strongly linear task on a massively parallel system. Second, programming massively parallel systems is much more difficult (and thus more expensive) than programming more traditional systems.

tl;dr? My Nvidia RTX 2060 graphics card has WAY more processors, yet would run Word absolutely atrociously, except possibly when importing images. Core counts mean little for most tasks.

Good thing Apple’s A12Z has single-core performance that’s on par with relatively recent Intel chipsets, then. The chip going out in the dev transition units is the same one they use in the latest iPad Pro, and benchmarks for its performance in comparison to the rest of the Mac product line aren’t hard to come by. I wouldn’t be surprised if they take advantage of the larger power and thermal envelope of a laptop or desktop to really crank up what the A13 series is capable of in the Mac lineup.

Certainly! Just don’t bother going on about having a gazillion cores =). That’s mostly marketing: manufacturers have already pushed clock speeds close to the limit (it’s STILL hard to break 5 GHz with conventional cooling, although certainly not impossible), so piling on cores becomes the way to sell the “next gen” chip ^^’. Having all those cores won’t make most tasks even one tiny bit faster.

Considering their significantly better ratio of performance to watts consumed (and heat produced), I have little doubt server farms especially will go ARM pretty fast =o.

You are correct that single-core performance is king for games. I think you are misjudging the audience if you believe “most gamers” are running the fastest possible chips and buying new rigs every year, though. That is not a cheap way for anyone but enthusiasts to go.

18 months ago(!), benchmarks popped up showing that the iPad Pro’s ARM chip performed very close to the then-current MBP:

And that’s with the iPad’s much tighter thermal constraints, versus what an ARM-based MBP, Mac Mini, or iMac would allow.

The idea that “ARM is slow in single-core performance” is contradicted by the evidence, but more importantly, if these chips move into even the low-end MacBooks and the like, it is very likely that the lower-end Mac portables will compete well with midrange Windows laptops - and that gap will be a massive chasm when you compare laptops with Intel integrated graphics against the integrated graphics on Apple SoCs.

In every benchmark I’ve seen to date, yes, ARM IS slower at the same clock speeds, up until very recently at least; that’s kind of the point of the above article (which is, btw, fascinating), right? I also made no assertions of any other sort; you are arguing against something I didn’t say ^^’. What ARM is famous for is performance relative to power used and heat emitted (and therefore overall cost and efficiency). Speed is a very new thing for them.

Once again, however, drop the “384 cores” line, folks. It won’t help. My graphics card has more, again, and would run Word terribly (yes, you actually can run things that way via Linux, if you’re a committed masochist).

Edit -> Some excellent explanations on this Quora thread, following my GPU example:
https://www.quora.com/Are-you-able-to-use-a-GPU-as-a-computer-without-a-CPU

Not to mention that even with a blazing fast CPU and lots of cores, you’re still stuck with comparatively slow I/O from disk, network, and even RAM. You can improve on many of these things the more money you’re willing to spend – but only so much.

Well… there the ARM SoC often has an advantage, since much of that is on-die with the cores. Of course, both Intel and AMD started their own versions of the same thing long ago, in different ways, but not as well integrated as ARM’s, and AMD has done way better there than Intel, IMO.

But yes, you’re absolutely correct! It’s like when my buddies build supposed “gaming PCs” with a great CPU, cooler, graphics card…and crap RAM; facedesk d’oh! You can only go as fast as your worst component will let you, of course.