I’d definitely agree that that’s the plan; I’m just not sure how it will pan out. Initial reports on the native feel provided by Catalyst haven’t been terribly heartening, and on the Windows side we’ve had the opportunity to watch how UWP was supposed to seamlessly enable adaptive interfaces across all platforms. It has since crashed, burned, been watered down almost to nothing, continued to smolder, and been rebranded a few times, and it’s largely relegated to some pack-in apps that are mostly a pretty lousy fit for the desktop/laptop. And all that in a case where there is now zero momentum on the mobile side: Windows Mobile/Windows Phone bled out years ago, and UWP is still either a lousy fit or just Win32 rebranded because you distributed it through the Microsoft Store.
Maybe Apple will do better, it’s certainly possible; but making an interface feel right in two or more radically different contexts is a hard problem; and phones and iPads are definitely where the volume is, so we’ll see how much love OSX gets.
For reference, Big Sur supports Macs going back to mid-2013, and Apple generally has a 7-year support window for its Mac product line. Even with yesterday’s announcements, an Intel Mac that you bought last week will almost certainly be supported with OS updates through 2027, and if they continue to sell Intel Macs in some capacity through the next 2 years of the planned transition period, they’ll likely be supporting Intel through 2029.
I haven’t yet upgraded to Catalina because I have a handful of apps that would fall afoul of the 32-bit apocalypse, and my MacBook Pro has officially aged out of software updates as of Big Sur, since it’s from mid-2012. I’ll probably hang onto it for older apps and look to replace it with a new machine at some point in the near future.
“Since when has Apple ever given a flying chip about backward compatibility?”
Yeah, you tell ’em. That’s why my 8-year-old iMac, 6-year-old Mini, 9-year-old iPhone, and half a dozen other older Apple things no longer work. Oh, wait.
Rosetta 2 will handle pretty much any old x86. Any even faintly sensible software will be a simple recompile. Really only code generators (something I make a living from) will have any problems and most of us have been doing ARM for ages - 35 years in my case.
I hear you my friend! As a graphic designer and fledgling app designer with occasional video projects, I need more grunt than most current day ARM processors/devices for prosumers. I mean I love my 3 year old 12.9" iPad Pro (never stutters, never slow, only gets warm playing World of Tanks on gonzo quality). But games are already getting fewer and farther between for MacOS… otoh I played the iPad demo of CivVI and was surprised at how well the whole thing ran (but the UI wasn’t as fun and I’m already tooled up on mac Civ on Steam). It’ll be a rocky transition but maybe parallel Intel/ARM macs will be the best of both worlds? mb light a fire under Intel’s posterior…
Where I see this taking things to a whole new level is the iPhone becoming your Mac when you come home and plug it into your giant screen (which may have a slot for a co-processor?). Joe/Jane Shmoe’s average workload is surfing the web, email, and some light video editing or social blah blah. I noticed the dev kit did NOT have Thunderbolt 3 or USB4 listed… think what a monster of a product that would make: a phablet that you just plug into your 4K or 8K giant-format USB4 screen/hub and it all just works. Not sure wireless could compare to Thunderbolt 3, but some functions could be wirelessly enabled (i.e., you come home, NFC unlocks your door, Bluetooth pings your screen to wake up, connect to the phone via Wi-Fi, and start handshaking). Your iPad is a phone and Mac and pen-enabled tablet… throw in Apple TV and make it just one more HomePod node. I’m getting tingly.
When Apple switched to Intel, and to OSX, we lost a lot of great software. But the computers and the OS were so much better it was almost worth it in a lot of cases. (The bigger loss was the change in general aesthetic and culture of independently developed Mac software. But that kind of change is also inevitable over time regardless.) Will we see that here? Will there eventually be compatibility tools? Will we be stuck settling for a lot of ported iOS Apps on Macs? Will it not matter because so much stuff will be browser/HTML based?
You need more performance than you think the cpu in a modern iPad can provide? No problem.
Look up Graviton2 and ThunderX3. Note that Fujitsu just took the fastest computer slot with an ARM machine.
Consider that, sure, maybe 4 cores of A12 isn’t enough but how about 384 cores of ThunderX3? Or perhaps more realistically, 32 cores at 3GHz for less energy cost than a 4 core i9.
Performance is not a problem.
Core counts are pretty much useless when discussing most computing, frankly. Unless you are running VERY intense, very “atomizable” problems (atmospheric models, say), massive parallelism can only do you so much good. No, your word processor isn’t likely to run better with 384 slower cores rather than 4 significantly faster cores, sadly; that’s not how it works. While multiple cores do have some real benefit, for most common tasks this plateaus very quickly past 2 cores.
Any given ARM SoC may or may not be sufficient for gaming, especially; most modern games are pretty heavily dependent on single-core performance. That – along with better thermal “headroom” – is exactly why (for x86 machines) a fast, 4-core i5 is quite often a much better gaming CPU than a slower i7 with 6 or more cores.
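The plateau described above is basically Amdahl’s law: if only a fraction p of a task can run in parallel, n cores can speed it up by at most 1/((1 − p) + p/n). A minimal sketch of the “many slow cores vs. few fast cores” trade-off, where the parallel fraction (20%) and the clock ratio (half speed) are made-up illustrative numbers, not measurements of any real word processor or chip:

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup for a task whose parallelizable fraction is p, on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical mostly-serial workload (word-processor-like): only 20% parallelizable.
p = 0.2

# Effective throughput = per-core clock ratio x Amdahl speedup.
fast_4 = 1.0 * amdahl_speedup(p, 4)      # 4 cores at full clock
slow_384 = 0.5 * amdahl_speedup(p, 384)  # 384 cores at half clock

print(f"4 fast cores:   {fast_4:.2f}x")    # -> 1.18x
print(f"384 slow cores: {slow_384:.2f}x")  # -> 0.62x
```

With a mostly serial workload, the 384 half-speed cores come out *slower* overall: no amount of extra cores can buy back the lost single-core clock, which is exactly the “plateaus past 2 cores” effect.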
Write better software. That’s almost always the answer and almost always the approach completely ignored by the wider software industry.
Funnily enough, after 40 years of professional software & hardware engineering I’ve learnt a bit about this.
That’s a gross oversimplification. You will not be able to get much more throughput out of a word processor by making it run on 384 cores, as opposed to one or two. And yes, in fact, if those 384 cores are slower, your word processor will be slower for most tasks on the 384-core machine.
SOME tasks strongly benefit from massive parallelization, of course:
En/decoding media, especially video.
Displaying complex graphics. Every modern video card, whether integrated or discrete, is a massively parallel, very small “supercomputer”. Most of Humanity’s raw computing power, in fact, is in GPUs ^^’.
Almost any complex modeling/simulation task, whether atmospheric, fluid dynamics, stock market sims…
Massive userbases, of which many can be using resources at once (like most supercomputers).
And so on.
Once again, however, for most common computing – anything that isn’t one of the above tasks, of course – single-core speed is significantly more important than the number of cores. Pointing to hypothetical “better programming practices” of the future simply isn’t useful for the software that’s in use right now.
As an excellent example of MANY processing units running at slower-than-PC clock speeds, while there are several useful programming languages for running programs on graphics cards – OpenCL, CUDA, and Halide, for example – we simply don’t run our general-purpose PCs that way. First, it’s actually significantly less efficient to do a strongly linear task on a massively parallel system. Second, programming massively parallel systems is much more difficult (and thus more expensive) than programming more traditional systems.
tl;dr? My nVidia RTX 2060 graphics card has WAY more processors, yet would run Word absolutely atrociously, except possibly when importing images. Core counts mean little for most tasks.
Good thing Apple’s A12Z has single-core performance that’s on par with relatively recent Intel chips, then. The chip going out in the dev transition units is the same one they use in the latest iPad Pro, and benchmarks comparing its performance to the rest of the Mac product line aren’t hard to come by. I wouldn’t be surprised if they take advantage of the larger power and thermal envelope of a laptop or desktop to really crank up what the A13 series is capable of in the Mac lineup.
Certainly! Just don’t bother going on about having a gazillion cores =). That’s mostly marketing: manufacturers are already pushing clock speeds to the absolute limit – it’s STILL hard to break 5 GHz with conventional cooling, although certainly not impossible – so piling on cores becomes the way to sell the “next gen” chip ^^’. Having all those cores won’t make most tasks even one tiny bit faster.
Considering their significantly better performance per watt (and lower heat output), I hardly doubt that server farms, especially, will go ARM pretty fast =o.