Apple's fastest new MacBook Pro is slowed down by heat

…a laptop that can’t maintain it is own base clock speed…

What?

I keep hoping that the company will get it is second (third?) wind…

Huh?

They (and other chip makers) aren’t just pushing Moore’s Law at this point. They are practically choking it into oblivion as they reach the limitations of how small you can get with today’s silicon technology. It’s pretty astounding to think about how chip designers now need to worry about things like quantum tunneling.

It’s more that they’re desperately chasing it. Performance fell off the curve a long time ago. And in terms of transistor gate size, the claimed process names haven’t matched the actual sizes or followed Moore’s law for some time. Neither has transistor density.
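
To put rough numbers on how demanding that curve is, here’s a minimal back-of-envelope sketch in Python. The starting density is an illustrative round number I picked, not any vendor’s figure:

```python
# Toy Moore's-law arithmetic: density doubles every ~24 months.
# The 35 MTr/mm^2 starting point is an illustrative round number,
# not a real vendor figure.

def moores_law_density(start_mtr_per_mm2, years, doubling_years=2.0):
    """Transistor density after `years` of strict doubling."""
    return start_mtr_per_mm2 * 2 ** (years / doubling_years)

for years in (2, 4, 6):
    print(f"after {years} yrs: {moores_law_density(35, years):.0f} MTr/mm^2")
# -> 70, 140, 280: a cadence the real node names stopped tracking long ago.
```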

But the fact remains Intel’s 10nm process has been delayed 3 or 4 times. And their current one (14nm, whatever pluses they tack onto it) is fully mature. You aren’t going to see the sort of improvements in thermals and yields/binning that come with refining a node on these processors. Next year’s production run of that i9 will be the same as this year’s, because the node has gone as far as it can, and Intel has been looking to replace it for 4+ years. For an improvement in thermals Apple needs to do something: either alter the chassis/cooling, or promptly revise the model when Intel releases a new CPU that generates less heat.

Meanwhile AMD skipped the 10nm-ish stepping altogether. They’re already sampling 7nm parts, will purportedly ship 7nm productivity GPUs this year, and are expected to ship 7nm consumer GPUs and CPUs before Intel does.

It’s not a pushing-the-envelope problem. Intel planned everything around a production process that’s been problematic to execute in practice. It was already supposed to be out, and they’re openly holding back their next major architecture because it relies on 10nm, and 10nm isn’t ready to go. Rumors about yields and quality (like thermals and potential clocks) off the node aren’t good. While they figure it out they’re a bit stuck putting out slight variations of what they already have.


i held out as long as i could on this f’n sucky 2016 hoping the new macbook pro 2018 would be 32gb and at minimum 4 cores as i need both desperately.

i was pretty happy with the release despite the ridiculously high price, until i found out that neither chip, the i7 nor the i9, can even maintain its base speed, and the i9 is especially bad. linus tech tips showed it overheating immediately under heavy load and after 1 min of moderate load. smdh.
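
if you want to sanity-check a machine yourself, here’s a rough python sketch (assumes `psutil` is installed; on macos psutil doesn’t report live clocks, so there you’d eyeball `sudo powermetrics` instead):

```python
# Load every core, then watch the reported clock. Sustained readings far
# below the rated ceiling under full load suggest thermal throttling.
import multiprocessing
import time

import psutil  # pip install psutil

def burn():
    """Busy-loop forever to keep one core pinned."""
    while True:
        pass

if __name__ == "__main__":
    for _ in range(multiprocessing.cpu_count()):
        multiprocessing.Process(target=burn, daemon=True).start()

    ceiling = psutil.cpu_freq().max  # rated max that psutil knows about
    for _ in range(30):              # sample for ~30 seconds
        now = psutil.cpu_freq().current
        note = "  <-- way down" if now < 0.8 * ceiling else ""
        print(f"{now:7.0f} MHz of {ceiling:.0f}{note}")
        time.sleep(1)
```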

i have so many years sunk into apple, but f’n’a, they are making it very hard to stay on apple…
the unix stack argument is no longer an issue, as on windows you can now install multiple linux distros side by side on the windows subsystem for linux.

apple also no longer makes the sexiest hardware or even close. they don’t support touch or stylus on their computer screens, the dongles, etc.

i’m all but forced into switching by the company itself. oh apple… i actually need this stuff for work, damn you.

i hear you… :beer:


amd has had some really innovative ideas that rocked intel in recent years. it sent intel scrambling, and intel’s old stuff can’t be pushed this hard while the new stuff isn’t ready.

Apple has already announced they are moving away from intel to their own chips, and osx has been in a holding pattern for some time with no touch or stylus support, so it looks like they’re going to produce a desktop operating system based on their own chips, similar to the ipad pro.

amd has some interesting things coming up in the low-power, low-heat but high-performance mobile chip area with the move to 7nm; it will be interesting to see how they turn out.

I actually think that’s a pretty bad idea. Apple doesn’t have access to an x86 license, and they abandoned PowerPC for a reason. They’re apparently working on ARM-based chips. A BUNCH of people have been attempting to push ARM as a full replacement for x86, as performance chips, desktop chips, etc. Including AMD. And Intel attempted to move in that direction with their Atom processors, which were meant to take on ARM in smartphones and tablets. And go… somewhere? From there.

It just signals a bigger move away from performance/productivity from Apple. Unless they can make that instruction set do something it’s completely unsuited for and was never meant for. Something no one else has really been able to make it do.

Production process isn’t really a big part of that. AMD doesn’t develop their own nodes. They’re fabless, though still associated with GlobalFoundries. They can, to a large extent, contract work onto whatever process is best in the non-Intel fabs. But Intel’s current process is a lot better than the ones AMD has been using, although since both have been around for a while, they’re both pretty well improved from where they were years ago. Currently the process AMD uses for its GPUs and Zen chips is bigger than Intel’s, and less able to push high clocks. But it supposedly has much higher yields: more functional chips, and more chips at high binning. Which significantly lowers production costs.
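
The yield/cost link is easy to sketch with the textbook Poisson die-yield model, Y = e^(−A·D). A minimal sketch; both defect densities are invented for illustration, since real foundry numbers are closely guarded:

```python
# Poisson die-yield model: Y = exp(-A * D), with A = die area in cm^2
# and D = defects per cm^2. Both D values are made up for illustration.
import math

def die_yield(area_cm2, defects_per_cm2):
    """Fraction of dies that land with zero defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

area = 1.5  # a biggish ~150 mm^2 die
for label, d in [("mature node", 0.1), ("struggling new node", 1.0)]:
    print(f"{label}: {die_yield(area, d):.0%} good dies")
# mature: ~86% good dies vs struggling: ~22%. That spread is the entire
# cost argument for riding a well-refined process.
```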

The 7nm processes AMD will be using from GlobalFoundries and TSMC apparently were both developed off a framework figured out with IBM. Sort of an industry-wide “let’s figure this shit out” initiative. All indications are that it’s ready to go, and it’s a shockingly productive process for something brand new.

Intel’s just apparently been spinning its wheels on 10nm. There are lots of rumors about what exactly is wrong, but nothing seems to indicate it’s going to launch in anything like a timely fashion.


Bring back the 17" macbook pro! More space for thermal performance. And you can use a 4k screen.

48 cores on a chip, though Apple would likely come out with its own high-core-count chips.

but the idea seems to be dying on the vine.

This.

Which includes everyone who does serious computation. Windows is for running PowerPoint and Outlook, not for doing serious number crunching. Even for that stuff it’s pretty iffy, as I’ve found it requires fairly regular restarts. With a real OS, you only have to restart when you apply an update.


You mean either GlobalFoundries or TSMC; AMD does not make chips themselves.

All the chip makers except Intel have stopped naming their process nodes after any actual metrics and instead their marketing departments pick a smaller number than the last one they used and claim that the newest optimization is that. Right now they may claim to be on 7nm, but in actual fact it’s probably closer to 14nm, more or less the same as Intel is on now. (IIRC, what they called 12 or 14nm was actually a highly optimized version of 20nm)

And yes, Intel has been having endless headaches with their 10nm node, and they very stupidly held back their new and improved designs until 10nm was ready, so they’ve been treading water for the past few years.


Apple has announced nothing of the sort. There are enough rumours swirling that they are going to do so to suggest that it might be happening, but the company itself has said nothing yet.

As a general rule, Apple doesn’t announce diddly squat until they are a couple months away from shipping it to customers.


Everyone who owns or considers buying a MacBook should check out one name on YouTube: Louis Rossmann, a hardware technician who repairs Apple products all day long and shares his experience on video.

In short: all MacBooks have always had serious heating problems, and summer kills more MacBooks than anything else. Apple doesn’t really seem to care about changing the design of their perpetually overheating machines, even though design, rather than good performance, is what they are obsessed with.
I’ve owned many MacBooks myself and all of them had heating issues.


I had one of the 2011-2012 models. This was when they changed over to using lead-free solder, which got brittle on the early boards if they overheated. The GPU had no temperature sensor on it, so this could easily happen if you thrash the graphics. If you find your display disappears or goes blue or something else mad, the solution is to take out the motherboard and stick it in the oven for 15 minutes at 200 °C. This can fix the solder joints without actually melting them, and can give you a few more hours on the machine to grab everything you need onto another drive.

This may sound mad, but it worked for me. The machine is not a lot of use in that state anyhow, so you might as well give it a go. You can do it more than once but the plastic fittings such as the ribbon connectors will get brittle, and eventually crumble…


To a substantial degree Intel has been able to do so; but the improvement in what you can get in thermally constrained options has been accompanied by an improvement in what you can get if you can handle a bit more cooling.

No amount of improvement in efficiency is likely to create an inversion where lower TDP parts are also top of the range (barring brief periods where the fancy new process is only available to a few select parts and all the others are using last-gen); that will be occupied by parts of similar design cranked up just a bit higher.
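
That’s just the dynamic-power arithmetic: switching power scales roughly as P ∝ C·V²·f, and the top bin always spends new process headroom on more voltage and clocks. A toy sketch with made-up, normalized numbers:

```python
# Toy dynamic-power scaling: P ~ C * V^2 * f. All values are normalized
# and invented purely for illustration.
def dyn_power(c, v, f):
    return c * v * v * f

baseline = dyn_power(c=1.0, v=1.0, f=1.0)

# A shrink might cut switched capacitance ~30% at the same V and f...
print(f"same clocks: {dyn_power(0.7, 1.0, 1.0) / baseline:.0%} of old power")
# ...but the flagship spends that headroom on volts and clocks:
print(f"top bin:     {dyn_power(0.7, 1.1, 1.3) / baseline:.0%} of old power")
# ~70% vs ~110%: efficiency improves, yet 'best' still runs hotter.
```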

In this case Intel recently bumped what ‘best’ implies, bringing out the i9-flavor parts in response to AMD’s recent improvements; and they are generally more demanding than i7s. Shoving one into a chassis designed to just hug the curve of what you can get away with in i7s only works because their throttling support is also pretty good these days (compared to the traditional ‘good overtemperature protection means it will probably turn back on after it shuts down hard, bad overtemperature protection means that it won’t set anything on fire as it dies’ practice).
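
For the curious, that throttling is visible from software too; on x86 Linux the kernel keeps per-core throttle-event counters in sysfs. A minimal sketch (Linux-only, standard sysfs path):

```python
# Print each core's thermal throttle counter (events since boot).
# Standard x86 Linux sysfs location; absent on macOS/Windows.
from pathlib import Path

for core in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    counter = core / "thermal_throttle" / "core_throttle_count"
    if counter.exists():
        print(f"{core.name}: throttled {counter.read_text().strip()} times")
```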


Cavium (now part of Marvell) has been doing that sort of thing for a while; but in somewhat lower profile areas. Poke a NAS that isn’t total trash but doesn’t advertise its x86-itude and you might find them there, along with some networking applications.

What has always sort of puzzled me about that niche is how there are a few players, either niche or vaporware (Cavium is low profile; Qualcomm’s effort appears necrotic, possibly for the best, imagining the collision of their theory of driver ‘support’ with real computers; AMD doesn’t exactly talk about the A-series Opterons now that they have x86 options that don’t suck again); but there seems to be very little going up against, say, Avoton parts (essentially Atoms with more cores and ECC support, for $$$), which are the ones likely to be within range of the cores you can just license off ARM right now.

You can get all the smartphone and application processors you want, as long as you don’t mind that nothing is designed for more than a modestly chunky tablet and that the BSPs are almost certainly horrible; but there’s not much of a “we bolted 8 A73s to a real memory controller and a PCIe root, and that’s pretty much it, go forth and be merry” option.

I don’t know if the phone market is just that much bigger, or if anyone thinking about that niche knows Avoton could get some hefty price cuts for just long enough to make them go away, or what.

Which I already pointed out in another comment. IIRC all three foundries AMD works with (TSMC, GloFo, and Samsung) developed “10nm” nodes and had issues with them. AMD opted to skip them and design around the 7nm process currently in the works. So those fabs are making “10nm” products; they’re just not being used for standard processors and GPUs.

Intel’s nodes, meanwhile, tend to be closer to what they’re named for; they’ve basically redefined the metric so they can claim their numbers are “real”. So the whole thing gets pretty confusing when comparing the two, because you’ve got processes that are equivalent in size, but also nodes that are equivalent in the order of stepping.

Intel’s 10nm is an intermediate node because it sits between the industry-standard 14nm and 7nm steps, whatever they call them and whatever the actual sizes are. All the other foundries effectively skipped that step (they’re using it, but not as their main thing). The TSMC/GloFo 7nm is smaller than a 10nm-class node but bigger than a true 7nm, and will be smaller than Intel’s 10nm. So it’s weird: the 7nm we’re hearing about is closest in size to Intel’s 10, but one step further along in the order of steppings.
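
Lining up the rough published density estimates makes the name-vs-size mess visible. These are ballpark figures from memory, via third-party analyses like WikiChip’s; treat every number as approximate:

```python
# Very approximate logic transistor densities in MTr/mm^2, from memory
# of third-party analyses; illustrative ballpark only.
approx_density = {
    "Intel 14nm":  37.5,
    "TSMC 16nm":   28.9,
    "TSMC 10nm":   52.5,
    "Intel 10nm": 100.8,
    "TSMC 7nm":    91.2,
}
for node, mtr in sorted(approx_density.items(), key=lambda kv: kv[1]):
    print(f"{node:>11}: ~{mtr:5.1f} MTr/mm^2")
# "TSMC 7nm" lands in the same density class as "Intel 10nm":
# the marketing names aren't measuring the same thing.
```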

The situation has led me to believe there’s something off about that particular intermediate stepping. Regardless of relative size, they aren’t the only ones to have had issues with that step. Why is a bit beyond my interest; I’ve just been trying to get back up on hardware so I can rebuild my box next year.

Between “thinner and lighter,” and “our last desktop was a trashcan,” Apple doesn’t seem to have been taking these kinds of things into consideration as of late.


I don’t think they care much. They’re making plenty of money off average users who just want a functional computer from a recognizable brand. And the hogs that will buy whatever top product every time it’s released because apple.

And that’s fine. It’s a decent, reliable way to make money. Who cares if they’re the “best”, as long as the products are actually good, worth it, and do what they say on the box.

But the quality issues they’ve had and their lack of attention to actual performance are going to erode the brand identity that it all functions on in the long run. They’ve already been going through a loooooong cycle of alienating professional and creative markets. The sort of reputation they have can be hard to break, though.

Yeah, that seems to be what gets companies excited about ARM. Low thermals and power draw, and it can apparently handle a nutty number of cores at reasonable clocks. 48 isn’t even as high as I’ve heard; I think there was a 128-core one in development somewhere.

That, and the fact that it’s the only widely distributed instruction set that you can get access to. You aren’t getting on x86 without buying either Intel or AMD. And Intel’s a total no-go for that. It’s just not happening.

But apparently those cores aren’t really too capable, and ARM doesn’t scale well outside of mobile applications. So you get this idea that x86 is done and we can beat it with ARM; someone perpetually seems to be chasing that, especially for servers, and then they pull out of ARM entirely. It’s interesting, but it never seems to go anywhere, and every time x86 is dead someone (usually AMD) pulls something out of their ass to reinvigorate it.

the paranoid interpretation is that apple designs its chips to score highly on geekbench.