Virtually every modern computer is vulnerable to a pair of devastating attacks; only one of them has a fix, and it sucks

Originally published at:


Should have never sold my Amiga


Nothing available off the shelf in a store is ever “modern”; it is all going to be years-old tech at best.

Of course, if you were analyzing the traffic and packets on your network, you would probably already know if your systems were exfiltrating data. And if you don’t, there is hardly any point in simply worrying about it. The first question has to be: “What can I do that will have some practical effect?” Note that what you can do might be quite distinct from what you are willing to do.


Even for someone with the know-how to do this, how do you continuously keep an eye on it? One can’t run Wireshark all day long. Is there some toolkit you’re referring to that I should be running on my home network but am not aware of?


Maybe it’s too much sci-fi or futuristic dread, but when something big like this breaks, all I can think is that these “bugs” have ceased to be useful to those who benefited from them and that it is time to get one bit of final leverage by using them to convince unwitting folks to upgrade to something new or different. That of course, brings lots of profit (gov’t sales, civilian sales, peripheral industries) but it also guarantees successful adoption of new technology with newer, less visible, more exploitable flaws/backdoors, and the cycle begins anew.

At least that’s how I’d write it, if I were an author.


As someone who tends to use old computers long after most geeks deem them to be woefully obsolete, I was wondering if this is something that I should be worried about. Finding the answer to that was frustrating as very few sites carrying the story were interested in explicating what “most Intel CPUs” meant.

The answer appears to be that the Meltdown vulnerability affects every Intel CPU that has out-of-order execution. If I understand it correctly, that excludes early Atom processors, but otherwise includes all Intel CPUs since the ’90s.

ETA: Aside from early Atoms, all Intel CPUs since the Pentium Pro. Thanks @jerwin
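For anyone wondering what “reading memory” actually looks like here: both attacks end with a flush+reload step, where the secret is encoded as *which* cache line got touched during speculation, and then recovered by timing probe reads. Below is a toy Python simulation of just that side channel — a dict-backed fake cache and made-up fast/slow timings — so it illustrates the mechanism only; it is not a working exploit, and all names in it are invented for the sketch.

```python
# Toy model of the cache side channel at the heart of Meltdown/Spectre.
# This is a simulation, not an exploit: a Python set stands in for the
# CPU cache, and "timing" is just a fast/slow constant.

FAST, SLOW = 1, 100  # pretend cycle counts for cached vs. uncached loads

class ToyCache:
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def load(self, addr):
        # Return an access time, then cache the line, like a real CPU would.
        t = FAST if addr in self.lines else SLOW
        self.lines.add(addr)
        return t

def victim_speculative_access(cache, secret_byte, probe_base=0x1000):
    # The faulting/speculative load never returns data to the attacker,
    # but its side effect -- one probe line pulled into the cache -- survives.
    cache.load(probe_base + secret_byte * 64)  # one 64-byte line per value

def recover_byte(cache, probe_base=0x1000):
    # Flush+reload: time every probe line; the fast one encodes the secret.
    times = {b: cache.load(probe_base + b * 64) for b in range(256)}
    return min(times, key=times.get)

cache = ToyCache()
cache.flush()
victim_speculative_access(cache, secret_byte=0x42)
print(hex(recover_byte(cache)))  # the "secret" leaks via timing: 0x42
```

The real attacks do exactly this shape of thing, except the “fast/slow” difference is genuine memory latency measured with a high-resolution timer.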


I bet there are some happy campers among the ‘cloud’ operators right about now.

Anything that offers a solid chance of breaking out of a hypervisor, or even nibbling a few bits from another guest VM, is too dangerous to even think of tolerating in the context of a shared environment chewing on minimally trusted workloads for anybody; but given the cost-sensitivity of ‘the cloud’ and the performance-sensitivity of basically every database-backed application ever (possibly excepting the ones using a tiny dab of SQLite because they think flat files are gross), eating those sorts of performance hits is going to hurt, a lot.

I suppose their only slight bright spot is that all their competitors’ days are going to hell just as quickly as theirs; so the relative difference shouldn’t be as extreme; but anyone with the clout to do so is probably having a little chat with their rep right now.


Well, it was good while it lasted.

← from here, if you want more than just a meme


If you mean that these started as DOD back doors… Well, I don’t think you’d be utterly insane for thinking that. We’ll probably never know. Yay, The Future!


What I want to know is how the vulnerability is exploited. Through the traditional malware route, which you should be protecting yourself from anyway? How do they exfiltrate this sensitive data through means other than the traditional virus/malware route? My limited understanding is that data centres and the like are most at risk, but the general home user not so much. Still, what a clusterfuck.


So I guess those ‘Intel inside’ stickers have lost their charm?


In the case of Spectre, one of the proofs-of-concept was implemented in JavaScript that one merely had to run in a browser in order to be affected, which puts the bar at “basically just visit a web site”.

As for exfiltration, the attack doesn’t provide any new mechanisms for that, so programs exploiting it would use the same techniques as before (though given that things like, again, JavaScript chattering with its mothership via XHR are enormously common and widely used legitimately, there are plenty of options that all but the paranoid aren’t blocking).

VM hosts and servers holding delicious private keys are obviously more worth the trouble, being more valuable; but the attack lets levels of access often considered fairly benign read whatever they fancy, so few are safe.


In the case of any shared service (e.g. any system with multiple users logged in, or any ‘cloud’ system where multiple virtual servers run on the same physical server), any one user is vulnerable to any other user on a given server.

For home users… from the Spectre paper:

Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

So, in short: you are vulnerable if you use any service that runs on a shared server, and you are vulnerable if you visit web pages on your laptop.


This is looking bad. Without a hardware fix, there will be significant slowdowns in all applications whenever there is a switch from user space to kernel space. Here is a good, technical description of what is probably happening. I say probably because it is still early days and many of the details haven’t been released yet.
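To make the “slowdown on every user/kernel switch” concrete: the software fix (the KAISER/KPTI patches) unmaps kernel memory while user code runs, which adds page-table swaps (and often TLB flushes) to every transition. A rough way to see why syscall-heavy workloads are the ones that hurt is to time a trivial syscall against a pure userspace call. A minimal sketch — the absolute numbers will vary wildly by machine, kernel, and whether the patch is applied:

```python
# Rough microbenchmark of user->kernel transition cost. KPTI adds a
# page-table swap (and possibly a TLB flush) to every such transition,
# which is why syscall-heavy workloads see the biggest slowdown.
import os
import time

def bench(fn, n=200_000):
    start = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - start) / n  # average ns per call

def userspace_noop():
    pass

# os.getpid enters the kernel on each call (modern glibc no longer
# caches getpid); the no-op never leaves user space.
syscall_ns = bench(os.getpid)
userspace_ns = bench(userspace_noop)

print(f"syscall:   {syscall_ns:8.1f} ns/call")
print(f"userspace: {userspace_ns:8.1f} ns/call")
```

Run it before and after enabling the patch and the gap between the two lines is, roughly, the tax every kernel crossing now pays.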


What a total cluster. I can’t wait to see the fun that unfolds when certain types of customers now need to backfill a fifth of their current compute costs. Uuuuugh.


It will go poorly; but it could be quite interesting to watch, if details are available:

Outfits with significant on-premises activity will presumably be slightly less inclined to change that (as well as interested, probably up to and beyond the bounds of good sense, in pretending that some of their more computationally expensive activities are, um, totally built on trusted software munging inputs in safe ways, so it’s OK to stay as-is).

Vendors of ‘cloud’, both the VM-for-hire flavor and the various more elaborate constructs on top of shared computers, will presumably have to adjust their prices (or make the product worse to fit the same price). This won’t be fun; but it should provide some insight into how much of the cost of the various services is based on CPU load vs. all the various other expenses, as well as (at least among the publicly traded ones) some insight into who is most willing to suffer in the hopes of providing stability and not spooking their customers, and who is already selling about as cheaply as they can afford to, and must change.

Should also be interesting to see if it leads to a burst of startups either growing up or bleeding out: for the less well financed, a nontrivial chunk of their burn rate is probably tied directly to how much AWS charges to keep their precious app’s backend functioning; and they probably weren’t expecting Moore’s law to get sucker-punched so dramatically on that particular expense.


Yeah, that kind of raised my eyebrows. How can unprivileged JIT JavaScript code have access to cache contents?


I’m far from an expert; but my understanding from reading the piece (pp. 6-7) is this: because high-performance JavaScript is serious business these days, the x86 code produced by the targeted JIT JavaScript engine is predictable (though they don’t say whether this predictability was merely useful during development, or whether somewhat different exploit JavaScript would be required to target different browsers). And because they were able to construct a high-resolution timer, despite attempts to blunt timing attacks by nerfing the default timing functions, they could use the same technique of inferring values in the cache by testing exactly how quickly those values could be recovered after manipulating the CPU into incorrect speculative branching.

I would, naively, assume that this attack would fail against JavaScript implementations that are too slow or erratic to provide a high-resolution (relative to the rest of the system) timer; but that if the necessary timer resolution is available, there isn’t any fundamental difference between the technique used by unprivileged native code to infer cache contents and the technique used by the JavaScript implementation.
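For what it’s worth, the timer trick in the paper is simple in outline: if the clock you are given is too coarse, run a tight counting loop and use the count between coarse ticks as your fine-grained clock (the browser PoC did this with a Web Worker incrementing a shared counter). Here is a toy Python sketch of the same idea — the coarsened clock and all the names are invented for illustration, standing in for a nerfed performance.now():

```python
# Sketch of rebuilding a fine-grained timer from a coarse clock, the
# trick the Spectre authors used in browsers after performance.now()
# was nerfed. Instead of a shared-memory Web Worker counter, we count
# loop iterations between ticks of a deliberately coarsened clock.
import time

COARSE_MS = 5  # pretend the only clock we have ticks every 5 ms

def coarse_clock():
    # A clock degraded to COARSE_MS resolution, like a clamped timer API.
    return (time.perf_counter_ns() // (COARSE_MS * 1_000_000)) * COARSE_MS

def calibrate(ticks=3):
    # Count how many loop iterations fit between coarse ticks; the count
    # itself becomes a timer with much finer effective resolution.
    counts = []
    for _ in range(ticks):
        start = coarse_clock()
        while coarse_clock() == start:   # wait for a tick edge
            pass
        edge, n = coarse_clock(), 0
        while coarse_clock() == edge:    # count iterations until next edge
            n += 1
        counts.append(n)
    return sum(counts) // len(counts)    # ~iterations per COARSE_MS

iters_per_tick = calibrate()
print(f"~{iters_per_tick} loop iterations per {COARSE_MS} ms tick")
print(f"effective resolution ~{COARSE_MS * 1e6 / max(iters_per_tick, 1):.0f} ns")
```

Which is why simply rounding off the timer APIs, as the browser vendors initially did, is mitigation rather than cure: any steadily incrementing counter the page can read rebuilds the resolution.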


Well, I am mildly glad I am not currently employed and thus not having to deal with the work-related IT freakout over this right now.