Originally published at: https://boingboing.net/2019/10/14/attiny85.html
Proof-of-concept supply-chain poisoning: tiny, undetectable hardware alterations could compromise corporate IT
To this day, this is the kind of reporting that makes BB one of the most important and relevant tech zines. Without the vigilance of Doctorow and all of the BB crew, the interwebz would be a poorer and more dangerous place.
I find Elkins’ logic dismaying and straightforward: if he can do it with basic stuff, then it’s likely someone has already been doing it. I feel like this is a reverse Chekhov’s gun – it already went off in the third act, but now someone had to go back and show that it was there (but not easy to see) way back in the first act.
Completely off topic, but that would be a really interesting twist on Chekhov’s gun. The gun goes off in the middle act, and the third act is all about proving the gun really was there in the first.
I mean, both Occam’s razor and all available evidence have unwaveringly suggested from day one that Bloomberg’s story was just plain bullshit (which happens to sound a lot like other innuendo the Trump administration has subsequently peddled to drum up fear of China). So I’m not sure why a post about actual, legitimate research on the topic would be prefixed with four paragraphs about that story.
Anyhoo. It’s never been news that it is possible for hardware manufacturers to compromise the security of their devices in ways that would be nearly impossible to guard against. It’s also possible for many of the world’s navies to mine undersea cables and destroy the global internet within days. Or for countries to nuke each other (etc.). The nuts and bolts of these things are interesting, at least to specialists, but no ordinary person should feel like this stuff decides whether they’re safe or not.
Security isn’t a matter of whether it’s possible for someone to attack you. If that kind of NRA moon-logic were valid, we’d never have made it out of the primordial soup. Security comes when people’s interests are woven together such that no group wants to wage war against another. E.g. China makes the world’s computer hardware, which gives them the means to spy on the world, but also gives them a very strong incentive not to.
Tom Stoppard is kicking himself he didn’t come up with this first.
I’m not totally convinced. I don’t suppose it’s never happened, but bugging hardware – especially at an early stage in the supply chain – is obviously much riskier than using software exploits and social engineering to accomplish the same ends. If you put a bug on a motherboard, you create a chain of physical evidence and leave it locked in your target’s data center, potentially for years after you needed the exploit.
Plus, there’s no particular reason a hardware bug has more access than a software exploit (unless you’re talking about a bug actually inserted into the design of a cryptography chip, for example – which would be much more expensive and leave much more physical evidence). And it still has to exfiltrate data through the same channels, so once it’s activated, it’s not even stealthier.
So on the technical level, too, I think the question is not if someone can do this but why?
My take is that the original Bloomberg story was a Supermicro hit piece, plain and simple. Who knows where it came from, but we could ask cui bono. It did raise a multitude of questions about the feasibility of such an attack, which are perhaps academic but nonetheless useful.
Because a hardware bug isn’t erased when the storage gets wiped or software is replaced?
Well, a hardware bug is removed when a virtual machine is moved to a different server; and there are plenty of ways for software and firmware exploits to reinstall themselves or fake a successful update. Generally speaking, there’s very little difference in what you can achieve, and definite disadvantages in terms of cost, logistical complexity, time and stealth.
I’m on the fence here, but beware imputing incentives and motives to others – or assuming that others will treat the same incentive the same way – based on local moral or cultural references. Pretty much everyone tends to underestimate the lengths the Chinese state will go to in advancing what it sees as its very long-term interests and objectives (which are likely not what anyone else may believe China’s long-term interests and objectives are).
Frankly, while still fence-sitting, I wouldn’t put it past them.
Unless someone over there came up with the idea that “we are so clever we will never get caught”. People do stupid stuff all the time assuming they won’t get caught.
Really, any supply chain could be thoroughly tracked and put through multi-step quality control at increased cost. Making sure that you’ve vetted and tracked everything beyond just finding the cheapest knock-off is baked in to some extent, so upgrading that process can be a security process. State-sponsored interference is of course possible in any supply chain, but the eventual discovery would be economically disastrous between fines, sanctions, new QC requirements, and heavy pressure to move business elsewhere.
A wafer level chip-size package (WLCSP) microcontroller would be even smaller and easier to hide.
I’ve used the LPC1104 microcontroller in some of my designs; it’s really small (2.3×2.3 mm), has a 32-bit architecture, and has twice as many pins as the ATtiny85.
Soldering these by hand would be rather difficult, though.
As for 8-bit microcontrollers, there’s ATtiny20-UUR and it is even smaller (1.55x1.4mm), but I haven’t used it.
That part comes down to risk assessment; what could we potentially get versus how bad would it be if we got caught?
More expensive, almost certainly; but some years back there was a rather cool proof of concept for quietly reducing the entropy provided by CPU RNGs. Extremely subtle, even against an adversary that optically inspects their CPUs and runs some RNG tests; but it allows you to vastly more easily attack any keying material or ciphertext generated using the RNG for the life of the system.
Certainly a waste against incompetent opponents; software attacks are cheaper and mostly consequence free; but has the virtue of working even against people whose software is without error in the relevant areas.
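To see why a sabotaged RNG is so devastating, here's a purely illustrative sketch (not the actual proof of concept, and the 16-bit seed space is toy-sized so the search finishes instantly): if keys are secretly derived from a tiny seed space, they still look like strong random keys to everyone else, but an attacker who knows the backdoor can brute-force them trivially.

```python
import hashlib

def sabotaged_rng(seed: int) -> bytes:
    """A hypothetical 'backdoored' generator: the output looks like a
    random 256-bit key, but secretly depends on only a 16-bit seed."""
    return hashlib.sha256(b"rigged-rng:" + seed.to_bytes(2, "big")).digest()

# Victim generates what they believe is a strong 256-bit key.
victim_key = sabotaged_rng(0x1A2B)

# An attacker who knows the backdoor searches 2**16 seeds instead of
# 2**256 keys -- a few milliseconds instead of the lifetime of the universe.
recovered = next(
    sabotaged_rng(s) for s in range(2**16) if sabotaged_rng(s) == victim_key
)
assert recovered == victim_key
```

Note the stealth property the comment above alludes to: each individual key passes statistical randomness tests, so the weakness is invisible unless you know exactly where to look.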
I suppose a real test, albeit one with non-ε legal risk, would be to obtain a few servers from the target manufacturer, modify them with the microcontroller, then either RMA them or get them back into the wild through a used equipment reseller of some sort. Sit back, and wait for the pings from the compromised equipment… (or the lawsuit… or the black helicopters…)
It’s a shame there isn’t a sha256 for hardware.
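For contrast, here's what that routine check looks like on the software side, where a digest published out-of-band lets anyone verify an image bit-for-bit – exactly the property physical hardware lacks. The `firmware.bin` file and its contents below are just a stand-in for the demo:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large firmware images needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in "firmware image"; in practice you'd compare the digest against
# one the vendor published through a separate, trusted channel.
with open("firmware.bin", "wb") as f:
    f.write(b"\x7fELF pretend this is a firmware image")

digest = sha256_of("firmware.bin")
print(digest)  # flipping any single bit in the file changes this entirely
```

The hard part for hardware is that there's no equivalent of "read back the bits": you can't cheaply enumerate every transistor and trace on a finished board the way `open(path, "rb")` enumerates every byte of a file.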
A hardware checksum would also have broader uses. I have seen safety-critical hardware fail in an unsafe state (including microwave doors and electronic heater controls) due to minor component failures. If a monitoring circuit could have polled and verified the system before allowing power, it would have saved some grief.
You could probably use X-rays or some kind of van Eck type probe to compare an assembled board to a reference article in a way that was sensitive to tiny changes. But since the reference article itself would probably have to come from the same factory you suspect of foul play, it would be kind of pointless. Much like how software hashes can’t protect you against bad actors working inside the Apache foundation.
(It’s possible to tamper with a device later on – without the complicity of the manufacturer – but that’s easier to guard against, and doesn’t serve to cultivate paranoia about China specifically).
I’m no China expert, so who knows. But if we get to the point of speculating that China would risk nuking its own technology exports to achieve some military or political goal, well, spying on server farms is an exceedingly minor part of what they can do. For one thing, they could do more damage by simply not supplying all these awful haunted motherboards. Oh, plus they have atomic bombs.
In The Spy in Moscow Station, the author makes the claim that the KGB succeeded in executing a similar exploit on IBM Selectric typewriters in the US embassy in the early 1970s, and that nowadays in Russia this kind of exploit is so well known and common that it is taught in basic business studies-type courses.