Bad infrastructure means pacemakers can be compromised before they leave the factory

On the other hand, it leaves openings for implant modders!


… and hopefully patched proper-like?

My mom recently had a Medtronic pacemaker/defibrillator installed, and I was wondering if there was already a way to solve this problem, or if one was in the works, or where I might get any additional information.


I have a St. Jude ICD with a monitor in my bedroom. St. Jude could program my device anytime they wanted if I’m in the room. If Medtronic wanted to send a security patch, it shouldn’t be that hard.

Really? The company that made pacemakers that exploded back in the ’70s has a security problem?

Shocked/not shocked.


I know this is not the point at all, but have there ever actually been examples of hacked medical hardware in the wild? I don’t recall ever reading about any; it would appear to be a bridge too far for hackers (currently?).
Again: not the point, I know. And if I’m not wrong now, I won’t be forever.


As a user of a Medtronic insulin pump, I have mixed feelings about the security they have added to the pump. The newer pumps are more locked down, and I believe many of the reported issues about being able to remotely control the pump do not apply to them. But it also means that I cannot use things like OpenAPS and other homebrew kits; I have to use my older pump for that. It’s the vulnerabilities that enable us users to have more control.


“Tired of grand-dad’s war stories? Download our app and you can Shut! Him! Off!”


We have a number of examples of whole hospitals getting hosed by fairly banal ransomware; so it doesn’t seem to be an ethical limit.

I’d be more inclined to wonder whether we would even notice a hacked implant: it’s not like people generally get opened up for a pacemaker install because they were heart-healthy to begin with; and an exploit that lets you brick patients is probably more worthwhile used selectively than spammed.

Depending on the type and rigor of any failure analysis done (and I suspect it would be done from a perspective of compliance and/or product improvement in the face of what is assumed to be an edge case, not started from a “who hacked our stuff?” line of inquiry), you wouldn’t necessarily expect to detect malicious software tampering. Even a non-security-focused root cause/product defect inspection would likely involve checking that vital calibration and programming data aren’t munged, corrupt, or badly set, and so would raise flags even if the inspection is framed as “need to make sure our storage isn’t causing corruption” rather than “could it be hackers?” But if the design allows the malice to stay out of the nonvolatile area where device calibration/programming data are expected to live, that check would come up clean.

There’s a first time for everything, and perhaps we haven’t experienced the first time for this yet; but I’d give serious consideration to the possibility that we wouldn’t recognize an attack if we saw one, unless the attacker either really screwed up or decided to go for quantity rather than selectivity (which I wouldn’t put past some people; but it would be a huge waste of an elegant technique to spend it on sociopath lulz compared to using it discreetly, only in cases that merit special consideration).


That’s the real tragedy of the theory that ‘security’ = ‘forever obeying the vendor’. It crops up wherever this theory is enacted (crypto bootloaders, tivoized devices), but it’s more personal when the vendor’s will is baked into something close to a surrogate organ.

“Freedom through bugs” is a pretty awful strategy (except in cases well enough understood that you can use the bug to get in and then close it behind you once things are to your liking, as with cheapo routers understood well enough that anyone who can get past the firmware flashing obfuscation can drop in a full replacement firmware, establishing their own control and closing the hole); but it can be better than “not freedom”.

What we really need is systems where a security model that isn’t trash is combined with respect for the owner of the device as a (or the) legitimate controlling party within that security model.

Unfortunately, it is more difficult to do things this way; and vendors often have no objections to a security model that leaves them in control indefinitely (or actively aim for one and fight to preserve it; as with consoles and iDevices).

Given the trajectory of general-purpose computers, I find it hard to be optimistic on this score; but it’s only going to become a bigger and creepier problem as time goes on.

The fact that 3rd-party hacks can be used to do useful things should make us a bit nervous, since the same control mechanisms could be used to do less helpful ones. But unless there’s a mechanism for you to authorize your 3rd-party things, so they don’t have to sneak in through a defect that anyone else could be exploiting as well, it’ll be the grim future of pancreas-as-a-service (correct response: “I think I’ll PAAS on that.”)


Modern pacemakers and ICDs record the heart signals whenever anything goes wonky. There would be memory of the incident. Analysis of that would quickly show that the device wasn’t doing what it was supposed to be doing. The device also records every time it is reprogrammed. That gets stored in non-volatile memory. It actually wouldn’t take engineers from the company long to figure out that the device had been hacked. It would be harder to isolate how or who did it.
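That reprogramming log is exactly the kind of artifact an investigator could diff against clinic records. A minimal, entirely hypothetical sketch of that cross-check, with invented field names and data (not any real device’s API):

```python
# Hypothetical sketch: flag reprogramming events in a device's log that
# don't line up with any authorized programming session on record.
# All names and data here are invented for illustration.
from datetime import datetime

def find_unauthorized_events(device_log, authorized_sessions, tolerance_s=300):
    """Return logged reprogramming events with no authorized session nearby."""
    suspicious = []
    for event in device_log:
        matched = any(
            abs((event["time"] - session).total_seconds()) <= tolerance_s
            for session in authorized_sessions
        )
        if not matched:
            suspicious.append(event)
    return suspicious

# Two logged reprogram events, only one of which matches a clinic visit.
log = [
    {"time": datetime(2018, 3, 1, 10, 0), "param": "pacing_rate"},
    {"time": datetime(2018, 3, 15, 2, 47), "param": "shock_threshold"},
]
authorized = [datetime(2018, 3, 1, 10, 2)]  # recorded clinic visit

print(find_unauthorized_events(log, authorized))
```

The 2:47 a.m. change to a shock parameter would stand out precisely because, as noted above, the event log lives in non-volatile memory; attributing it to a specific attacker is the hard part.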

Thank you security researchers for not doing that. Killing animals for entertainment purposes is not cool or necessary. For a live demo you could just as easily and effectively hook the pacemaker leads to an oscilloscope and demonstrate the attack.

Excellent reply; you raised a number of things that hadn’t even occurred to me (in my defence, I’m in the middle of some kind of viral infection and thinking ain’t mah strong point right now).
I can’t imagine a company responsible for remotely-contactable hardware would necessarily even be compelled to announce what had happened. I imagine that would be instantly crippling to their line of pacemakers, whereas the lawsuits from people who died from failed hardware can probably be covered by insurance and bound up in NDAs.

This topic was automatically closed after 5 days. New replies are no longer allowed.