“Suspected ransomware attack” takes down Universal Health Services hospital network, doctors and nurses are using paper

Originally published at: https://boingboing.net/2020/09/28/suspected-ransomware-attack-takes-down-universal-health-services-hospital-network-doctors-and-nurses-are-using-paper.html


The way current EMRs work, if the system goes down we are pretty close to helpless. I cannot access any past medical history, allergies, previous plans, or notes, and believe it or not, I don’t keep all that data in my head! Controlled substances in the VA cannot be prescribed on paper any more, and of course, if the system goes down, so does the e-prescribing ability. And for inpatient docs, it is much worse. This is a nightmare.


For a system of this size to be taken down by a single attack shows a profound failure of IT systems management. In a critical operations scenario you should never have so many systems so inextricably linked that pushing down one domino causes the whole system to fall.

I remember years ago, when on contract with a major worldwide financial firm, one system I had to implement was a network edge protection server. It consisted of two servers connected to each other via SCSI cards. Communication between the two networks had to pass through these servers and across the SCSI connection, through filtering software and so on. Certainly not bulletproof, but a fairly novel way to segregate networks.
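The architecture described above is a filtering bridge: all traffic between two networks funnels through a pair of hosts, and only what the filter permits crosses the boundary. Here is a minimal sketch of that idea in Python. The transport in the anecdote was SCSI; that part is abstracted away, and the allow-list and blocked prefixes below are invented purely for illustration:

```python
# Illustrative sketch of a filtering bridge between two networks.
# The transport (SCSI in the original anecdote) is abstracted away;
# only the filtering step is shown. The rules below are made up.

ALLOWED_PORTS = {443, 22}      # hypothetical allow-list of destination ports
BLOCKED_NETS = ("10.99.",)     # hypothetical blocked source prefixes

def bridge_filter(packet: dict) -> bool:
    """Return True if the packet may cross from the outer to the inner network."""
    if packet["dst_port"] not in ALLOWED_PORTS:
        return False
    if any(packet["src_ip"].startswith(p) for p in BLOCKED_NETS):
        return False
    return True

def forward(packets):
    """Pass only permitted packets across the bridge."""
    return [p for p in packets if bridge_filter(p)]
```

The design point is the choke point itself: because every packet has to traverse the bridge, policy lives in one place instead of being scattered across every host on both sides.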

Hopefully they have good disaster recovery there, and they regularly run exercises so they know how to use it.


(That’d be GWU…)

Imagine for a moment that the United States government had an agency employing the best pen testers in the world, tasked with identifying vulnerabilities in critical infrastructure and working with vendors and network operators to close them proactively.

You could call it a “National Security Agency”.


Typically, on top of that, they use outdated desktop operating systems, increase the attack surface through legacy applications with a healthy dose of snake oil (i.e. malware scanners) and Cisco hardware, and are late with patching servers (instead of deploying automatically patched systems). Of course that shit can be taken down by a script kiddie.

And after the fact they say that it’s not their fault.


I just responded to our IT guy working on the business continuity plan. When he asked what set of circumstances would mean we could no longer operate, I told him: we’re government. We’re not allowed to not operate.

I know sneakernet. I know how to use pencil and paper. We operated before computers, and we can continue in an emergency if we have to. We quickly adjusted to handle the pandemic; we’ll do the same if systems go down globally.


That’s a fairly novel way of doing it. I didn’t know that two computers could be connected by SCSI. Did it require some specialized hardware, or could you just connect two controllers on the same bus?

I’m not sure. I’d guess that with SCSI being a bus architecture it wouldn’t take anything special with the hardware. Maybe special firmware, or more likely drivers.

It did just occur to me that, being a bus architecture, it would be ripe for snooping. Of course that would take direct physical access, but that’s not always a big hurdle. In a DC with nearly 200K sq ft of raised floor, no one really looked in racks unless they were working in them. If you could obtain physical access, an intrusion could potentially go undetected for a long time.


This is, of course, speculative unless someone does us the service of elaborating on how it went down (not likely if they can avoid it; but maybe some insider will be feeling disgruntled off the record); but it would not at all surprise me if it’s actually a symptom of success at IT systems management, just more or less exactly the wrong level of success when it comes to ransomware.

At one end of dysfunction you have organic complexity: thickets, fiefdoms, barely-cobbled-together systems that have grown out of departments or developed by accretion through acquisitions and mergers and ‘upgrades’ that fail to kill off the legacy stuff they were supposed to replace. The sheer variety of that tends to impede lateral movement (at nontrivial cost to user sanity and efficiency in a lot of cases); but it’s a defensive structure more or less inadvertently. Because it’s such a complex mess, odds are excellent that there are holes all over the shop and retaining persistence is easy (find the right corner and you may be the only administrator around…); but expanding your foothold is less about lateral movement and more about knocking over systems one after another.

If you reach the level of architectural sanity and standardization, it’s vastly easier to reap the benefits of automation, repeatability, data actually being interchanged without a layer of secretaries manually munging the CSV output from system A into the peculiar input format demanded by system B, and so on. Unless you are really slacking off, the average security level is also going to be higher: stuff actually gets patched and has its configuration managed, and people actually know what systems are in use, so there’s less incentive to just assume that mysterious activity is probably something important that nobody understands and you don’t want to break. It’s just that it’s all too easy for that standardization to extend to credentials, or an endpoint configuration management tool, or the deduplication of systems rendered redundant by standardization; so if someone does get their foot in the door, their lateral movement will be brutally swift and efficient (in some cases exploiting literally the same channels used for management; otherwise, attack tools making use of the standardization and the broad willingness to respect certain credentials that makes the management work).
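The lateral-movement point can be made concrete with a toy model: if every host honors the same management credential, one compromised machine hands over all of them, while unique per-host credentials cap the damage at the initial foothold. A hedged sketch (the hostnames and credentials below are invented for illustration):

```python
# Toy model: lateral movement when hosts share a management credential.
# Hostnames and credentials are invented for illustration.

def reachable_hosts(compromised, hosts):
    """Hosts whose credential matches one the attacker has already captured."""
    captured = {hosts[h] for h in compromised}
    return {h for h, cred in hosts.items() if cred in captured}

# Standardized fleet: one shared admin credential everywhere.
shared = {"web1": "admin!23", "db1": "admin!23", "emr1": "admin!23"}

# Segregated fleet: a unique credential per host.
unique = {"web1": "c-web1", "db1": "c-db1", "emr1": "c-emr1"}
```

Compromising `web1` in the shared case yields every host in the fleet; in the unique case the attacker is stuck where they started and has to earn each additional system separately.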

That level, especially if anyone tries to cut costs around the edges, is just ransomware heaven: well run enough that attack tools spread like wildfire; some consolidation of critical systems that’s just fine on capacity grounds but not so much on redundancy grounds; that sort of thing. It’d be my suspicion that UHS might have been hanging out around here, or at least the portion that was taken out was. There could certainly be some little embedded Win2k lacunae in the network diagram; but if they are all cut off from patient information systems, that doesn’t help so much.

The ideal case is, of course, the efficiency and standardization of the ‘not a total mess’ case, along with rigorous attention to backups, redundancy of critical systems, defense-in-depth design against privilege escalation, and so on; but that’s harder and more expensive, and nobody knows the difference when nothing bad is happening, so just try to get the budget for it…

I’ve worked in large data centers where systems were put in place to mitigate attacks like this: strict network segmentation, rigorous procedures for accepting systems into production, unique credentials. It’s all stuff that makes day-to-day administration less convenient, but it can significantly reduce the ability of attackers to exploit system vulnerabilities.

Getting to a point where you’ve got highly integrated and standardized systems without going the step further to protect against the risks that presents shows a profound failure of IT systems management.

I don’t think that we’re in fundamental disagreement on the point; it’s definitely the case that homogeneity without actual security design means that you are pretty much one phishing email or drive-by download away from someone mimikatzing merrily around until they find a domain admin hash (and it won’t take them long), and then game over; and that is indeed a very bad place to let yourself sit.

I’ve just had enough traumatic encounters with opaque legacy messes to give some credit to the people who can cut their way through that particular jungle and arrive at something that is at least sane and well understood enough to have good security practices applied; they just need to never forget that there’s no rest until the second part of the job is also done.

It’s the first I’d heard of it; but it looks like the actual standards body at least nibbled at the idea; and there’s a delightfully hackish Portuguese university project that appears to target 2.x Linux versions.

Honestly, it’s a little surprising that the idea didn’t get more traction back in the day when Ethernet sucked and/or cost actual money. At this point the pendulum has well and truly swung, and people are network-attaching so close to the storage that the SSD controller chip itself will probably have a NIC by Q4 of 2021; but once upon a time SCSI could have made a reasonably mean rack-level interconnect for the money.

