Originally published at: https://boingboing.net/2020/11/30/ransomware-attack-closes-baltimore-schools-115000-kids-affected.html
…
FYI - this affected Baltimore County, the suburban/exurban/rural area north of Baltimore, not the city itself.
With increasing ransomware attacks on hospitals, city administrations, and now schools, maybe it’s time to build out separate infrastructure to combat this problem. I imagine a future need to protect airports, bus terminals, etc.
Is a separate infrastructure even possible?
Ransomware attack closes Baltimore schools: 115,000 kids ~~affected~~ thrilled
FTFY.
Obligatory
More or less, maybe: there are a lot of variables to contend with, each with an associated cost (money, time, and hassle).
As with all things security, it’s a balance of risk and utility, and a multi-layered approach is almost mandated (one of those layers being ‘THOU SHALT HAVE A BACKUP SYSTEM THAT IS TESTED ON A REGULAR BASIS’).
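For what it’s worth, the “tested on a regular basis” part doesn’t have to be fancy; even a nightly cron’d script that restores the latest backup to scratch space and sanity-checks it beats finding out during an incident. A minimal sketch of the idea (the paths, archive naming, and MANIFEST layout are all made up here, not anyone’s actual setup):

```python
#!/usr/bin/env python3
"""Hypothetical nightly restore test: unpack the latest backup archive
into scratch space and verify the checksums recorded at backup time."""
import hashlib
import sys
import tarfile
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")   # assumed backup drop location
SCRATCH = Path("/tmp/restore-test")     # throwaway restore target


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def main() -> int:
    archives = sorted(BACKUP_DIR.glob("*.tar.gz"))
    if not archives:
        print("FAIL: no backup archives found", file=sys.stderr)
        return 1
    latest = archives[-1]

    SCRATCH.mkdir(parents=True, exist_ok=True)
    with tarfile.open(latest) as tar:
        tar.extractall(SCRATCH)

    # Assumes each archive ships a MANIFEST of "sha256  relative/path" lines.
    manifest = SCRATCH / "MANIFEST"
    if not manifest.exists():
        print(f"FAIL: {latest.name} has no MANIFEST", file=sys.stderr)
        return 1

    bad = 0
    for line in manifest.read_text().splitlines():
        digest, relpath = line.split(maxsplit=1)
        if sha256(SCRATCH / relpath) != digest:
            print(f"FAIL: checksum mismatch on {relpath}", file=sys.stderr)
            bad += 1

    print(f"{latest.name}: restore test {'FAILED' if bad else 'passed'}")
    return 1 if bad else 0


if __name__ == "__main__":
    sys.exit(main())
```

The exact check matters less than having *some* automated restore that screams when it fails.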
Teachers, classrooms, pens, paper, black/white boards. Why the need to close them all?
(I know, I know - but it does make a point.)
Thanks, the clarification was needed as those not from here may not realize that Baltimore County and Baltimore City are distinct and separate entities. The tweet included in the OP, of course, makes it plain if you’re paying attention. I’ll also upgrade my pedant’s license and point out that Baltimore County is north, east, west, and most of the south side of the city (Anne Arundel County shares a very small section of the city border on the south side).
@anothernewbbaccount the kids (my kids!) are 100% virtual so this outage means no schooling. This is another thing Rob should correct in the OP - he says “Administrators hope to bring students back in on Wednesday.” There’s no “in” to bring them to.
Ah - now that’s a problem. Understood.
In theory it’s fairly trivial; but in practice it tends to be either impossible or useless, depending on how you run it.
People sometimes over-connect by mistake, or in ignorance of the consequences; but we network computers because it allows all kinds of handy features that are otherwise either impossible or radically more laborious.
If you run it strictly, your air-gapped infrastructure can be pretty solid; but will have its utility and convenience massively reduced.
If you don’t run it strictly, it quickly stops being ‘separate’ and just starts being ‘connected by sneakernet and similar ad-hoc arrangements to the real network’; which is generally the worst of both worlds: inefficient, probably full of questionable shortcuts people took “because it’s isolated, what could happen?”, and poorly monitored at the weak points, because those weak points officially don’t exist and so aren’t written into any plans and may be actively concealed from administrators.
There are places where the risk of exposure is just too high, and you tell people to suck it up; but that’s a fairly draconian case, and even then stuff can sneak through (pretty much anywhere you need to let a computer or program from the outside inside, even temporarily, which is common for installs, updates, configuration, etc.).
THOU SHALT HAVE A BACKUP SYSTEM THAT IS TESTED ON A REGULAR BASIS
Sadly, I think this is an unrealistic expectation for most K12 IT. In my experience, they’re lucky to keep basic functionality limping along. Security, backups, etc. are solidly in the “nice to have” category.
Easily fixed by reducing the administrators’ salaries to something sane, and dumping some money into the IT group.
Don’t forget, Baltimore City (government - not schools) was hit by a ransomware attack in May of last year (the RobbinHood variant). That was quite literally close to home and would presumably – to non-IT me – have been a wakeup call that should have other jurisdictions, especially neighboring ones, on the path to a better plan.
–shrug–
Apparently they didn’t learn a lesson from the Baltimore city government ransomware attack last year. https://en.wikipedia.org/wiki/2019_Baltimore_ransomware_attack
I understand very well how many resources and dollars it takes to implement decent cyber security in a large organization. Not only do you have to have a competent (expensive) team watching your network 24x7, you need security built into the process of creating and maintaining your IT infrastructure. You need training and awareness so personnel don’t click on that phish. You need testers who can examine and monitor your systems and apps for vulnerabilities. You need engineers to build and manage the various security systems. And there’s so much more you can and should do.
But nobody who built an IT department in the 1980s and 1990s ever gave a thought to security at this level. People who built them in the 2000s gave it little more than a passing thought – hey, let’s make sure people run anti-virus. It wasn’t until the mid-2000s, when PCI (payment card industry) standards started resulting in fines for companies with poor security that handled credit card info, that security started maturing more broadly. Unfortunately, lots of the systems administrators from that bygone era still see budgets that reflect an unrealistically trivial spend on security. That has to change because these attacks will not stop.
Further making this situation worse for schools: their budgets are set by politicians who promise to cut taxes, their student-facing systems are exposed to a hostile environment, and administrators believe they can fix security simply by threatening students with severe “anti-hacking” rules. I sometimes think the only reason more schools aren’t shut down by these attacks is because there just aren’t enough bad guys to hit them all at once.
We have been in the same boat for 5 weeks, cleaning up a mess. Even with backups, you’re faced with rebuilding the infrastructure, cleaning the infection up, and only then starting the recovery.
Fuck Russia.
I can only hope, for both my comrades in the data mines and for everyone in the school system, that “basic functionality limping along” turns out to be a (slight) blessing in this case:
Doing recovery is always a lot of brutal ditch-digging; but if the state that was defined as ‘working’ previously is relatively simple (in terms of likely being very, very heavy on homogeneous endpoints without much user state or specialized configuration, and relatively light on freaky legacy app server thickets or exotic infrastructure), it’s going to be a lot less of a nail-biter to get it back up and running; and it’s more likely that when X% of the work is done it will be X% useful, rather than “might as well send everyone home until the ERP system is back up.”
Absolutely going to suck regardless; but some things are easier to rebuild than others.
I’m not quite sure why separate infrastructure isn’t done more in doctor’s offices and such. There’s often a dedicated EHR program and billing software that is easily run off a single computer. That computer doesn’t require the internet; give it no connections. Back it up daily to an empty thumb drive, then carry the drive to a connected computer that stores the data locally as well as offsite on a secure server. Billing, email, etc. work off the connected computer. Wipe the thumb drive before using it to back up again. A bit cumbersome, and not practical for a hospital, but for small offices and ambulatory centers, or law offices, does this make sense?
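To make the “connected computer” half of that workflow concrete, here’s a minimal sketch of what the daily handoff could look like, assuming the thumb drive mounts at a known path and the offsite copy is just an rsync-reachable host; every path and hostname below is invented for illustration, not a real setup:

```python
#!/usr/bin/env python3
"""Hypothetical sneakernet-backup helper for the connected machine:
copy the day's EHR backup off the thumb drive, keep a local copy,
push it offsite, then scrub the drive for reuse."""
import shutil
import subprocess
from datetime import date
from pathlib import Path

THUMB = Path("/media/usb-backup")             # assumed thumb drive mount point
LOCAL_ARCHIVE = Path("/srv/ehr-backups")      # local retention on the connected PC
OFFSITE = "backup@offsite.example.com:ehr/"   # assumed rsync-over-SSH target


def main() -> None:
    today = date.today().isoformat()
    dest = LOCAL_ARCHIVE / today
    dest.mkdir(parents=True, exist_ok=True)

    # 1. Copy everything off the thumb drive onto the connected computer.
    for item in THUMB.iterdir():
        if item.is_file():
            shutil.copy2(item, dest / item.name)

    # 2. Push the local copy offsite (rsync over SSH is one option).
    subprocess.run(
        ["rsync", "-a", str(dest) + "/", OFFSITE + today + "/"],
        check=True,
    )

    # 3. "Wipe" the drive for next time. Plain deletion is shown here;
    #    real PHI handling would want encryption and/or a secure erase.
    for item in THUMB.iterdir():
        if item.is_file():
            item.unlink()
        else:
            shutil.rmtree(item)


if __name__ == "__main__":
    main()
```

The ordering matters: the offsite push happens before the drive is cleared, so a failed upload never leaves you with zero copies of that day’s backup.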
This topic was automatically closed after 5 days. New replies are no longer allowed.