Originally published at: https://boingboing.net/2019/09/05/nobus-under-the-bus.html
…
NOBUS is not by any stretch confined to “great power” adversary relations and Cold Warriors. It’s also built into our antiterrorism planning. The TSA is perhaps the extreme pathological case: our planning for attacks on (for instance) air travel is predicated on the premise that Bad Guys are too stupid to think of any attack that hasn’t already been tried, or at least a minor variation of one.
This rapidly turns into the Streetlight Fallacy: they look for things that are easy to detect regardless of actual threat potential. The “knife” on a charm bracelet is treated as an existential threat, while the diagonally-cut structural tubing in your roller bag (AKA pipe knife) never gets a glance. They worry about toothpaste tubes and ignore large chunks of thermite.
It goes on forever. Ask any sophomore engineering class and you’ll get dozens of different not-even-trying-to-find-them attacks. Yet that gets dismissed on the grounds that Bad Guys are too stupid to think of stuff that 18-year-olds can spout off by the hundreds.
There’s a lot wrong with this piece.
First, it’s critical to differentiate between an exploit and a payload. An attacker can develop a sophisticated payload (stealing bank account data, or destroying nuclear centrifuges) completely isolated from the vulnerability that may eventually be used to distribute or deploy it. Using an exploit doesn’t mean showing your hand and giving the enemy a copy of your payload. It only reveals the vulnerability.
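The exploit/payload split can be sketched in a few lines. Everything here is hypothetical and purely illustrative: the point is that the same payload can ride on entirely different exploits, so burning one exploit reveals a vulnerability, not the payload or its other delivery paths.

```python
# Hypothetical sketch: the payload is developed in isolation and is
# independent of whichever exploit eventually delivers it.

def payload_steal_accounts(target_state: dict) -> dict:
    """The sophisticated part: built and tested offline, never shown
    to the enemy until it actually runs on the target."""
    return {"stolen": target_state.get("accounts", [])}

def exploit_a(payload, target_state: dict) -> dict:
    """Delivery vector #1. Using it reveals only that this
    vulnerability exists, not what the payload does."""
    return payload(target_state)

def exploit_b(payload, target_state: dict) -> dict:
    """Delivery vector #2: identical payload, different way in."""
    return payload(target_state)

state = {"accounts": ["alice", "bob"]}
# Same payload, either vector: the exploit is just transport.
assert exploit_a(payload_steal_accounts, state) == exploit_b(payload_steal_accounts, state)
```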
Next, payloads can be encrypted. We saw this with the Gauss malware, discovered back in 2012. The attackers conducted recon that gave them a unique fingerprint of the target machine, and used that fingerprint to derive the decryption key. Nobody has ever guessed the fingerprint, so nobody’s been able to decrypt the payload. A precisely targeted weapon engineered to strike once stands a good chance of being able to destroy its tracks. And if it’s eventually recovered, it may be so finely tuned as to not be very reusable or retargetable, like Stuxnet.
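This technique is sometimes called “environmental keying,” and a toy version is easy to sketch. The names and the XOR cipher below are illustrative stand-ins (Gauss actually used repeated MD5 hashing of path and program-name strings with an RC4-like cipher); the point is that the key is never shipped with the weapon, only recomputed from the target’s own environment.

```python
# Toy sketch of environmental keying (illustrative names, not Gauss's
# actual code): the payload decrypts only in the intended environment.
import hashlib

def derive_key(path: str, program_name: str, rounds: int = 10000) -> bytes:
    """Repeatedly hash target-specific strings to derive a key."""
    data = (path + program_name).encode("utf-16-le")
    digest = hashlib.md5(data).digest()
    for _ in range(rounds - 1):
        digest = hashlib.md5(digest + data).digest()
    return digest

def xor_crypt(blob: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher standing in for the real RC4-like scheme."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# Attacker side: encrypt the payload against the reconnoitered fingerprint.
secret_payload = b"the actual warhead logic"
key = derive_key(r"C:\Program Files", "target_app.exe")
encrypted = xor_crypt(secret_payload, key)

# Victim side: the dropper recomputes the key from the local machine.
# On the intended target the strings match and the payload decrypts;
# anywhere else, analysts see only ciphertext.
assert xor_crypt(encrypted, derive_key(r"C:\Program Files", "target_app.exe")) == secret_payload
assert xor_crypt(encrypted, derive_key(r"C:\Other", "other.exe")) != secret_payload
```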
Also, while traditional military concepts like “cover” and “high ground” may not have direct analogues in the cyber world, you can bet that there are concepts that do apply, and that the military minds are applying them to strategies and tactics. Using the language of warfare isn’t necessarily wrong.
And another place where the analogy is strong is in the need for cyber warfare to be studied by military historians. It’s only through learning from mistakes that they can hope to improve, and to apply the lessons in a novel fashion.
Yes, NOBUS is a huge mistake, and an utter failure. The NSA can and should (but probably won’t) fix it.
We don’t need to discard our old conflict metaphors, we just need to adjust them…
Cyberwar is like a football game…
- It is a perfect environment for spreading contagious disease.
- The players end up with debilitating, long term physical problems.
Cyberwar is like a Cold War…
- We take turns giving each other colds, flu, and pneumonia.
I tried to create content to teach my security students about the differences in cyber conflict. My bottom line is that Cyberwar is like poisoning the water, and hoping that you die last: https://www.youtube.com/watch?v=sGg6YV7cVSk
This is the key difference between “cyber” and other forms of warfare: every offensive measure weakens your own defense.
Wait, can you explain again why this isn’t true for conventional weapons research?
A conventional weapon doesn’t depend on defects to have an effect.
Of course it depends on defects in the enemy’s defenses. You could have a bomb in a plane ready to drop on the target, but if the enemy can shoot down your plane, the bomb is ineffective. It takes a defect in the anti-aircraft gun system (sleeping gunner, jammed gun, bad ammo) to make the bomb work.
A physical bomb does its job by exploding. It can be used against a well-built bridge or a bridge with serious defects in its construction. The destructive potential is in the bomb.
With cyber weapons, as described in the OP, the destructive potential is in the target, not in the weapon. Aiming a script designed to exploit a specific buffer overflow bug in MS SQL Server will do nothing if the system is patched to fix that bug, or if you try to use it against some other type of server.
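The bomb/exploit contrast above can be made concrete with a toy sketch (server names and fields here are hypothetical): the bomb carries its destructive potential with it, while the exploit’s potential exists only as a defect in the target.

```python
# Toy contrast: a bomb works regardless of the target's defects;
# an exploit fires only against one specific vulnerable configuration.

def bomb(target: dict) -> bool:
    # Works against a well-built bridge or a defective one alike.
    return True

def buffer_overflow_exploit(target: dict) -> bool:
    # Inert unless the target runs the one vulnerable, unpatched build.
    return target.get("server") == "ExampleSQL" and not target.get("patched", False)

assert bomb({"server": "ExampleSQL", "patched": True})                           # bomb still works
assert buffer_overflow_exploit({"server": "ExampleSQL", "patched": False})       # only this combination fires
assert not buffer_overflow_exploit({"server": "ExampleSQL", "patched": True})    # a patch neutralizes it
assert not buffer_overflow_exploit({"server": "OtherServer", "patched": False})  # wrong target: inert
```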
The only way the weapons created by the NSA or by random hackers continue to have any effect is if all the systems with those vulnerabilities stay broken, for both us and “the enemy.”
It’s like building all of our bridges with the wrong mix of concrete just in case a bad guy tries walking on one. Meanwhile, bridges are collapsing left and right with innocent citizens on them.
Or legislate it.
Insightful read on the importance of information security…