Installing a root certificate should be MUCH scarier

Originally published at: https://boingboing.net/2019/02/14/dire-warnings.html

2 Likes

This is arguably a flavor of ‘much scarier’; but it seems worth emphasizing the point: not only are you being asked to grant it great power, you are being asked because someone has a use for that power that they can’t achieve with the long list of default roots (iOS, Android-if-nobody-else-has-changed-it).

It’s not just that being a trusted root is crazy powerful; it’s that it’s something you’d never bother to try to achieve without something special in mind; so the only safe assumption is that anyone asking you to install one is planning on using that power.
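
To make “crazy powerful” concrete, here is a minimal sketch using Python’s pyca/cryptography package; every name in it is made up. The point is just that once a root is in your trust list, nothing distinguishes a cert its key holder minted for someone else’s domain from the real thing.

```python
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The "root CA" someone just talked you into trusting (hypothetical name).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Helpful Corp Root CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)  # self-signed, as roots are
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Nothing stops the key holder from minting a cert for any site at all.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.yourbank.example")]))
    .issuer_name(ca_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("www.yourbank.example")]), critical=False
    )
    # Chains to the root, so every device that installed the root accepts it.
    .sign(ca_key, hashes.SHA256())
)
```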

4 Likes

Relax, they’re an important company. What could possibly go wrong?

3 Likes

Last time I tried looking up some field manuals, the US Navy wanted to install its root certificate.

3 Likes

Yeah. DoD makes fairly heavy use of encryption (more than a great many people, since CACs are actually a thing) and doesn’t buy commercial certificates for the purpose (their position, not unreasonably, is more along the lines of trusting their own certs and being leery of other people’s). But commercial vendors are, understandably, generally loath to put US DoD root certs into the default trust lists, which aren’t necessarily a high-water mark for conservative risk assessment but tend not to get closer to “Not Even Pretending to Not Be State Agents” roots than perhaps a national standards body or two.

If you just want to go to a site using a DoD root for SSL, you can click through as though it were self-signed or similarly untrusted; if you want some hairier CAC and client-authentication arrangement to work, you’ll probably need to go with their configuration advice.
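
For the curious, here’s a rough Python sketch of what “click through” amounts to under the hood (the hostname is a placeholder): disabling verification and pulling the cert down anyway, which is roughly what the browser does for you when you accept the warning.

```python
import socket
import ssl

HOST = "pubs.navy.example"  # hypothetical host serving a DoD-rooted cert

# A context that skips verification: the programmatic equivalent of
# clicking through an "untrusted issuer" warning.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER cert, even though unverified
        print(ssl.DER_cert_to_PEM_cert(der))     # inspect who actually signed it
```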

2 Likes

Yeah, and a boss of mine used to say it should be painful to be stupid, but that doesn’t happen, either.

1 Like

My iPhone warns me:

Warning: enabling this certificate for websites will allow third parties to view any private data sent to websites.

It certainly could be scarier, but there is a fairly clear explanation at least.

It’s worth noting also, that if you work for a mid-size or larger company which allows “BYOD”, you probably have to install their root certificate in order to access internal sites, email, etc. from your phone. This is why I keep a separate phone for personal use (fortunately my employer provides me with a device rather than requiring me to give them a backdoor to the one I own…)

3 Likes

This is the argument (if not the real motive) for not letting people have root access on devices they own. To the extent that there’s any way at all for users to modify privileged parts of the system, there will be ways to co-opt that access, and that breaks the founding assumptions of any kind of security.

It’s easy to say that the Keychain app on my Mac should present a slideshow of war atrocities before it lets me install a new root CA. But that would also have to happen every time I install software that requires an admin password, since it amounts to the same thing. Either people become inured to the terrifying warning, or they are misled to think that the risk is not present when the warning isn’t terrifying.
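
For instance, on a Mac, here’s a sketch of what any installer can do once it has your admin password. The cert filename is hypothetical; `security add-trusted-cert` is the stock command-line route into the system trust store.

```python
import subprocess

# Illustrative only: with admin rights in hand, an installer can quietly
# trust its own root alongside whatever else it was doing.
subprocess.run(
    [
        "security", "add-trusted-cert",
        "-d",                                    # admin (system-wide) trust store
        "-r", "trustRoot",                       # mark it trusted as a root
        "-k", "/Library/Keychains/System.keychain",
        "helpful-corp-root.pem",                 # hypothetical cert file
    ],
    check=True,  # must be run with admin privileges to succeed
)
```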

It is possible to thread that needle, with sandboxing and granular permissions, but that looks likely to remain a pipe dream on platforms that weren’t designed for it from the ground up. For now, the pragmatic attitude is to assume that crypto guarantees are just not very strong on systems that don’t tightly restrict what users can do.

2 Likes

This reminds me of the early versions of UAC on Windows, where the popup would intentionally make the whole screen go black for a few moments before popping up, just to scare users into taking it seriously.

Look how well that worked… UAC broke compatibility with so many pre-Vista apps which just assumed unfettered access to the system that most people just turned it off entirely, as I recall.

(It’s possible UAC still does this? I haven’t used Windows in a while…)

UAC still does the screen darkening. It’s a deliberate attempt to make UAC prompts harder to spoof.

Any native program can do an excellent job of drawing a window that looks almost exactly like a UAC prompt, and web pop-ups aren’t quite as convincing but are even easier to generate, so they needed something that would be less trivial to fake for phishing admin credentials off users.

Dimming the entire screen and doing some nonstandard focus control, while not entirely ironclad, makes it a lot harder for basic spoofing attempts to get the effect right. You could probably still manage it by grabbing a screenshot, launching a fullscreen program as fast as possible, and using that to display a dimmed background and a ‘UAC prompt’; but just abusing basic dialog-box capabilities won’t do.

It has been toned down a bit over time, but the same intention remains: that UAC be distinct from the normal desktop (you’ll also notice that any remote-control tools not specifically designed to work around this tend to freak out when a UAC prompt shows up; the special mode affects tools for programmatic button manipulation and the like as well, also by design). And it isn’t really optional if you want to keep spoofing to a minimum.

As someone with a CS degree who works in IT, I find encryption as a whole entirely too convoluted and complicated, and it generally makes me want to tear my hair out. The fact that it is this difficult is why there are so many security issues.

If you do something and you don’t understand the consequences it’s your own fault.

That’s true (actually incomplete; it’s also your fault if you do understand the consequences, since doing rather than understanding is what has effects) in the weak sense that the universe appears to be causal; but it seems to miss the overwhelmingly salient point when we know that the likely degree of understanding (and how often the prompt to do something pops up) is very strongly influenced by the designers of a system.

Perhaps some moral abstraction is served by the belief that users get what’s coming to them; but that doesn’t change the fact that, human nature and cognitive capability being what they are (and, while not perfectly nailed down, not an unknowable mystery for a lot of useful cases), some systems generate a lot more instances of error ID10T / PEBKAC than others do, which starts to look like a reflection on the design of those systems.

It’s especially overt when the EULA lawyers or the ‘dark patterns’ UI crew show up and the design of the system starts to show signs of quite deliberately impeding user understanding of consequences. Can we cogently maintain that it’s the user’s fault when it is the active intent of the system designer to minimize their understanding of the consequences to the degree possible?

1 Like

I’ve seen “help documentation” at a Fortune 500 that instructs users on installing a MITM-attack root certificate to make “intranet errors” go away… for visiting contractors, vendors, et al. who don’t have a company-standard machine and don’t have the MITM-attack certs already loaded. 🙂

On the other hand, those MITM proxies are how they actually catch APTs on the network. Otherwise when the data is opaque it’s pretty much impossible to determine if a short port 443 request from a random machine to something in AWS hosting is malicious or not. You can’t fight sophisticated hackers with one arm tied behind your back. They already have several advantages you need to overcome.
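
Mechanically there isn’t much to those inspection proxies. Here’s a bare-bones Python sketch of the idea (ports, filenames, and the upstream host are placeholders; real products mint a per-host cert on the fly from the corporate root rather than using one fixed pair):

```python
import socket
import ssl
import threading

# Toward the client: a cert chaining to the corporate root that every
# managed device already trusts (placeholder filenames).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

# Toward the internet: ordinary, fully verified TLS.
client_ctx = ssl.create_default_context()
UPSTREAM = ("example.com", 443)  # placeholder; real proxies read the client's SNI

def pump(src, dst):
    try:
        while data := src.recv(4096):
            dst.sendall(data)  # plaintext passes through here: log it, scan it, store it
    except OSError:
        pass
    finally:
        dst.close()

listener = socket.create_server(("0.0.0.0", 8443))
while True:
    raw_client, _ = listener.accept()
    client = server_ctx.wrap_socket(raw_client, server_side=True)
    upstream = client_ctx.wrap_socket(
        socket.create_connection(UPSTREAM), server_hostname=UPSTREAM[0]
    )
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```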

I understand how requiring MITM certs for outside encrypted contact can prevent the APT from having secure, unobserved command and control… but telling visitors “sorry, no encrypted outbound line” vs. “install this cert to fix the problem” would be a bit more above-board and would provide the exact same prophylactic benefit. I can see the benefit in terms of watching the context of information flow to catch proprietary-information exfiltration, etc… but that’s another topic entirely.

Seeing (and storing) the traffic of visitors means seeing internal price information, possible competitor information, etc, in addition to health information and credit card information, without informing those visitors that you’re asking them to break that pretty little padlock in the URL bar. It also means the organization running the MITM attack should be complying with HIPAA, PCI, etc… for all that sensitive and not-theirs information they’ve decided to gather. I find it all quite interesting.

This is only a bit scarier:

[image: 2ndscary]

5 Likes

The entire CA security model as it’s currently implemented is broken. Yes, it should be hard/scary to install a new trusted root cert, but it doesn’t matter as long as it’s still easy for users to click through invalid certs.

1 Like

Sounds like someone just read Halting State…

This topic was automatically closed after 5 days. New replies are no longer allowed.