Given the amount of hyperbole and FUD in Corey’s posts I have a hard time taking much of what he says seriously.
So now I have work I never asked for. Great. Thanks, Google, for that. Pity this whole process isn’t simpler/accessible. (Hell, it’s barely understandable, and there are literally millions of other things I’d rather be doing with my time.)
If I must enable this, can I ask for a certificate that will last essentially forever, thus minimizing my involvement with this mess?
Thanks for the response. My ignorance is showing, and I appreciate the help.
Much of what I understand about certificates in embedded devices comes from the news surrounding littleblackbox some years ago:
http://www.devttys0.com/2010/12/breaking-ssl-on-embedded-devices/
Here, they implicate HTTPS as well as VPN and SSH traffic. Like a commenter there, and going by your explanation here, I thought that CA-signed keys didn’t work that way. I remember commentary at the time about companies choosing to have browsers show the secure lock icon over actually being secure, but I can’t find any of that now.
I’m just not confident that we won’t have to deal with browsers showing “This site is insecure!” every time we connect to something that’s not a traditional web server.
“Cory”
Only pedantically commented on after repetition.
My phone likes to autocorrect Cory to Corey for some reason.
The real irony is that this change will do nothing to prevent situations like the Iranian scenario mentioned. Good authorities will keep getting compromised, bad authorities will keep shitting out bad certs for cash, users will see nice locks even when they’re being monitored, and nothing will change. Because the model is flawed beyond salvage.
Google should spend some of those untaxed billions to develop a decent and practical multi-trust security model that does away entirely with the single-point-of-failure authority model. Everything else, at this point, is theater.
Your suggested fix is what?
Oh, that Google should spend a bunch of money and figure out how to fix it for everyone? Ok.
There is no fix. The current https model is broken, flawed, kaputt. Any minute spent trying to “fix it” rather than experimenting with alternatives is an utter waste of time and money for Google and everyone forced to jump through these hoops for ridiculously low gains.
We spent 20 years paying the price for Netscape’s rushed decisions. It’s too late to do anything about JS, but Https can and should be phased out for something better.
Any centrally managed public key infrastructure will be vulnerable either to technical or social engineering problems. I’m not sure what alternatives you would propose. There’s only so much you can do to balance security and usability when it comes to cryptography.
At the very least we should have something that requires multiple validation from separate parties, so that any attack to the certificate chain would require compromising multiple entities at the same time. It’s not perfect and it will have its problems, but it will inevitably be a step forward from the current “one certificate from total breakdown” state of things.
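The “multiple validation from separate parties” idea above can be sketched as a k-of-n agreement check. This is purely illustrative, not any real PKI API; the authority names and fingerprints are made up:

```python
# Hypothetical sketch of multi-party certificate validation: a certificate
# is accepted only if at least `threshold` independent authorities vouch
# for the same public-key fingerprint. Names here are illustrative.

def accept_certificate(fingerprint, attestations, threshold=2):
    """attestations maps authority name -> the fingerprint it vouches for."""
    # Count only the authorities that attest to this exact fingerprint.
    agreeing = [a for a, fp in attestations.items() if fp == fingerprint]
    return len(agreeing) >= threshold

# One compromised authority is no longer enough to forge trust:
attestations = {"AuthorityA": "ab:cd", "AuthorityB": "ab:cd", "AuthorityC": "ff:00"}
print(accept_certificate("ab:cd", attestations))  # True: two of three agree
print(accept_certificate("ff:00", attestations))  # False: only one vouches
```

Under this scheme, an attacker would need to compromise `threshold` separate entities simultaneously rather than any single one.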
Legacy systems mean that saying “it is broken” doesn’t really solve the problem: it doesn’t tell us what all the major vendors and stakeholders will actually adopt, or how we evolve to get there. Your criticism is noted, but it ignores the reality of software and companies.
I’ve seen many proposed solutions. Some would even work, but none of them says how we get there in a realistic fashion, acknowledging the realities of the existing users and situation.
Google could create your perfect system but if Apple and Microsoft ignore it, to use examples, it won’t matter. (Or Amazon or Mozilla…)
The distinction you make is true (HTTP to HTTPS redirect can only be broken on compromised networks), but I hold it to be a distinction without a difference.
That is:
- You actually have to work very hard to construct a scenario where the attacker can eavesdrop, but not intercept, web traffic. It’s not 5% of realistic attacker scenarios where you have to worry about SSLStrip and 95% where you don’t - it’s more like the reverse, 95% where you do and 5% where you don’t.
- If you assume a non-compromised network, there’s no point encrypting the traffic anyway. The whole reason to implement HTTPS is that you assume the network is compromised. If you assume it’s not, then you don’t need to encrypt.
HTTPS is ONLY for the benefit of clients who are accessing your site via compromised network segments (i.e. probably way more of them than you think). So if you implement HTTPS in such a way that it only works if they aren’t accessing it through a compromised network segment, then you’ve wasted your time.
I think I could clarify, though: an HTTP to HTTPS redirect in the absence of HSTS is, security-wise, barely different from not bothering with HTTPS at all.
Thanks for the reply.
Wait, what? Sniffing packets to steal unencrypted passwords and financial data is a much, much easier thing to do than compromising DNS or intercepting network traffic to perform a MITM attack. HTTPS keeps people on the same network, or at any hop between you and the server, from “eavesdropping”, which doesn’t take nearly the same level of compromise or sophistication of attack.
An HTTP to HTTPS redirect has nothing to do with how HTTPS is implemented. The HTTPS is implemented exactly the same way as it always is. The redirect is a completely separate thing: simply a server rule that answers any request coming over HTTP with a redirect to try HTTPS instead. It is what Google does. What Facebook does. What Amazon does. Pretty much every single major site on the web will send you a 301 redirect to the HTTPS version of a page if you request the HTTP version. Are you saying EVERYONE is doing something wrong?
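The redirect rule described above is just a mapping from an http:// URL to its https:// counterpart, returned as the Location of a 301 response. A minimal sketch (the example host and path are illustrative):

```python
# Sketch of the HTTP -> HTTPS 301 redirect rule: any request arriving over
# plain HTTP is answered with a Location header pointing at the same host,
# path, and query over HTTPS. Example URLs are illustrative.

from urllib.parse import urlsplit, urlunsplit

def https_redirect_location(url):
    """Return the Location header value for a 301 upgrade redirect."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return None  # already HTTPS (or not a web URL): no redirect needed
    return urlunsplit(("https", parts.netloc, parts.path, parts.query, parts.fragment))

print(https_redirect_location("http://example.com/login?next=/home"))
# https://example.com/login?next=/home
```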
I agree that HSTS is good, but according to the HSTS spec, HSTS headers are ignored if sent over HTTP and are only valid if sent over HTTPS, and they do not add any additional protection if the first request is over HTTP. I don’t get an HSTS header from any of the major sites until after the HTTP to HTTPS redirect, not before; I’ve just verified the headers.
A server implements an HSTS policy by supplying a header over an HTTPS connection (HSTS headers over HTTP are ignored). For example, a server could send a header such that future requests to the domain for the next year (max-age is specified in seconds; 31,536,000 is equal to one non-leap year) use only HTTPS. The HSTS header can be stripped by the attacker if this is the user’s first visit. The initial request remains unprotected from active attacks if it uses an insecure protocol such as plain HTTP.
So while HSTS is agreeably good, it does absolutely nothing and adds zero extra security if an HTTPS connection to the server has not been previously established. Doing HTTP to HTTPS redirects is the recommended best practice, and sending HSTS after HTTPS adds additional security, but you are incorrect about it adding that security before instead of after. It is also incorrect that a MITM attack is not more difficult to pull off than simple packet sniffing/inspection/snooping; the latter is much more common than the former.
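The quoted spec text above pins down the header format: max-age is a count of seconds, and 31,536,000 seconds is exactly one non-leap year. A small sketch of building that header value (parameter names are illustrative, not from any particular server framework):

```python
# Sketch of the Strict-Transport-Security header from the quoted spec text.
# max-age is in seconds; 31,536,000 seconds = 365 days (one non-leap year).

ONE_YEAR = 365 * 24 * 60 * 60
assert ONE_YEAR == 31_536_000

def hsts_header(max_age=ONE_YEAR, include_subdomains=True):
    """Build an HSTS header value (only honored when sent over HTTPS)."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value

print(hsts_header())  # max-age=31536000; includeSubDomains
```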
Cheers.
How? Try to work out the network layout where you are situated to sniff packets, yet somehow unable to insert bogus DHCP, DNS, HTTP, or ICMP redirect packets.
It’s actually quite difficult.
And yes, I get that HSTS is ignored over plain HTTP. The point is that if you successfully do a redirect to HTTPS (which isn’t guaranteed, but is likely on a per-connection basis), you achieve a connection over which HSTS can be established. Then you’ve got the secure channel locked down. The attacker gets only one chance to break your secure channel - the client just has to connect the first time from a non-compromised network, and then you’re good.
If you don’t do HSTS on that first HTTPS connection, then the attacker can break any secure channel on any future occasion, because you failed to take your chance to secure communications. The client has to connect from a non-compromised network every single time forever, which is not going to happen.
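The trust-on-first-use argument above can be sketched as the small piece of state a browser keeps per host (a simplified model; the class and hostnames are illustrative):

```python
# Simplified model of the argument above: a client remembers which hosts
# sent HSTS over HTTPS, so the attacker's only window to strip HTTPS is
# before the first successful HTTPS visit. Hostnames are illustrative.

class Client:
    def __init__(self):
        self.hsts_hosts = set()  # hosts pinned to HTTPS

    def visit(self, host, server_sends_hsts=True):
        """Return the scheme of the *first* request made for this visit."""
        if host in self.hsts_hosts:
            first_scheme = "https"  # pinned: no plain-HTTP attack window
        else:
            first_scheme = "http"   # redirect needed: attacker's one chance
        # Record the policy only on a successful HTTPS response
        # (HSTS headers over plain HTTP are ignored, per the spec).
        if server_sends_hsts:
            self.hsts_hosts.add(host)
        return first_scheme

c = Client()
print(c.visit("example.com"))  # http  (first visit: the attacker's one chance)
print(c.visit("example.com"))  # https (pinned: future visits start secure)
```

Without the HSTS header, every visit starts over plain HTTP, so the attacker gets a fresh chance each time rather than just once.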
Have you used a packet sniffer before? This is EXACTLY how they work. Listening to packets is quite easy and can be done on the source or destination subnet or at any of the hops; it simply requires the sniffer to run passively on a network carrying the traffic it is listening to. Intercepting and altering packets, by contrast, requires a much higher level of privileges and either redirecting the traffic or breaking the code receiving the packets, and it is much easier to detect. Injecting packets so that the injected packet arrives before the original doesn’t block the original packets, and it requires a flood attack or a denial of service attack on the original sender (which most devices and hops are set up to detect and block), or a completely compromised network stack. They do not require anywhere near the same level of privileges or skill to pull off.
The first is very easy to achieve, the second much harder, so it doesn’t take much imagination to see why packet sniffing occurs so much more frequently than MITM attacks. This is pretty standard industry knowledge.
Yep, which was my original point. Best practice is to do the redirects and then HSTS. Glad the conversation has come full circle.
Well, no: they can’t break an HTTPS connection in the future just because HSTS wasn’t done. Spoofing a CA for a valid HTTPS/SSL cert chain is incredibly difficult to pull off, so even if the DNS is spoofed, an HTTPS connection won’t complete without a valid cert. Future HTTP requests are still vulnerable, as I pointed out, and that is the reason for HSTS, not future HTTPS connections.
These distinctions may seem minor on the surface, but they are crucial. Cheers.
This topic was automatically closed after 5 days. New replies are no longer allowed.