How does this change the threat model compared to other CAs that authenticate via a handful of administrative mail addresses?
No, no it isn’t. A redirect initiates a second secure request via a 301, which is just as secure as if someone had gone to https to start with. All servers should redirect to https if they have it, ensuring that bookmarks and copied links are secure going forward.
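That server-side behavior is only a few lines of logic. A minimal sketch (hostnames and paths are illustrative, not any particular server's implementation) of the 301 response a plain-HTTP listener would send:

```python
def https_redirect(host: str, path: str) -> tuple[int, dict]:
    """Build the 301 response a plain-HTTP server sends so the
    browser retries the same resource over HTTPS."""
    return 301, {"Location": f"https://{host}{path}"}

# e.g. a request for http://example.com/blog becomes:
status, headers = https_redirect("example.com", "/blog")
print(status, headers["Location"])  # 301 https://example.com/blog
```

The browser follows the `Location` header with a fresh HTTPS request, which is why copied links and bookmarks end up on the secure URL going forward.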
This is a fair point, but if it redirects to HTTPS, they cannot fake the new HTTPS connection to the original domain unless they redirect to a similar yet compromised domain name, as in a homograph attack. This is a crucial distinction, and one important to note if your point is to carry any weight. Right? Or am I missing something?
It makes an attack after an HTTPS redirect very difficult and in most cases near impossible, with the exception of a MITM attack on the initial HTTP request shunting users off to a secondary domain. This is not a valid argument against servers that have HTTPS doing HTTP-to-HTTPS redirects; if anything it is a case for them, as redirects mitigate future MITM attacks by pushing all new links and bookmarks to their HTTPS counterparts.
I agree browsers should always try https first.
Let’s Encrypt isn’t using self-signed or bogus certs; they are part of the trusted root certificate chain, same as any legitimate cert provider.
Corey was making the point that Let’s Encrypt is great, making it easy for anybody to get an SSL cert (which is wonderful, don’t get me wrong), while decrying sketchy root CAs issuing MITM certificates. I don’t see how they differ.
So far Let’s Encrypt has a clean record and has shown no fishy behaviour?
Trusted certificate authorities don’t issue MITM certs unless your DNS records are set to allow it, like when you use CDNs. With the exception of a few oopsies like, ahem, Symantec’s mistake, which caused a huge stink and almost resulted in them losing their root CA cert, this is a secure system.
Let’s Encrypt doesn’t issue these types of MITM certificates at all.
So long as your DNS records aren’t compromised, a cert that has the proper trust chain up to one of the CA root certs is always valid and secure.
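In code terms, that rule is what a stock TLS client enforces by default. A quick Python sketch of the two checks (using the standard library's `ssl` module):

```python
import ssl

# A default client context encodes the rule above: the server's cert
# must chain up to a trusted root CA, and its name must match the
# host you asked for. Both checks are on by default.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must validate
print(ctx.check_hostname)                    # True: name must match
```

If either check fails during the handshake, the connection is refused before any application data is exchanged.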
It is complicated, and if we techies struggle with the nuance of it, it is a safe bet that end users don’t have a clue, which is why, CDNs be damned, all MITM certs should go away imho, and browsers should check and verify cert chains and alert if a MITM cert is being used.
Cheerios!
Yes, it definitely is complicated and nuanced. I think my debate here is based on web security being more than simply “lock = safe”.
The needs of the many outweigh the needs of the few, or the one.
Offering free certs as standard to all their blogs, like most alternatives do now, is the needs of the many.
Charging for these certs while penalizing for not having them only serves their needs, the needs of the one.
A better guess is just that their browser team is ahead of their blog-platform team and that the Blogger team will catch up eventually, hopefully, or see a mass exodus.
I agree with @GeekMan that it is pretty lame and could be easily construed as a money grab, and I recommend any Blogger customer open a support ticket to light a fire under Blogger’s development team, since this is an issue their own company is creating.
No really, it is.
Axiom: There is an attacker who is able to alter all plain HTTP requests and responses, and you see only what they want you to see. If there weren’t such an attacker, there would be no point encrypting.
You send a plain HTTP request to the server. The server never sees that request, the attacker sees it, and makes an identical one to the real server. The real server sends back a 301 to the attacker. The attacker doesn’t send that response to the client, instead it re-sends the request via HTTPS, receives the response, and sends that response back to the client via plain HTTP in response to the client’s original HTTP request.
As far as the client is concerned, everything is hunky dory and there has never been any HTTPS in this conversation at all.
That’s how tools like SSLStrip work.
Again, if the server sets HSTS, then the client only does a plain HTTP request the first time, and the MITM attacker has to catch it then. But if it does a redirect without setting HSTS, then every time the user types the address in their address bar, it gives the attacker another pre-HTTPS request to attack.
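The rewrite step described above can be sketched in a few lines of Python. This is an illustration of the core idea only, not a working attack: an SSLStrip-style proxy rewrites the response body it forwards so the victim's browser never sees an HTTPS link.

```python
import re

def downgrade_links(html: str) -> str:
    """Rewrite every https:// reference to http:// so the victim's
    browser keeps talking plain HTTP to the proxy, while the proxy
    talks HTTPS to the real server. (Illustration only.)"""
    return re.sub(r"https://", "http://", html)

page = '<a href="https://example.com/login">Log in</a>'
print(downgrade_links(page))
# <a href="http://example.com/login">Log in</a>
```

HSTS defeats this for repeat visitors because the browser refuses to issue the plain-HTTP request in the first place, leaving the proxy nothing to rewrite.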
The real server sends back a 301 to the attacker.
The point I made in the next paragraph is that they cannot send the attack back to the client over HTTPS on the valid domain; the only vector after a redirect is a homograph attack. Sure, they can spoof and compromise DNS, but an SSL cert from a spoofed server won’t have a valid certificate chain. So all they could do is block the redirect. If a redirect happens, the attack is mitigated unless a homograph attack is used. Again, this is what I viewed to be the crucial flaw in your reasoning. Is it incorrect?
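To make the homograph point concrete, here's a small Python illustration (the spoofed name is hypothetical): a lookalike domain containing a Cyrillic letter is a completely different string, and DNS actually resolves its punycode form, so a valid cert for it proves nothing about the real site.

```python
# "аpple.com" below starts with Cyrillic U+0430, which renders like
# the Latin "a". It is a different domain entirely, so a perfectly
# valid certificate for it says nothing about the real apple.com.
spoofed = "\u0430pple.com"
real = "apple.com"
print(spoofed == real)          # False
print(spoofed.encode("idna"))   # the xn--... punycode form DNS sees
```

Modern browsers mitigate this by displaying the punycode form when a label mixes scripts, but the certificate machinery itself cannot tell the two domains apart.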
The server never sees that request
Yes, I get that; my response accounts for that quite clearly.
As far as the client is concerned, everything is hunky dory and there has never been any HTTPS in this conversation at all.
In that scenario, the server-based HTTPS redirect adds ZERO additional risk or attack vector; it is still a no-brainer. Saying it doesn’t prevent all attacks isn’t a case against it being vastly better than the alternative of not doing server HTTP-to-HTTPS redirects, or a claim that server redirects aren’t more secure. Blocked redirects != redirects not being more secure; rather, they might not happen, hence the “crucial distinction”. Right? If not, then why?
That’s how tools like SSLStrip work.
I’m familiar with attacks like SSLStrip. If your network is totally compromised, as with an SSLStrip attack, then there isn’t much you can do without disabling HTTP in the browser altogether, which would break the majority of the internet. If your network is that compromised, they can block the original HTTPS request altogether, forcing the user to try HTTP, and then MITM HTTP <=> HTTPS. This is not a valid argument against HTTP-to-HTTPS redirects, right? If not, then why?
Your initial assertion was that HTTP-to-HTTPS redirects were not more secure than the original HTTP; your second assertion seems to be that HTTP-to-HTTPS redirects can be blocked on compromised networks, which I agree with, but that isn’t the same point, hence the “crucial distinction”.
Could you clarify where you view my assertion above to be incorrect? Thanks.
My iPhone keeps telling me that the Wi-Fi points need to increase security, including the Apple Store.
Ah, the advertising networks. Yep, that would certainly explain it. They’d better catch up soon!
Meh. I block all ads anyway.
Offering free certs as standard to all their blogs, like most alternatives do now, is the needs of the many.
I’m pretty sure Google doesn’t care about Blogger anymore and only leaves it standing because they haven’t gotten around to taking it down. It’s an old legacy thing.
@enso, do ad blockers such as uBlock Origin blocking insecure ads trigger a “green lock” so long as all the other page resources are over HTTPS? Or does the browser lock show as broken because those references exist, even though they are blocked by a plugin/addon? Is the plugin removing those DOM nodes before or after the all-resources-HTTPS check?
(is my question clearly enough worded?)
(is my question clearly enough worded?)
No. (Seriously)
If I understand your, ahem, thrust…you’re asking about the lock icon? That isn’t affected by extensions, at least on Firefox, AFAIK. That’s core “chrome” (browser UX).
I don’t know the details of when uBlock Origin does the things it does. I’ve never looked at its code or walked through it while it ran. People have examined it closely and said “this is pretty good”, and I trust them on that.
Update: Reading it a fourth time, I understand what you’re asking and “I don’t know.” If the browser ever notices mixed HTTPS and HTTP traffic, it will flag that. I don’t know if the extensions see it before that, though I doubt it, since the browser as a whole runs the networking code and hits it first (for rendering, etc.) before passing it on to content scripts playing with the DOM.
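For what it's worth, the mixed-content check can be sketched roughly like this. It's a toy scan, not how any real browser is implemented: on an HTTPS page, any subresource referenced over plain `http://` is what breaks the lock.

```python
from html.parser import HTMLParser

class MixedContentScan(HTMLParser):
    """Toy version of the browser check: collect subresource URLs
    that are plain http:// on a supposedly HTTPS page."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "src" and value and value.startswith("http://"):
                self.insecure.append(value)

scanner = MixedContentScan()
scanner.feed('<img src="http://ads.example/banner.png">'
             '<script src="https://cdn.example/app.js"></script>')
print(scanner.insecure)  # ['http://ads.example/banner.png']
```

Real browsers do this in the network layer as each fetch happens, which is consistent with the guess that extensions only see the DOM afterwards.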
Update: Reading it a fourth time, I understand what you’re asking and “I don’t know.”
Yeah, I didn’t know either, and I realized my question was a hot mess as I was writing it… lol.
thanks for reading past my “wording good not so”!
Your reply makes sense and is kinda what I figured without running some test cases of my own to look deeper.
I’m skeptical.
Why does, say, my site need to display this information at all? I am not a bank or a public forum.
So, what is the solution for devices with embedded web pages?
Do you mean devices like routers and networked printers, etc.? I access mine via IP address on the local net, so certificates would be irrelevant, because certificates are tied to a public domain name, not a local system, and very rarely to a public IP address (never a local one). In other words, they’re not tied to the actual destination at all; they only give that illusion. That’s just one of the problems with the TLS system.
Maybe for WAN access you could get an IP cert for your router if it has a dedicated public IP (but it probably doesn’t, and/or doesn’t support that, and if it did the configuration for it would most likely be obscure and unusable).
https://localhost is also messy. Because it’s not a domain name, you have to use a self-signed certificate, which browsers think are insecure. Another problem with the TLS system is that it assumes that you trust some CA company you’ve never heard of or done business with more than you trust yourself or people that you know and have done business with. It would seem more reasonable to trust a certificate signed by the owner of a site than a third party, but the point of the CA is to verify that the person generating the certificate actually is the owner of the site. So the 3rd-party CAs are needed. But that breaks things when they can’t be or aren’t used. There is a workaround - you can add the signer of a self-signed certificate to your list of trusted CAs. But no normal user knows how to do it and the UI is obscure and unusable and varies by browser or application (if the option is even present).
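Both the “fixed list of trusted CAs” and the workaround are visible from code. A small Python sketch, where the `.pem` path is hypothetical:

```python
import ssl

# The trust anchors are just a list of root certificates your system
# ships with; a chain is valid only if it ends in one of them.
ctx = ssl.create_default_context()
ctx.load_default_certs()
print(len(ctx.get_ca_certs()))  # however many roots this system trusts

# The workaround described above: add your own signer to that list.
# "my-own-ca.pem" is a hypothetical path to a cert you generated.
# ctx.load_verify_locations("my-own-ca.pem")
```

This is exactly the obscure step normal users never take: the per-application trust store has to be extended by hand, and every browser and OS does it differently.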
A third problem is that the ideal of TLS is secure end-to-end communication between client and server. That’s mutually incompatible with the idea of sticking some other system (CDN / proxy / cache / load-balancer) in between the client and server and having the client communicate with that instead of the server. But people want those systems. And they want their security certificates and green lock icons too. Hence the complaints elsewhere in the threads about CloudFlare and other CDNs.
My site is run with Google Apps on Blogger, which doesn’t appear to have Let’s Encrypt support. So, the very company forcing me to move to HTTPS doesn’t provide a way to let me do it for free.
If you’re not using a custom domain (you’re on their domain), you can enable HTTPS. If you are using a custom domain, the only suggestion I’ve seen so far is to use CloudFlare’s Flexible TLS (which as you can tell by this thread, has its own problems).
The point I made in the next paragraph is that they cannot send the attack back to the client over HTTPS on the valid domain; the only vector after a redirect is a homograph attack.
Not to speak for someone else, but as I read it, the point was that if you allow initial HTTP connections and an attacker intervenes at that point, then the client may never actually get redirected and may always be stuck on HTTP, never even knowing that they should have been redirected to HTTPS. The attacker, meanwhile, could be requesting via HTTPS such that you don’t even realize the actual client is still on HTTP. It’s possible, whether common or not; therefore defaulting to HTTPS would be better than defaulting to HTTP and issuing redirects.
Why does, say, my site need to display this information at all? I am not a bank or a public forum.
Herd immunity. If secure is the standard that everyone always uses by default, we’re all safer.
Not to speak for someone else, but as I read it, the point was that if you allow initial HTTP connections and an attacker intervenes at that point, then the client may never actually get redirected and may always be stuck on HTTP, never even knowing that they should have been redirected to HTTPS. The attacker, meanwhile, could be requesting via HTTPS such that you don’t even realize the actual client is still on HTTP. It’s possible, whether common or not; therefore defaulting to HTTPS would be better than defaulting to HTTP and issuing redirects.
Thanks for replying even though you weren’t the original commenter. Yes, if your network traffic is hacked/owned to the level that all your HTTP is being intercepted, the redirect to HTTPS could be blocked, but you’d never think you were on HTTPS. A connection redirected from HTTP to HTTPS would be secure, because you can’t spoof a valid certificate chain to the fixed list of trusted root CAs. A connection prevented from ever redirecting would not be secure, but that is a different thing than a redirected connection not being secure; that was the crucial distinction I was trying to point out.
CloudFlare’s Flexible TLS (which as you can tell by this thread, has its own problems).
yep.
A third problem is that the ideal of TLS is secure end-to-end communication between client and server. That’s mutually incompatible with the idea of sticking some other system (CDN / proxy / cache / load-balancer) in between the client and server and having the client communicate with that instead of the server. But people want those systems. And they want their security certificates and green lock icons too. Hence the complaints elsewhere in the threads about CloudFlare and other CDNs.
Good explanation of the issue.
Yes. By an initial and then continuous domain validation mechanism.
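For anyone curious what that validation looks like mechanically: in Let's Encrypt's ACME protocol (RFC 8555), the HTTP-01 challenge has you serve a CA-supplied token at a well-known URL to prove you control the domain. A simplified sketch, with made-up token and thumbprint values:

```python
def key_authorization(token: str, jwk_thumbprint: str) -> str:
    """The file content the CA expects to fetch: the challenge token
    joined to the account key's thumbprint (per RFC 8555)."""
    return f"{token}.{jwk_thumbprint}"

token = "example-token-123"  # made-up; the CA issues the real one
path = f"/.well-known/acme-challenge/{token}"
print(path)  # /.well-known/acme-challenge/example-token-123
```

The CA fetches that path on your domain; only someone who controls the server (or its DNS, for the DNS-01 variant) can make the expected content appear, and renewal repeats the proof.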
Do you honestly think that if Let’s Encrypt was as lame as you imply, then organizations like the EFF and Mozilla would be backing them?