In the environments where I have worked, a great deal of effort went into ensuring that data for software testing never resembles the real operational environment. For this reason we would never have created test Google certs. We would have used example.com or similar.
This is good and bad.
Certificates like this can be used to MITM users and spy on them.
However, they can also be used by users themselves to MITM their own machine’s communication for the purpose of auditing or reverse engineering. It would be nice if we had a way to keep that possibility.
For now, in such cases we’ll have to block the browser communication with the CT servers and sanitize its logs.
Even if you’d need to monitor the communication between a software component (or, worse, a closed-architecture hardware black box) and a Google API?
If you are testing properly, you would have a simulator for that.
In an ideal world, all APIs can be simulated and all vendors are cooperative towards testers, security auditors, and reverse engineers alike.
Alas, the world we live in is less than ideal.
For purposes of this question you may safely assume I am your tech-illiterate great-grandfather who doesn’t understand dimmer switches, much less the internets.
Is it really impossible (or even that unlikely) that this was an honest mistake by an otherwise competent company? I’m not seeing how Symantec benefits here: they look stupid, and apparently they can’t do this without getting caught anyway.
I can’t tell from this whether the problem is shitty certificate issuers, or a shitty system. I can see the inherent problems with a system where your browser automatically trusts hundreds of issuers (any one of them might be corrupt!), but I can also imagine the problems people would have with certificates being issued by just one or a few issuers (we’re giving Google total control over the internet!), and I don’t really know how you’d fix either case.
I’d actually have felt better if it said “…this failure to follow policies so enraged us that we became a wrathful, unreasoning berserker and fired them immediately. Did they try to explain themselves? We don’t know. We couldn’t hear them over our bestial screams of rage. When we came to our senses half of our employees were gone and there was blood on our shirts that we’re pretty sure didn’t come from us.”
The details are a bit sparse, but the way I read it, it wasn’t necessarily meant to do any harm, but it was an egregious breach of proper procedures. Imagine a bank employee borrowing money from Google’s account to try something and then putting it back. Some things are just not done in industries built on reputation and doubly not to major players, even if there is no specific damage.
A google.com certificate issued by a major CA is such a digital nuclear weapon that allowing that to happen doesn’t make Symantec look good at all.
It is totally believable to me that this was an “honest” mistake, and Symantec’s description of their response sounds correct. CAs can and do have stringent access policies governing the use of their signing certificates, but eventually some employees have to have access to those systems for various reasons for the business to actually function, and people make mistakes and do stupid things.
Both. Symantec definitely screwed up here, but the system is pretty bad and prone to errors like this. Unfortunately it is very hard to fix in an acceptable way. You can mostly trust the big commercial CAs not to do something like this intentionally, since their business depends on their reputation. You need to worry more about smaller CAs that might take a big payout for issuing some fraudulent certificates even if it puts them out of business, or government-controlled CAs that have limited accountability should they misuse their authority. Unfortunately, the mechanisms in place are not much more sophisticated than “trust” vs. “don’t trust”, although they are getting better.
By the way, the generic correct way to “test” things like this is to make a fake root CA and install it as trusted on your test network, and to issue certificates only for fake domains (example.com) or domains you control.
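A minimal sketch of that workflow with the OpenSSL CLI (all file names, CN values, and validity periods here are arbitrary examples, not anything Symantec used):

```shell
# Create a private test root CA, self-signed, valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout testca.key -out testca.crt \
  -subj "/CN=My Test Root CA" -days 30

# Generate a key and a signing request for a domain you control
openssl req -newkey rsa:2048 -nodes \
  -keyout site.key -out site.csr -subj "/CN=example.com"

# Have the test CA issue the leaf cert, valid for only one day
openssl x509 -req -in site.csr -CA testca.crt -CAkey testca.key \
  -CAcreateserial -out site.crt -days 1

# Verify the chain against the test root only; prints "site.crt: OK"
openssl verify -CAfile testca.crt site.crt
```

Only machines on the test network that explicitly import testca.crt into their trust store will accept these certs; everyone else rejects them, which is exactly the containment you want.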
Great day for corporate responsibility: first VW/Audi demonstrates why open source vehicle software should be standard, and now Symantec with google certs.
It is a bit odd to call them “outstanding employees” and then say they didn’t follow policy so we fired them. Obviously mistakes can happen, and the thought of rehabilitation short of firing would seem appropriate if it were an honest mistake. On the other hand, if you know a policy and don’t follow it, who knows? I suppose from the corporate side it depends what level of risk it puts the company at. Maybe their work agreement outlines significant penalties for this sort of thing. I know that in the USAF, for instance, there are relatively “minor” offenses (e.g., pee positive for marijuana once) that can lead to what is essentially an automatic discharge. So long as you know going in what the rules are, it is hard to complain, and we don’t even know all the facts here. Clearly Symantec felt these actions put them at some level of risk, either due to the event itself (e.g., damaged reputation) or the potential for future similar mistakes, and they acted accordingly. Hard to know how bad to feel for the employees without knowing just what policy they didn’t follow properly.
“We fired them” is a good public relations phrase. We don’t know if anybody was actually fired, or just instructed to filter out the Chrome cert log communication from the given lab.
I for one wouldn’t be eager to fire a good engineer. I would, however, be willing to lie that I had fired him, if there were no way for the media to validate the claim.
It is a shitty system. Too many certificate issuers and not enough diligence.
But domains are much, much worse. If there were a decent centralised system for issuing domains, requiring proper validation, and if they weren’t just seen as an infinitely extensible revenue source, spam and malware would be very, very much harder to do.
With the ridiculous extension to top level domains, you can’t even just block .cn and .ru any more. Greed is winning out over security and the convenience of the public, every time.
Especially since we don’t know if Symantec was actually doing this for an
Which most likely did not. I think they experimented with MITM in their lab, or were analyzing some malware that communicated with some API, and a MITM using a proxy setup they already had in place was the easiest way. With deadlines on one side and family commitments (or just a need to sleep) on the other, the easy ways get tempting. So, let’s make a time-limited cert that expires the same day; nobody will care, just like they didn’t care all the times they did it before.
I know that under the same conditions I’d do it that way. If a certificate is faked, and it never leaves the lab, does it make a sound?
depends, is vanishing trust audible?
Did the thing get in the wild?
We all know that certs can be issued by whoever has the root CA key. This is unsurprising. If it escaped the lab, I’d see it as a problem. If it was issued for a longer period, so it had a chance to migrate out of the lab through neglect and end up in the wild, I’d see it as a problem. Otherwise, nothing to see here; move on.
This one? No. But Symantec (and I’m not blaming the engineer here) nicely showcased the trust issue in the TLS cert industry.
Which has been there from the very beginning. Important to be aware of, but nothing new to see here.
What is of MAJOR importance is the demo of the cert checking tech in Chrome.
We need both a way to fake the certs (for auditing and other uses, when we are in control of the system) and to be aware when that is happening (for when we are NOT in control).