I know nothing of the specifics, but “harmless” rule-bending usually occurs when there are serious deadlines and no way of meeting them without breaking the rules. Things like password sharing, direct transfer of files, etc., happen because it would take weeks to do things “correctly,” and nobody, not management, not the customer, wants to hear that the project will be delayed for weeks over simple security concerns.
Employees realize this, and will usually succumb to pressure to take security risks in order to keep the project moving.
Which brings me back to a point that I harp on: almost no one is willing to accept that proper security has a very high cost in terms of lost productivity. Customers will drop suppliers who practice real security, because the cost is outlandish compared to competitors who claim the same but take small risks to vastly reduce costs. Suppliers will terminate teams who practice real security, because other teams take slightly insecure shortcuts in return for much higher productivity. And teams will shun employees who demand that proper security protocols be followed instead of taking small risks, because all team members suffer for the seriously delayed project.
Given that reality, there’s a lot of incentive to choose a 0.1% chance of catastrophic failure over a 100% chance of deadline failure.
And given that everybody in the chain claims 100% compliance with strict security protocols, I don’t know how you fix the situation. I suspect we’re here because this is the natural equilibrium.
(And note, I am talking about small risks taken in furtherance of project delivery, not deliberate malfeasance.)
So I used to work at symc. I also used to be a key officer and key custodian at a huuuge financial company (those are neat titles for the boring job of creating, maintaining, and verifying encryption material).
This is, as symc says, a monumental brain fart. And you know how they know? At no time are there ever fewer than three people present, all digital activity is logged and stored, there is always a camera on you, and there is always a person literally writing down every detail of what happens in a physical ledger.
For EV certs there also has to be either a public Dun &amp; Bradstreet record that you can prove you checked, or an attestation letter from a lawyer. You screw up any one of several hundred steps and it will be caught.
It’s like when I took down an entire production, customer-facing backend because of a lapse in double-checking. (But this is waaaaaay worse.)
Yeah @codinghorror and whoever else you think should see this: I’ve noticed HTTPS seems to work in Chrome on Win7, but not in Chrome on Android. Wish it did, though. I can get you the Android version if needed.
I tried out AVG a few weeks ago and was no longer able to access my bank’s website. It turns out that the software installs its own root certificate, since apparently it can’t scan HTTPS traffic without doing so. It made me wonder how Symantec (or whoever) got around the issue, so before I read the article I thought perhaps it was some discovery that they were doing the same thing.
Couldn’t Symantec have created a bogus internal root certificate that would have allowed them to mimic Google’s certificates, without using a root certificate that could compromise external clients?
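For what it’s worth, that’s exactly how private test roots are supposed to work: a throwaway root can sign a certificate for google.com, and only clients explicitly configured to trust that root will accept it — everyone else rejects the chain. A rough sketch with openssl (file names and the one-day lifetime are arbitrary choices for illustration):

```shell
# 1. Create a throwaway private root CA — never installed in any
#    public trust store.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=Internal Test Root" -days 1

# 2. Make a key and signing request for the name we want to mimic.
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=google.com"

# 3. Sign the leaf with the private root.
openssl x509 -req -in leaf.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out leaf.crt -days 1

# 4. Only a client told to trust root.crt will verify leaf.crt;
#    any client with a normal trust store rejects it.
openssl verify -CAfile root.crt leaf.crt
```

The danger in the incident under discussion was the opposite arrangement: test certificates chaining to a publicly trusted root, which every browser on the planet would have accepted.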