Originally published at: Meta fined 91m euros for storing passwords in plaintext - Boing Boing
…
At first I was thinking 91m euros for a brief, self-reported accident with no breaches seems like a lot, but that’s a pretty sloppy mistake involving a lot of users, and they can’t be entirely sure there was no (e.g. internal) access to the data. Plus, Facebook clearly do a lot of egregious shit, and in the grand scheme of 2.5 billion euros of fines, that’s a rounding error.
This reminds me of a time when I was working with a company whose DP manager was an ex-army non-commissioned officer. A Corporal, I believe. He had completely absorbed the idea that everything needed to be “tidy”. It puzzled us why the company’s printers kept going wrong until we found that he was resetting the connections to be full duplex in every case. “A foolish consistency is the hobgoblin of little minds,” as Emerson once said. And then users began to be unable to log in. Okay, log in as root (superuser). Oops, we can’t! What’s going on? The idiot had sorted the password file. On original Unix, this is really NOT a good idea. It’s people like him who would create a password file in plain text. How else are you supposed to read it?
On average, the only safe assumption about someone storing plaintext passwords is that they do not have adequate logging to provide anything approaching evidence of absence.
It’s possible that, in this case, the affected system was some weird acquisition-induced lacuna in a wider context of humorless rigor, in which case you might see aggressive logging and privileged access management sitting alongside storage of passwords in a format that has been recognized as a bad idea since forever (concerns about the feasibility of attacks on unsalted hashed passwords at least had to migrate from theoretical to “a matter of minutes”); but I wouldn’t bet on it.
It was described as a “bug,” so taken at face value there was supposed to be encryption, but a code error bypassed that step and no one noticed. That’s still sloppy, but at least it wasn’t set up that way deliberately, and it doesn’t necessarily indicate systemic security problems. But that’s taking it at face value and assuming no one used “bug” to mean “serious fuck up.” And by their own admission, the plain text list was accessible, just not externally, so they can’t claim it wasn’t accessed.
My understanding is that it’s not that they were storing passwords unencrypted in a database or something. I think they were doing something like logging request POST bodies that might include things like passwords.
So I’ve seen this happen before; it slips through review because it’s not like someone is doing
logger.info(`password=${password}`)
they’re doing
logger.info(`request received: ${body}`)
and not thinking about everything that might be included in the body variable.
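For what it’s worth, here’s a minimal sketch of that failure mode (the handler, logger, and field names are invented for illustration, not anything from Meta’s code):

// Hypothetical login handler with a stand-in logger, purely illustrative.
type LoginBody = { email: string; password: string };

const logger = { info: (msg: string) => console.log(msg) };

function handleLogin(body: LoginBody) {
  // Nobody writes logger.info(`password=${body.password}`) on purpose,
  // but serializing the whole body logs the credential just the same.
  logger.info(`request received: ${JSON.stringify(body)}`);
  // ... authenticate, respond, etc.
}

handleLogin({ email: "a@example.com", password: "hunter2" });
// log line: request received: {"email":"a@example.com","password":"hunter2"}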
Let’s just say that I’ve seen this mistake in code probably 10 times in my career, most recently within the last few months.
The really dumb part about it is that no one noticed it in the logs, which probably means no one was looking at the logs, which raises the question: why are we logging stuff no one looks at?
Obligatory ‘fuck meta’ but yeah, I was also thinking the same. Logging mistakes especially can be easy to miss.
This part in particular though:
The really dumb part about it is that no one noticed it in the logs, which probably means no one was looking at the logs, which raises the question: why are we logging stuff no one looks at?
I mean… you might go a while without looking at your logs if nothing else has broken, or you’re just parsing them and never reviewing the actual raw logging data. Like, my company is huge, so it could be that we just have unique problems, but it’s pretty rare to actually sift through a raw log compared to reviewing rolled-up data, so I can totally see something like this happening.
We have automated security audits and the like that catch this stuff, but it can be tricky. Often I think these come about because during an incident the team goes “oh fuck, we need to log this one thing,” and then, whoops, it turns out to include more than you think it does, but it got slipped in quickly to fix a problem and didn’t get the full scrutiny (which, to be fair, is probably the right thing in an incident; you just have to make sure to go back and post-incident review everything).
I generally look at logs for stuff the first time I deploy it, because I want to know that the info will really be there when I need it. But yeah, sure, logs are used for debugging, and if you don’t need to debug, you might miss it.
The reason I usually see this kind of problem is that logging platforms like Datadog etc. let you “annotate” your logs, as in “here is my log message” but also “here is a JSON blob of some related stuff”. You generally have to click through to see it; it’s super useful when you want to drill down into something, but you’d never bother if you’re just casually searching for something in the logs.
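Something like this generic structured-logging sketch (Datadog’s actual client API differs; the function and field names here are made up):

// The message is what you see while scanning; the attributes blob is the
// "related stuff" you only see after clicking through.
const log = (message: string, attributes: Record<string, unknown>) =>
  console.log(JSON.stringify({ message, ...attributes }));

// A casual search shows "login attempt"; the attached context quietly
// carries the raw request, credentials and all.
log("login attempt", {
  route: "/login",
  request: { email: "a@example.com", password: "hunter2" }, // oops
});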
My last few jobs have used some tools to try to identify things that look like passwords or API keys and redact them - it’s not perfect but it catches a lot. And frankly, 10 years ago no one I knew cared - we were logging anything we wanted and sort of thought “well, it’s all internal to our networks, who cares,” but of course logs can be leaked or exposed in a bunch of different ways.
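The rough shape of that kind of redaction pass, as a sketch (not any specific vendor’s tool; the key patterns and names are assumptions):

// Scrub values whose keys look sensitive before an object hits the logs.
const SENSITIVE_KEY = /pass(word)?|secret|token|api[_-]?key|authorization/i;

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEY.test(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

// logger.info(`request received: ${JSON.stringify(redact(body))}`)
// Not perfect (a secret hiding under an innocent key name slips through),
// but it catches a lot.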