Do you blame the manager for not locking him out or calling the police?
Maybe, but let’s not be so black and white. There’s plenty of blame to go around, and if I’m allocating it, I’m assigning a lot more of it to the person who was careless with their keys than to the building management. I mean, the building could have a doorman who checks IDs for even better security. There’s no end to additional measures, and there’s no “right” answer to what level is enough, except maybe in hindsight.
Security is a continuum with costs and benefits. I’m not defending 23andMe’s choices here. But I am saying that if this was a password reuse attack, the careless user should get the bulk of the blame.
Like saying that the end user is at fault for the security failures of the company? That kind of black and white?
Saying that the majority of the blame lies with the end user rather than with the company that is holding some incredibly sensitive data is really just victim blaming.
Agreed. At this point, we are all subject to this threat, often in ways not of our own making. We know that there is security out there that is harder to get past, and it should be on the corporations who are collecting this data to do all they can to protect it. But far too often, profits matter more than protecting the data of the end users they are reaping those massive profits from.
90% of every job is practicing your craft better than your customers can. Why, when it comes to infosec, is the customer “stupid” if they don’t know the craft?
Especially for the victims, eh? Gotta make sure they get blamed.
Look, you suggested multiple things that, given the extreme sensitivity of the data, could have been done to protect people, and yet you wave them away as inconsequential. You don’t seem concerned with the technical side of this either; apparently that’s inconsequential too.
Instead you have constructed a faulty analogy in which 23andMe is like a door lock company and the people who used their services are like tenants in an apartment complex with infinite keys, who leave them lying around and then try to blame the lock company for the fact that they’re potential targets of hate crimes.
I just think maybe this analogy isn’t a great way to think about the problem from either a social or a technical angle.
Of course 23andMe is going to argue this; they don’t want to be held responsible if someone goes on a genocide campaign with their leaked data or something. They suck for arguing it, though, and victim blaming just kinda sucks to hear in general, so don’t expect people to be super accommodating of it.
I mean… yeah, exactly… the point of having security run by the corporation is to have experts who can help protect what they have… Otherwise, why have an entire field of programming dedicated to security…
And of course the nature of this database means they probably have information about you and me even if we never signed up for the service (which is why cops have been able to use it to match suspects for cold cases) so really the hack potentially impacts everyone.
I always liked Jef Raskin’s idea that when you create an account, the system should give you your password, not the other way around. I think he also said you should only be given one ID (so no username/password pair). This was all back in the '90s mind you.
I use the Ancestry site and they’ve recently increased security. What bugs me about increasing attempts to cast a wider net with their data is that in the past, information about living people was hidden. Lately, the site has been pushing linking photos to people listed in family trees, and suggesting likely matches from yearbook pictures (who knows where they got those). Those yearbook pictures and suggestions include living people, too. They also want audio - for storing and sharing family stories. The potential for that data to be shared and misused makes me avoid including information like that on their site.
In this case, that key enabled them to enter houses all along the street.
By using this tactic, known as credential stuffing, hackers could access the personal data of millions of 23andMe users who opted into a DNA Relatives feature, including genetic information like the percentage of DNA shared with compromised users.
If it were just their own data, fine, it was the user’s fault, but this was more extensive and should have been an anticipated problem.
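For what it’s worth, the standard mitigations for credential stuffing are cheap to sketch: check passwords against known breach corpora and throttle repeated failures. Here’s a rough Python illustration; the breach set, thresholds, and function names are all made up for the example, not anything 23andMe actually runs:

```python
import hashlib
import time
from collections import defaultdict

# Illustrative only: a tiny local set of SHA-1 hashes of known-breached
# passwords. In practice this would be a lookup against a breach corpus.
BREACHED_SHA1 = {"5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"}  # "password"

FAILED = defaultdict(list)           # account -> timestamps of failed logins
MAX_FAILURES, WINDOW_SECS = 5, 900   # hypothetical thresholds

def password_is_breached(password: str) -> bool:
    """Reject passwords that appear in breach dumps (the fuel for reuse attacks)."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1

def too_many_failures(account: str) -> bool:
    """Basic per-account throttle: block after N failed logins in a window."""
    now = time.time()
    FAILED[account] = [t for t in FAILED[account] if now - t < WINDOW_SECS]
    return len(FAILED[account]) >= MAX_FAILURES

def record_failure(account: str) -> None:
    FAILED[account].append(time.time())
```

Neither step replaces 2FA, but either one makes bulk replay of leaked credentials a lot noisier and slower.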
The problem isn’t the passwords, even though it’s easy to get stuck on that. The problem is that once an account was compromised, you had access to an entire ancestry graph for that user, going many generations deep, and with it information about all of those linked users, ranging from minimal (initials and broad location) to very detailed (full name, DNA commonalities, their relations, and so on), depending on how much each user chose to share. This particularly affected users with Jewish ancestry, since their ancestry graph is especially deep and wide, making them a potential target for ethnically targeted hate crimes.
Regardless of account security, you could hack a single account through any means and get personally identifiable information about hundreds or thousands of others as a result, whether they know it or not.
There is no need to hack large numbers of accounts; hack one and you already have a ton of information. Hack a few dozen and you can get personally identifiable information for hundreds of thousands or even millions. That’s just how the graph works (there’s a rough sketch of the fan-out below).
I don’t see an easy fix for this. A major draw of these services is that you can understand your DNA and ancestry (including finding long-lost family members). Removing that graph entirely greatly devalues the platform for many users. Obviously you can make it opt-in (and I’m not sure whether it’s opt-in or opt-out today), but that only protects the people who choose not to participate, and I’d imagine most would opt in anyway. So, how do you keep the ancestry graph functionality while preventing this kind of data leak? I’m not a security expert, but I just don’t see how it can be done short of adding so much friction that the site becomes too hard to use and nobody bothers.
Mind you, I’m not defending 23andMe here, or their “blame the user” stance, whether it’s technically correct or complete nonsense. I’m just trying to look at this from a technical point of view given what we know.
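To make the fan-out concrete, here’s a toy sketch of how quickly visibility compounds once a relatives graph exists. The graph, names, and hop count are entirely made up; the real data model, and exactly how far a logged-in account can see, will differ:

```python
from collections import deque

# Toy relatives graph: account -> accounts visible via a relatives feature.
# Entirely invented; real graphs are far denser and deeper.
RELATIVES = {
    "alice": ["bob", "carol", "dave"],
    "bob": ["alice", "erin"],
    "carol": ["alice", "frank", "grace"],
    "dave": ["alice"],
    "erin": ["bob"],
    "frank": ["carol"],
    "grace": ["carol", "heidi"],
    "heidi": ["grace"],
}

def exposed_by(compromised: set[str], max_hops: int = 1) -> set[str]:
    """Everyone whose profile data becomes visible when the `compromised`
    accounts are taken over, following relative links up to max_hops."""
    seen = set(compromised)
    frontier = deque((a, 0) for a in compromised)
    while frontier:
        account, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for rel in RELATIVES.get(account, []):
            if rel not in seen:
                seen.add(rel)
                frontier.append((rel, hops + 1))
    return seen - compromised

# One account already exposes several others; a handful exposes most of the graph.
print(exposed_by({"alice"}))
print(exposed_by({"alice", "grace"}, max_hops=2))
```

Even at a single hop, a small number of compromised accounts reaches a disproportionate share of the graph, which is the whole point being made above.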
It seems that, yeah, they added the “social” find-my-relatives feature that basically exposed user info to the hacked accounts. You need to opt in for that functionality, yeah? Anyone doing that had to know that’s a risk, right?
it’s not all or nothing. like most dating sites, they can blur or redact part of the information until both sides explicitly agree. that’s little real friction, and can even be used to get users into interacting with the site ( which is generally considered a good thing for the company )
another idea, which all banking sites use - and this is arguably more sensitive than your bank account - is to send users emails when there are new logins from new devices. at least then users would know
( and easy to add friction here: follow a link from email to see shared data… )
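rough sketch of the new-device check, just to show how little machinery it needs ( the fingerprint, store, and function names here are made up, not anything a real site uses ):

```python
import hashlib

# made-up in-memory store: account -> device fingerprints seen before
KNOWN_DEVICES: dict[str, set[str]] = {}

def device_fingerprint(user_agent: str, ip_prefix: str) -> str:
    """crude fingerprint from request metadata; real sites use richer signals."""
    return hashlib.sha256(f"{user_agent}|{ip_prefix}".encode()).hexdigest()

def on_successful_login(account: str, user_agent: str, ip_prefix: str) -> None:
    fp = device_fingerprint(user_agent, ip_prefix)
    seen = KNOWN_DEVICES.setdefault(account, set())
    if fp not in seen:
        seen.add(fp)
        notify_new_device(account, user_agent)

def notify_new_device(account: str, user_agent: str) -> None:
    # placeholder for an actual mailer; access to shared relatives data
    # could also be gated behind a confirmation link in this email
    print(f"[mail to {account}] new login from {user_agent}; was this you?")
```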
that there’s… nothing. that’s entirely their fault
I am not a tech person, so I’ll take your word for it… sounds like a good idea to me, though. It does put the emphasis on the company running the system being responsible for security to a greater degree. It seems like with mass leaks that is even more true. This is not like someone accidentally leaving their email up at a public terminal or something, after all.
I think that we should not expect end users - many of whom are not programmers and don’t understand all this stuff - to be the ones responsible for securing the data they’ve handed over to a corporation that is going to make a profit off of it… But we’re all expected to be tech experts now just to do basic things that are increasingly necessary to navigate modern life… Are we all expected to have broad expertise in every aspect of life in order to just… exist? Of course not!
And that’s the problem of these companies, not the end users.
They kind of do this today. You have to opt in, and then you have to choose your level of sharing (initials only all the way up to very detailed information). It’s not automatic; it’s just that most people choose to opt in.
As for the other suggestions, I agree they should be done, but they don’t solve the sharing problem. Mandatory 2FA should have been the minimum bar.
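For reference, the 2FA being asked for here is usually just TOTP (RFC 6238), which is small enough to sketch with the Python standard library. This is an illustration of the algorithm itself, not a claim about how 23andMe or anyone else implements it:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float | None = None, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

Even this minimal version would have blunted pure credential stuffing, since a reused password alone no longer logs you in.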
what i meant was, rather than giving all information to everyone, sharing could work like this: let me know when someone shares with me, and then i can share back ( rough sketch below ).
yes, i agree. any level of two factor would help with the hacking.
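the handshake model is basically two one-way grants in the data model. purely illustrative sketch, nothing to do with how any real site stores this:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    full_name: str
    location: str

    def redacted(self) -> dict:
        """what a relative sees before both sides agree: initials + broad region."""
        initials = "".join(part[0] for part in self.full_name.split()) + "."
        return {"name": initials, "location": self.location.split(",")[-1].strip()}

# made-up store of one-way "I'm willing to share with X" grants
GRANTS: set[tuple[str, str]] = set()

def grant(owner: str, viewer: str) -> None:
    GRANTS.add((owner, viewer))

def view(viewer: str, owner: str, profiles: dict[str, Profile]) -> dict:
    """full details only when both sides have explicitly shared with each other."""
    mutual = (owner, viewer) in GRANTS and (viewer, owner) in GRANTS
    p = profiles[owner]
    return {"name": p.full_name, "location": p.location} if mutual else p.redacted()
```

until both grants exist, a compromised account only ever sees the redacted view, which caps how much one hacked login can scrape.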