Obama: cryptographers who don't believe in magic ponies are "fetishists," "absolutists"

Or mistakes in the math of the encryption, or the implementation…

That would depend very heavily on the specifics of the encryption algorithm being used. Longer keys do not automatically equal longer times to crack.

1 Like

I think that in the interests of clarity, people have papered over the theoretical objections with examples that focus heavily on commonly understandable human variables. But it’s always math on some level. I’m not saying it’s mathematically proven in the hard sense of something with a QED at the end of it, but there are large-scale game-theoretical issues with any specific plan. It’s so heavily dependent on the specifics of a plan, though, that any mathematical assessment of the model is useless without them. By and large, cryptographers have already collectively considered and dismissed as bad ideas all the schemes anyone can think of.

This is a burden-of-proof issue, to some extent. It’s not really up to the cryptographers to discount every possible backdoor security protocol that hasn’t been thought of yet; it’s up to people to propose one. That being said, it’s pretty much a mathematical certainty that all backdoor security protocols are risky. The question is whether we as a society have the right to take that risk with an individual’s data. I think that here on BB, we take for granted that the answer is a firm, “No.”

If you’re arguing that there is no additional risk, then I think it’s safe to say that you aren’t thinking about this very carefully. But if you evaluate risk and civics differently, I suppose there’s an intellectually self-consistent model where this is reasonable. I predict that any such measures passed will have disastrous and costly consequences that even advocates will later acknowledge were not worth it. Bookmark my words, literally.

2 Likes

“The ability to have a key that “only the good guys can use” is not a new idea.”

Right, but is that the only sufficient answer? Assuming yes is a pretty big assumption. I think what Obama is asking for here is a key which can be managed successfully in order to make data both tolerably safe and tolerably accessible. And the absolutism he’s talking about is the refusal to even think about a state of “tolerably safe” where a third party might have access.

Would it be a pain in the ass? No doubt. Would it introduce new sorts of vulnerability we’re constantly going to have to keep ahead of? Yes, certainly. Would it be a tax on commerce & innovation, etc.? Quite possibly. But would it, in spite of all that, be better than the other likely outcomes?

The answer might well be no, but the effort, through posturing and browbeating and ridicule, seems to me to be aimed at keeping the conversation from taking place.

I re-read the paper, and it’s true, the arguments they give are not mathematical in nature.

Pages 11-13 (and the first line of page 14) basically run down what’s wrong with having a key held in escrow by the government:

  • If the key is ever disclosed, then every encrypted communication since that key was put into use becomes vulnerable. This is why big businesses are moving towards negotiating a new encryption key for every transaction, a practice known as forward secrecy (a rough sketch of the idea follows at the end of this post).
  • Encryption currently assures tamper-proofing as well as secrecy. Once you disclose the encryption key, you have no assurance that someone didn’t go in, make changes, and then re-encrypt the message.
  • I’ll quote the last point in its entirety:

The third principal objection to third-party escrow is procedural and comes down to a simple question: who would control the escrowed keys? Within the US, one could postulate that the FBI or some other designated federal entity would hold the private key necessary to obtain access to data and that judicial mechanisms would be constructed to enable its use by the plethora of federal, state, and local law enforcement agencies. However, this leaves unanswered the question of what happens outside a nation’s borders. Would German and French public- and private-sector organizations be willing to use systems that gave the US government access to the data – especially when they could instead use locally built systems that do not? What about Russia? Would encrypted data transmitted between the US and China need to have keys escrowed by both governments? Could a single escrow agent be found that would be acceptable to both governments? If so, would access be granted to just one of the two governments or would both need to agree to a request?

So, yes, you’re correct. The main arguments are not with the math. However, I still equate the idea of a key that only the good guys can use with a unicorn that only approaches virgins: both ideas only make sense in a world with magic.
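To make the forward-secrecy point above concrete, here’s a minimal sketch of an ephemeral Diffie-Hellman handshake, assuming Python’s `cryptography` package (the names and parameters are illustrative, not from the paper):

```python
# Minimal forward-secrecy sketch: fresh ephemeral key pairs per session.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_session_key(my_ephemeral, peer_public):
    """Turn the shared DH secret into a 256-bit per-session key."""
    shared_secret = my_ephemeral.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session").derive(shared_secret)

# Each side generates a fresh ephemeral key pair for this session and throws
# it away afterwards, so a later compromise of any long-term key reveals
# nothing about past traffic.
alice_ephemeral = X25519PrivateKey.generate()
bob_ephemeral = X25519PrivateKey.generate()

alice_key = derive_session_key(alice_ephemeral, bob_ephemeral.public_key())
bob_key = derive_session_key(bob_ephemeral, alice_ephemeral.public_key())
assert alice_key == bob_key  # both sides computed the same per-session key
```

A long-lived escrowed key cuts directly against this: to stay useful to the escrow holder, it has to remain able to unlock every session, which is exactly the property forward secrecy is designed to eliminate.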

2 Likes

I’d have said the same about the Patriot Act in 1999.

Yes, but Google.

would it, in spite of all that, be better than the other likely outcomes?

What are these hypothetical likely outcomes?


Here’s a paper from 1997, the last time we went through this conversation, which a lot of people apparently don’t remember. It’s still pertinent today.

2 Likes

That being the absolutism he’s talking about. And it may not be absolutism, it may be just the fact of the matter that the insecurity we’re talking about introducing makes the entire system unviable. But, again, it seems the effort is to keep anyone from thinking very hard about it. Maybe because

To which I am not unsympathetic. Only I think we are better off with a managed solution involving some people somewhat deserving of trust, rather than with whatever the least trustworthy, least accountable, and least bound by any sense of morality come up with in the absence of that managed solution.

But THAT situation was resolved by just giving the government loosely restricted back-end access through service providers, right?

What hopes do you have that the untrustworthy and those without a sense of morality will choose to opt into a key escrow system?

None (no confidence they wouldn’t be involved and wouldn’t, to a limited extent, abuse it). But then they’d be invested in the integrity of the overall system. If they destroy that, they effectively destroy their access. [edit for clarity]

I’m suggesting that if you mandate that US companies use a key escrow system, then “bad actors” will just move to an encryption system created somewhere else. The math’s already out there, so why use a weaker, harder-to-manage system for the “good” people?

2 Likes

It happens to be absolutely factual. I also absolutely think that I have five fingers on each hand. Absolutism isn’t ipso facto a problem with an argument.

TLAs got access to service providers. That did nothing about end-to-end encryption, which did exist at the time. It’s more ubiquitous now, but trying to put the genie back in the bottle is wishful thinking at best.

1 Like

Cryptography aside, can anyone think of any historical reasons why handing the FBI, of all government organizations, a back door key to the phones of our elected officials might be a poorly considered idea? Hand them this glory and they need never consult their budget baseline ever again. Imagine the very special relationship they would enjoy with any politician that dare criticize them. This is the low hanging fruit of government corruption – and the public could stay unaware of its progress for many, many years.

2 Likes

Both worries are about human error, not an actual mathematical objection. And of course any encryption scheme, including any present-day one without a secret government key, is also potentially vulnerable to these sorts of problems. But it seems to me that if you have a large number of experts reviewing both the pure mathematics and the details of the software implementation, the risk of this sort of problem is going to drop to a lot less than that of an intentional leak; it would be more akin to some widely accepted and peer-reviewed mathematical result turning out to be wrong.

Keep in mind I was responding to an objection by nimelennar that claimed that the idea was impossible in a mathematical sense, so that these sorts of proposals were akin to flat-earthism, not just unsafe when you take into account dangers related to human error. Specifically, nimelennar said

Doesn’t this sound like nimelennar was saying that even in a purely theoretical mathematical sense such a scheme wasn’t possible? And when I objected, and you responded to my objection by citing “the wide range of encryption experts who have repeatedly said that what is being asked for is not technically feasible”, was “technically feasible” supposed to refer to the mathematics of how difficult the key would be to crack, or to other, human-error based concerns? (edit: and I see that in this comment nimelennar has reconsidered his objection after reviewing a paper, noting that the problems raised by cryptographic experts are not about the basic mathematical design)

No, but in the case of many encryption schemes they do. For example, this article, in which an expert discusses the time to crack AES using a brute-force approach, estimates that with a modern supercomputer it would take 399 seconds to crack a 56-bit key, about 10^18 years to crack a 128-bit key, and about 10^37 years to crack a 192-bit key. Again, if one wants to defend nimelennar’s earlier comment specifically, as opposed to making some more general argument about human error, one should be able to point to a cryptographic expert talking specifically about purely mathematical issues which make it impossible to have a secret key that could not be cracked computationally in any reasonable length of time.
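The exponential scaling behind those numbers is easy to sanity-check. Here’s a back-of-the-envelope sketch; the keys-per-second rate is my own assumption rather than the article’s exact hardware model, but at roughly 10^13 keys per second the 128-bit and 192-bit results land in the same ballpark as the figures above:

```python
# Rough brute-force timing, purely to illustrate how the keyspace scales.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust_keyspace(key_bits, keys_per_second=1e13):
    # keys_per_second is an assumed order of magnitude for a supercomputer,
    # not a measurement of any particular machine.
    return 2 ** key_bits / keys_per_second / SECONDS_PER_YEAR

for bits in (56, 128, 192, 256):
    print(f"{bits:>3}-bit key: ~{years_to_exhaust_keyspace(bits):.1e} years")
```

The point isn’t the precise rate; it’s that each additional bit doubles the work, so no plausible hardware speedup closes a 2^128 gap.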

I agree there’s always going to be some increased risk relative to security protocols without a backdoor, though of course those backdoor-less protocols aren’t risk-free either. The question is how large the risk would be–it’s a public policy question whether the advantages of a backdoor would be worth it for a given level of increased risk. And please note I am not arguing this because it’s my personal opinion we should try to implement such a scheme, see the last paragraph of my comment here about how it’s worth brainstorming about what a best-case scheme would look like in large part because having one in hand could be useful as a rhetorical tactic. And as a science fiction nerd I can think of future scenarios where new technologies allow a small group of determined individuals to do things much more horrific than present-day terrorist attacks, so I can see a situation where I might advocate implementing a best-case backdoor scheme a few decades down the line, even though I don’t think the risks are worth it today.

I think Stuxnet was mostly spread by thumb drives.
Because, you see, when your lab is full of air-gapped computers, you always want to move small amounts of data among them. I have firsthand experience with the ‘Shipup’ worm. It took some effort to get rid of it, because someone was always plugging in an infected thumb drive. The good news is that, once you learn where it hides, you can easily delete its components. (omitting the long story as to why we didn’t run scanners on the lab machines…)

It only took about five seconds for a thumb drive to infect a machine.

[edit]
It was interesting to watch ‘Shipup’ operate – its payload was clearly designed to exfiltrate data from infected machines. One of its little secret folders, copied to the thumb drive when available, contained compressed encrypted samples of recently-modified files from our lab computers. Would have been good for stealing passwords. Our test data was almost certainly useless to the authors of the worm.

3 Likes

How about you detail out the specific encryption scheme you’re talking about, and then we’ll talk about possible issues with it? Because originally, the question you were asking was:

And now you’ve shifted the goalposts around to where people apparently have to mathematically prove that an entirely hypothetical, undetailed, and nonexistent scheme won’t work.

You don’t get to ask “is there anything unsecure about this” and then just handwave away any issues that are brought up by stipulating that those issues just don’t exist in your hypothetical scenario, especially when the only thing you claim you’ll actually accept relies on the portion of your scenario you’ve put the least amount of detail into.

2 Likes

Imagine a Trump administration (or worse) having a back door to all encryption.

1 Like

I’m not shifting the goalposts, because when I asked “on a technical level” I meant on the level of implementing the non-mathematical aspects of the security protocol. I just took it as a given that there are encryption schemes where attempting to decrypt by conventional computational means would take a prohibitive amount of time, since I thought this was widely accepted. Are you claiming that you know for a fact that experts have said this is wrong, or that you personally have sufficient expertise to say it’s wrong? Or are you just saying you personally lack sufficient knowledge of the field to say whether it’s right or wrong, and are asking for evidence one way or another? I did already give you a link to this article pointing out it would take longer than the age of the universe to crack AES encryption with keys of 128 bits or larger by brute-force methods with present-day supercomputers. I don’t know if AES allows for multiple keys which can each independently decrypt the message (of the kind bob_probst mentioned in this comment early on in this thread), but I can try looking up more information about it if you like (or maybe someone else more knowledgeable can chime in). I’d appreciate it if you’d address my question above about what basis you have for considering the mathematical claim unlikely, though.

edit: I also found this answer on the math overflow stack exchange which seems to indicate it’s fairly trivial to modify any “standard encryption method” to encrypt a message in such a way that two keys are independently capable of decrypting the message.
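For what it’s worth, here’s a minimal sketch of what that kind of construction usually looks like in practice, assuming Python’s `cryptography` package (the key names, including the “escrow” one, are purely hypothetical):

```python
# Sketch of "two keys can each decrypt": encrypt the message once under a
# random symmetric data key, then wrap that data key under two independent
# public keys, so either private key can recover it on its own.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

user = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # hypothetical second party

# One random data key encrypts the actual message (AES-GCM also gives tamper
# detection, per the "tamper-proofing" point raised earlier in the thread).
data_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"the actual message", None)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Wrap the same data key for each party.
wrapped_for_user = user.public_key().encrypt(data_key, oaep)
wrapped_for_escrow = escrow.public_key().encrypt(data_key, oaep)

# Either private key recovers the data key independently.
recovered = escrow.decrypt(wrapped_for_escrow, oaep)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"the actual message"
```

This is essentially the same multi-recipient trick OpenPGP-style tools have used for years; as the paper argues, the hard part isn’t this wrapping step but everything around the second key – who holds it, how its use is authorized, what it does to forward secrecy, and what happens when it leaks.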