I’m reading apples and oranges here. There is nothing mathematically certain about the rise of another Edward Snowden, or LOVEINT, or a leak to the mafia or CIA or some other shadowy criminal enterprise. We intuitively grasp that it seems likely, but to mathematically prove it’s inevitable, you’d have to have access to all the secret records and be able to show that secrets are always exposed. How would you know if you’d missed one?
Of course Obama is required to deny Snowden’s historical importance; he does everything in his power to keep another Snowden from arising, and, like the Catholic Church, he pretty much has to claim infallibility to keep doing what he wants to do.
He’s not asking for impossible encryption, he’s asking to always be part of the signing ceremony. That’s not scientifically impossible, it’s just criminally bad engineering.
When people try to use science language instead of engineering language, it weakens both disciplines.
Fuck you Mr. Obama. We have strong encryption on our brains already, and not only is it not possible to crack that yet, but we actually have constitutional protections against forcing someone to violate that in most cases. Somehow, our Republic has managed to survive so far. I think we’ll be able to live with people keeping their phones private too. Now where’s my pony?
Sure. And what happens when a cop decides to sell their encryption keys to the highest bidder? Or when some corporation successfully sues for access? If you don’t control your encryption keys, you have no encryption. Period.
(I know what you are saying, but as a key custodian not even I had access to the entire key. It was split into pieces, and reassembled by another team who also never saw the whole thing.)
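For anyone curious how that kind of split custody works, here’s a minimal Python sketch of the simplest version, an XOR split (the function names are mine, and real escrow setups layer a lot more on top): no single custodian’s share reveals anything about the key, but XORing all of them back together recovers it exactly.

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n XOR shares; ALL shares are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:  # fold each random share into the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def reassemble(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
parts = split_key(key, 3)
assert reassemble(parts) == key   # all three custodians together recover the key
```

Each individual share is indistinguishable from random bytes, which is why a custodian who only ever sees one piece learns nothing about the whole key.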
You were making sense right up until the end, when you revealed you have no idea what you’re talking about. Kinda spoils the point you were trying to make.
I share your skepticism about whether current government leaders would implement such a system well, but as a speculative exercise I think it’s worth considering: if we had a set of leaders who wanted to ensure the possibility of searching the computers of suspects in certain specific crimes like terrorism and child pornography, but who took the concerns about the key being hacked or leaked extremely seriously, is it even possible to imagine a technically feasible solution they could implement that would satisfy both competing demands? Is there any reason the key would have to be “available for law enforcement agencies across the nation”, or why it would have to be on an “open internet with billions of them”?
My first stab at coming up with such a solution would go like this. Let’s assume the number of requests is small enough that you could have just one or a small number of facilities that are allowed to use the secret key, and only if the actual physical device that needs to be broken into is sent there, after having had its ability to connect to the internet physically disabled. Then whatever high-level official is responsible for breaking into the device has to make sure the device is disconnected from all internet connections, and they enter the secret key, which is printed on a piece of paper (or something more durable, like a piece of metal) kept inside some ordinarily locked container akin to the “nuclear football”.

The relevant data on the device can be transferred to another (offline) computer, and then the device itself must be destroyed, just in case it contained some secret software or hardware that could record the keystrokes when the secret key was being entered (for further security we could imagine the whole process, from entering the key into the device up to the device’s destruction, being done inside a Faraday cage, just in case the device had some kind of secret transmitter the authorities missed). Is there anything obviously impossible or insecure about this on a technical level, again leaving out the likelihood of present-day government officials being motivated to come up with a solution this careful?
In theory this is perfectly possible using RSA (Cocks) encryption. (The equivalent scheme was originally invented at our own GCHQ and never patented because it was a high-level secret…) However, think about this.
Billions of phones incorporate software which contains the public key and some mechanism to provide access.
Your evil criminal organisation or State now only has to make one successful cracking attempt and it is game over. It can get as many sample phones as it likes and use every possible means of reverse engineering - doesn’t matter if one phone gets ruined, just go on to the next one until there’s a success. It can inspect encrypted data and the public key and it only has to get lucky once. It’s the single point of failure mechanism.
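To make that single point of failure concrete, here’s a toy Python sketch using textbook-sized RSA numbers (the primes are tiny classroom values and the “phones” dict is invented; real keys are thousands of bits): every device encrypts against the same baked-in public key, so one recovery of the private exponent reads them all.

```python
# Toy RSA, purely to illustrate the shared-escrow-key failure mode.
p, q = 61, 53
n = p * q                            # 3233: the shared modulus baked into every phone
e = 17                               # shared public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # the single escrowed private exponent

# Each "phone" encrypts its secret (a number < n) with the SAME public key.
phones = {"alice": 42, "bob": 99, "carol": 7}
ciphertexts = {who: pow(m, e, n) for who, m in phones.items()}

# One leak of d -- or one successful factorisation of n -- decrypts every device:
recovered = {who: pow(c, d, n) for who, c in ciphertexts.items()}
assert recovered == phones
```

With per-device keys an attacker has to repeat the work for every target; with one escrowed key, cracking any single sample phone is game over for all of them, which is exactly the point above.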
As supercomputers get faster, eventually brute force will succeed. But that assumes that the “high level official” hasn’t simply been bought.
Although it isn’t strictly analogous, the capture of a U-boat and a weather ship by the Royal Navy was enough to seriously compromise the Enigma codes. That’s because, although the Enigma machines could be “programmed” differently, they all worked the same way. There were also occasions when operators made mistakes which allowed more rapid cracking.
The difference is that with the case you describe, a single successful message crack now makes every device vulnerable because they all share the same private key.
That is an event that sends both cold shivers down my spine and waves of excitement. I certainly couldn’t crack an Enigma, and I’m in awe of people who so thoroughly understand cryptography (I’ve still barely washed the hand that shook Whitfield Diffie’s :D)
Enigma decryption was the first example of using dedicated, special-purpose computing to brute-force possible plaintexts based on identified vulnerabilities in an algorithm (in this case, the manual procedure followed by Enigma operators). I guess you could say that everything since has been an expansion and exploration of those ideas, devised by people like Turing, von Neumann, and I. J. Good.
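The core trick was the “crib”: you guess a phrase the plaintext must contain (weather reports were a favourite) and reject every key whose decryption doesn’t produce it. A Caesar cipher is nothing like Enigma, of course, but this little Python sketch shows the same crib-driven brute-force idea in miniature:

```python
def caesar(text: str, k: int) -> str:
    """Shift each letter A-Z by k positions (negative k decrypts)."""
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in text)

def crack_with_crib(ciphertext: str, crib: str) -> list[int]:
    """Try every possible shift; keep only those whose decryption contains the crib."""
    return [k for k in range(26) if crib in caesar(ciphertext, -k)]

msg = "WETTERBERICHTFOLGT"             # "weather report follows", a classic crib source
ct = caesar(msg, 5)
print(crack_with_crib(ct, "WETTER"))   # → [5]
```

The Bombe did essentially this over a vastly larger keyspace, using the crib to prune rotor settings mechanically instead of testing them one by one.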
I kind of understand some of it at a superficial level - enough of a level to understand the ways in which security can be compromised.
My net conclusion is that I don’t keep anything on a computer that I don’t want people to read. But as I’m boringly law abiding and quite a lot of my assets are in bricks and mortar, not bits, I am less worried than some.
I too am boring. The most illegal thing I’ve ever done usually gets gasps from people: “That’s illegal!?”
And I’m right there with you on the “superficial” level. Yes, I’ve done key exchanges on paper, and S-boxes, and very mild brute forcing. But really nothing more complicated than Will Shortz comes up with every week.
I was under the impression that simply by increasing the length of the key you can get the probability of cracking it arbitrarily low with a given amount of computing power running for a given number of computing cycles, at least barring any radical new developments in number theory/computing theory like a discovery that P=NP…is that incorrect?
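That’s my understanding too: each extra bit doubles the keyspace, so the expected brute-force time grows exponentially in key length. A quick back-of-envelope sketch in Python (the 10¹² guesses/second rate is an invented assumption just to have a number):

```python
# Expected time to brute-force a k-bit symmetric key at an assumed guess rate.
guesses_per_second = 10**12          # assumption for illustration only
seconds_per_year = 365 * 24 * 3600

def years_to_search(bits: int) -> float:
    """Years to try half the keyspace -- the expected point of success."""
    return (2 ** (bits - 1)) / guesses_per_second / seconds_per_year

for bits in (64, 128, 256):
    print(bits, f"{years_to_search(bits):.3e}")
```

At that rate a 64-bit key falls in months, while 128 bits already takes on the order of 10¹⁸ years; barring something like P=NP or a working quantum computer, the defender wins the length race easily.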
Well, anyone who uses any sort of key-based encryption has to accept that if Moore’s law goes on for enough decades, eventually all their encrypted messages will be retroactively decodable with relatively cheap computers, assuming the encrypted versions were saved…but people who use encryption usually aren’t too worried about this as long as it’s expected this would take decades or centuries. And if Moore’s law does continue, the time between “it becomes feasible to retroactively decrypt a single universal key of length N” and “it becomes feasible to retroactively decrypt every person’s individual key of length N” perhaps wouldn’t be all that long.
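You can put rough numbers on that gap. Assuming compute doubles every two years and an invented starting rate (both pure assumptions for illustration), this Python sketch estimates how many years until a k-bit keyspace becomes searchable, and how much longer “every individual key” takes than “the one universal key”:

```python
import math

def years_until_feasible(bits: int, start_rate: float = 10**12,
                         budget_seconds: float = 365 * 24 * 3600,
                         doubling_years: float = 2.0) -> float:
    """Years until the full 2**bits keyspace fits in budget_seconds of searching,
    if compute capacity doubles every doubling_years (all rates are assumptions)."""
    needed_rate = 2 ** bits / budget_seconds
    if needed_rate <= start_rate:
        return 0.0
    return doubling_years * math.log2(needed_rate / start_rate)

# One shared 128-bit key vs. ~a billion (~2**30) individual 128-bit keys:
one_key = years_until_feasible(128)
all_keys = years_until_feasible(128 + 30)   # a billion keys ≈ 30 extra bits of work
print(one_key, all_keys - one_key)
```

Under those assumptions the billion-key case lags the single-key case by only 30 doublings (60 years), which is the sense in which the gap “wouldn’t be all that long” on historical timescales.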
As for corrupt officials willing to leak the key, that is a risk, but the same risk exists with nuclear codes. Even here there are precautions that could be taken, like making sure anyone accessing the key is being recorded while they do so, and has been searched for recording devices beforehand.
You also have to trust the implementation of the encryption by every device maker that incorporates it. A mistake by a single device maker could easily end up making a crack much easier… it’s happened before.
You keep falling back on “nuclear codes”. There is a whole system of mechanisms preventing the misuse of “nuclear codes” that would be completely impractical for an encryption system used on devices in general public use. There is definitely not the same risk of a corrupt official leaking a nuclear code, for a whole host of reasons that really should be obvious with a little thought.
This is generally correct, but there are millions of devices out there that are constrained to specific algorithms and key lengths. SHA-1 will not die this year because Cloudflare and Facebook bullied Mozilla* into supporting a fallback solution.
*) probably an unfair assessment on my side, read the article if you’re interested in a different view : )
@Nonentity has replied to this but I’d like to amplify. OK, a corrupt official has leaked the US nuclear codes to North Korea. Somehow, North Korea infiltrates the US’s most secure messaging system. Somehow they persuade the people in the silos to program the missiles to target NY, Frankfurt, London, Seoul and Beijing (without anyone getting suspicious). And somehow they manage to get into a (presumably) very secure PSTN line or the equivalent and convincingly tell the operators to launch.
Some years ago the then British Labour government had a similarly stupid idea for a national identity card system which would somehow protect people’s identities - a centralised database which would be accessible by many people, and they were somehow going to guarantee that none of these thousands of people misused the data. (That was the point at which I realised that both politicians and civil servants are deeply, irremediably stupid.) The police force responded that the present system of identifying people by lots of data held in different locations - driving licences, passports, bank accounts, council records for a start - was much better and more secure because no one entity controlled all the data. I imagine the situation with “nuclear codes” is the same. It isn’t enough to suborn one official. You would have to subvert an entire system.
Of course in theory you could get some of the same benefits by having every make and model of mobile device have a different public key and have many individual security workers in different locations each with only one key, so that corruption of one would have limited effects. The short answer to that is:
How long before some bureaucrat decided to store all the keys in a central location “for efficiency”?
How long before some organisation found a way of unlocking a bootloader in some model of phone, flashed an image with their own private key, and started selling uncrackable phones on the black market?
For “nuclear codes”, getting one would just be the beginning of the attacker’s problems, and they can implement all kinds of safeguards to prevent use (like frequently rotating them).
On top of that, with the “nuclear codes” example you’d have to find a corrupt official actually willing to risk setting off a nuclear war… whereas, on the other hand, someone leaking the encryption key is just exposing data, and most likely “unimportant” civilian data at that.
What are you talking about? Those inputs aren’t even a byte; overflow their buffers (you can repeat the numbers, they don’t say how many times per input) and voilà! Solved.
Good point. I had assumed you and Nonentity were talking about the likelihood a leak would happen in the first place, rather than the likelihood of disastrous results if we assume a leak is a fait accompli. Still, if you only have a very small number of very-high-level officials who are allowed to access the code, you could at least create a situation where, if a leak happened, there would be a very small list of suspects, so leakers would be unlikely to do it for financial gain since their finances would immediately be under very high scrutiny, though an ideologue committed to aiding some other country still might do it even knowing they would likely get caught. Also, one could still imagine creative ways of designing the procedure for accessing the key that would make it technically difficult to steal–you might have a group of two or more officials who only get to see parts of the key and enter it in sequence, everyone could be recorded while viewing and entering their part of the key, they could be searched for cameras and then have to wear something akin to a hazmat suit that would block any hidden cameras on their body that had gone undetected, etc.
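The “two or more officials each holding part of the key” idea can even be made robust against one official being unavailable. Here’s a toy Python sketch of a Shamir-style 2-of-3 threshold split (all names and the field prime are my own choices for illustration): any two shares recover the key, but one share alone is statistically useless.

```python
import secrets

# Toy 2-of-3 Shamir-style split of an integer secret over a prime field.
P = 2**127 - 1   # a Mersenne prime, used as the field modulus

def make_shares(secret: int, n: int = 3):
    """Evaluate the degree-1 polynomial f(x) = secret + a1*x at x = 1..n."""
    a1 = secrets.randbelow(P)   # random slope hides the secret
    return [(x, (secret + a1 * x) % P) for x in range(1, n + 1)]

def combine(s1, s2):
    """Lagrange interpolation at x = 0 recovers f(0) = secret from any 2 shares."""
    (x1, y1), (x2, y2) = s1, s2
    l1 = (-x2) * pow(x1 - x2, -1, P)
    l2 = (-x1) * pow(x2 - x1, -1, P)
    return (y1 * l1 + y2 * l2) % P

key = secrets.randbelow(P)
shares = make_shares(key)
assert combine(shares[0], shares[2]) == key   # any two officials suffice
```

Unlike a plain XOR split, a k-of-n threshold scheme means a lost or destroyed share doesn’t brick the whole system, while fewer than k colluding officials still learn nothing.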
Of course, the more convoluted the plan for an ideal scheme becomes, the less likely it is that the real-world government is going to implement anything that effective. But I still think it’s worth trying to creatively come up with a hypothetical scheme, both because of the slight possibility that some future administration might take privacy concerns seriously enough to put it into practice, and also just for rhetorical purposes. For one, when an official like Obama says there has to be some middle ground between guaranteeing privacy and some sort of access to suspects’ computers, proposing some complicated scheme for security puts the ball back in the officials’ court rather than allowing them to paint those who scoff at the idea of “middle ground” as absolutists or ideologues. Also, pointing out how complicated a scheme would need to be to really minimize the probability of a disastrous leak helps emphasize all the possible failure points that a more half-assed scheme would have.