I don’t think NSA has a mission to defend anything.
Based on the excerpt, I don’t see why massive computational power is needed. If almost everyone uses one of a few keys, an attacker only needs to test a few keys. Even if “a few” meant “a few million”, the task would be trivial.
IANASE, but it seems a solution would be to generate a large number and test it for primality (is that the word? prime-ness?). Then you have a secure key. Is that test feasible on a personal computer?
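To the question at the end: yes, entirely feasible. Tools like OpenSSL use a probabilistic primality test (Miller–Rabin), and a 1024-bit candidate can be checked in milliseconds on an ordinary laptop. A minimal sketch in Python (function names are mine, not from any particular library):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    # Cheap trial division first; catches most composites.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' is a witness that n is composite
    return True  # probably prime (error probability ~4^-rounds)

def random_prime(bits=1024):
    """Generate a random probable prime of the given bit length."""
    while True:
        # Force the top bit (correct size) and bottom bit (odd).
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n
```

Each candidate test is a handful of modular exponentiations, which Python’s built-in three-argument `pow` does quickly even at 1024 bits; generating a fresh prime takes seconds at most.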
Sadly, the linked article (though fascinating) doesn’t confirm that the cockroaches were trained:
We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat.
The whole Diffie-Hellman thingy is widely discussed under the name “Logjam” and the solution is simple: Don’t use the well-known primes but generate them yourself (‘openssl dhparam 2048’ took a little bit longer than 30 sec on one of my boxes).
Simple to some degree. It will break some clients, and not all applications/daemons/appliances support configuring custom DH params. Sometimes I have the feeling that the world runs on legacy solutions and gaffer tape alone…
I’m very not shocked. One thing that has been pretty consistent in the government’s complaints about strong crypto is a lack of examples of situations where they couldn’t break the crypto that was being used. Way back with the key escrow debate, the examples cited were always "what if"s and examples of people using encryption that the government had broken, with cries of, “what if we hadn’t been able to break it? The horror!”. Even the infamous Cox Report, which was touted as providing tremendous support for the need for key escrow, pretty much boiled down to situations where the government should have been using encryption to protect information, or totally non-encryption related security problems like security guards hiring prostitutes while on the clock.
None of NSA’s conflicting missions seem to include committing industrial espionage on their western allies.
But they seem to be doing it anyways.
This organization is out of control and needs to be shut down.
Either that or Obama ordered them to commit industrial espionage and is pretty happy with the results. In which case, should anyone be signing any kind of TPP treaty with someone like that?
I don’t think so.
For the nerds in the audience, here’s what’s wrong: If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes. But there was a very important detail that got lost in translation between the mathematicians and the practitioners: an adversary can perform a single enormous computation to “crack” a particular prime, then easily break any individual connection that uses that prime.
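The exchange described above can be sketched in a few lines of Python; the prime here (2^127 − 1, a Mersenne prime) is far too small for real use and is chosen purely so the example runs instantly:

```python
import secrets

# Toy parameters, standing in for the "standardized or hard-coded
# primes" many applications share.  Real deployments use 1024-bit
# or larger primes; this one is for illustration only.
p = 2**127 - 1
g = 3

# Each side picks a secret exponent and publishes g^secret mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret, never sent
b = secrets.randbelow(p - 2) + 1   # Bob's secret, never sent
A = pow(g, a, p)                   # Alice -> Bob, over the wire
B = pow(g, b, p)                   # Bob -> Alice, over the wire

# Both sides arrive at the same shared key: (g^b)^a = (g^a)^b.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem — and the point of the article is that the hard part of solving it can be done once per prime, then reused against every connection that shares that prime.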
How enormous a computation, you ask? Possibly a technical feat on a scale (relative to the state of computing at the time) not seen since the Enigma cryptanalysis during World War II. Even estimating the difficulty is tricky, due to the complexity of the algorithm involved, but our paper gives some conservative estimates. For the most common strength of Diffie-Hellman (1024 bits), it would cost a few hundred million dollars to build a machine, based on special purpose hardware, that would be able to crack one Diffie-Hellman prime every year.
Back in 1996–1997, I donated computer cycles to distributed.net’s RC5 challenge, which was to crack a message encrypted with a 56-bit key. Back then, numerous computers all working by coordinated brute force took about 250 days to find that key (think SETI@home in 1997). Not only is the DH problem different, the prime is (at least) 1024 bits. The scale of computation necessary to do these kinds of calculations is, in a word, immense (hence the likelihood of a US-built supercomputer dedicated only to cracking DH primes).
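A back-of-the-envelope comparison makes the gap concrete. (To be clear, the Logjam-style attack is number-field-sieve precomputation, not brute force; the arithmetic below only shows why a plain RC5-style key search is a non-starter at 1024 bits.)

```python
# Size of the 1997 RC5-56 search space vs. naive brute force
# over a 1024-bit Diffie-Hellman exponent.
rc5_space = 2 ** 56     # keys searched by distributed.net in ~250 days
dh_space = 2 ** 1024    # size of a 1024-bit exponent space

ratio = dh_space // rc5_space
assert ratio == 2 ** 968        # ~2^968 times more work
print(len(str(dh_space)))       # 309 -- a 309-digit number
```

That factor of 2^968 is why the practical attack has to exploit structure (the shared prime) rather than enumerate keys.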
I am very much not a cryptanalyst (in fact, number theory pretty much took me to the cleaners in college), so I have to ask: is using nonstandard primes for Diffie-Hellman only difficult because some lousy software freaks out, or is this one of those situations where it is possible to pick an atrociously bad value in place of one of the well-known ones, one that actually makes you much weaker in some more or less unintuitive way?
From the standpoint of pure obscurity, anything that the NSA needs to task a math dude to spend an afternoon on is probably safer than something they have an API request for; but there are some cryptographic situations where “just pick a number to go here” is a markedly trickier problem than it appears, and some choices are bad. (Like in the good old days, when the NSA actually steered DES toward a much less vulnerable S-box.)
I am not a mathematician, but I don’t think that some long primes are cryptographically worse than others.
The current SNAFUs are all problems of the implementations; the important upstream software is fixed (in the case of Logjam, more often than not before it was discussed in mainstream media), but the productive rollout is too slow: I’m not able to upgrade some of the legacy systems because of compatibility issues.
You know, I always speculated that SETI@home was actually an attempt by the no such agency guys to obtain maximum computing power for a brute force attack. That would certainly make a reasonable movie plot…
What would be fascinating to know is, if this is true, how the NSA snuck ‘immense’ through the world’s IT supply chain.
Especially if you want to use ASICs for the task (which is probably sensible, given its scale, the comparatively poor efficiency of general-purpose CPUs for very specific tasks like that, and the high cost and low performance of FPGAs compared to ASIC implementations of the same logic); ‘immense’ means that rather a lot of spook silicon had to be fabbed and packaged without attracting attention (even of the indirect “Wow, Intel took Fab X ‘offline’ and isn’t producing any commercial product there; but their balance sheet doesn’t look nearly as bad as you’d expect, that’s odd…” flavor).
I understand that team spook has some in-house or closely-held-contractor capabilities; but they would be very hard pressed to keep up with the commercial foundries that have the advantage of massive economies of scale and large markets to spread their capital costs across; and a big compute problem gets a lot slower and more expensive if you are using some antique black-fab that is excellent at keeping its mouth shut but less good at producing silicon that looks like it was made this decade.
Anyone have a theory? Sneaky diversion of Intel or IBM capacity (those being probably your best options for domestic production of fairly high-end logic silicon)? Black-fab tech actually markedly better than believed, with part of the investment going to keeping it competitive? Found a clever way to build a chip that could do prime factorization or be a janky knockoff mp3 player and had TSMC fab a zillion of them? Quietly purchased a substantial percentage of the world’s large FPGAs and did it that way? Quietly purchased commodity CPUs or GPUs through various fronts?
I’m interested because the production of high-end silicon is a business that, while not very consumer-visible, doesn’t really have all that many players, all that many physical sites, or a lot of “eh, idle capacity, no big deal” (not when a $10 billion facility is depreciating…); and if a problem is at the edge of feasibility, you can’t necessarily do it with old tech (though, given that fabs tend to be hazmat sites, you could probably get one of Intel’s old ones for basically nothing if you let them off the hook on whatever is in the soil underneath; so getting fab capacity for clandestine but undemanding designs is likely quite doable).
You just gotta love this guy or gal.
Exactly what kind of servers did you have diverted to the NSA, Miss Fiorina?
I guess, basically, Moore’s Law (or Moore’s Observation for the scientifically picky.)
By 1984 the capability of commercial microprocessors was comfortably exceeding military ones. There were all kinds of interesting things going on with VLIW and MIMD, which kind of peaked in the early 2000s and then went very quiet, except that it was suggested that a lot of the techniques were now in commercial silicon, which was RISC interpreting CISC or implementing graphics instruction sets. There was no compelling commercial reason to migrate to 64 bits, but the advantage to the military was obvious.
It’s reasonable to assume that the agencies have simply had access to the latest hardware out of the top bins all the time, and are simply doing the clever bits in software (which doesn’t date). Xeons and graphics ICs are plentiful, and once you’ve worked out how to use them on parallel tasks the rest is just an engineering exercise, basically. Meanwhile, ordinary customers are helpfully paying for the R&D.
Never underestimate the cryptographical capabilities of a group with funding sufficient to build a Beowulf cluster the size of a small city.
I’m not sure if “cracking” a prime is the same as finding a new key…?
The, er, patriotic kind; that might have also helped stem the bleeding at my tragicomedically mismanaged company, albeit temporarily…
It’s the product of two primes that’s being “cracked”, if I’m not mistaken. From the comments in the article:
Actually, there are two primes g and p, and the secret shared between the two end-points is g^(a*b) modulo p, a and b being random numbers chosen each by one end-point, and they are never transmitted. The Diffie-Hellman protocol and mathematics allow the end-points to find this shared secret by exchanging g^a modulo p and g^b modulo p. It is assumed that computing g^(a*b) modulo p is very hard when you only know g, p, g^a modulo p and g^b modulo p.
What this paper says, if I’ve understood correctly, is that it is possible to pre-compute a function from g and p (even if it takes a lot of time and money) that will let you find g^(a*b) modulo p easily from g^a modulo p and g^b modulo p.
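A toy analogue of that precompute-once, attack-many pattern: baby-step/giant-step discrete logarithms. The real attack uses the number field sieve on 1024-bit primes, and the tiny prime below is mine, chosen purely for illustration — but the shape is the same: one expensive step tied to (g, p), then each individual connection falls cheaply.

```python
from math import isqrt

# Deliberately tiny prime; real targets are ~1024 bits.
p, g = 1000003, 2

def precompute(g, p):
    """The expensive, one-time step: a table of g^j mod p."""
    m = isqrt(p) + 1
    return m, {pow(g, j, p): j for j in range(m)}

def dlog(h, g, p, table):
    """The cheap per-connection step: find x with g^x = h (mod p)."""
    m, baby = table
    giant = pow(g, (p - 2) * m, p)   # g^(-m) mod p, via Fermat
    y = h
    for i in range(m):               # at most ~sqrt(p) steps
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    raise ValueError("no discrete log found")

table = precompute(g, p)             # done ONCE per (g, p)
for secret in (123456, 271828):      # many "connections", same prime
    h = pow(g, secret, p)            # what an eavesdropper sees
    assert dlog(h, g, p, table) == secret
```

At this toy scale the precomputation builds a ~1000-entry table and each log takes ~1000 multiplications; at 1024 bits the one-time step is the “few hundred million dollars” machine from the article, and the per-connection step is what makes passive decryption at scale worthwhile.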