Universities have been doing R&D for the feds, including the very, very quiet kind, for a long time (I assume it goes back further than MIT/Lincoln Labs working on radar; but certainly at least that far); but a non-IRB-approved ‘experiment’ on human subjects, most of them not even allegedly involved in anything, as a bespoke job for the FBI would be… atypically classy even by the grim standards of the genre.
I doubt that indiscriminate harassment from internet strangers would be helpful; but I can only hope that any friends and/or colleagues of the CMU research team involved will ask them the requisite pointed questions and spit in their quisling faces if they don’t have some very, very compelling explanation of how they were coerced into this against their intentions for the research.
Even if they are not legally touchable, the situation would be greatly improved if it were the case that abusing your position as an ‘academic researcher’ would leave you a pariah, with only spooks and arms-dealer nerds like Vupen and Hacking Team for company. Not as good as old-fashioned slammer time, but humans are social animals.
This really speaks to the inherent vulnerability of the Tor network architecture. It is based on trusting groups of untrustworthy volunteers, and over and over this is how it gets breached. Compare this to a security model based on a well-known, trusted entity with a clean track record: there, the interests of the customer and the provider are aligned, and there is equity at stake if there is a compromise. That is why we see more breaches in distributed volunteer privacy tools than in single-provider companies. If an adversary can break your security system for all users for just $1 million, it’s safe to assume lots of organizations are probably doing it.
Examples? I am aware of a few, but so far Tor has a rather fine security track record; the worst was the Firefox zero-day, and Sybil attacks are hard to target specifically.
Examples? VPN services offer a similar product (anonymity), and many providers are short-lived or known for logging.
As you don’t name names, I call FUD.
The trouble is that a ‘well known, trusted entity with a clean track record’ is just a giant, soft target if you can do fun stuff like writing ‘national security letters’. There is equity at stake if there is a compromise; but there is ‘having all your equity seized and going to jail’ if you are uncooperative about it. VPNs are far more sensible if you have an endpoint that you can trust; but if you are up to something unpopular, that gets pretty tricky. If you just want to connect to the home office, no problem.
Tor does have a major problem in that, at its current rather small size, it is dangerously cheap to be the one running a substantial percentage of the nodes (my understanding is that this attack employs some clever strategies to get better-than-naive-brute-force results, but fundamentally relies on it being attainable to control a nontrivial portion of the nodes). But it is designed as it is because single entities can be leaned on legally, and (in its original incarnation as a tool for overseas US government personnel) the identity of the entities you consider trustworthy can be highly revealing.
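A back-of-the-envelope way to see why controlling even a modest fraction of nodes is so dangerous: if an adversary’s relays carry a fraction of guard bandwidth and a fraction of exit bandwidth, and both ends of a circuit are in adversary hands, traffic correlation deanonymizes that circuit. This sketch uses a deliberately simplified model (independent, bandwidth-weighted relay selection, which is not Tor’s exact path-selection algorithm; the function name is mine):

```python
# Simplified model: a circuit is end-to-end correlatable when BOTH its
# guard and its exit are adversary-run. Assuming relays are chosen
# independently in proportion to bandwidth (a simplification of real
# Tor path selection), that probability is just f * g.

def compromise_probability(f: float, g: float) -> float:
    """f: adversary's fraction of guard bandwidth,
    g: adversary's fraction of exit bandwidth."""
    return f * g

# On a small network, even 10% of guard and exit capacity is cheap to
# run, and already exposes roughly 1% of all circuits built.
for frac in (0.01, 0.05, 0.10, 0.20):
    p = compromise_probability(frac, frac)
    print(f"{frac:.0%} of guard+exit bandwidth -> {p:.2%} of circuits exposed")
```

The point of the quadratic shape is that the attack cost scales with total network bandwidth, not with the number of users; a small volunteer network is therefore inherently cheaper to Sybil than a large one.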
The bulk of its difficulties (both security-related and in terms of poor bandwidth and high ping) trace back to attempting to avoid reliance on any particular trusted entity; but there are reasons why the ONR started the project despite the fact that VPNs were already commonly available at the time.
I try to keep my message non-promotional, but since you ask, here are a few examples I have written about:
A quick search will reveal many other references.
I have been building and running anonymity systems for over 20 years as the author of the Mixmaster anonymous remailer and the founder of Anonymizer.com. Neither of those has ever been impacted by a compromise, and I stand by Anonymizer personally (I handed off work on Mixmaster many years ago). Reputation and track record have proven more successful against attack than the distributed model that Tor uses.
I have 20 years of experience running Anonymizer and other privacy services without ever having fallen to a security letter or any of the many subpoenas we have received over the years. The key is to design the system so that it keeps no user activity records, even in real time. There is nothing to take. This has been very well tested.
Tor has been a very interesting experiment, started by the ONR, and the entire security and privacy space has learned a huge amount from it. That does not mean it is the right answer. It is possible that at some time in the future Anonymizer could be forced to alter its architecture to support capture of user activity; that would appear to require a change to US law right now.
Obviously I have a bias, but I think the record is clear.