>... a few weeks earlier had canceled a security conference presentation on a low-cost way to deanonymize Tor users. The Tor officials went on to warn that an intelligence agency from a global adversary also might have been able to capitalize on the vulnerability.
This is kind of worrying. I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)
As to the CMU stuff... Tokyo University has a pledge to ensure that basically no military research is done on campus, which I find pretty laudable.
I wonder if there's a similarly worded pledge for this sort of thing. But at the same time, universities can do a lot of good security research that can, in the end, strengthen the systems we use.
The "$1 million to target these specific people" sounds dirty, but "$1 million to do research on the vulnerabilities of Tor"... well that sounds like research to me. Pretty tricky.
There is no WPA2 alternative, this is it, this is the bleeding edge of Internet privacy algorithms. And since privacy is seen as a public enemy, a publicly sponsored attack is underway to weaken it, to the point where you can't really trust Tor for the type of world-changing, nation-state-adversary, Snowden or Wikileaks level missions.
People needing a high level of protection can and should use Tor in their workflow, but they should not expect a one-click solution. On the other hand, it's perfectly adequate for day-to-day use by privacy-minded individuals who are not targeted by active attacks.
I'm at MIT proper and a good portion of our team's medical device work is DOD funded. While we are primarily designing devices to be used in civilian hospitals, our diagnostic devices could also potentially be used to optimize battlefield care for soldiers, which I personally think is great.
I think a wholesale ban on military research is pretty silly; the ethical implications of projects should be considered on a case by case basis by the university.
As someone who wasn't alive then either, it's a rather well documented period of history, albeit mostly in dead tree form. Karnow is probably the classic (http://www.amazon.com/gp/aw/d/0140265473/).
To use a more modern analogy that exists on the internet, the 2010 US military research / development / testing budget looks like it was around USD$80b.
Now you're a university professor / dean / president. Times are hard (they always are, you're in academia). There's a huge pie sitting right next to the one you've been fighting over, and all you have to do is work on certain technologies that may or may not have lethal consequences.
I wouldn't take the bet on many people saying "No thanks, I'll be happy giving up grant money for moral reasons."
> I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)
The attacks on Tor are largely in the form of:
A) Outright implementation flaws [e.g. software bugs]
B) Malicious actors deploying Tor nodes [e.g. On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.
https://blog.torproject.org/blog/tor-security-advisory-relay... ]
> A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
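To make that concrete, here is a minimal sketch of what such a check could look like, assuming the observer already has packet timestamps collected at a suspected entry guard and a suspected exit (the bucketing and scoring are made up for illustration; real traffic-confirmation attacks are considerably more sophisticated):

  # Toy traffic-confirmation sketch (illustrative only, assumed data):
  # bucket packet timestamps from a suspected entry and a suspected exit,
  # then score how similar the two traffic-volume profiles are.
  def bucketize(timestamps_ms, bucket_ms=100):
      counts = {}
      for t in timestamps_ms:
          b = int(t // bucket_ms)
          counts[b] = counts.get(b, 0) + 1
      return counts

  def correlation_score(entry_ts, exit_ts, bucket_ms=100):
      a, b = bucketize(entry_ts, bucket_ms), bucketize(exit_ts, bucket_ms)
      keys = set(a) | set(b)
      dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
      norm = (sum(v * v for v in a.values()) * sum(v * v for v in b.values())) ** 0.5
      return dot / norm if norm else 0.0  # near 1.0 suggests the same circuit

A score near 1.0 for one particular entry/exit pair, and near 0.0 for all the others, is the "confirmation".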
Pretty much the only defense is to control the entry nodes you use yourself:
> Restricting your entry nodes may also help against attackers who want to run a few Tor nodes and easily enumerate all of the Tor user IP addresses. (Even though they can't learn what destinations the users are talking to, they still might be able to do bad things with just a list of users.) However, that feature won't really become useful until we move to a "directory guard" design as well.
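For what it's worth, this is exposed through torrc today; something like the config sketch below pins your circuits to guards you run or trust (the fingerprints are placeholders, not real relays, and as the points below note, this trade-off cuts both ways):

  # torrc sketch: pin entry guards you run or trust (placeholder fingerprints)
  EntryNodes 0123456789ABCDEF0123456789ABCDEF01234567,89ABCDEF0123456789ABCDEF0123456789ABCDEF
  StrictNodes 1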
It's an inherent problem with low-latency anonymity networks, and it remains an open research problem.
However, controlling your entry nodes has its own problems:
1) It pretty clearly links you to entering the Tor network via a consistent set of nodes.
2) Capturing these nodes via the DC and warrants/legal action has been done in the past, since anyone is going to be able to find these nodes now that they are no longer randomly selected...
3) Once you are actively targeted you are just as vulnerable.
The intelligence community used to value Tor. Remember where it came from. Now they don't, presumably because the primary intelligence target has shifted from fixed actors like nation states and large businesses to the general public. Now those nation states and businesses are 'intelligence partners' in the fight against the 'lone wolves' hiding within the masses. Perhaps then it is in Tor's interests to restart some rivalry between nation states.
NSA is schizophrenic in that regard. Remember that one of the things it does besides looking in everyone's underwear drawers is it also advises US govt (3 letter agencies, military) on what crypto to use. In other words it tells Uncle Sam how to lock his underwear drawers so other agencies don't peek in there.
It is always interesting to see what they say there. Because if they know, for example, that one type of crypto technique or implementation is vulnerable, will they still recommend it for TS classified material storage? Will they recommend it for the US military or the diplomatic service? If they don't, it might leave that open to attack, and they are not doing their job. If they do say "don't use this combination of AES, prime numbers, or OpenSSL implementations", that also gives something away.
I wonder if the people who make these recommendations even talk to the people who discover, exploit, and actively penetrate systems. Because everything is very compartmentalized, they actually might not be able to.
That is why they are probably very interested (like we saw) in somehow subverting or weakening some algorithms and implementations so they are the only ones that have a key (Dual_EC_DRBG), or the only ones that potentially have the computational capacity to exploit them (DES).
NSA themselves have used Dual_EC_DRBG (which can be distinguished from a PRF even if you don't have the 'backdoor key': it's not just backdoored and slow, it's bad - and they know that). GCHQ behaves even worse and is at this point almost entirely out of control.
In either case, I feel the information assurance and signals intelligence arms really should never have been the same agency: the roles are entirely at odds with each other, they do not seem to have balanced even their own governments' equities properly, and their recommendations have not always been given in good faith. So be cautious drawing any conclusions from their advice.
Unfortunately, that is not the sort of 'reform' that either government is interested in, particularly my own. It's quite depressing, really.
That actually makes sense because of the way it was backdoored. What they did there is the gold standard of subverting and backdooring a crypto algorithm: go through a standards body, backdoor it using a public/private key pair, and hold the private key. Then encourage others to use the system as much as they can (which includes showing the world that they themselves use it).
The NSA has been dreaming of key escrow forever. Since the 90s that dream has drifted further and further from reality, but they didn't completely give it up. Dual_EC_DRBG was effectively becoming the key escrow they wanted for every system that used it: they got to keep the private key and thus had high enough assurance that others couldn't use their backdoor.
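To see why holding the private key amounts to key escrow, here is a toy sketch of the trapdoor structure, written over integers mod a prime rather than the real elliptic-curve points, with made-up parameters and none of the real output truncation: whoever knows the secret exponent relating the two public generators can turn a single output into the generator's next internal state.

  # Toy Dual_EC-style DRBG (NOT the real construction; parameters are made up).
  p = 2**61 - 1                 # prime modulus for the toy group
  d = 123456789                 # the escrowed secret: g = h^d (mod p)
  h = 5                         # public generator, playing the role of Q
  g = pow(h, d, p)              # public generator, playing the role of P

  def drbg_step(state):
      # the next state comes from g, the output the world sees comes from h
      return pow(g, state, p), pow(h, state, p)

  def escrow_recover(output):
      # whoever holds d turns one output into the NEXT internal state:
      # output^d = (h^s)^d = (h^d)^s = g^s
      return pow(output, d, p)

  state = 987654321
  state, out1 = drbg_step(state)            # victim generates one output
  stolen_state = escrow_recover(out1)       # key holder recovers the state
  assert stolen_state == state
  _, out2 = drbg_step(state)                # the victim's next output...
  _, predicted = drbg_step(stolen_state)    # ...is now fully predictable
  assert predicted == out2

The real backdoor, pointed out publicly by Shumow and Ferguson in 2007, works the same way with the curve points P and Q, plus a small brute force over the bits that the standard truncates away.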
Whoever was in charge of that operation was probably patting themselves on the back every morning after waking up.
I don't think they valued Tor specifically. They did value scientific research, which is what Tor was at the time. Like most research work, it got dropped once they had a working proof of concept. The State Department picked it up years later.
1) For security, most systems rely on their obscurity and on the fact that the assets they protect probably aren't worth much investment by the attackers. Tor can't rely on either of those circumstances: It's prominent and breaking into it is a one-stop solution to attacking many valuable targets.
2) Many organizations with large amounts of resources, from state intelligence agencies to law enforcement to security vendors to ISPs, would like to find ways to break Tor's security inexpensively.
3) True security is very difficult and expensive. For Tor, this is taken to an extreme by #2. Does the Tor Project have the resources to implement bug-free software (e.g., the kind that flies passenger planes)? Certainly not. Can they find and fix bugs as quickly as the attackers described above find and exploit them? Certainly not. I'm not criticizing them; they just don't have the resources.
4) Assuming the underlying concept of onion routing is secure, there still are plenty of targets for attack, such as the implementation and all the other code Tor relies on (e.g., almost all of Firefox for the Tor Browser, encryption algorithms, your OS, etc.). Attacking a Tor user doesn't seem impossible.
Based only on the theorizing above, and not knowing about Tor's actual implementation, I fear that we're lucky if Tor still is expensive to attack. Of course, any smart attacker with an exploit will publicly complain about how hard Tor is to hack.
If you look at Tor's concept, it's pretty clear that it cannot be considered secure.
Each time you use Tor, your packets actually go through a path of 3 different servers (or relays). If the attacker owns the two ends, it's game over. How many relays are there out there? How many are owned by the NSA or other governments?
It's pretty obvious that this system just cannot work because a majority of relays are owned by the attacker.
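A rough back-of-the-envelope, with made-up adversary fractions and deliberately ignoring Tor's guard pinning (which keeps the same entry guard for months and changes the picture considerably):

  # Back-of-envelope sketch with assumed fractions, not real network data.
  # If relays are picked roughly in proportion to bandwidth, a circuit is
  # compromised when the adversary holds both the guard and the exit.
  def circuit_compromise_prob(guard_fraction, exit_fraction):
      return guard_fraction * exit_fraction

  def at_least_one_compromised(guard_fraction, exit_fraction, circuits):
      p = circuit_compromise_prob(guard_fraction, exit_fraction)
      return 1 - (1 - p) ** circuits

  print(circuit_compromise_prob(0.10, 0.20))        # 0.02 per circuit
  print(at_least_one_compromised(0.10, 0.20, 100))  # ~0.87 over 100 circuits

Even a modest share of guard and exit bandwidth adds up quickly once you build many circuits, which is part of why guard pinning exists in the first place.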
I still find it hard to ever trust an institution that wouldn't raise a huge stink about the ethical implications of this. They don't exist to serve "national security interests", that's what the NSA is for.
> They don't exist to serve "national security interests"
Yes they do. From their website:
"The Software Engineering Institute (SEI) is a not-for-profit Federally Funded Research and Development Center (FFRDC) at Carnegie Mellon University, specifically established by the U.S. Department of Defense (DoD) to focus on software and cybersecurity."
I interpreted the parent comment as saying that CMU doesn't exist to serve national security interests, whether or not there is an entity, like the FFRDC SEI, that does exist for related reasons.
On one hand, leading academic institutions are commonly understood to have a responsibility to preserve free speech (especially speech that is critical of military or government action), remain as a neutral education and research body decoupled from any specific political or military agendas, and help lead in social progress towards greater overall ethical standards in education, research, and scholarship.
On the other hand, many universities loan out the credential and status of being affiliated with them as a recruitment tactic, assisting the DoD in the task of creating a diversified set of military research organizations. From a superficial point of view (the view taken by many of the younger engineers duped into working for below-market pay at such places), these look like run-of-the-mill software/science/engineering jobs while having all sorts of ethical gray areas, and the end result is rampant ethical conflicts of interest, questionable management practices, and many other problems.
I don't think it's as simple as just pointing out that SEI is an FFRDC and moving on. The fact that universities in general continue to perpetuate this problem -- academia-military affiliation, pseudo-credible research facilities, and status-mongering -- is the bigger issue.
But it looks like they put themselves in that position, either by voluntarily working with the FBI and allegedly taking a $1M grant, and/or by doing unethical research on the live network.
From what I've gathered, TOR is pretty robust at least on paper, and when explained in an academic way it has me almost convinced that the apparatus does what it's supposed to, except for the part where it catastrophically fails when put into practice, for example:
1.) Custom Firefox 'Browser Bundles' which do not auto-update, ensuring latent vulnerabilities are left unaddressed
2.) Trusted 'Third Parties' running exit nodes who we hope and pray are doing their job correctly
3.) Weird and non-innocuous-looking domains on the wire that do nothing more than alert the neighborhood that somebody's using TOR (unless everyone's using it, you stand out like a sore thumb)
4.) Sybil attacks in the form of people-with-more-money-than-you polluting the network
5.) ???
6.) Any number of other issues (which have since been patched), but which still work if the TOR user is uneducated about how TOR works (traffic analysis / correlation attacks / zero-knowledge-proof attacks, etc.)
> Personally, I use it maybe 10, 20 percent of the time. I know that there are people out there that are using it a lot of the time. But for me as much as I might hate Flash, there are times that I need to watch something on YouTube.
YouTube has been working for me using Tor Browser for months, if not years.
I don't think universities should get a free pass on whatever their affiliated FFRDCs might do. If the university wants to be disassociated from an unethical action, do so by severing the tie between the university and the FFRDC, and stop lending credibility and credential to the FFRDC via the university's reputation. Otherwise, accept the fair guilt by association that will follow.
That would help with things like the memory safety of the daemons you run, but that hasn't been the problem when Tor has failed its users.
Tor has failed its users because the idea of running a public Tor cloud with volunteer entry, onion, and exit nodes is ludicrous. It means that the entire network is under surveillance all the time, the exact opposite of what you want. There has been widespread confirmation that the data you transfer via the public Tor cloud is being passively surveilled at the endpoints and actively modified when you, for example, download software. This makes it incredibly dangerous to use, likely more dangerous than just using the regular internet.
There are many other problems (like the fact that .onion sites are a dirty hack and likely have many undiscovered weaknesses like the ones CMU found) but nearly all of them are either deployment or architectural issues, not code security issues.
Yes, I agree. When I wrote the parent comment I was thinking more about implementation detail correctness: memory safety, protocol implementation correctness, etc.
Like you said, Tor has architectural issues. Tor would be fine if it were low-profile, but it's not, and that's a major part of why the architecture is breaking down - it doesn't scale well with increasing users/publicity/nation-state-interest.
I think most proof-verifiable languages are too limited to prove many of the security properties valuable to Tor users.
For example, side-channel attacks, a classic attack on computerized cryptography: I don't know of any proof language that can protect against side-channel attacks.
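As a concrete example of what a functional-correctness proof misses, here is the classic non-constant-time comparison (a made-up sketch, not Tor code): both functions satisfy the same "returns True iff the strings are equal" specification, but only one of them leaks through timing.

  # Both functions meet the same functional spec, but the first leaks, via
  # timing, how many leading characters of the guess are correct.
  def insecure_compare(secret, guess):
      if len(secret) != len(guess):
          return False
      for a, b in zip(secret, guess):
          if a != b:            # early exit: runtime depends on where the mismatch is
              return False
      return True

  def constant_time_compare(secret, guess):
      if len(secret) != len(guess):
          return False
      diff = 0
      for a, b in zip(secret, guess):
          diff |= ord(a) ^ ord(b)   # always walks the whole string
      return diff == 0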
If you look online there are a few lists of Tor attacks. The attacks include: snooping on exit relays, application issues, traffic correlation, website fingerprinting, congestion attacks, blocking Tor access (declining to extend). Most of these are issues in the design of the Tor system, not something I think source code proof systems are capable of preventing.
What property would you prove, though? You could create a memory-safe program that does not provide anonymity. How do you represent "anonymity" in the proof system?