Why the Tor attack matters (cryptographyengineering.com)
279 points by pmh on Nov 13, 2015 | hide | past | favorite | 58 comments



I don't really buy the comparison that what CERT did is similar to a university-sponsored DDoS. I think a better parallel is the Dan Egerstad case. He ran Tor exit nodes and analyzed all the plaintext traffic leaving them. He ended up collecting a ton of sensitive usernames and passwords. He tried to contact some of these people by e-mail but they ignored him. So he posted a bunch of these passwords on his blog. He was promptly arrested (and eventually released). At that time the security community was outraged that an obviously well-intentioned researcher was being harassed by the police for doing his job. The response is a lot different now for reasons I don't really understand.

I do wish both sides would acknowledge this is a tricky issue. On the one hand, if I run a tor exit node or relay, it is my node and it seems like I'm allowed to do with it as I please. At the same time, it also seems obviously unethical (maybe illegal?) to be harvesting passwords off an exit node or to dole out vigilante justice to Tor users I don't like.

One other thing to keep in mind here is that SEI is a DoD funded center. It may be nominally affiliated with CMU, but all their money comes either from the DoD or external grants awarded to the researchers at SEI. So CMU the private research university and SEI the DoD-funded research center have very different obligations to the public. It's important not to conflate the two.

The big question is this: what are our responsibilities as security researchers, especially when we're working on "live" software systems? Green seems to be suggesting some form of a review board which pre-approves experiments on live targets. Maybe this is what we need, but be careful what you wish for. The bad guys don't have review boards.


> I don't really buy the comparison that what CERT did is similar to a university-sponsored DDoS. I think a better parallel is the Dan Egerstad case.

Here's why it's worse: they inserted a plaintext encoding into the response from the onion-address lookup relay, and so anybody observing the user (e.g. the ISP) could detect what onion address the user was connecting to. This applies after the fact to recorded traffic as well. Thus the researchers had no control over who got deanonymized, to whom they were deanonymized, and when they were deanonymized.
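To make the mechanics concrete, here's a toy sketch (my own illustration, not the actual attack code, which was never published) of why an in-band plaintext tag is so dangerous: the malicious relay encodes the looked-up onion address into an attacker-chosen pattern of cell types, and anyone who later learns the encoding can decode it from recorded traffic, after the fact.

```python
# Toy tagging attack: encode each bit of the onion address as a choice
# between two otherwise-normal cell types. A passive observer (e.g. an
# ISP with recorded traffic) who knows the encoding can read it back out.

def encode_tag(onion_address: str) -> list:
    """Malicious relay: encode each bit as a RELAY vs RELAY_EARLY cell."""
    bits = "".join(f"{byte:08b}" for byte in onion_address.encode())
    return ["RELAY_EARLY" if b == "1" else "RELAY" for b in bits]

def decode_tag(cells: list) -> str:
    """Passive observer: recover the address from the cell-type stream."""
    bits = "".join("1" if c == "RELAY_EARLY" else "0" for c in cells)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode()

cells = encode_tag("example.onion")
assert decode_tag(cells) == "example.onion"
```

The point of the sketch: the tag travels in the clear alongside the traffic, so the researchers had no way to limit who could read it or when.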

> I do wish both sides would acknowledge this is a tricky issue. On the one hand, if I run a tor exit node or relay, it is my node and it seems like I'm allowed to do with it as I please.

You actually are not allowed to do with your relay as you please. At least in the US, the legal theory protecting relay operators (i.e. safe harbor) also makes it illegal to observe user traffic content except in certain cases (e.g. to improve network performance).

> One other thing to keep in mind here is that SEI is a DoD funded center.

This doesn't seem very relevant. All researchers have an obligation to consider and mitigate possible harms that occur during their research (source: I work in a military research laboratory). These researchers clearly did not fulfill that obligation, and I'm sure their institution is reviewing or has reviewed their procedures to make sure it doesn't happen again.


Let me try to understand your position a little better.

Are you saying the problem here is simply that the effects of the attack were observable by others? If this were not the case, you'd have been fine with it?

And since you seem to be arguing that researchers shouldn't examine user traffic, do you also think that what Egerstad did was also wrong? Do you agree with his arrest?

And one more thing sort of related to this. What's your opinion on research like Arvind's Netflix deanonymization attack? Do you think the work that research involved was also unethical?

> All researchers have an obligation to consider and mitigate possible harms that occur during their research

This is nice idealism and I'm totally in support of it. But I can't help think this is pie-in-the-sky thinking, especially when organizations like the DoD are involved.


Understanding the nature of each organization involved -- how both motivations and expectations shift as one moves between organizational barriers -- is perhaps the most important, worst reported, least understood part of this story.

If the SEI took money to, essentially, weaponize unpublished research, the issue is not one an IRB would have prevented. DoD contractors aren't bound by scientific codes of conduct. In light of that realization, the suggestion in this blog post is confusing.

(BTW, distancing CMU and the SEI is not meant as a defense of CMU -- close ties between public science and law enforcement/military R&D are as troubling as ever...)


> But there's also a view that computer security research can't really hurt people, so there's no real reason for any sort of ethical oversight machinery in the first place.

Worse: there's a view that people who get owned "deserved it." Our industry, and its academic attachments, have a really strange vindictive streak towards those who it should be looking out for. (Which is not to say that those people should be looking out for people swapping child porn--but what about the thousands and thousands of people who were not?)


It would have been more ethical if the university had not blocked the "researchers" from disclosing the vulnerability at Black Hat. (Though even then they were not following responsible disclosure practices). The fact that Tor had to guess what the vulnerability was and the "researchers" still have not released their paper is unethical and probably illegal.


I'll grant you it might be unethical, I don't see how it could possibly be illegal.


CFAA


Seems like more research needs to go into preventing traffic confirmation attacks: https://blog.torproject.org/blog/tor-security-advisory-relay...

"A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her."

Interesting technical problem. They patched it, obviously, but similar attacks are still possible. The post did say, when it was published, that more research needed to be done. Obviously the method they used to send and receive signals from one side to the other doesn't work anymore, but statistical methods presumably still do. Sort of like this:

https://mice.cs.columbia.edu/getTechreport.php?techreportID=...

Seems like a very difficult problem to solve.
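For intuition, here's a minimal sketch of the statistical flavor of such attacks (my own toy example, not the technique from the linked paper): bucket the packet timestamps seen at the entry and exit relays into fixed windows and correlate the two volume series. Flows on the same circuit correlate strongly even through network delay; unrelated flows don't.

```python
# Toy traffic confirmation check: compare traffic volume over time at two
# observation points. A high correlation suggests the same circuit.
from statistics import mean

def volume_series(timestamps, window=1.0, horizon=60.0):
    """Count packets per fixed-size time window."""
    buckets = [0] * int(horizon / window)
    for t in timestamps:
        if 0 <= t < horizon:
            buckets[int(t / window)] += 1
    return buckets

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# A flow seen at the entry guard, and the same flow ~0.2s later at the exit:
entry = [0.1, 0.3, 2.2, 2.5, 7.0, 7.1, 30.4, 30.6, 45.0]
exit_ = [t + 0.2 for t in entry]
unrelated = [5.0, 12.0, 19.5, 26.0, 33.3, 41.1, 50.2]

same = pearson(volume_series(entry), volume_series(exit_))
diff = pearson(volume_series(entry), volume_series(unrelated))
assert same > 0.9 > diff
```

Real attacks are far more sophisticated (and must cope with cover traffic, jitter, and many concurrent flows), but this is the basic shape of the problem: the defender has to break the correlation, which low-latency designs like Tor deliberately preserve.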


What is so surprising here? The DOD is the largest funder of research grants in the US. Pretty much every university is doing research for a US agency, on everything from cyber security to lasers for missile defense. I find it very hard to believe that this is the first time a university was conducting computer security research on live targets.


Whether or not it's surprising is perhaps the least interesting point for discussion. Universities have a responsibility to conduct human research ethically and I hope we hear a lot more about how this research in particular was conducted. This could have endangered lives depending on how it was done, and I'm quite sure the ends don't justify the means unless it was specifically done in a way which protected the anonymity of untargeted users.


Ethics are tricky, especially considering these days one can earn an MSc in Guided Weapon Systems from the top Aerospace Engineering school in the UK.


>Universities have a responsibility to conduct human research ethically

which means little given the laws of nature; all that matters is what people end up doing, and measuring that statistically. If, statistically speaking, most people aren't ethical, then that's what we'll get. This whole idea that people are in control of their actions or have any freedom whatsoever, given what we know about the laws of nature, has to go.


If people lack free will, then the people judging/punishing them also lack free will, and the entire premise of your argument is an absurdity.


If free will didn't exist, it would be necessary to create it.


The DOD funds lasers, but it doesn't then have researchers fire them at random cars to test their effectiveness.

I'm not actually sure this isn't sarcasm.


The DOD also funds development of guided, chemical, biological and nuclear weapons, and just about every other way to kill a man you can think of. "Agent Orange" was pretty much militarized by the University of Hawaii under a DOD grant during the late 60's, and they coordinated with the USAF and the CIA and provided research and analysis to optimize the dispersal methods and study its effects during its combat use over Vietnam.


It was also done without a warrant it seems.


From their website, I get that CMU/SEI/CERT works with both DHS and DoD.[0] Although I don't see anything specific about the FBI, it's not too much of a stretch. As DHS has grown and evolved since 9/11, distinctions between police and military have weakened. A decade ago, CERT would have been carefully shielded through parallel construction.

In my opinion, this is a wakeup call for the Tor Project. The attack would have been obvious if they'd been tracking the requisite circuit parameters. Ironically enough, it strikes me that the Tor network needs something like CERT for detecting attacks.

[0] https://www.cert.org/about/


The article raises the issue for computer security, but computer science is used in many other fields where it could have ethical implications. Self-driving cars are top of mind for me, but surely other applications have issues too. So I agree with his point, and it should be extended.


I think we have to assume that if a government can hack it, they will try. Perhaps it's sad that a university would help them, but it's also to be assumed that they're going to be trying it in some way.


> I think we have to assume that if a government can hack it, they will try. Perhaps it's sad that a university would help them, but it's also to be assumed that they're going to be trying it in some way.

Sure. And we can also-- for the purpose of thinking about risks-- assume that if a government can torture people, they will.

This doesn't make it right, and it doesn't mean that people should sit idly by. Nor does the fact that people oppose and discourage such actions mean that systems can be left vulnerable to these attacks.

Opposing unethical and abusive behavior is not mutually exclusive with building systems which are robust even against unethical attackers. Human wellbeing is maximized when we do _both_.


The government's ability to do nearly anything by force is an obvious given. This is the reason why constitutional limitations and charters of rights exist in every modern country.

For example, it is equally understood that almost any government could control/manipulate any press agency if they wanted to, or break down any door with a SWAT team.

The only difference here is that `cyber` did not exist when, and is not cleanly applicable to, the laws which limit this type of power - laws largely written in the 1800s. Additionally, it largely happens in secret, attribution is difficult, and there is a serious knowledge gap between the general public and the type of operations being done.


I'm willing to bet that the NSA has started to hook into the Tor network and add in their own nodes, which monitor the traffic. Unless it's not possible to snoop in on data.


William Binney claimed in a recent Reddit AMA that the NSA is monitoring packet routes throughout the Tor network in a program called Treasuremap.

https://www.reddit.com/r/IAmA/comments/3sf8xx/im_bill_binney...


The response by Patio11 regarding how this was acceptable penetration testing was beyond stupid.

Just because you are a university researcher does not mean you can take money and then attack some random company and say LOL JK just doing "Research". Universities have enormous computing power / resources available via various means to do research. Just because I have access to a thousand-node cluster does not mean I can randomly launch a DDOS attack against some company and then claim "Research". This is equivalent to those YouTube videos where at the end they justify assault and other egregious behaviour by claiming "Social experiment" or "Prank".


This is perhaps the most unnecessarily rude comment to be at the top of a hacker news thread in some time. Let's all remember that disagreeing with someone doesn't mean being glib or mean.


[flagged]


Anonymous random new account, I don't know how intimidated you think I'm going to be by whatever your academic credentials will turn out to be when you reveal them, but nobody I know in security research is talking about this Tor work the way you are, or would take umbrage at what Patrick said. Patrick knows what he's talking about.


Dude, the fact that nobody you know is worried should be the first clue that something is really wrong, or that you are hanging out with the wrong people. Security, privacy & data mining research are ripe for bureaucratic takedown. Incidents like these will only lead to harsher requirements and stifle future research. It's essential for the security community to police itself. As far as Patrick knowing what he's talking about, I do not take anyone who has never published a research article, let alone an abstract, seriously.

We have access to hundreds of millions of medical records. If we were to throw integrity, ethics and morality out of the window, the results would be spectacular enough to warrant front-page news in all major newspapers. And it would completely destroy any future medical research in this field, since we would have betrayed public trust.

Oh, and guess what: we are exempt from IRB. The true reality is that research to a large extent cannot be completely controlled / monitored. It is thus essential for a research community to hold itself to some ethical standards.


I'm not sure what the last part of your first paragraph was supposed to mean, but if I wanted to compare my own computer security cite record with yours, would I search scholar.google.com for "AMEDICALRe"?

You've misread Patrick's messages to spectacular effect, leaving me with the impression that you were simply champing at the bit to jump at him and his silly bingo card site.

Tor chose world governments as their adversary, and if $1MM was all it took to buy unmasking of users, they failed. That's important information, and regardless of CMU's ethics (and I think this was an ethical lapse), the revelation that there is or was a flaw of that scale is a service to the Internet.


To his credit, I'd guess he's not sharing his bona fides because doing so would jeopardize the program he alleges to be involved in, and there isn't a particular reason to doubt the veracity of his claim by virtue of his creating an anonymous account to protect said program.

While his passion for the issue has made his message more aggressive than you'd like, don't dismiss his claim because you believe he's just full of piss and vinegar. Looking past the totally unrelated arguments about his identity, I don't have any difficulty believing what he's said. I know several folks in American academia who have stated unequivocally that the amount of computing power and data analysis ability available to them would make Dr. Evil blush.

Let's be honest and set emotional responses and character assault aside here. If you take the emotion out of what he said, can you honestly say the rest of it is bullshit? It rings true to me, and he's right: if academia, which is generally held by the public to be above the sort of cloak and dagger stuff that happened with Tor, lost its way and tossed their ethics out the window ... Well, that's a Bad Thing in ways we can only begin to understand. Who's left for us to trust?


It is totally fine if they disagree with me. What's not fine is the way they chose to express their disagreement: by taking umbrage at the idea that anyone, let alone the author of a Bingo Card site, would have an opinion contrary to theirs.

I don't even think I disagree with the second part of 'AMEDICALRe's root comment. But of course, that comment has very little to do with what Patrick actually said. Patrick is responding to the fact that an anti-surveillance tool that chose as its adversaries all the world's governments was broken for a sum of money any angel investor in SFBA could have coughed up.


Fair enough. But I would argue that qualifying Tor as a group targeting world governments is a bit dramatic. That may be propagandist commentary on their part - they're entitled to make it, and people still use Tor in spite of it - but isn't the primary intention of Tor to preserve free speech and anonymity, and to offer protection from persecution (or prosecution) by nation-states that seek to quell dissent? And if all the world's governments are truly the stated target of Tor, why on earth should an American academic institution insinuate itself into that battlefield? For that matter, given the stated powers of our government's network and data analysis engines, why were they even needed?


They provided information "on that battlefield" that shows that Tor is wholly inadequate to "offer protection from persecution by nation-states". If you're in a place where your life is on the line and you think Tor will help you, this shows clearly that is incorrect. That is useful, even critical information, since lives depend on it. Maybe you don't know the source or the method but the output is very valuable.


That's only somewhat true, though. Russia tried to break Tor's anonymity and failed.

http://www.bloomberg.com/news/articles/2015-09-22/russia-s-p...


> I would argue that qualifying Tor as a group targeting world governments is a bit dramatic. That may be propagandist commentary on their part - they're entitled to make it, and people still use Tor in spite of it - but isn't the primary intention of Tor to preserve free speech and anonymity, and to offer protection from persecution (or prosecution) by nation-states that seek to quell dissent?

I don't understand what you're trying to say. Surely, if the primary intention of Tor is to protect users from persecution by nation-states, then their adversaries are world governments?


> was broken for a sum of money any angel investor in SFBA could have coughed up.

perhaps nitpicking, and a bit tangential to this debate, but I can't imagine the $1m on its own would be enough to break it.

I imagine they have a fairly beefy research budget, existing infrastructure with substantial computing power, and prior research experience to begin with. So quite a tall giant to stand on. If I had to guess, the $1m was only there to cover time spent on this very specific task at hand, and for allocating researchers' time away from other tasks...


I find it weird that such a seasoned academic researcher would completely gloss over the distinction between academia and an FFRDC.


the revelation that there is or was a flaw of that scale is a service to the Internet

Right on all points regarding Tor's failures, except this above is the crux of the problem. Specifically, that the researchers did NOT disclose this to either the Tor project or the broader security community. They disclosed it to the Feds, pulled their presentation, and sat on it presumably forever until third parties smelled something fishy.

Patrick and you are correct in your criticism of the criticism, but the fact that academic security researchers have become obsequious functionaries to state power is a MUCH larger issue here, so much so that you are arguing at completely orthogonal purposes to many of us.

My guess is that this orthogonality is lost on AMEDICALRe, and theirs on you.


Where do you think Tor came from in the first place? The US Naval Research Lab. Why do you think the USG went to CMU for this research? Because CMU has been a bastion of state-funded computer security research since the 1990s.

No, the big story here is that Tor was broken for a pittance. But that story is a lot less fun than demanding scalps from CMU, because it suggests that you might not in fact be able to thwart national SIGINT agencies with volunteer open source projects, and we nerds demand a monopoly on technological skill.


Do you think it was broken for $1M without using an already existing computing infrastructure that cost much more? I'm more interested in knowing how much the real total cost involved here is. Maybe it's not a pittance that any VC in SV could cough up.


"No, the big story here is that Tor was broken for a pittance."

That was my prediction and take-away from this. I've constantly warned against relying on Tor to stop nation-states. Its requirements, especially low latency and performance, make the anonymity goal ridiculously difficult.

That the attacks are still so inexpensive is more disturbing. Opens up doors to non-nation-state attackers that have money and connections to smart people.


> We have access to hundred of millions of medical records

They're not anonymized?

When I'm asked to provide data to medical researchers it also has to be anonymized.


Is HIPAA not a factor here?


The problem is that people are outraged that they attacked Tor when they should be outraged that they attacked Tor users.

Given what the Tor project aims to be, it needs smart people to poke it.


Right; the ethical experiment here would be to set up one's own private Tor network and then attack that. (Think that requires a lot of effort? Well, yeah; that's why you do it as part of a university with grant funding!) This would also have the bonus effect of being able to instrument all the nodes, so you could see the effects of your attack flowing through the system in a white-box manner.


Some of us are more outraged by the fact that they kowtowed to authority on the BlackHat presentation, and had a disclosure policy that favored the Feds over both the Tor project and the entire security community.

The CMU researchers are basically Sabu. Subhuman traitors to the hacker ethos.


I doubt they ever particularly cared about the mantle of 'hacker' and whatever ethos is supposed to go with it. That makes it hard for them to betray it.


If you had cut the ridiculous and mean first sentence out of this comment it would have been fine, but then, as you know, nobody would have cared about it, because you'd have been saying nothing everyone else hadn't already been saying.


Why do you copyright your comments?


flippant answer—tptacek doesn't "copyright" anything. in territories that recognize the Berne convention (in force in the US since 1989), everything created that meets the standards for copyright is protected by copyright. you can't "copyright" something—something either is, or isn't, protected by copyright. IANAL, but as tptacek's comments are tangible forms of creative works, they are trivially protected by copyright

less flippant answer—because he's probably had problems with people stealing his answers and posting them on other forums or similar issues.


A minor nit, from his profile at https://news.ycombinator.com/user?id=tptacek

All comments Copyright © 2009, 2010, 2011, 2012, 2013, 2015, 2018, 2023 Thomas H. Ptacek, All Rights Reserved.


His actual statement:

> Tor is having a fit of institutional pique that researchers are compromising the network's privacy guarantees by, well, looking at it.

> If you write security software, and you're not praying that loyal opposition hits you with everything they've got, you're not doing security

> Tor is intended to be, and is marketed as, robust against nation state adversaries. It cannot possibly be so if it worries about academics.

Two interpretations:

1. It's OK to go after Tor. This is dead wrong - attacking a network without permission is very bad form. Maybe it's OK to do the equivalent of checking to see if someone's front door is locked (this is a grey area), but only if you intend to warn them that their door's unlocked. Going through their stuff is obviously unethical (and probably illegal).

2. Tor should be more permissive, encouraging more attacks from researchers.

Obviously, the researchers crossed the line when they started gathering user data. But Tor should only be upset that the attack went too far, not that the attack succeeded.

I'm not sure of the context - was the Tor community pissed off that researchers found a weakness, or pissed off that the weakness was exploited?

Twitter is a pretty poor platform if you want nuance, so it's probably best to be charitable in your interpretations of what people say there.


> The response by Patio11 regarding how this was acceptable penetration testing was beyond stupid.

Actually, from a security perspective, it's quite understandable. If you provide a tool that claims to be safe from state actors, they can use that kind of power to attack it.

That said, if it didn't pass the usual protocols at the university for ethical standards they can and should be fired regardless of the client or reason.


What was the patio11 comment? It seems to have been deleted, making this thread a bit harder to follow.


I believe it was a tweet https://twitter.com/patio11/status/664551822120476672 which was interpreted as chiding the victims for being thin-skinned about the attack upon them. (Secure systems thrive and survive only if they can take on all stressors and remain robust.)

While Patrick seemed to be focusing on the abstract notion of security mechanisms needing to welcome malicious scrutiny, the strong reaction against his tweet was based on the observation that Patrick failed to take into account the real, human cost of such an attack. This was further compounded by the fact that often, research requires IRB approval to determine whether the research is ethical, and the evidence is that CMU's actions weren't ethical. Yet Patrick felt it necessary to opine without understanding the ethical component of such an attack.


I still remember the researchers working with Facebook on some social science project or the discussion on that guy tweeting about airplane security, so the response from HN on this case baffles me somewhat.


None of this should be much of a surprise.

There has always been the possibility of bad actors being involved with Tor. In addition, the Tor software is complicated enough that there undoubtedly will be bugs in it.

This is "you bet your life" serious, and both the architecture and the implementation of the software must be perfect for it to hold up. It's pretty easy for one bug to mean "game over".

People using Tor just don't have a chance when it comes to dealing with the NSA, FSB, GCHQ or any similar state actors. Even allowing for inevitable government bureaucracy and incompetence, the disparity in resources can just be staggering. A big agency can easily, easily afford to devote 100 full time people to one high value target. Those are not odds I'd like to bet against.

In the bigger picture, the NSA doesn't give a rats ass about either Silk Road or about child pornography (at least I hope they don't). Which is why an "academic institution" was enlisted to help out the FBI with this.

But if I was a dissident or protester in Turkey, Syria, Russia, or any of a large number of authoritarian countries, I certainly wouldn't use Tor. Not if my life and the life of my family was at risk.



