
I don't really buy the comparison that what CERT did is similar to a university-sponsored DDoS. I think a better parallel is the Dan Egerstad case. He ran Tor exit nodes and analyzed the plaintext traffic leaving them. He ended up collecting a ton of sensitive usernames and passwords. He tried to contact some of the affected people by e-mail, but they ignored him, so he posted a bunch of the passwords on his blog. He was promptly arrested (and eventually released). At the time, the security community was outraged that an obviously well-intentioned researcher was being harassed by the police for doing his job. The response is a lot different now, for reasons I don't really understand.

I do wish both sides would acknowledge that this is a tricky issue. On the one hand, if I run a Tor exit node or relay, it is my node, and it seems like I should be allowed to do with it as I please. On the other hand, it also seems obviously unethical (maybe illegal?) to harvest passwords off an exit node or to dole out vigilante justice to Tor users I don't like.

One other thing to keep in mind here is that SEI is a DoD-funded center. It may be nominally affiliated with CMU, but all of its money comes either from the DoD or from external grants awarded to SEI researchers. So CMU, the private research university, and SEI, the DoD-funded research center, have very different obligations to the public. It's important not to conflate the two.

The big question is this: what are our responsibilities as security researchers, especially when we're working on "live" software systems? Green seems to be suggesting some form of review board that pre-approves experiments on live targets. Maybe that is what we need, but be careful what you wish for: the bad guys don't have review boards.




> I don't really buy the comparison that what CERT did is similar to a university-sponsored DDoS. I think a better parallel is the Dan Egerstad case.

Here's why it's worse: they inserted a plaintext encoding into the response from the onion-address lookup relay, so anybody observing the user (e.g. the ISP) could detect which onion address the user was connecting to. This applies after the fact to recorded traffic as well. Thus the researchers had no control over who got deanonymized, to whom, or when.
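
To make the objection concrete, here's a minimal sketch (in Python) of the general technique as I understand it. The cell-type names and functions are made up for illustration; this is not the actual CERT/SEI code. The point is that an unkeyed, plaintext tag can be decoded by anyone who observes or records the signal, with no cooperation from whoever injected it:

    # Illustrative sketch only: encode an onion address as a pattern of
    # otherwise-innocuous "cell types", the way a tagging relay might.
    # Cell-type names are hypothetical, not Tor's actual wire format.

    def encode_tag(onion_address: str) -> list[str]:
        """Turn an onion address into a sequence of observable cell types."""
        cells = []
        for byte in onion_address.encode("ascii"):
            for i in range(8):
                bit = (byte >> (7 - i)) & 1
                cells.append("RELAY_EARLY" if bit else "RELAY")
        return cells

    def decode_tag(observed_cells: list[str]) -> str:
        """Recover the onion address from a recorded cell-type sequence."""
        bits = [1 if c == "RELAY_EARLY" else 0 for c in observed_cells]
        data = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            data.append(byte)
        return data.decode("ascii")

    # Any passive observer, or anyone replaying an old traffic capture,
    # can run decode_tag -- no key material is required.
    tag = encode_tag("exampleonionaddr.onion")
    assert decode_tag(tag) == "exampleonionaddr.onion"

Nothing in decode_tag requires a secret, which is exactly why the researchers had no way to limit who could take advantage of the deanonymization.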

> I do wish both sides would acknowledge that this is a tricky issue. On the one hand, if I run a Tor exit node or relay, it is my node, and it seems like I should be allowed to do with it as I please.

You actually are not allowed to do with your relay as you please. At least in the US, the legal theory protecting relay operators (i.e. safe harbor) also makes it illegal to observe user traffic content except in certain cases (e.g. to improve network performance).

> One other thing to keep in mind here is that SEI is a DoD-funded center.

This doesn't seem very relevant. All researchers have an obligation to consider and mitigate possible harms that occur during their research (source: I work in a military research laboratory). These researchers clearly did not fulfill that obligation, and I'm sure their institution is reviewing or has reviewed their procedures to make sure it doesn't happen again.


Let me try to understand your position a little better.

Are you saying the problem here is simply that the effects of the attack were observable by others? If this were not the case, you'd have been fine with it?

And since you seem to be arguing that researchers shouldn't examine user traffic, do you think that what Egerstad did was also wrong? Do you agree with his arrest?

And one more thing, sort of related to this: what's your opinion on research like Arvind's Netflix deanonymization attack? Do you think the work involved in that research was also unethical?

> All researchers have an obligation to consider and mitigate possible harms that occur during their research

This is nice idealism, and I'm totally in support of it. But I can't help thinking it's pie-in-the-sky, especially when organizations like the DoD are involved.


Understanding the nature of each organization involved -- how both motivations and expectations shift as one moves across organizational boundaries -- is perhaps the most important, worst-reported, least-understood part of this story.

If the SEI took money to, essentially, weaponize unpublished research, then this is not a problem an IRB would have prevented: DoD contractors aren't bound by scientific codes of conduct. In light of that, the suggestion in this blog post is confusing.

(BTW, distancing CMU from the SEI is not meant as a defense of CMU -- close ties between public science and law enforcement/military R&D are as troubling as ever...)



