Because I could not visit that website without enabling Cookies, here is the full text:
Hackers accidentally give Microsoft their code
By Josh Taylor, ZDNet.com.au, August 26th, 2010
When hackers crash their systems while developing viruses, the code is often sent directly to Microsoft, according to one of its senior security architects, Rocky Heckman.
When the hacker's system crashes in Windows, as with all typical Windows crashes, Heckman said the user would be prompted to send the error details — including the malicious code — to Microsoft. The funny thing is that many say yes, according to Heckman.
"People have sent us their virus code when they're trying to develop their virus and they keep crashing their systems," Heckman said. "It's amazing how much stuff we get."
At a Microsoft Tech.Ed 2010 conference session on hacking today, Heckman detailed to the delegates the top five hacking methods and the best methods for developers to avoid falling victim to them. Heckman explained how to create malicious code that could be used in cross-site scripting or SQL injection attacks and, although he said it "wasn't anything you couldn't pick up on the internet", he suggested delegates use the code responsibly to aid in their protection efforts.
According to Heckman, based on the number of attacks on Microsoft's website, the company was only too familiar with what types of attacks were most popular.
"The first thing [script kiddies] do is fire off all these attacks at Microsoft.com," he said. "On average we get attacked between 7000 and 9000 times per second at Microsoft.com," said the senior security architect.
"I think overall we've done pretty good, even when MafiaBoy took down half the internet, you know, Amazon and eBay and that, we didn't go down, we were still up."
Heckman said there were two reasons why the top hacking methods of cross-site scripting and SQL injection had not changed in the past six years.
"One, it tells me that the bad guys go with what they know, and two, it says the developers aren't listening," he said.
Heckman said that developers should consider all data input by a user as harmful until proven otherwise.
The last quote is the interesting one. "...developers aren't listening."
That has been my experience - that a significant number of people who write code for a living are not aware of what's been happening on the Internet the last half dozen years.
For some reason I'm still surprised when I meet a web developer who doesn't really know about XSS and SQLi.
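To make it concrete, here's the classic SQLi mistake next to the fix, as a minimal sketch in Python (sqlite3 is just for a self-contained demo; the users table is made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    name = "' OR '1'='1"  # attacker-controlled input

    # Vulnerable: the input is spliced into the SQL string, so the
    # injected OR clause matches every row in the table.
    q = "SELECT * FROM users WHERE name = '" + name + "'"
    print(len(conn.execute(q).fetchall()))  # 1 -- matched everything

    # Safe: a parameterized query treats the input as data, not SQL.
    q = "SELECT * FROM users WHERE name = ?"
    print(len(conn.execute(q, (name,)).fetchall()))  # 0 -- no such user

XSS is the same disease on the output side: anything user-supplied that you echo into HTML needs escaping.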
Part of the problem is people who only code for a living. If you get to work, write code for 8 hours, then leave and never think about programming outside that time, you don't give yourself the opportunity to learn very much. Unless you work with amazing people and/or read a lot and participate in communities while at work.
Another example of "hacker" being problematically ambiguous. I assumed this article was about MS getting sent a bunch of open-source/Linux source code, and maybe an insinuation of MS lifting parts of it.
Of course, it's open-source anyway, but still.
And does MS.com really get attacked 8000 times a second? That's on the order of 8000 × 86,400 ≈ 700 million events a day. Unless you're counting every individual attempt from each brute-force script etc., that seems unreasonable even for them.
It's more of an example of you not reading the short article but commenting on it anyway.
The article is about Microsoft analyzing crash reports, which include bits of compiled code that crash, and finding that sometimes those compiled bits are viruses under development.
So when you're writing code on Windows and it crashes, and you, accidentally or not, hit "send report to Microsoft", not only does it send them your code (presumably they mean the compiled bytes here), but Microsoft then checks it over to see if there's anything useful there, even stuff not directly related to the crash...
a) Everyone @ msft takes the confidentiality of data uploaded to Watson (the crash reporting system) extremely seriously - you can't just 'look for useful stuff' in it without a good reason, and you'd be fired immediately if you were found to be reverse-engineering or copying bits of code out of it.
b) On a technical level, trawling through dumps looking for 'useful stuff' would be an absurdly inefficient way to do anything. For one thing, it just crashed, which in my book is not usually a sign that you'd want to copy it.
c) Your crash dump is a drop in the ocean. There are millions coming in. Nobody will ever look at it unless the system picks up that there are multiple millions of crashes occurring at the same point in the same widely-used app, in which case it's possible msft might contact the vendor (if identifiable) to offer to help fix it.
Yet surely the virus authors themselves aren't experiencing "millions of crashes" during the early development stages that Heckman is describing. So how does he find their code, given what you've said?
Clearly, there must be some red flags which cause even small numbers of crash reports -- perhaps single ones -- to get human attention at MSFT.
Exactly. Many exploits rely on crashing components of the system. Depending on configuration settings, a system may automatically be sending in Watson reports for certain kinds of crashes. This makes it possible to track down the origin of a virus/worm after the fact:
Step 1: take notice of the exploit in the wild.
Step 2: determine its method of operation.
Step 3: correlate Watson reports with the exploit (likely there will be many of these).
Step 4: backtrack to the oldest reports, which might be from when the exploit was being developed, then dig out as much information from those reports as you can.
Certainly this doesn't work all the time, but even if it worked only very rarely it'd still be a pretty substantial payoff.
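If it helps to picture steps 3 and 4, here's a toy sketch of the correlation idea in Python. Every field and value below is invented for illustration; real Watson data looks nothing like this:

    from datetime import datetime

    # Hypothetical, drastically simplified crash-report records.
    reports = [
        {"ts": datetime(2010, 8, 20), "module": "lsass.exe",
         "fault_offset": 0x41F2, "source": "198.51.100.7"},
        {"ts": datetime(2010, 6, 1), "module": "lsass.exe",
         "fault_offset": 0x41F2, "source": "203.0.113.9"},
        {"ts": datetime(2010, 8, 21), "module": "calc.exe",
         "fault_offset": 0x0010, "source": "192.0.2.44"},
    ]

    # Step 2 gave us a crash signature for the exploit in the wild.
    signature = {"module": "lsass.exe", "fault_offset": 0x41F2}

    # Step 3: pull every report that matches the signature.
    matches = [r for r in reports
               if all(r[k] == v for k, v in signature.items())]

    # Step 4: the oldest matches may predate the outbreak -- i.e.
    # crashes from when the author was still developing the exploit.
    for r in sorted(matches, key=lambda r: r["ts"])[:5]:
        print(r["ts"].date(), r["source"])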
But then the same thing could apply to non-malicious/competitive code, too. Once it proves interesting to Microsoft, they could mine the past crash history for background info.
I can believe internal controls and culture mitigate this risk -- but sheer volume doesn't provide confidence of confidentiality for non-malware coders any more than it does for malware coders.
Look at it this way -- the Watson system is pretty important in improving the quality of Windows (let's assume that more quality -> more sales). If coders (most of whom work in a corporate environment) ever found out that Microsoft was looking through their code for a non-emergency reason, they'd immediately turn off error reporting on all of their systems (not to mention file a few lawsuits), cutting off crash reports from a sector that is vital to Microsoft (and leading to a decrease in the quality of Windows, and thus fewer sales).
So it's in their best interests to keep those internal controls as strict as possible in order to avoid such a thing.
Much more likely, something along the lines of - Microsoft security people searched through their database of crash logs for (however fragmentary) evidence of people attempting to develop exploits. This probably works particularly well in the case of vulnerabilities known to Microsoft but not yet publicly disclosed.
Actually, they check to see WHAT caused the crash. In the case of a virus writer, that would be the virus's code. How could they not look at it? It IS directly related to the crash.
The problem I have with that story is that there are a lot more people writing software for the MS platform than just the virus writers, and I figure that if you add it all up the virus writers are a drop in the bucket.
So this applies to all developers, not just to the ones writing viruses.
This does not only apply to "hackers". Say you are developing the next show-stopping desktop application and you somehow crash something. If you send the error report, you are also sending Microsoft any "trade secrets".
Now is Microsoft going to steal your code? I dunno, probably not, but it is something to think about.
I almost never hit yes for those popups. Much of the information I touch should never leave the company. It's probably safe to send the crash dump but it's safer still to not.
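For what it's worth, you can keep the prompt from ever appearing. A sketch that flips the documented machine-wide kill switch for Windows Error Reporting (needs admin rights; the same setting is exposed through Group Policy):

    import winreg  # Windows-only standard library module

    # Setting Disabled=1 under this key turns off Windows Error
    # Reporting entirely, so no dialog and no upload.
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows\Windows Error Reporting",
        0, winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, "Disabled", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)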
>When the hacker's system crashes in Windows, as with all typical Windows crashes, Heckman said the user would be prompted to send the error details — including the malicious code — to Microsoft. The funny thing is that many say yes, according to Heckman.
That's funny. I distinctly remember noticing how those error details frequently "complete" instantly, so I've performed an experiment a number of times:
1) Get the report-crash window to pop up.
2) Yank your Ethernet cord / power off your wireless.
3) Submit the report.
4) "Success!"
The only times it's ever actually sent anything have been when some Microsoft core product crashed on me, and then you get a progress bar and nearly always a link to info on a (likely) related error. And it catches less than half of the ones I've generated.
Windows caches error reports when you're offline and uploads them later. I've had error reporting give me links to third-party patches for products with no relation to Microsoft. But feel free to continue with your misinformed ranting.
I quite like the error reporting links. Massively more helpful than the "success" that most reporting mechanisms give you.
As to caching: I doubt that, actually, at least to a degree. When a Windows component has crashed (the same ones I mentioned go through the whole process), it always informs me that I'm offline if I disconnect from the internet. For non-Windows ones, it always completes instantly, sends no link, doesn't realize I've disconnected, and always says "success".
There's a disconnect here. If it's cached, it should say so, not tell you it succeeded when it didn't. Meanwhile, why do some detect internet connectivity and some don't? It's fully possible you're right, but I've seen no evidence of it.
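One way to get evidence either way: on Vista/7, reports that WER couldn't (or didn't) upload are queued on disk, so you can look for yourself. A quick sketch, assuming the default queue location:

    import os

    # Default directory where WER queues reports pending upload.
    queue = os.path.expandvars(
        r"%ProgramData%\Microsoft\Windows\WER\ReportQueue")

    # Each queued report is a subdirectory holding metadata and dumps.
    for entry in sorted(os.listdir(queue)):
        print(entry)

If the "success!" dialogs while offline were really caching, you'd see those reports pile up in there.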
It's easier for them to secure against all those attacks considering they have all the code that runs on their servers and the people with the expertise to fix them ;p
Microsoft just proved that open source is the way to go. They wouldn't trust their business to software they don't have the code for.