YouTube change served lower-quality video to Firefox 43 for 2 weeks (bugzilla.mozilla.org)
371 points by temp on Jan 10, 2016 | 257 comments



Browsers identify themselves with a user-agent string, and the user agent strings are a giant disaster of technical debt resulting from a long history of browsers working around buggy websites and websites working around buggy browsers. Browser video is also a giant disaster of technical debt, and YouTube uses the user-agent string to determine which ways of serving video will work and which ones won't. Presence of bugs here is no surprise, and any such bugs will naturally impact some browsers but not others.

So, a bug in YouTube's compatibility-autodetection caused it to fall back to a slower, bitrate-limited fallback method that it didn't need to, for some versions of Firefox. They took two weeks to deploy the fix. Considering those two weeks included Christmas and the bug didn't affect security or render the site unavailable, this seems like a pretty good response time.


> So, a bug in YouTube's compatibility-autodetection caused it to fall back to a slower, bitrate-limited fallback method that it didn't need to, for some versions of Firefox. They took two weeks to deploy the fix. Considering those two weeks included Christmas and the bug didn't affect security or render the site unavailable, this seems like a pretty good response time.

Do you think it would take 2 weeks, or less than an hour, if the affected browser were Google Chrome?

Google has systematically made Firefox an inferior option when using Youtube, pushing people to install Chrome instead because "Firefox doesn't work". At some point you almost start thinking they're doing it intentionally and maliciously to push their own (closed, DRMed) browser.


It's not just them. Have you tried viewing HN comments in Firefox for Android?

A few years ago Chrome gained a majority among web developers and everyone just stopped testing with Firefox or any other browser. In fact, not just testing - they started developing for Chrome only. And now we've gotten ourselves into another IE6 situation, and I think it will be even harder to overcome this time.


HN on Firefox is bad. Comments don't have any hierarchy. HN has to fix that.

I use Firefox on Android as it is the only one that has extensions. I use the uBlock and MHT extensions.

On the Linux desktop too, Firefox is mostly used, as it uses less memory than Chrome when you have too many tabs open.


HN on Android looks exactly the same in FF, Chrome and the stock browser. I checked right now. Even the zoom behaviour is the same. The only browser that makes the site behave better is Opera with text reflow on zoom enabled. I can't overstate how useful that feature is in general, and I can't understand why every browser doesn't offer it. It's the reason why I'm using Opera on Android despite using Firefox on the desktop. Another reason I'm not using FF on Android is that it's noticeably slower at rendering the page at the end of a scroll action. It seems that Blink is better at that.

I'm adblocking with AdAway on the device I rooted and with Adblock Plus (a local proxy) on the one I didn't root.

You get them on F-Droid.

https://f-droid.org/repository/browse/?fdid=org.adaway
https://f-droid.org/repository/browse/?fdid=org.adblockplus....


> HN on Android looks exactly the same in FF, Chrome and the stock browser.

I just checked too and Firefox looks different in the comments and it has been broken since the beginning of the mobile changes.

The comments on FF for Android do not have any additional indentation to suggest they are children. On firefox every comment takes the entire width of the screen, even if it's a heavily nested reply. On chrome comments have additional space on the left side indicating if they are children and who they are replying to.

The front page looks fine on Firefox. The comments do not look fine. When you view a story's comments on Firefox for Android, it's impossible to tell which comments are children.


Solution found:

> The comments on FF for Android do not have any additional indentation to suggest they are children.

I reinstalled FF on my phone and you're right. On my tablet FF behaves like Chrome. Some media queries gone awry, or some FF quirk on smaller screens? FF-desktop behaves like the one on the phone when the window is resized to be narrow, so it seems to be consistent with itself.

HN is using a media query to set the indentation using an empty image and setting its width. Check the /news.css file. Each comment is a tr with 3 td elements. The image is the first one, then the vote arrows and the text. FF stacks those td elements vertically; Blink keeps them on the same row.

Digging into the CSS I realized that it's because of the display: block at line 88. If I remove it FF indents the comments. Opera keeps looking good.


Oh, that's awesome.

The CSS is clearly setting these to display:block, so they should in fact stack. But the page is also in quirks mode (no doctype), and in quirks mode Chrome (and WebKit) seems to compute the display of <td> to "table-cell" no matter what the styles say.

So yeah, someone just wrote CSS that depends on a bug in Chrome/WebKit and either didn't bother to test in other browsers or doesn't care whether it works there.

Edit: https://code.google.com/p/chromium/issues/detail?id=369979 is the 1.5-year-old bug report on Chrome to remove this unnecessary quirk. Let's see what HN does if/when the Chrome folks fix that one...
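
A quick way to observe the quirk from the browser console (just a sketch, assuming a quirks-mode page like HN's and that the quirk behaves as described):

  // Force display:block on a td, then read back the computed value.
  var td = document.querySelector('td');
  td.style.display = 'block';
  console.log(getComputedStyle(td).display);
  // Gecko: "block" (the cells stack); Chrome/WebKit in quirks mode:
  // "table-cell", no matter what the stylesheet says.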


Interesting explanation. Thanks.

Btw, this could prove my point that people ditch browsers because of bugs in sites, when those sites look good in other browsers.


Thank _you_ for actually digging into the CSS and finding the problem!


This is awesome, as expected from the tech crowd. By the time I got around to looking into this, someone else had figured it all out.

Many eyes/brains on the bug means quickly figuring out issues. If it were open source, the patch for this would probably be in testing now.

Active and very awesome community!!


quirksmode, table-cells... I don't think the problem here is the browser.


Bravo pmontra! We've rolled that out.

I'm ashamed to say that this fix was on our list for a month and we hadn't gotten to it despite how easy it turned out to be (assuming, at least, that nothing breaks in other browsers). Fortunately HN has excellent users.


I checked the site with FF on my phone and it really works. Maybe I can start using FF there. Thanks for deploying the patch so quickly.


Opera Mini on Android lost the indentation of comments around two months ago :(


Is it still broken?


It works now, thanks a lot!


Some CSS fixing the layout on Firefox based on your solution: https://gist.github.com/skarap/15bf77d63fe596ee55ce . Use Stylish (https://addons.mozilla.org/en-US/firefox/addon/stylish/) to load it.


> Solution found:

Thank you for caring enough to look into this!

Would be absolutely awesome if we could find somebody from HN to fix it.


Here is what the comments look like on FF for Android: https://i.imgur.com/5e7hIMK.png

Does it look the same on Chrome, with no indentation for comments and replies? I've removed Chrome so I'm unable to test but I thought this was just a bug.


No it looks completely different in Firefox for mobile. There is no way on FF to distinguish which comments are children.


I don't have Chrome on the phone but I have answered here about FF https://news.ycombinator.com/item?id=10880486 Check the parent posts.


Opera has always been an innovative browser with good features, as far back as I can remember (Opera on Nokia s60v3 back then): features like a great speed dial on mobile and desktop, save for offline, Opera Turbo mode, syncing bookmarks between desktop and mobile, website overview mode, text reflow... I have to use addons with other browsers to have these features on Chrome or Firefox. Opera has a great team. Has poor market share though. They also had good security. Opera wasn't vulnerable to some big SSL bug discovered a couple of years ago. Most other browsers were. They recently switched their rendering engine to WebKit though.


And ever since that switch it felt like they had fired all their engineers and got lost chasing some kind of design shiny.

I guess I should have seen it coming when they were gushing about their "Coast" concept.


They actually did fire about 100 people from the Presto team (the previous engine) as they switched to WebKit.


well damn, i missed that one...


Yeah. I was a big Opera user (/fan) before that ..


Yea, the slow scrolling on Android is annoying. I believe it's caused by not using the native functionality, which I hope gets fixed soon: https://bugzilla.mozilla.org/show_bug.cgi?id=776030


That bug is not about using the native scrolling infrastructure. I don't see a reason why the native Android functionality would be better than what Gecko already does.

(I wrote the initial version of Firefox for Android's scrolling code.)


Opera is slipping though. Ever since they abandoned their Presto lineage and moved to a Chrome derivative, they have been busy crapifying their browsers by focusing on shiny over functionality.

Just wish Vivaldi had gone mobile first...


And on top of that, it's also a principle matter for me. Unless I run into impossibilities, I prefer free and open source software because it offers more control, privacy and hackability, and it will act in my best interest instead of that of the company behind it.


> HN on Firefox is bad. Comments don't have any hierarchy.

On desktop, at least, you can fix this by making your browser window wider (just resize the window: you'll find a threshold past which the hierarchy appears or disappears).

That doesn't help much for mobile, where you can't really resize windows.


This is the crux of the issue.

Gmail itself is unusable on Android + Firefox. When using an alternative browser you'll also stumble into problems with plenty of Android apps: Slack, Citymapper, etc. (when opening a web page intent, they expect that the only app that'll receive it will be Chrome).

The situation isn't as bad for desktop browsers, but it's depressing how things can still be broken.


> The situation isn't as bad for desktop browsers

Don't worry, it will get worse. Firefox is now forced to accept `-webkit-` in CSS (http://www.theregister.co.uk/2016/01/04/firefox_webkit_css_s...) in order not to be "broken". This will obviously lead to even less Firefox testing.

Hell, even I (my team) am contributing to this! We're working on an Angular-based web SPA now. It hasn't got much functionality yet, but it already has quite a few bugs when viewed in Firefox. And the usual answer from the frontend guys is "It's a Firefox bug!".


I remember fixing one of the frontend bugs in an SPA, and contrary to the claims it turned out to be a WebKit bug that the code relied on (the app had been developed on a Mac, so it's fairly easy to only test on WebKit/Blink). Lovely.

Aside/Rant: I also get the impression that frontend devs have never read the CSS2.1 spec or bothered to understand it (CSS3 is longer, and consists largely of special-purpose extra modules, so a complete reading is less valuable and more difficult). There's somehow this feeling that since CSS "isn't a real programming language" you should be able to use it intuitively. And when things don't work out and you fiddle until it works without giving thought to why, that's OK. That's such a huge waste of time. The box model may not be 100% intuitive, but boy, if that's your standard, what are you doing in IT?


> the app had been developed on a Mac, so it's fairly easy to only test on WebKit/Blink

It's no harder to only test on Chrome in Linux and IE in Windows, I fail to see how that's an excuse. The only excuses are IE testing when developing on OSX and Safari testing when developing on Windows, it's trivial to install both Firefox and Chrome on any OS.


Actually, it's easier to test for IE versions on linux/mac due to http://xdissent.github.io/ievms/ - You can install the VMs on windows too (of course), but I've never stumbled upon quite as simple a solution. On windows I tend to just rely on compat mode, which is surprisingly good, but far from perfect.


I suspect it will go beyond CSS.

There is a certain kind of app on Android that makes the device's filesystem available via a web browser.

Thing is, these apps invariably make use of a Chrome-ism when dealing with drag and drop.

In Firefox you can only drag a single file at a time. Drag over multiple files, or a folder, and there won't be much of a result.

In Chrome however it will transfer over just fine.

This has been sitting in the Mozilla bugtracker for some time, because the Chrome behavior goes beyond the HTML spec for file uploads.


Yeah sometimes I wonder if providing extra behaviors is good or bad.

Certainly it could save developers a lot of work and save users a lot of time, but it may become the next ActiveXObject("Microsoft.XMLHTTP");

Maybe Chrome should put a warning into the console, saying that this function is experimental.


FWIW, Paypal rendered as completely blank pages (home and login / payment pages) in FF/Ubuntu until a couple days ago. Not sure when this started, but I've seen it before on some of their payment pages.


Upvoting you is not enough - let's focus on the local problem.

Dang - have you seen HN in Firefox on Android recently? Is there something we can do to improve this? Given that HN has received a moderate amount of (visible) changes recently, is there a way to make the site work in that situation/setup?


I already commented about this issue at https://news.ycombinator.com/item?id=10879874

To elaborate further, the problem common to every browser I tried (the only one missing is Safari) is that comments are nested, and if I zoom the page to actually be able to read the text, I have two pain points:

1) I have to keep scrolling right and left to keep them in view as I scroll vertically to follow the threads.

2) As if 1 were not bad enough, the comments are now too wide to fit the screen (remember that I zoomed in) and I have to scroll horizontally at each line. The only browser not to suffer from that is Opera, which reflows text after a zoom (if you enable that option). I'm left with problem 1, but it's the least painful.

I suspect that a div layout would fare better; still, there are tons of blogs with div layouts that end up with 10px-wide comments at deep nesting levels. Maybe some media queries to remove nesting for devices that aren't wide enough, and adapt the page width after a zoom? I don't even know if the latter is possible.

Edit: I checked on the desktop and it seems that you already reduce nesting when the screen width is reduced, to the point of removing it altogether. I wonder why my tablet and my phone are not affected and keep displaying the indentation to the left. Any browser, any user agent (desktop or mobile). They're 1600x2560px and 480x800px.


Well, I sent a bug report to hn@ycombinator.com back in November.

They said "It's a bad bug and we're going to fix it". So I'm waiting.


It would be neat if the HN frontend was open-source and the community could submit PRs for improvements.


Whatever you do, please keep everything else as it is now. HN loads very fast as it doesn't load every library on the planet. Do fix the rendering problem on Firefox if possible, and keep it as lightweight as it is now.


If you took out all the table markup you'd probably save about half the page code. Using divs and CSS doesn't require any on-page libraries, and would allow [HN] users to easily fix things like this with a user.css file (a left border on nested comments, for example) if the site didn't want to have a sane style for some reason [established parsing tools, maybe].


What's wrong?


http://m.imgur.com/On83n3W

The indentation for one. Notice that the second comment is an answer to the first, but is on the same level of indentation. Following discussions like that is highly annoying.

Weird: While grabbing the phone to answer your question I noticed that "threads", i.e. the view of my comments, works fine. Just the comments below submission links are broken.

Compare to Chrome: http://m.imgur.com/Wpusprq

Note that I do use uBlock in Firefox. Now that I think about it, that might be relevant as well. But then again, why would that break the submission view and not the "threads" link?


I get that Chrome-like rendering in every browser on my tablet (1) and with Opera and the stock browser on the phone. I reinstalled Firefox on the phone and it looks like yours. I understand that it makes it difficult to follow threads, but at least we can see deeply nested comments: they don't collapse into a single column of letters.

(1) Lines are not terminated at the screen boundary on the tablet if I zoom in. They keep going off screen to the right. Same on the stock browser on the phone and with Opera if I turn off text reflow.


>And now we've gotten ourselves into another IE6 situation, and I think it will be even harder to overcome this time.

I hate to defend Chrome, but I don't think that comparing Chrome to IE6 is warranted (yet). The problem with IE was that a large number of users were stuck with IE6 and never updated (either by their own choice or their IT departments'). So web developers had to write code for modern web browsers, and then implement a compatibility layer (I guess we call it polyfill, these days) for IE6. Chrome, however, has an update mechanism that the vast majority of users and IT departments do not override. Chrome is not in an IE6 situation because historically the number of users running old versions of Chrome is small and drops off rapidly as new versions come out.

I pulled these graphs from a few years ago, comparing Chrome adoption to IE adoption:

http://cdn.arstechnica.net/wp-content/uploads/2012/10/chrome...

http://cdn.arstechnica.net/wp-content/uploads/2012/10/intern...

Notice that Chrome uptake is very rapid, with the vast majority of users being on the current version of Chrome after approximately two to three months. Compare that with IE, where the majority of users still aren't using the most modern version of Internet Explorer after two to three years.

So no, I don't think Chrome is in the same position as IE. Google pushes updates to Chrome on a frequent basis, and, more importantly, those updates are actually applied by users. That said, it's still possible that Google could tomorrow do what Microsoft did when IE "won the browser war", which is disband the team and cease Chrome development. Do I think this is a likely scenario? Of course not. Is it possible? Sure.


The problem with IE6 was not that users didn't update. It was that IE created a monoculture among developers, using IE-only features. Which resulted in code that got out on the web, and stayed there.

And then every browser had to either 1. be "broken" because the web pages didn't work, or 2. forgo the W3C, the standards process and all that, and emulate IE's bugs in order to be "real" browsers people could use.

And long after IE6 is gone, all that is still hanging around, causing issues to this day.

Right now we're getting a Chrome monoculture. People are using pre-standard Chrome and Webkit only features in production websites. In the future those websites will stick around, no matter if Chrome users update their browsers or not.

Chrome is the new IE. With Google not giving a shit about real standardization, launching non-standard features in Chrome and production websites at the same time, they are the new Microsoft.


> launching non-standard features in Chrome and production websites at the same time, they are the new Microsoft.

Are there any features they are pushing in Chrome that they are not simultaneously pushing the W3C to adopt? The problem with IE6 is that it not only did its own thing, it did the opposite of what the standard said to do. If Chrome is just standard + new stuff, and that new stuff eventually becomes standard, it isn't a problem. But if it's standard + new stuff + anti-standard bullshit, the latter is the issue, and Google should be criticized for it.


The only way new features seem to get into the standards is if some browser starts offering it, and it gains so much traction that the standards bodies can't ignore it.

Web standards seem, from where I'm standing, to be very post-hoc.

Indeed, I think that's the way it ought to be. How many times has it worked out well when you've fully specified something, before you attempt to actually build it? For me that number is very close to 0.


I think this is the intent. A proposal is brought to TC39. It gets discussed. Someone champions it (implements it in the wild). Adoption and usefulness are determined. New standards are accepted.

I think this is a much better practice than a bunch of dudes just deciding what browsers will support and how they support it without actually trying it out first.

It's pretty much all in the open. No need for conspiracy theories. The environment is significantly different than it was during the early IE days.

https://github.com/tc39/ecma262


I should have been more specific about what I mean by "IE6 situation". I was talking mostly about the years 2005-2008, when IE had already won the browser wars and, despite a new and better browser (Firefox in this case) already existing, users couldn't switch to it (because ActiveX, because VML, ...) and developers couldn't [care to] stop using IE-only stuff, because everyone was using IE anyway. The current Chrome situation is more or less the same now.

The non-updating users problem is a bit different. I guess it contributed quite a bit to the problem discussed above (e.g. if Microsoft had had a good mechanism to switch most IE6 users to 7, they might have made 7 a better - less "compatible" - browser).

And Chrome's great upgrade mechanism is one of the reasons I think the Chrome situation will be even harder to overcome. IE6 became so old and ridiculously bad that some sites stopped providing a first-class user experience to its users, so that users had to do something - upgrade. The same thing probably won't happen to Chrome, because:

1. Google probably remembers Microsoft's history and will try to avoid the same mistakes.

2. They have an automated mechanism to keep Chrome updated.

And as long as they keep Gmail and Facebook working great in Chrome (and - ideally - worse in every other browser), they are safe.

I guess this is even good for the users - uniformly good user experience on all devices. Though - of course - if you don't mind exposing every detail of your online life to Google. http://uncyclopedia.wikia.com/wiki/Nobody_cares


What you are looking at there is that MS decided to shoot themselves in the foot over and over by tying things like the latest IE or DirectX to a specific version of Windows.

End result is that the last version that shipped on XP became the lowest common denominator.


Maybe I read it the wrong way, but by "another IE6 situation", I took the comment to mean that Firefox is the new IE6, likening Chrome's position to Firefox's at the time.

You have a lot of new stuff being designed & tested using Chrome, with some site breakage if using Firefox. Similar to several years ago when releasing sites that "work best with Firefox" (complete with badge images stating such), leaving IE6 users with a lesser experience.


Most of the time when I saw people develop "for Chrome" (like experiments), they would use standard stuff; it just so happens that other browsers don't implement it [properly] yet. Or maybe I'm just awfully pedantic, but whenever I make stuff I tend to make it as close to the standard as possible, and fall back to vendor-specific stuff only when necessary (and usually try to cover all the major browsers).

Also, aren't most of the Chrome-specific things disabled unless the app runs as an extension, to discourage this type of Chrome-only behaviour? I can't test/verify now, since I'm on my phone. This excludes CSS prefixes, since those are rarely needed these days anyway.


This is the first comment that I have read that has acknowledged this situation. I've been noticing this for quite a while. At first I was reluctant to believe it because designers should know better than anyone how bad that can turn out in the long run. Firefox is treated as IE now by this crowd. Google has been particularly shameful promoting their browser. It's gotten to the point where if your Firefox version is two releases behind they start showing their little "install chrome" ad. Ridiculous.


In retrospect, it was probably not a good idea to allow Google to buy Youtube. Because at this point, I don't think they're intentionally going after Firefox, but the tight coupling between Youtube and Google makes it impossible for them to actually be neutral in terms of things like bug fixes.

It's way too granular to be handled through the legal system or any kind of government regulation, so it's an unfortunate situation that's going to keep happening.


The problem is not Google owning Youtube. The problem is Google owning a majority of internet real-estate and a web-browser.

Now they can bypass any W3C standards committee and launch whatever they want in production websites and their own browser at the same time, much like Microsoft did. They already have their own Googley version of ActiveX out there! It will be fun to see what's next! /s

Google Chrome is the worst thing which has happened to the modern web.


Yeah, it's the Microsoft anti-trust situation all over again, except that all of Silicon Valley has gotten about 10x more sophisticated in terms of legal strategy and political clout. And just because I like Google (a lot!) and I didn't like Microsoft doesn't change that the technical landscape is becoming analogous.


Most organisations implement a change freeze over the holidays to prevent problematic code being deployed whilst they are low on resources. I guess this did not trigger Google's criteria for a '911' fix.


> Do you think it would take 2 weeks, or less than an hour, if the affected browser were Google Chrome?

I'd be inclined to agree, but then Google did start spam-binning its own security notifications a while ago and didn't seem to notice until Linus Torvalds started blogging about the rise in spam false positives. "Never attribute to malice what can adequately be explained by incompetence" - that is the Google motto, after all. ;)


"...user agent strings are a giant disaster of technical debt"

Amen.

I agree with the commenter below who suggests we use random strings instead. Unless there is some sort of published standard for what this header should contain and what each byte means, the contents may as well be random.

The "history" of "user-agents" (about 20 years) is not very long, assuming the web is going to last more than 20 years into the future. Taking a long view, we are still in the nascent phase.


Are you suggesting unique randomized strings for every user?


[deleted]


I'm sorry, but I still find the user agent string to be incredibly useful despite its many shortcomings.

Not so much for deciding what to serve to a request, but more so to analyze your audience and figure out from a technical perspective what is feasible and what is not. For example, a recent decision I made was whether or not to rewrite a portion of our application in a JavaScript framework that did not offer IE8 support. I was able to quickly validate what portion of my userbase I would be affecting by looking at user agent strings.

Request headers are a fantastic way to capture this little bit of information. Using javascript feature checking depends on a javascript engine successfully loading and parsing whatever scripts I've provided to the client. Request headers are simple plaintext that don't fail in even the most bare bones of situations.
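
As a hypothetical sketch (assuming combined-format access logs where the user agent is the last quoted field, and a made-up file name of access.log):

  // Tally what share of requests came from IE8, which identifies
  // itself with "MSIE 8.0" in its user-agent string.
  var fs = require('fs');
  var total = 0, ie8 = 0;
  fs.readFileSync('access.log', 'utf8').split('\n').forEach(function (line) {
    var m = line.match(/"([^"]*)"\s*$/);  // last quoted field = user agent
    if (!m) return;
    total++;
    if (m[1].indexOf('MSIE 8.0') !== -1) ie8++;
  });
  console.log('IE8 share: ' + (100 * ie8 / total).toFixed(1) + '%');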


https://en.wikipedia.org/wiki/Progressive_enhancement
https://en.wikipedia.org/wiki/Fault_tolerance

No need to do browser detection. Just make your site work without JS as well :)


Ten years ago, that may have been valid advice, when "dynamic" mostly meant "server-generated". But if you're creating a website that is heavily JS-based, it does no good to serve 25% of your customers a perfectly-standards-compliant page telling them their JS won't work, so the site can't do its thing. You still need to know what client base you are supporting, no matter what.


That would be better, though not per user but per session. Currently I have surf (suckless.org) just pick a common one when I launch a tab (via tabbed): https://github.com/jakeogh/rndagent


per-HTTP request

Nothing ever guaranteed the user agent header would be consistent or present. Any re-use of a user-agent string adds bits of data that can be used for tracking.


While we're at it, we need some solution to tracking based on browser window size. I'm one of the few people I know who doesn't maximize everything, so I'm probably 100% uniquely trackable just by window size.


Window size detection is done with JavaScript. Block it.


It can apparently be done with CSS by a determined website.


Disable JS by default. I do, and it makes the web better. Surf/tabbed makes this easy to do per process.


If you do that, you are going to break so many websites.


All the better to let them know they shouldn't be doing that.


That will be very popular with the users of your browser.


No, I mean, a tiny bit of css hacking will let you detect it.

Have a css media query for every possible height, and every possible width, that loads a 1px empty png.
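
Something like this generated CSS would do it (illustrative sketch only; /b.png is a made-up beacon URL, and a real tracker would probably bucket ranges rather than emit one rule per pixel):

  // One media query per candidate width: whichever beacon image the
  // browser fetches tells the server the exact viewport width.
  // The same trick works for height.
  function widthTrackingCss(maxWidth) {
    var css = '';
    for (var w = 1; w <= maxWidth; w++) {
      css += '@media (width: ' + w + 'px) ' +
             '{ body { background: url(/b.png?w=' + w + '); } }\n';
    }
    return css;
  }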


>a bug in YouTube's compatibility-autodetection caused it to fall back to a slower

I doubt it's a bug; it's not the first time this has happened. When YT introduced full HTML5 mode, there was a bug where, for Firefox and only for Firefox, YouTube served video with an incompatible codec (the same one it served to Chrome), but Firefox didn't have that codec implemented yet, so video didn't work. Opera was served video the same as before. I noticed on reddit and other forums that people were complaining about Firefox, that it stopped working on YouTube. People started posting "I'm moving to Chrome because it works". I think it's a plan to annoy Firefox users by serving them incompatible videos so people move to Chrome. Google IS the internet.


Never attribute to malevolence what can be explained by incompetence.


In principle, yes. But when it keeps happening again and again and again, it takes on the appearance of premeditation.


So, you're saying that Google doesn't test stuff they push to production?


In this case the bug only hit a small subset of the Firefox users (Firefox without H.264 support).

It doesn't seem impossible to me that they missed this specific case.


Actually, for me, full screen mode in HTML5 with Safari has been quite buggy too for about two weeks. So yeah, maybe for this specific release during the holiday season they should have tested a little more on different platforms and/or delayed this patch.


Never say never.


> Browsers identify themselves with a user-agent string, and the user agent strings are a giant disaster of technical debt resulting from a long history of browsers working around buggy websites and websites working around buggy browsers.

I really don't get why we can't just have a "fuck it" mentality for the user-agent string. The user agent has been the biggest piece of technical shit debt I have ever seen on the web.

1. Fuck the existing user-agent string.

2. Simplify to Browser-name: version

Anyone depending on the existing scheme will see failures and will update after googling a bit. Sure, there are many variations/forks, and those should either identify themselves with the upstream or decide to advertise themselves. The only problem is people running old versions of browsers, as those won't get any updates.

Sorry for the language, but seriously, fuck the existing user-agent detection. The only reason I find user-agent still useful is simply tracking the % of chrome/firefox/edge/ie/safari/mobile users, and that's all web analytic people care about as far as browser choice is concerned.

User-agent is unreliable and should not be trusted.


It's a tragedy of the commons.

You try to implement that in your browser and launch it, and many websites will break (banks, in particular).

Users will download your browser and find that it doesn't work on [FAVORITE SITE X], and they won't blame the site, they'll blame your browser. After all, [FAVORITE SITE X] worked with the browser they used previously to yours; clearly, it's your fault that the site doesn't work now.

So we continue to make incremental changes and work around issues with popular sites needing to sniff user agents, while authors of new sites (that someday become popular) continue to work around issues with popular browsers by (ideally) checking the DOM in JavaScript, and (sometimes of necessity because the data of interest isn't available through the DOM for security or design-hole reasons) sniffing the user agent.


And this is a never-ending debt. If we take a big stand and give a one-year deprecation like SHA1, we should get through it. Of course, no one would do that because everyone is so dependent on this string.


That's what User agent started out as. Browser name and version. Now imagine you're building a new browser.


Browsers should start submitting a random 10 character string for the User-Agent header.


That would break more than it would fix. Video is hard (not all devices or browsers support the same codecs, bitrates, frame rates, etc), and YouTube has to use the User Agent to try to serve videos that the client can actually play. A random User Agent would mean that all those browsers that need a special video format wouldn't get the type of video they need, which sounds like a crappy world to live in (compared to the crappy world we already live in with meaningful User Agents).


> YouTube has to use the User Agent to try to serve videos that the client can actually play.

Why? We have JavaScript feature detection for video tags: http://diveinto.html5doctor.com/detect.html#video-formats
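
For example (a sketch of the kind of detection the linked page describes):

  var video = document.createElement('video');
  if (video.canPlayType) {
    // canPlayType returns "", "maybe" or "probably" -- never a firm yes.
    console.log(video.canPlayType('video/webm; codecs="vp8, vorbis"'));
    console.log(video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"'));
  }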


"Probably" and "maybe" values are, uh, probably not good enough.


I thought you were being facetious. Sigh. Damn, but that's broken.

I've worked with video format streaming, though, and encountered a situation where there was a driver on a certain Android device manufactured by a major hardware vendor that you've certainly heard of, and their driver silently corrupted video that was using low-latency streaming options. And by "corrupted" I mean that, while a few of the pixels looked almost right, sometimes, it was mostly just garbage and blank grey screens.

I gave them a sample stream that failed, and they verified that yes, it's their fault. And that no, they weren't going to ship an update for it. Sorry. Device had been out for two years. Apparently no one thought to actually test the various standard options in that video codec. It works with the "normal" options? Ship it!

So I totally understand the need to look at UA strings. I had to look for that particular device hardware profile and disable the low-latency streaming options in order for our product to work on that device.


That sort of feature detection is likely not sufficient for detecting whether a video will play well or correctly. There have been a number of bugs in MSE in the past, which means that YouTube has to fall back on browser version checks to avoid a bad user experience.


To be fair, a certain school of thought would say that those bugs are the browser's to fix, and that websites should deliver the browser whichever video it claims to support. If the browser advertises support for something it doesn't support, then it is up to browser developers (not web developers) to fix either the support or the advertisement thereof.

Edit: Not saying that's always correct, but there does seem to be a tension between: "make sure your code can work well, even in a bad environment" and "don't mask bugs, instead fix them where it makes the most sense to do so" (in this case: in the dozens of browsers instead of in millions of websites).


Another school of thought is that it doesn't matter if it's the responsibility of the browser because the customer perception will be that your website does not work for them.

If a mechanism exists to prevent your customer from perceiving this, as a business the smart decision is going to be to use it.


Partly true, because if they try another browser and get a good result they'll blame the browser and not the website. In this case, they'll probably say that the new Firefox broke YouTube. See one of the comments at www.pclinuxos.com/forum/index.php?topic=135323.0

"i'm getting a lot of corrupt unsupported browser when accessing video on other sites, so i'm using chrome. i can't figure out if it's flash or firefox but i'm tired of opening the hood, i'm done with FF for now"

Most of them looked for a workaround and spoofed the user agent, but they are the kind of people that fiddle with settings. Normal users probably either don't notice or change browser.


YouTube's compatibility hacks are against buggy versions of browsers. Regardless of whether a bug is eventually fixed, once it is released, there is some segment of the user base that has it. If that segment is large enough, and the bug easy enough to hack around, then YouTube will provide a workaround rather than lose revenue.


Sounds like the ultimate solution here is to just give browsers a killswitch, so they'll eventually just stop working if you don't keep them up-to-date. (I'm not sure whether I'm joking.)


While you're right that it's up to the browser to fix, that can be a very time consuming process (especially for something as tricky as video). YouTube doesn't want to wait around for browser vendors to fix things, so they work around it themselves (just like happened in the opposite direction for this bug).

Finding workarounds for browser bugs is a common theme for the work that web developers do. Yes, browser vendors should fix the bugs, but in the meantime, there's no reason to have a bad user experience.


> (compared to the crappy world we already live in with meaningful User Agents).

> Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36

Google Chrome tells us it is: a Mozilla, it is AppleWebkit, it is KHTML, it is Gecko, it is Safari, and it is Chrome.

"Meaningful" is something different.


It's meaningful. It communicates exactly what the browser is to anyone who is capable of receiving the intended communication. (It also communicates other things to people who are not, but "Google Chrome 41.0.2228.0" would be absolutely meaningless to these listeners, so that doesn't seem like an objection with merit.)

You might object to the form of the communication, but that doesn't render the communication inexact. Just think of it as browser slang.


> It communicates exactly what the browser is

This is true, which is not particularly useful when deciding if a feature is available. Using the user agent to make this decision incorrectly assumes that all user agents of a particular browser+version all support exactly the same set of features. This is obviously not true.

The user agent may be meaningful[1], but the meaning that it communicates isn't feature data.

[1] If you take the header at its word; as the bug being discussed demonstrates, user agents may be lying.


> [...] the meaning that it communicates isn't feature data.

But it kind of is (to some degree, at least). VP9 was recently added to Firefox 28. If the User Agent says the browser is Firefox 27, then you probably shouldn't try serving VP9 to that browser.

Obviously the User Agent doesn't communicate the complete feature set. But some useful inferences regarding features can be derived from it.
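
E.g. something along these lines (a sketch; the Firefox 28 cutoff is the one mentioned above):

  // Infer VP9 support from the UA's Firefox major version. Coarse, and
  // it trusts the UA to be truthful, but it's the inference described.
  function maybeSupportsVP9(ua) {
    var m = ua.match(/Firefox\/(\d+)/);
    return m !== null && parseInt(m[1], 10) >= 28;
  }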


The user agent header is not the place to look for those.


It shouldn't be the only input signal for determining those things, but it can definitely be useful (i.e. "this codec is only supported in vX.Y+ of this browser", or "vX.Y of this browser has a bug handling this type of video", etc.).

It's not like YouTube is a company full of nothing but dopey idiots. I don't think they'd be using the User Agent if there were a better alternative.


If this is your goal, wouldn't it make more sense to just stop sending the User-Agent header?


Various websites break if you don't send the User-Agent header. So, in the short term, no, in the long term, yes.


Are there really a substantial number of sites that break if you don't send the header but work if you send a random 10-character string?


Many sites behind Cloudflare.

If I don't have this line in my .wgetrc, I get 403's sometimes from sites using Cloudflare.

  user_agent=Mozilla/5.0 (compatible; CloudFlare-AlwaysOnline/1.0; +https://www.cloudflare.com/always-online)


Wget uses its own user-agent by default (Wget/x.y.z (linux-gnu)), not no header, so that's more likely cloudflare or sites specifically blocking wget.


Actually, for the longest time, I had user_agent="Mozilla/5.0 (etc etc etc)" in my .wgetrc. It was a valid user agent. And Cloudflare was blocking that. Turns out that if you wrap the actual user agent in quotes in your .wgetrc, wget doesn't strip them off, so Cloudflare was seeing User-Agent: "Mozilla/5.0 (etc etc etc)", i.e. with the quotes. And it was rejecting that.


Fair enough - but if it rejects that it would probably reject 10 random characters too


Note: the specific user-agent doesn't matter, I just think it's funny to steal Cloudflare's one.


I do the same whenever a site requests personal info from me and I don't want to give it.

Find the corresponding info from the company or their CEO, use that.

And whenever some site wants me to register for a newsletter, I enter their own "critical support" mail address.

I hope at some point these sites learn that mandatory anything just gives you useless data.


Haha, I usually put root@localhost, or www-data@localhost. It accepts it fairly often too. Sometimes I need to open up the Chrome Dev Tools and delete the input's type attribute (<input type="email"> to just <input>) because there's some client-side thing that tries to stop you inputting invalid emails.


Yes. Most of them give an HTTP 403 error, meaning they check for the header. Others crash with HTTP 500 errors or similar.


They could always remove it and add an option for users to switch it back on temporarily for the session if they really need it. Sometimes it's best to break things to get rid of cumbersome cruft.


Wouldn't that result in even-uglier hacks for finding the user agent becoming standard?


Because the days of browsers having bugs that need workarounds are over?


Hopefully the days of websites having (or being able) to work around all browser bugs of the past 20 years are over.


Or cycle through all those lists of the top n used, and put some conditionals in /mozilla-central/netwerk/protocol/http/nsHttpHandler.cpp to avoid all those breaking effects (to some degree).

Added benefit of getting a lot of "not recognized computers" or whatever emails from Google lol


Could this be done with an extension?


Yes. https://github.com/muzuiget/user_agent_overrider

As far as I am aware, no extension currently offers the option of a random user agent, but many (including the one linked) allow user-supplied user agents, and there is no reason you could not make one support random user agents.
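
In Chrome-extension terms it would only take a few lines (an untested sketch using the webRequest API; assumes the webRequest and webRequestBlocking permissions in the manifest):

  chrome.webRequest.onBeforeSendHeaders.addListener(
    function (details) {
      details.requestHeaders.forEach(function (h) {
        if (h.name.toLowerCase() === 'user-agent') {
          // Replace the UA with ~10 random alphanumeric characters.
          h.value = Math.random().toString(36).slice(2, 12);
        }
      });
      return { requestHeaders: details.requestHeaders };
    },
    { urls: ['<all_urls>'] },
    ['blocking', 'requestHeaders']
  );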


There is in fact already an extension which will randomly change the browser user agent.

https://github.com/dillbyrne/random-agent-spoofer


Also, uMatrix can randomly switch user agent strings (among many other things).


With an extension to this extension, yes:

http://chrispederick.com/work/user-agent-switcher/


Why send one at all?


"They took two weeks to deploy the fix. Considering those two weeks included Christmas and the bug didn't affect security or render the site unavailable, this seems like a pretty good response time."

Note that plenty of smart companies have things like production freezes at holiday times :)

So this is not entirely surprising.


If bugs can't be fixed during a time frame, then I'd think new features shouldn't be rolled out during that time frame either.


They generally aren't. The freezes are real freezes: new software releases are not rolled out during that time period, be they new features or bug fixes.

(There are always exceptions in any of the companies I've been in, but that's been the rule.)


1) break everything 2) change freeze 3) ??? Profit


I think the whole situation with user-agent strings being ridiculous was really put into perspective when I read that the latest version of IE (Edge) has a UA like this:

  Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10136


Only serving 360p basically does render the site unusable.


How so?


A video site that doesn't serve watchable video isn't much of a video site. My reaction to getting 360p (or 240p...) video from Youtube is to close the tab.


YouTube used to be 240p only, and I found it perfectly watchable. Then when 360p started to roll out, people played all kinds of weird URL tricks to try to force videos to be high quality (360p).

Tons of videos from then are only available in those resolutions, and I don't think it's productive to close the tab any time you encounter a video from 2007 or 2008.


I mean sure if you're stuck in 2007 it's fine.

The average screen resolution in 2007 was 1024x768. Most of us now have ~1080p-ish screens, or are on mobile where it's even higher. We also now have the bandwidth to watch 720p/1080p videos, whereas in 2007 that was less true for a broader audience.

360p is pretty unwatchable for me on my screen from 2011/2012, let alone anything that's been released in the past 2 years.


It really depends on your usage patterns. I watch YT mostly for audio in the videos. Either it's music or some documentary or interviews or shows like This Week in Startups. Video is really not important and I don't want to waste my bandwidth on it. I watch all videos at 360p by default and only increase it manually when I really need to see the detail.

I also know a lot of people who just start a playlist with music and leave YT in the background the whole day. They don't even have that tab active in the browser.


I was going to reply to you by saying that YouTube audio quality is affected by video quality, but apparently that hasn't been the case since 2013. The more you know!

http://www.h3xed.com/web-and-internet/youtube-audio-quality-...


Yep, 240p was watchable, but no one wanted to watch it. I used to make stupid little "frag movies" and we used to upload them to YouTube for "quick review", and if we found one interesting or good, there was always some file-sharing site URL to download a better-quality version for local viewing.


> A video site that doesn't serve watchable video isn't much of a video site.

Luxury problem - I frequently get no video at all because of geoblocking when I use Tor. Youtube is really a terrible video site and I don't see why everyone is using it.


Which video sites are better in your opinion? It isn't great, but all the alternatives I've seen are worse AND have way less content I'm interested in.


I've never seen a dailymotion video blocked by geoip.


Vimeo has much less content, but has never given me trouble viewing on any platform.


True, forgot about Vimeo, because I don't use it like YouTube to "discover" or explicitly seek out content, but only through outside links. It's great though.


Guess everybody has read this multiple times, but it's highly on-topic: http://webaim.org/blog/user-agent-string-history/


You have good points. But a big company like Google not running automated tests on all major browsers also seems a bit implausible.


Google always only cares about themselves, in this case, Chrome.


This is why we need standards and standards-compliant web browsers. Then we wouldn't need to worry about testing on all the different browsers.

As for the case with Google products, particularly YouTube, users other than the ones on Chrome have a really mediocre experience. Try viewing a YouTube video in Chrome: you won't experience a problem if you quickly try to seek backwards. Now try doing the same in Firefox. This is one of the reasons I stick with Firefox, which supports open standards and privacy.


Ah user agents. One of my favorite quotes:

"And then Google built Chrome, and Chrome used Webkit, and it was like Safari, and wanted pages built for Safari, and so pretended to be Safari. And thus Chrome used WebKit, and pretended to be Safari, and WebKit pretended to be KHTML, and KHTML pretended to be Gecko, and all browsers pretended to be Mozilla, and Chrome called itself Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13, and the user agent string was a complete mess, and near useless, and everyone pretended to be everyone else, and confusion abounded."

http://webaim.org/blog/user-agent-string-history/


This is actually pretty great. YouTube has some issue with Firefox 43 and serves it lower-quality video to deal with it (probably as a hack to enable video for some users for whom it didn't work at all under 43, but there's no explanation from YouTube engineers in the thread). The Firefox team complains to YouTube, but YouTube responds that they won't budge until January at least.

Firefox sees it as a threat to their reputation (obviously for them having a few users for whom it doesn't work is better than having loads of users with a degraded experience) and they start spoofing their UA to another version. The Youtube engineer is indifferent (or can't really do anything about it).

Then when it's done, a Google employee from a different team comes into the thread to complain about the situation, because the fact that the UA is spoofed only for youtube.com and not other (Google) domains breaks his product.


YouTube wanted to deprecate Flash video, so Mozilla enabled VP9 support in Firefox 43 for users whose operating system does not provide an H.264 codec. (VP9 for all other platforms is coming a little later.) YouTube disabled Flash for Firefox 43 users, as planned, but forgot (?) to enable VP9 for those users. Without H.264 or Flash video, those users received low-quality 360p VP8 instead of high-quality VP9. When this mistake was discovered, YouTube could not fix or revert the configuration change because of their holiday code freeze.

(I am a Mozilla employee.)
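
In other words, the selection amounts to something like this (a reconstruction from the description above, not YouTube's actual code):

  function pickVideoStack(client) {
    if (client.hasPlatformH264) return 'H.264 over MSE'; // most Firefox users
    if (client.vp9Enabled)      return 'VP9 over MSE';   // forgotten for Fx43
    if (client.flashEnabled)    return 'Flash';          // disabled for Fx43
    return '360p VP8';  // what affected Firefox 43 users were left with
  }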


That sounds like something that shouldn't be rolled out over Christmas when nobody is in the office.


As if similar bugs in Google products wouldn't be fixed over Christmas. Chrome playing only in 360p? That would be fixed in a matter of hours.


They said in the ticket that the issue does NOT affect the majority of Firefox users. It was just a calculation of the impact, and they decided not to release a hotfix. I'm pretty sure if it were affecting all Firefox users it would have been fixed in a matter of hours too.


I am quite sure that even if it were the same amount (in absolute numbers) of Chrome users, it would have been a bigger priority than Firefox, at least for most people there.


If this had happened to Chrome users, the code freeze would have magically been broken in a matter of hours.


Thanks for this summary, very succinct and hyperbole-free.


I thought Google committed code to prod thousands of times a day! Why do they even have a holiday freeze if pushing to prod is so automated?


Just because all of Google pushes to prod thousands of times a day doesn't mean that the team within Youtube pushes to prod that often. They may have some sort of scheduled deploy cycle and likely a review process.

During the holidays they simply may not be staffed up for that process.

Just because you can push to prod doesn't mean you should push to prod.


From what I hear, YouTube pushes daily, Monday through Thursday.


My former team (in a big company) pushed changes to prod at least once a day until 3 PM Friday. However, the company has a no-deploy-days policy for the holiday period, because if things break there are gonna be very few people in the office to fix it.


If it's so, can't they just roll back?


The existence of a “feature” that relies on UA strings being consistent across domains, with the potential to “brick YouTube entirely in a user-visible way for all Firefox 43 users in Europe”[1], is quite worrying to be honest.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1233970#c61


Honestly, it sounds like the commenter was defending some crappy engineering decisions.

I'm not really comfortable that YT/Google were, via this guy, asking for special consideration (from an open source project) but only willing to discuss their request in private. It lacks transparency.


And this "feature" is not noticeable to the user, but is kind of required to be there, and apparently associates stuff on someone's Google/YouTube side... yet the Googler in question is unwilling to identify the feature.


From what he describes, it might be something like a quota service or an ad service that does consistency checks over the course of user sessions to fight spam/abuse/crawlers.


I was actually hit by that bug, which I erroneously attributed to Firefox. I stopped using Firefox as a result.

I will probably start using it again because I read this article. I can't help but wonder how many ex-users won't, because they don't follow the news.


It's not the first time people have blamed Firefox for bugs in YouTube; a very similar bug was introduced when YouTube went HTML5. Firefox was served the same encoded video as Chrome, but Opera stayed on the old codec. Firefox didn't have Chrome's codec implemented yet, and people left Firefox for Chrome.


I'm curious about this non-user-visible europe-only feature, which depended on user-agents aligning between youtube.com and google.com, that got broken, especially since the details couldn't be discussed in public.

Antispam? Ad-tracking? Why would it be Europe-only?


Cookie law related, perhaps? There was a cookie-related change in FF43 for third-party cookies (https://bugzilla.mozilla.org/show_bug.cgi?id=536509).


Reading the comments on that bug, it looks like it was only a small subset of Firefox users who experienced this bug. Not really a big story.


I believe this is the relevant comment:

"Apologies to all impacted by this transient configuration. Firefox 43 users who do not have h.264 will get 360p VP8 until the configuration is updated early next year. The very large majority of Firefox users watching YouTube have h.264. For them, and for user who get VP9, <video> MSE performs better overall than Flash."


Yeah, but the next comment was:

> "Unfortunately this is not an acceptable state for us. We have many millions of users whose Youtube experience suddenly got significantly degraded, and we're not willing to leave them in that state over the holidays. Any chance you guys could simply roll back the change you guys made? If not, we're forced to ship a Firefox update that lies to you guys about the Firefox version, which really is not a good option for either of us :("


It can still be "multiple millions" and still be a very low percentage; when you are assessing risk you deal with percentages, not absolutes.


Reading the comments, it's some 6%, which is not small to me if the total number of users is many millions.


Firefox users without H.264 platform codecs were affected: 100% of Windows XP users (which are about 14% of all Firefox users worldwide!), 2% of all other Windows users (the "N" and "KN" editions in Europe and South Korea), and 3% of Linux users.


Is OpenH264 still not being used for video decoding?


OpenH264 can't be used for video decoding because it can't decode the H264 profiles used to encode video on the internet; it's strictly baseline profile only which is pretty useless. The only reason it exists is because Cisco has a bunch of H264-only videoconferencing hardware that they want to be able to work with WebRTC.


I got this shit, just like I get no ability to change any search settings or do an advanced Google search on Firefox mobile, and I am now spoofing Chrome as my user agent.

In my opinion, Firefox should do what Chrome did, and append the competing browser's UA to their own user agent.


In the case of Firefox for Android, there is already a rapidly growing blacklist of sites that get their UA spoofed to Chrome.


You might want to add webp support to Firefox for Android then, and just spoof star.google.com/whatever TLD they support and star.youtube.com, too.

(Replace "star" with *, as HN doesn’t support any proper escaping or markdown. This kind of bullshit is what makes this page so horrible to use)


FWIW, you can enter things like that on HN by treating it as "code" -- skip a line and indent the next by two spaces

  *.youtube.com


For me, the extreme Youtube slowdown coincided with me being in a new location. For a while, I just thought that the internet must be extremely slow, and yet, the pages themselves loaded fast, fps games worked smoothly, and there weren't any other signs that videos should buffer extremely slowly. Now I know why.


Some (not Mac) users on the newest version of Firefox got lower quality YouTube video over the holidays, and the issue would not be fixed until next year? Oh yah, next year was next week. Thank goodness, it was still fixed within the year.

Do we live in a world where this is unacceptable?


The bug was reported on December 19 and fixed on January 7. That's at least 19 days, not just "next week". Do you think Larry Page would allow YouTube to serve 360p VP8 instead of hardware-accelerated 1080p H.264 to ~15% of Chrome users for 19+ days "because holidays"?


I run a large dev service with multiple online products, and like most others in the industry we go into a deployment freeze from about mid-December to the first week of January. It's designed to prevent new code that could cause stability issues from being deployed over the holiday spell, when many of our top engineers are on vacation and our ability to handle complex problems is limited.

We (and I assume Google does the same) restrict deployments to 911 fixes, which have to meet a specific set of conditions before a fix is considered such; presumably this did not meet Google's criteria. From what I read it was 6% of a minority browser's traffic that was affected, which would put the total traffic at 1-2%. I would not break the deployment embargo for that.


Well, when you consider that every bug fix carries with it the chance of introducing a worse bug, sometimes holding off on a non-critical fix until everybody's back from the holidays and able to work 100% again doesn't seem like a bad idea.

Having some % of FF 43 users (who're about 3.5% of the browser market according to netmarketshare.com) experience degraded but functional video may be better than, say, a 1% risk of breaking YouTube for 100% of users and not being able to fix it for a day.


I think the point of the parent comment is that they would have done it right away even if Chrome usage had been at the same 3.5% share.


Defending Google and attacking Firefox is still the trend though, as many of the comments here show, even though this Mozilla employee's comments are crystal clear (for once someone does that - that's great!)


> Having some [small % of users'] experience degraded but functional video may be better than, say, a 1% risk of breaking youtube for 100% of users and not being able to fix it for a day.

Not being able to roll back for a day, you mean. Worst-case you hit the undo button, which they surely have.


A rollback is a change; it can still introduce problems if there are issues in the rollback process. A deployment freeze is a freeze, even for rollbacks.


Possibly there is better communication between the YouTube and Chrome teams than between YouTube and Firefox. As in, they probably cross-test each release.


Here is a list of bugs with Google services and Firefox.

http://www.otsukare.info/2014/10/28/google-webcompatibility-...


We should just remove the reporting of User-Agent in browsers. All it does is encourage user tracking and the building of half-baked heuristics.


The user agent header should have been dropped a long time ago, as it is one of the largest[1] sources of browser identification data.

Even ignoring the tracking problem, using the user agent string instead of feature detection has always been terrible engineering, as it assumes all browsers within a version have the same features. Browsers have always been configurable - often in extreme ways. I don't understand how anybody can simply pretend the preferences dialog (and extensions, etc.) doesn't exist.

[1] Out of the 15.91 bits of data that Panopticlick was able to collect from my browser, 9.0 bits (56%) were from the user agent. The 2nd largest source is HTTP_ACCEPT at 4.25 bits.
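
To unpack that footnote: a trait shared by a fraction p of browsers contributes -log2(p) bits of identifying information. A rough sketch of the arithmetic (the 1-in-512 figure is just an example that yields 9 bits):

  // Surprisal: a trait shared by a fraction p of browsers is worth
  // -log2(p) bits of identifying information.
  const bits = (p: number) => -Math.log2(p);
  console.log(bits(1 / 512)); // 9 bits, e.g. a UA shared by 1 in 512 users
  console.log(2 ** 15.91);    // ~61,600: a 15.91-bit fingerprint is shared
                              // by only ~1 in 61,600 browsers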


Feature detection is not always enough. E.g. if a browser reports it has a required feature but the feature is buggy, rendering it effectively unusable, then browser detection is necessary.
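
In practice the two approaches get combined: trust feature detection by default, and keep a small user-agent blacklist only for implementations known to be broken. A minimal sketch (the blacklist entry is invented for illustration):

  // Feature detection first; UA sniffing only as a targeted override
  // for known-broken implementations (the entry below is made up).
  const knownBrokenMSE = [/Firefox\/43\./];
  const mseUsable = "MediaSource" in window &&
    !knownBrokenMSE.some((re) => re.test(navigator.userAgent));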


> using the user agent string instead of feature detection has always been terrible engineering

Is there a better solution for server-side feature detection?


Something like the Accept header, listing capabilities. Or something like how Android handles capabilities in manifests.
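
Purely hypothetical, but the idea would be a compact capability list instead of a browser genealogy, along the lines of:

  Capabilities: mse; h264=high; vp9; webp; webgl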


BTW, feature detection can be even more identifying (although you need it anyway). IIRC my browser was completely unique at Panopticlick, most likely because I have a rare plugin installed (a handler for byond:// URLs)


Or, make the information in the string useful. Like, maybe, just list the features the particular user-agent supports.


User-Agent: entire browser source code (uuencoded) ?

Hmm... might be a bit big... how about, since the source code is publicly available, we could just put the browser name and version in there?


No, seriously. Can we just get all browser vendors to cooperate and clean up the mess at the same time, so everyone wins?

User-Agent: Chromium/47.0 (Windows 10)

User-Agent: Firefox/43.0 (GNU/Linux; X11)

User-Agent: Wget/1.15.4 (GNU/Linux)

etc.

Given that current UA strings are abysmal (and sent on every request - images, scripts, stylesheets), this would probably save a few kilobytes' worth of headers, so would probably improve performance by some nanoseconds.

They managed to do this with SHA-1 deprecation and a lot of other stuff. So they could just announce that "on February 1st, 2017 we all clean up the clusterfuck User-Agent headers are", bake in the timer and spread the word.


What's the cost/benefit on that? Older sites would not update. Newer sites are using better forms of detection, except for the tiny number of sites that put engineering effort into supporting an insane long tail of old browsers.

(SHA-1 deprecation only worked because certificates have to be renewed, so any site with working HTTPS is being actively maintained at some level)


The benefit's obvious - getting rid of legacy cruft. It has to be done someday, or by 2030 we'll end up with UA strings a few kilobytes long.

Cost: bajillion man-hours of CEOs, companies and committees trying to negotiate that they all do this at once, then about 10 minutes of developer time to actually code this into reality.

Then some sites would be broken, but old stuff eventually rotting and dying is nothing new on the web. Unfortunately, I have no idea about the numbers, but I suppose they would be low. If someone has actual statistics on how many sites become practically unusable without a legacy User-Agent header, that'd be great.


> Benefit's obvious - getting rid of legacy cruft. It has to be done someday, or by 2030 we'll end up with few kilobyte-long UA strings.

Which will cost how much by the standards of 2030? Particularly given HTTP/2 header compression.


Getting lower response times isn't the main point here. Getting rid of the cruft is about sanity, not extra milliseconds.

But I'm sure even by 2040 there will still be a lot of places with awful network connectivity, where every kilobyte matters.


If you're going to go from this "Mozilla/5.0 (Mobile; Windows Phone 8.1; Android 4.0; ARM; Trident/7.0; Touch; rv:11.0; IEMobile/11.0; BLU; WIN HD W510u) like iPhone OS 7_0_3 Mac OS X AppleWebKit/537 (KHTML, like Gecko) Mobile Safari/537" to something useful, you should probably give the useful one a new name and drop User-Agent on the drop-dead date... That way websites can test the new way with their audience in advance.


The obvious solution is to deprecate User-Agent and have a new header. Browser-Version or something like that.


Some stores even charge different prices based on the user-agent string...


How do they encourage user tracking? The user agent string only tells you information about the browser, not the user.


It's not the only data that is available for deriving a tracking key.

https://panopticlick.eff.org/

Why are you pretending that there is some sort of difference between "tracking a user" and "tracking the user's browser", which is close enough for most tracking purposes? Tracking the browser is tracking a user for a vast majority of people. For browsers on a phone, it is even more likely to be a 1:1 mapping.


The user agent (by itself) provides you none of those things....

> Why are you pretending that there is some sort of difference between "tracking a user" and "tracking the user's browser"

I'm not? That isn't what I said.


They add extra bits of identifying information to the pool.

You know, https://panopticlick.eff.org/


https://www.browserleaks.com/ is another useful resource to help understand browser fingerprinting and how to combat it.


I wonder why it was that important (for Firefox to do something about it).


They already lost a massive chunk of their users. Do you think things will get sweeter if people start to notice that Firefox videos are crappier than on Chrome?


Yes, but you are talking about only weeks here for one thing.


Only weeks? I wouldn't watch any video for even 10 seconds if it's only 360p. I would instantly start tracking down the problem and/or switch to a player that can do more.


0th world problem, to the extreme. The bug would be fixed before at least 99.99% of users noticed and tested an alternative.


Put yourself in Mozilla's shoes. Why would their users have to take the punishment for no apparent reason? Wouldn't you be protective towards your users/reputation?


It would be the week a lot of new computers were unboxed.


But none of them run XP.


True, I would expect a lot of XP computers got replaced that week.


Also happens with Windows N.


They'd probably like to avoid developing a reputation as a crappy video decoder. There may have been some bickering amongst the peanut gallery about it.


On a similar note, Facebook does this as well for video. Take these two browsers: Chrome (with Flash bundled) and Opera (the new shitty one, based on Chromium, with a Chrome-like user agent and without Flash). Facebook serves Flash video to the browser without Flash and HTML5 video to the browser with Flash.

Why? Fucking user agent.


Oh, I just remembered - they also do it with GIFs - convert them to video and serve it through Flash. I sometimes wonder whether they are invested in Adobe or something.


I think that browser user-agent-strings are far more trouble than they are worth.

I think at this point, if there are browser bugs in something as important as video, the release team or someone should pick it up before release as a show-stopper.

If the issue is on the server end, then the people running the server should be fixing their mess.

Do we know which it was? A Firefox issue that caused YouTube to detect the user-agent and serve lower quality video, or was it an issue on YouTube's end?


See https://news.ycombinator.com/item?id=10878678 for what happened here, at least from Mozilla's point of view.


Probably true but sadly there are equivalents to a user-agent string in a lot of technologies and they will never really go away because of compatibility constraints.

Entire operating systems contain hacks to work around unfortunately-designed-but-popular applications doing things they shouldn't. Device drivers too. Even the x86 processor specification in some cases is more about "what companies actually do" than "what some document says", because that is the only way things work correctly.


The only reason this should be at #1 on the HN front page is if there is tangible evidence that the intent was malicious.


The poor way in which YouTube does feature detection (via user agent string) is almost as appalling. There is nothing wrong with outlining it here.


> The poor way in which YouTube does feature detection (via user agent string) is almost as appalling.

If you could provide a better alternative, I'm sure they'd be all ears.


I'd have to look at the code, but you can detect just about anything within the browser: do the check once for the features you want to use, store the result, and go from there. User-agent string parsing is a hack that's typically easier to do, hence why so many still rely on it.

Having said that, I don't think they'd be all ears; their code isn't open source, and while I could try to debug through their minified code, that hardly seems worth it for me to do.
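
Something like this minimal sketch, assuming localStorage is acceptable for the cache (names and codec strings are mine; the UA is used only as a cache key, so the result gets re-detected after an upgrade):

  // Detect once, cache the result; re-detect when the browser changes.
  function detectFeatures(): string {
    const mse = "MediaSource" in window;
    return JSON.stringify({
      mse,
      h264: mse && MediaSource.isTypeSupported('video/mp4; codecs="avc1.64001F"'),
      vp9: mse && MediaSource.isTypeSupported('video/webm; codecs="vp9"'),
    });
  }

  const cacheKey = "features:" + navigator.userAgent; // upgrades invalidate it
  let cached = localStorage.getItem(cacheKey);
  if (cached === null) {
    cached = detectFeatures();
    localStorage.setItem(cacheKey, cached); // later visits skip the checks
  }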


Devil's advocate:

Detecting the features available in the browser is potentially two requests: the server sends HTML/JS to the client, the client makes a decision, and then requests more data that's compatible with its feature set. If the server can do feature detection by user agent sniffing, it's potentially one request.

The client has to do this for every browser session, too, since browsers are frequently upgraded and the features change week-to-week.


> Detecting the features available in the browser is potentially two requests: the server sends HTML/JS to the client, the client makes a decision, and then requests more data that's compatible with its feature set. If the server can do feature detection by user agent sniffing, it's potentially one request.

That's not really true though. The code that's going to load the video will be downloaded and then the code will request the video. So those two requests happen no matter what. Cache the feature detection so it's slightly faster after the first time.

> The client has to do this for every browser session, too, since browsers are frequently upgraded and the features change week-to-week.

Browsers are unlikely to lose the ability to use whatever codec you think is best, so I think this is a safe assumption. Bonus points if you can detect failure and re-run the feature detection in case something odd like that happens.


Put a little button there that lets the user select the format/resolution/implementation.


What percentage of YouTube users could define the word "format"?


The fewer options you give the user up front, the better; I don't think this would be a better option (though allowing advanced users to select this may not be the worst thing).


And the unforeseen consequences of the "quick, innocent fix" are more important still. They did not handle this properly.


I don't think it was malicious on Google's side, though it's still definitely interesting enough to be on the homepage, as it's a situation where both parties had to deal with hard technical constraints.


YouTube hacked together feature detection that relies on user agent strings. Firefox released an update that hacked around a recent bug in said feature detection. Seems fine to me without needing any conspiracy drama.


If your ultimate actions were anti-competitive, even if it was a legitimate mistake, then you can't and shouldn't let it slide. If people started following that philosophy, companies like Microsoft would start causing "innocent little mistakes" every day.

This absolutely, positively, would never, ever, happen in Chrome. Never. Yet here we are. "oops!"

I absolutely believe it was an accident, but I don't believe it should be fully treated like one.


I somewhat doubt it was an accident. Google products always seemed tailored to work less well in Firefox. YouTube always seemed to be slow on Firefox for me on anything less than an i5 while it worked perfectly in Chrome on even pretty bad PCs, which shouldn't happen because other video sites always worked just fine in Firefox, even when serving much higher-resolution video.


Interesting ... Because of some incidents Google is evil ... Meanwhile Firefox is not allowed on iOS.



"Everyone" which cares about openness avoids Apple products and iOS in particular and has for a long time.

Pointing out how Google is evil, does exclude other companies from being evil too.


> Pointing out how Google is evil, does exclude other companies from being evil too.

Obvious typo: Pointing out how Google is evil, does NOT exclude other companies from being evil too.


Chrome isn't allowed on FirefoxOS…


I've also noticed the bug; it didn't happen with Chrome. What do you all think my first thought was?


"oh firefox sux"

But in fact, it was all Google's doing. Imagine if they do this randomly and the developers of Safari, Edge, and Firefox don't always notice... a good way to grab market share, by leveraging the fact that essentially the entire human race uses YouTube daily.


I doubt YouTube is willing to jeopardize their ad revenue for some intangible nudge towards Chrome. I'm sure these different Google orgs, rightly, prioritize their own P&Ls.


I'm sure it's involuntary (at least I hope so) but the prioritization is voluntary (like you said, if it was Chrome it'd be an instant fix - even if Chrome had 1% market share).

My comment is also meant to show the power Google holds over all web browsers. Imagine a sharp sword hanging above the head of every web browser, one they can swing any time they please. Hopefully they don't, but they can. Sometimes the sword even just slips. That's a lot of power for a single company.



