Handling cookies is a minefield (grayduck.mn)
427 points by todsacerdoti 19 hours ago | 212 comments





Cookies are filled with weird gotchas and uncomfortable behavior that works 99.95% of the time. My favorite cookie minefield is cookie shadowing - if you set cookies with the same name but different key properties (domain, path, etc.) you can get multiple near-identical cookies set at once - with no ability for the backend or JS to tell which is which.

Try going to https://example.com/somepath and entering the following into the browser console:

  document.cookie = "foo=a"; 
  document.cookie = "foo=b; domain=.example.com";
  document.cookie = "foo=c; path=/somepath";
  document.cookie
I get

  'foo=c; foo=a; foo=b'
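A server-side sketch (plain Node-style JavaScript, not from the article) of what the backend sees: the Cookie header carries three pairs with the same name and no attribute information, so there is nothing to disambiguate on.

```javascript
// Minimal Cookie-header parser: duplicates survive, attributes don't.
function parseCookieHeader(header) {
  const pairs = [];
  for (const part of header.split(';')) {
    const eq = part.indexOf('=');
    if (eq === -1) continue;
    pairs.push([part.slice(0, eq).trim(), part.slice(eq + 1).trim()]);
  }
  return pairs;
}

// Three entries all named "foo"; most frameworks silently keep only
// the first (or last) one and discard the rest.
parseCookieHeader('foo=c; foo=a; foo=b');
// → [['foo', 'c'], ['foo', 'a'], ['foo', 'b']]
```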

At work, whoever designed our setup put the staging and dev environments on the same domain and the entire massive company has adopted this pattern.

What a colossal mistake.


Yep. Even within the prod environment it's ideal to have a separate domain (as defined by the Public Suffix List) for sketchy stuff like files uploaded by users. Eliminates a whole class of security issues and general fuckery

I had the option to re-use the prod domain for non-prod a few years ago (the company's other two projects use the prod domain for all non-prod environments).

I didn't really think about cookies back then but it just felt like a generally bad idea because disastrously messing up a URL in some config or related service would be much easier.


For the juniors reading this, here's what you do:

Buy a second domain, ideally using the same TLD as your production domain (some firewalls and filters will be prejudiced against specific TLDs). Mimic the subdomains exactly as they are in production for staging/dev.


Just use subdomains such as *.dev.example.com, *.test.example.com, *.prod.example.com, etc., no?

The reason not to do that is that dev.example.com can set cookies on example.com and other envs can see them.
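A rough sketch of RFC 6265 domain-matching illustrates the leak; `domainMatches` here is a simplified stand-in for the real algorithm (the real rules also exclude IP addresses, and browsers additionally reject Domain values that are public suffixes):

```javascript
// Simplified RFC 6265 "domain-match": a cookie with Domain=example.com
// is sent to every subdomain, which is why dev.example.com can plant
// cookies that production requests will carry.
function domainMatches(requestHost, cookieDomain) {
  if (requestHost === cookieDomain) return true;
  return requestHost.endsWith('.' + cookieDomain);
}

// A cookie set by dev.example.com with Domain=example.com:
domainMatches('www.example.com', 'example.com'); // true: prod receives it
domainMatches('dev.example.com', 'example.com'); // true
domainMatches('example.dev', 'example.com');     // false: separate site
```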

We have *.example.dev, *.example.qa, *.example.com for development, staging/qa and production. Works well and we haven't had any issues with cookies.

That only works if you (and any third party code that might run on such a domain) are completely consistent about always specifying the domain as one of your subdomains whenever you set a cookie.

And if your marketing/SEO/business people are ok with having something like "prod" as a subdomain for all your production web pages.


Usually it's mainsite.com for the marketing site, and then app.mainsite.com for actual production, or if you have multiple it'll have the product name, like coolproduct.mainsite.com

We then have app-stg and app-canary subdomains for our test envs which can only be accessed by us (enforced via zero trust). No reason for marketing or SEO teams to care in any case.


When was the last time you saw a public website like that? prod.companyname.com websites are extremely rare, especially outside tech.

The production site could be www. or something else that makes sense.

Ah yes if you use a CNAME that would work. You know better than me.

I'm sure this will be replicated in future projects because it's much easier to argue "we're already following this pattern so let's be consistent" than "this pattern is bad and let's not have two ruined projects"

btw, technically that leading dot in the domain isn't allowed and will be ignored; https://www.rfc-editor.org/rfc/rfc6265#section-4.1.2.3

... this came up recently after I tightened the validation in jshttp/cookie https://github.com/jshttp/cookie/pull/167 - since that PR the validation has been loosened again a bit, similar to the browser code mentioned in the article.

My changes were prompted by finding a bug in our code (not jshttp) where a cookie header was constructed by mashing the strings together without encoding; every so often a value would have a space and break requests. I was going to suggest that devs use jshttp/cookie's serialize() to avoid this, but then realized it didn't validate strictly enough to catch the bug we'd seen. I proposed a fix, and someone else then spotted that the validation was loose enough that you could slip JS into the _name_ field of the cookie, which would be interpreted elsewhere as the _value_, providing an unusual vector for code injection.
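A hypothetical illustration of that class of bug (not the actual jshttp/cookie code): a serializer that skips name validation lets a crafted name smuggle what will be parsed elsewhere as a separate cookie.

```javascript
// Naive serializer: trusts the name entirely.
function naiveSerialize(name, value) {
  return name + '=' + encodeURIComponent(value);
}

// Strict serializer: cookie-name must be an RFC 2616 "token",
// so '=', ';', whitespace, etc. are rejected outright.
const TOKEN = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/;
function strictSerialize(name, value) {
  if (!TOKEN.test(name)) throw new TypeError('invalid cookie name');
  return name + '=' + encodeURIComponent(value);
}

// With no validation, an attacker-controlled "name" carries its own pair:
naiveSerialize('session=evil; injected', 'x');
// → 'session=evil; injected=x' — two cookies where one was intended
```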


I wonder if this explains a lot of the unusual behaviour that happens when you use multiple accounts on a website in the same browser.

Seems perfectly reasonable to me?

If you are on /somepath I'd expect to get C, as it is the most specific value of the three. All the values are still returned, ordered, which to me is the best of both worlds (path-specific values + knowing the globals).

The only thing I don't like is the magic `document.cookie` setter, but alas that's nearly 30 years old.


Yeah, isn’t that how you represent a list of values? (Or maybe better to say a collection, not sure if ordering is preserved)

But if the attributes are exactly the same then the cookies replace each other. So this isn't a general mechanism for representing a list.

Not to mention that the way to delete a cookie is sending a replacement cookie that expires in the past. How are you supposed to delete the right cookie here?


And the worst is that you need to exactly match the domain and path semantics in order to delete the cookie! Domain is easy enough because there are only two options - available to subdomain and not available to subdomain. But if you have a cookie with the `/path` set and you don't know what value was used, you literally cannot delete that cookie from JS or the backend. You need to either pop open devtools and look at the path or ask the end user to clear all cookies.

Is there a way for JS to see the attributes for each value? Because presumably setting an expire time in the past and iterating over every used set of attributes would get the job done to delete the cookie. Iterating over all possible (plausible?) attributes may also work, but knowing the specific attributes set would narrow that list of erasing writes to issue.

No, there isn't. All you get is a list of values that are valid for the current page. Same on the server side.

If you're ever in a situation where you need to invalidate all possible instances of a cookie, it's easier to just use a different name.
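If you do have to attempt it, the only blunt tool is issuing an expiring write for every path you can guess; the path list below is purely an assumption:

```javascript
// Sketch: JS can't read a cookie's Path attribute, so deleting a cookie
// with an unknown path means issuing an expiring write per candidate path.
function expireEverywhere(name, paths = ['/', '/cart', '/cart/checkout']) {
  return paths.map(
    (p) => `${name}=; path=${p}; expires=Thu, 01 Jan 1970 00:00:00 GMT`
  );
}

// In a browser, each string would be assigned to document.cookie:
expireEverywhere('currency');
// ['currency=; path=/; expires=...', 'currency=; path=/cart; ...', ...]
```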


Using the path field is a code smell

Can you elaborate? I'm having a tough time finding references to that. (Disclaimer: I'm not an avid JS developer)

It means that you are setting cookies on whatever page you're on, without considering whether the cookie will be consistently accessible on other pages.

For example, you set the currency to EUR in /product/123, but when you navigate to /cart and refresh, it's back to USD. You change it again to EUR, only to realize in /cart/checkout that the USD pricing is actually better. So you try to set it back to USD, but now the cookie at /cart conflicts with the one at /cart/checkout because each page has its own cookie.


If you want cookies to be global, set them to / or leave out the path. If you want more fine-grained cookies, use a specific path. What's the problem? Currency is—in your example—clearly a site-wide setting. I think sites should make more of their hierarchical structure, not less.

If you leave out the path, it will default to the directory of the current URL, not /.

If not for this default behavior, it would have been much easier to manage global settings such as currency.
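The default-path rule from RFC 6265 §5.1.4 can be sketched as a small function; note that `/product/123` defaults to `/product`, not `/`:

```javascript
// RFC 6265 §5.1.4 default-path: when the Path attribute is omitted,
// the cookie is scoped to the "directory" of the current URL.
function defaultPath(uriPath) {
  if (!uriPath || uriPath[0] !== '/') return '/';
  const lastSlash = uriPath.lastIndexOf('/');
  if (lastSlash === 0) return '/';
  return uriPath.slice(0, lastSlash);
}

defaultPath('/product/123'); // '/product' — hence the currency surprise
defaultPath('/cart');        // '/'
```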


Isn't that just the feature working as intended? Of course it is possible to introduce a bug by setting or not setting a cookie somewhere where it should/shouldn't be set.

I've never found a use for path-based cookies personally, but I'm not sure this is a particularly compelling example.


For modern applications you’ll have better ways to maintain state. As shown they cause trouble in practice. Cookies should be used sparingly.

If you want to maintain state across navigations and share that state with a server it’s the best we’ve got.

Server can store session state

Server side session state for more than authentication is way worse than "code smell."

It requires a ping to a shared data source on every request, and the same one for all of them. No sharding, no split domains... That gets expensive fast!


You just described how the whole web operates. It works just fine.

Even if you want client side, we have better ways now than cookies.

We do, but only cookies are universally available. Plenty of unusual user-agents in the world, or people like me that browse with JS off by default.

I add some products on my phone. Then I log in on desktop later to modify and order. Cart is empty. That's an engineering smell. A really bad one.

That's nothing more than UX/UI.

> In computer programming, a code smell is any characteristic in the source code of a program that possibly indicates a deeper problem. Determining what is and is not a code smell is subjective, and varies by language, developer, and development methodology.

- https://en.wikipedia.org/wiki/Code_smell


The article mentions Rust's approach, but note that (unlike the other mentioned languages) Rust doesn't ship any cookie handling facilities in the standard library, so it's actually looking at the behavior of the third-party "cookie" crate (which includes the option to percent-encode as Ruby does): https://docs.rs/cookie/0.18.1/cookie/

Thanks for pointing that out -- I've updated the article and given you credit down at the bottom. Let me know if you'd prefer something other than "kibwen."

De facto standardization by snapping up good names early!

Not really. A lot of essential third party Rust crates and projects have "weird" names, eg. "nom", "tokio", etc. You can see that from the list of most downloaded crates [1].

This one just happens to have been owned and maintained by core Rust folks and used in a lot of larger libraries. This is more the exception than the rule.

It's a given that you should do due diligence on crates and not just use the first name that matches your use case. There's a lot of crate name squatting and abandonware.

Rust crates need namespacing to avoid this and similar problems going forward.

[1] https://crates.io/crates?sort=downloads


A sibling comment talked about “UwU names”. Not sure exactly if they are referring to “tokio” or something else. But if it’s tokio, they might find this informative:

> I enjoyed visiting Tokio (Tokyo) the city and I liked the "io" suffix and how it plays w/ Mio as well. I don't know... naming is hard so I didn't spend too much time thinking about it.

https://www.reddit.com/r/rust/comments/d3ld9z/comment/f03lnm...

From the original release announcement of tokio on r/rust on Reddit.

And also to the sibling commenter, if tokio is a problematic name to you:

Would either of the following names be equally problematic or not?

- Chicago. Code name for Windows 95, and also the name of a city in the USA. https://en.wikipedia.org/wiki/Development_of_Windows_95 https://en.wikipedia.org/wiki/Chicago

- Oslo. Name of a team working on OpenStack, and also appears in their package names. Oslo is the capital of Norway. https://wiki.openstack.org/wiki/Oslo https://en.wikipedia.org/wiki/Oslo

If yes, why? If no, also why?


Just want to point out that location names are used for codenames because they cannot be trademarked

Big tech uses them instead of wasting legal time and money having to clear a new name that's temporary or non-public.

Changing the name to Tokio removes this benefit and still leaves it disconnected from its purpose.


The name of the city is 東京 -- anything in Latin characters is a rough transliteration. Tokio was the common spelling in European texts until some time last century, and is still used regularly in continental Europe.

see also, e.g. Tokio Hotel


A reference to Tokio Hotel was not on my HN bingo card

This is the first time Tokio Hotel has been mentioned on HN in over ten years.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

That has me thinking of Neutral Milk Hotel. Totally different vibes.


Tokio is a different (masculine) name in Japanese, pronounced quite differently. /tokʲio/ vs. /to̞ːkʲo̞ː/.

https://en.m.wikipedia.org/wiki/Tokio_(given_name)


We are talking about the spelling centuries ago, when romanisation was less standardised

> location names are used for codenames because they cannot be trademarked

I don't think that's the case. Amazon, Nokia as some counterexamples.


> Rust crates need namespacing to avoid this and similar problems going forward.

It hasn't been implemented, despite the HN crowd demanding it for years, because it won't solve the problem (namespace squatting just replaces name squatting and tada! you're back to square one with an extra step).


I do agree that people will assume xyz/xyz is more authoritative than some-org/xyz, but I think there is benefit to knowing that everything under xyz/* has a single owner. The current approach is to name companion crates like xyz_abc, but someone else could come along with xyz_def, and it's not immediately obvious that xyz_abc has the same owner as xyz but xyz_def does not.

This is a completely different topic though, and I think there's interest in shipping something like that.

That's the main problem with “just add namespaces FFS” discussions that come up every other week: everyone has their own vision of what namespaces should look like and what they are meant for, but nobody has ever taken the time to write an RFC with supporting arguments. In fact, people bring this up mostly in ways related to name squatting (like right here), even though that's not a problem namespaces can solve in the first place. It's magical thinking at its finest.


> nobody has ever taken the time to write an RFC with supporting arguments.

https://rust-lang.github.io/rfcs/3243-packages-as-optional-n...

https://github.com/rust-lang/rfcs/pull/3243


Exactly, this isn't about “default namespaces”; this is the other feature which I said had support (didn't know the RFC had been merged though, thanks for pointing that out).

This isn't the kind of namespace people say they want to prevent squatting.


Solved the problem almost completely in npm. Sure you can't search for a name of a company or a project and expect it to be related to the company or project. But there's no way to solve that.

But once you know a namespace is owned by a company or project, you can know that everything under it is legit. Which solves the vast majority of squatting and impersonation problems.

Also you know that everything under "node" for example is part of the language.


> Sure you can't search for a name of a company or a project and expect it to be related to the company or project. But there's no way to solve that.

There's a way to solve it partially: you can have a special part of your namespace tied to domains and require that eg com.google.some-package be signed by a certificate that can also sign some-package.google.com

Of course, there's no guarantee that https://company.com belongs to the company, but the public has already developed ways of coping with that.

(I specifically suggest doing that only to part of your namespace, because you still want people to be able to upload packages without having to register a domain first.)


That just makes package names harder to remember and type (and actually less secure, as more prone to typosquatting and backdoors in seemingly harmless pull requests) for no benefit.

Keep in mind that the majority of package by far don't come from companies in the first place, and requiring individual developers to have a domain of their own isn't particularly welcoming.

It's going to be tons of complexity for zero actual benefit.


There are problems it does solve though. It’s incomprehensible that we get so many new package managers that fail to learn from the bajillion that came before.

It actually did learn, and that's what makes cargo as good as it is (arguably the best of all that came before, and a source of inspiration for the ones that came after).

But its authors rightly concluded that it's useless to expect to prevent name squatting by any technical means!


php deals with this by using the username/organization name of a repository as the namespace name of packages. At least then you're having to squat something further up the food chain.

Why not do it like go does and use the git hosting domain as a prefix (like github.com/org/project)?

I recall in the Elm community there was a lot of hooplah around the package system aligning too much with a single repo provider (github) so that might be one disincentive there.

It doesn't have to be git either - a few version control systems are supported. See https://go.dev/ref/mod#vcs

And it doesn't have to be the direct domain+path of the repository, it can be some URL where you put a metadata file that points to the source repo.


How does it prevent squatting in any way?

Did anyone else notice that the HTTP protocol embeds within it ten-thousand different protocols? Browsers and web servers both "add-on" a ton of functionality, which all have specifications and de-facto specifications, and all of it is delivered through the umbrella of basically one generic "HTTP" protocol. You can't have the client specify what version of these ten-thousand non-specifications it is compatible with, and the server can't either. We can't upgrade the "specs" because none of the rest of the clients will understand, and there won't be backwards-compatibility. So we just have this morass of random shit that nobody can agree on and can't fix. And there is no planned obsolescence, so we have to carry forward whatever bad decisions we made in the past.

Tbh I’ve made peace with this world and I might even enjoy it more than the planned obsolescence one.

That was the model that Microsoft used at the height of their power and dominance in the 1990s and 2000s.

Anarchy is the price to pay for not having a monopoly dictate a nice clean spec which they can force-deprecate whenever they want.

This is also the fault of shit-tastic middleware boxes which block any protocol they don't understand-- because, hey, it's "more secure" to default-fail, right?-- so every new type of application traffic until the end of time has to be tunneled over HTTP if it wants to work over the real Internet.

> middleware boxes which block any protocol they don't understand-- because, hey, it's "more secure" to default-fail, right?

If the intent is to secure something then failing-open will indeed be at odds with that goal. I suspect you’re not implying otherwise, but rather expressing frustration that such providers simply can’t be bothered to put in the work and use security as an excuse.


About 10 years ago I implemented cookie based sessions for a project I was working on. I had a terrible time debugging why auth was working in Safari but not Chrome (or vice-versa, can't remember). Turned out that one of the browsers just wouldn't set cookies if they didn't have the right format, and I wasn't doing anything particularly weird, it was a difference of '-' vs '_' if I recall correctly.

IIRC there is (or was?) a difference in case-sensitivity between Safari and Chrome, maybe with the Set-Cookie header? I've run into something before which stopped me from using camelCase as cookie keys.

Can't seem to find the exact issue from googling it.


I got the impression that almost as soon as they were introduced people thought the only sensible use of cookies is to set an opaque token so the server can recognize the client when it sees it again, and store everything else server side.

I don't understand why it's a problem that the client (in principle) can handle values that the server will never send. Just don't send them, and you don't have to worry about perplexing riddles like "but what would happen if I did?"


Cookies are an antiquated technology. One of the first introduced while the web was still young in the 90s, and they have had a few iterations of bad ideas.

They are the only place to store opaque tokens, so you gotta use them for auth.


They are not the only place to store tokens. You can store tokens in localStorage for JS-heavy websites; in fact plenty of websites do that. It's not as secure, but acceptable. Another alternative is to "store" the token in the URL; it was widely used in Java for some reason (the jsessionid parameter).

To expand on the "not as secure" comment: local storage is accessible to every JS that runs in the context of the page. This includes anything loaded into the page via <script src=""/> like tracking or cookie consent services.

Cookies seem to be a big complicated mess, and meanwhile are almost impossible to change for backwards-compatibility reasons. Is this a case for creating a new, separate mechanism? For example, a NewCookie mechanism could be specified instead, redesigned from the ground up to work consistently. It could have all the modern security measures built in, a stricter specification, proper support for Unicode, etc.

It's funny that you mention NewCookie, there is actually a deprecated Set-Cookie2 header already: https://stackoverflow.com/q/9462180/3474615

Imagine pwning a frontend server or proxy, spawning an http/s server on another port, and being able to intercept all cookies and sessions of all users, even when you couldn't pwn the (fortified) database.

This could have a huge advantage, because if you leave the original service untouched on port 80/443, there is no alert popping up on the defending blueteam side.

This gives me an idea for a project...



NewCookie is, roughly, what browser Local Storage is.

At least for some use cases. Of course, it doesn't directly integrate with headers.


I think one important use case we have for cookies is "Secure; HttpOnly" cookies. Making a token totally inaccessible from JS, but still letting the client handle the session is a use case that localStorage can't help with. (Even if there's a lot of JWTs in localStorage out there.)

However, potentially a localStorage (and sessionStorage!) compatible cookie-replacement api might allow for annotating keys with secure and/or HttpOnly bits? Keeping cookies and localStorage in sync is a hassle anyhow when necessary, so having the apis align a little better would be nice. Not to mention that that would have the advantage of partially heading off an inevitable criticism - that users don't want yet another tracking mechanism. After all, we already have localStorage and sessionStorage, and they're server-readable too now, just indirectly.

On the other hand; the size constraints on storage will be less severe than those on tags in each http request, so perhaps this is being overly clever with risks of accidentally huge payloads suddenly being sent along with each request.


I think if I were implementing a webapp from scratch today I'd use one single Session ID cookie, store sessions in Redis (etc) indefinitely (they really aren't that big), and for things meant to be stored/accessed on the frontend (e.g. "has dismissed some dumb popup") just use local storage. Dealing with anything to do with cookies is indeed incredibly painful.

> and they're server-readable too now, just indirectly.

Could you point me to more reading about this? It's the first time I've heard of it


I think they mean that you can always send back the content of a localstorage property with javascript grabbing the value and sending another request back with it in the body. Since the front end is going to run any javascript the server sends it (disregarding adblockers at least), it's sort of a more indirect version of Set-Cookie.

Yeah, that's what I meant. There's no built in support; but it's indirectly readable since client-side JS can read it.

This misses the "HttpOnly" part, which prevents javascript (think script injection vulnerability) from touching this part of the storage

i think the main problem there is that cookies are so inextricably tied up with tracking, any attempt to create better cookies now will get shut down by privacy advocates who simply don't want the whole concept to exist.

we're stuck with cookies because they exist.


Every privacy advocate I know hands over exquisitely detailed private and personal information to Google and/or Apple. It seems unfair to generalize as “privacy advocates” so much as it is people who are anti-ads.

Being anti-ads is a valid opinion. It has less intellectual cover than pro “privacy” though.


The DOM & URL are the safest places to store client-side state. This doesn't cover all use cases, but it does cover the space of clicking pre-authorized links in emails, etc.

I spent a solid month chasing ghosts around iOS Safari arbitrarily eating cookies from domains controlled by our customers. I've never seen Google/Twitter/Facebook/etc domains lose session state like this.


Safari is a lot more strict about cookies than Chromium or Firefox, it will straight up drop or ignore (or, occasionally, truncate) cookies that the other two will happily accept.

I had hoped when writing this article that Google would look at Safari, see that it has always been strict, and feel comfortable changing Chrome to match. But doing so now would unfortunately break too many things for too many users.


If I open a second window or tab I expect when I go to 'myemail.com' that it knows who I am and shows me my account even though the url in the 2nd tab doesn't have any extra info in the URL

the new thing should be called "cupcakes" or "candies" or "snacks" or "munchies"


That feels like that XKCD comic about now there being 15 standards.


Needs a better name than NewCookie though. Suggestions include SuperCookie, UltraCookie or BetterCookie

Or to be slightly more serious avoid calling it a cookie and call it something else. Too much baggage surrounding the word cookie.


Definitely don't use "SuperCookie" as that's a thing: https://en.wikipedia.org/wiki/HTTP_cookie#Supercookie

His Majesty's English might suggest "biscuit".

Limp Biscuit it is then.

Everyone will surely be rushing to be the first one to disseminate this new technology!



I've had something (in the US) that was called a "scone", and it was rigid, which disqualifies it from being similar to a biscuit in my mind.

Is that generally true of scones?


The thing the US call scones is different from the thing the UK calls "scone".

A Dookie is a digested Cookie.

Muffin? Cake?

You graduate from consuming cookies to eating...


TrickOrTreat would seem appropriate.

Cookie header parsing is a shitshow. The "standards" don't represent what actually exists in the wild, each back-end server and/or library and/or framework accepts something different, and browsers do something else yet.

If you are in complete control of front-end and back-end it's not a big problem, but as soon as you have to get different stuff to interoperate it gets very stupid very fast.


Re Safari’s networking code being closed source, a good substitute might be the Swift port of Foundation. You can see checks for control and delete characters here: https://github.com/swiftlang/swift-corelibs-foundation/blob/...

Cookies need to die. Their only legitimate use is authentication, for which we already have the Authorization header. Having a standard way to authenticate into a website in a browser would be amazing; it's just too bad that Basic and Digest auth weren't good enough at the time.

As a bonus we could get Persona-style passwordless future.


How about user preference without logging in? Are you suggesting create a trillion throwaway accounts?

What about things like local storage?

If you want to store language preferences then that means you only know client side and you can't serve html in their language

...example.com/en/ or example.com/es/

The url can store state just fine...


Why are first-party cookies bad?

They are not bad, they are just unnecessary. If your application uses local state, use local storage. If you store session data on the server, identify the user using the Authorization header. Why send arbitrary strings back and forth, often with requests that don't need them? Plus the technology is clearly rotten. They never got namespacing and expiration right, so you can just do weird stuff with them. Also, CSRF wouldn't be a thing if cookies weren't. This is like saying "why is finger/gopher/etc. bad?" They are not exactly bad, but they are obsolete.

> if you store session data on the server, identify the user using the Authorization header.

And by what miracle would the browser send an Authorization header? Who sets it? For which domains would it be set?


Take a look at how basic auth is implemented in browsers today. Now imagine expanding it to (a) provide a much nicer and somewhat customizable UI for entering your credentials and (b) using proper encryption.

How would you use the Authorization header to implement server side session data?

Not a web dev. So do I understand it correctly that it's not so much the server side of this that's the issue, after all the Authorization header contains a nice token, but rather how to safely store the token client side without using cookies?

I think they mean storing an identifier in local or session storage and then sending it in the header.

Identifier in local storage could be stolen by 3rd party JavaScript. Anybody who wants to use local storage for sensitive information should read why there is a httpOnly cookie attribute.

If you are running third-party JS on your site, they can just make requests to your server now. Once JS is loaded, it is running in the context of your domain. No, they can't do it once the user closes the browser, but third-party JS is XSS in action.

And I am not suggesting using local storage for it. I am suggesting adding browser support for standard/generic login UI. Basically think basic auth, just not so basic.


It doesn't work with basic multi page sites though.

Oh right, strictly for spas.

That is a bit of a minefield, I agree…

The way around this, as a developer, is to URL-safe-base64 encode the value. Then you have a bytes primitive and you can use whatever inner representation your heart desires. But the article does also note that you're not 100% in control, either. (Nor should you be; it is a user agent, after all.)

I do wish more UAs opted for "obey the standard" over "bytes and a prayer on the wire". Those 400 responses in the screenshots … they're a conforming response. This would have been better if headers had been either UTF-8 from the start (but there are causality problems with that) or ASCII and then permitted to be UTF-8 later (but that could still cause issues, since you'd be making values that were illegal, legal).
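A sketch of that encode-then-store approach using Node's Buffer (assuming a JSON payload; the key names are made up):

```javascript
// The cookie value is always a URL-safe-base64 string, so nothing in it
// can collide with cookie syntax ('=', ';', ',', spaces, non-ASCII, ...).
function encodeCookieValue(obj) {
  return Buffer.from(JSON.stringify(obj), 'utf8').toString('base64url');
}
function decodeCookieValue(s) {
  return JSON.parse(Buffer.from(s, 'base64url').toString('utf8'));
}

const v = encodeCookieValue({ currency: 'EUR', theme: 'dark' });
decodeCookieValue(v); // { currency: 'EUR', theme: 'dark' }
```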


> URL-safe-base64

And make sure to specify what exactly you mean by that. base64url-encoding is incompatible with base64+urlencoding in ~3% of cases, which is easily missed during development, but will surely happen in production.


Isn't it a lot more than 3%? I don't think I've heard anyone say url-safe-base64 and actually mean urlencode(base64(x))

… yeah. I assume they're getting that from doing 3/64, but for uniform bytes, you're rolling that 3/64 chance every base64-output-character. (And bytes are hardly uniform, either … TFA's example input of JSON is going to skew towards that format's character set.)

oh, geez. No, just base64, using the URL-safe alphabet. (The obvious 62 characters, and "-_" for the last two.)

It's called "urlsafe base64", or some variant, in the languages I work in.

> This encoding may be referred to as "base64url".

https://datatracker.ietf.org/doc/html/rfc4648#section-5

But yeah, it's not base64 followed by a urlencode. It's "just" base64-with-a-different-alphabet.


Cookie value can contain `=`, `/` and `+` characters so standard base64 encoding can be used as well :)

I came across a similar issue when experimenting with the Crystal language. I thought it would be fun to build a simple web scraper to test it out, only to find the default HTTP client fails to parse many cookies set by the response and aborts.

There’s a nasty bug in the Python cookie parser where cookies after a cookie with quotes get dropped: https://github.com/python/cpython/pull/113663

Zoom or some other website our customers use was writing a cookie with quotes that would break the site. Amazingly hard to reproduce and debug.


IT IS a mess, but I've never seen JSON inside a cookie. For JSON I use localStorage or IndexedDB.

In both cases (cookie vs localStorage) you're really just storing your data as a string value, not truly a JSON object, so whether you use a cookie or localStorage is more dependent on the use case.

If you only ever need the stored data on the client, localStorage is your pick. If you need to pass it back to the server with each request, cookies.


JSON is explicitly a string serialization format.

Right, I meant it's not a JavaScript object. It's serialized into a string in any case, no matter which API you're stuffing it into. So it's a bit of a non-sequitur for the parent to suggest that it's somehow weird to store JSON in a cookie, but not in localStorage. It's all just strings.
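If you do want to put JSON in a cookie, one safe pattern is to base64url-encode the serialized string so it stays inside the legal cookie character set. A sketch in Python; the helper names here are made up for illustration:

```python
import base64
import json

def to_cookie_value(obj):
    # JSON, then base64url: the result avoids DQUOTE, comma, semicolon,
    # backslash, and whitespace -- all the characters cookies choke on.
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode()

def from_cookie_value(value):
    # Reverse: base64url-decode, then parse the JSON back out.
    return json.loads(base64.urlsafe_b64decode(value.encode()))

encoded = to_cookie_value({"theme": "dark", "visits": 3})
print(encoded)
```

The encoded string round-trips cleanly and contains nothing a cookie parser will trip over.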

I find it weird too. I’ve always considered cookies like very stupid key value stores.

It would never occur to me to put something more than a simple token in a cookie. A username, and email address, some opaque thing.

The idea of trying to use it for arbitrary strings just seems weird to my intuition, but I don’t really know why. Maybe just because when I was learning about them long ago I don’t remember seeing that in any of the examples.


My point is that there really is no such thing as "truly a JSON object".

Combine local storage with a service worker, so you can pass the data to the server if needed, completely without setting cookies.

And if I don't want any javascript to see my values, ever? Or how do you handle CSRF?

An HttpOnly cookie is the way, but then you just don't use JSON as a cookie value that is sent on every request.

CSRF is no problem, as the data from the service worker is only active on the site itself. If you're talking about CSRF on a website where you can't trust JS, your site is broken anyway, as xhr/fetch use the same HttpOnly cookies and are affected as well.


Since when can you trust js?

You're really going to hate it when you learn about JSON Web Tokens, which exist exactly to hack past this sort of problem.

JWT is encoded, and it is used for data without a server session.

I'm not a fan of JWT, and it's used more often than it should be, but sometimes it makes sense.


But at least they’re base64-encoded, so you don’t have to worry about the special characters

Good way to hit max header length issues. Ask me how I know.

How?

Well you see when a front end developer and a backend developer hate each other very much, they do a special hug and nine days later a 400 request header or cookie too large error is born.

(Seriously though, someone trying to implement breadcrumbs fe-only)


I used ChromeLogger years ago; over time it often created a too-large HTTP header: https://craig.is/writing/chrome-logger

I'm not them, but that 419 pattern in the logs is burned into my adrenaline response: https://duckduckgo.com/?t=ffab&q=nginx+419+cookie+header&ia=...

Are they ubiquitous? I'm no client side guru, I know I could look at makeuseof etc, but why not ask some professionals instead.

At the very least localstorage is supported across the board

No. It is disabled in many browsers when opened in private mode, where you can still have session cookies.

And the article isn't even about the proliferation of attributes cookies have, that browsers honor, and that in some cases are just mandatory. I was trying to explain SameSite to a coworker, and scrolled down a bit... https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#co... wait, cookie prefixes? What the heck are those? The draft appears to date to 2016, but I've been trying to write secure cookie code for longer than that and hadn't heard of them until recently, and I can't really find when they went into browsers (because there are a lot more drafts than there are implemented drafts, and the date doesn't necessarily mean much); replies explaining that are welcome.

Seems like every time I look at cookies they've grown a new wrinkle. They're just a nightmare to keep up with.


Well, prefixes are opt-in. You don't have to keep up with them.

The only recent large problem with cookies was the change to defaults to avoid CSRF; that was opt-out, but it was also extremely overdue.

All of the web standards are always gaining new random features. You don't have to keep up with most of them. They do look like bad abstractions, but maybe it's just that the problem is hard.


> https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#co... wait, cookie prefixes? What the heck are those?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#co...

> For more information about cookie prefixes and the current state of browser support, see the Prefixes section of the Set-Cookie reference article.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...

(Cookie prefixes have been widely supported since 2016 and more or less globally supported since 2019.)

They’re backwards-compatible, so if your cookie meets the requirements for the `__Host-` prefix, you should use `__Host-`.
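For the shape of it, a hedged sketch using Python's stdlib: `__Host-` requires the `Secure` flag and `Path=/`, and forbids a `Domain` attribute, so a conforming Set-Cookie looks like this.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["__Host-session"] = "abc123"
cookie["__Host-session"]["secure"] = True    # required for __Host-
cookie["__Host-session"]["path"] = "/"       # required for __Host-
cookie["__Host-session"]["httponly"] = True  # not required, but sensible
# Note: no Domain attribute -- setting one makes browsers reject the cookie.

print(cookie.output())
```

Browsers enforce those requirements at set time, which is what makes the prefix worth having: a cookie named `__Host-session` that arrives at the server can be trusted not to have been planted from a subdomain.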


[flagged]


I was answering your question about when they went into browsers with a link, and summarizing it in a parenthetical. So much for “replies explaining that welcome”, I guess.

It's the first part of your reply they're responding to, where it looks like you've answered their rhetorical question with the exact link they used to illustrate it.

I'd guess you just screwed up your copy paste and didn't notice.


Almost nobody uses SimpleCookie.load in Python. Flask, FastAPI, and Django have their own more relaxed parsers which don't break on an invalid byte.
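A sketch of the strict behavior being described (exact results vary across Python versions, so treat this as illustrative):

```python
from http.cookies import SimpleCookie

jar = SimpleCookie()
# 0x01 is not a legal cookie character; SimpleCookie's parser stops at the
# first segment it cannot match, so cookies after 'bad' may be lost as well.
jar.load('good=1; bad=val\x01ue; also=2')
print(sorted(jar.keys()))
```

The more relaxed parsers in web frameworks typically skip the offending pair and keep going instead of bailing out.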

    Firefox accepts five characters which RFC recommends that servers not send:

    0x09 (horizontal tab)
    0x20 (spaces)
    0x22 (double quotes)
    0x2C (commas)
    0x5C (backslashes)
I agree with at least some of these. Cookies without commas? Quotes?
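The strict RFC 6265 cookie-octet set is small enough to check directly. A Python sketch; the `is_valid_cookie_value` helper is hypothetical, and it ignores the optional outer DQUOTE pair the grammar allows:

```python
# cookie-octet per RFC 6265: %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E,
# i.e. printable US-ASCII minus space, DQUOTE, comma, semicolon, backslash.
def is_valid_cookie_value(value):
    return all(0x21 <= ord(ch) <= 0x7E and ch not in '",;\\' for ch in value)

print(is_valid_cookie_value('ok_+/='))        # True
print(is_valid_cookie_value('hello, world'))  # False: comma and space
```

Everything Firefox tolerates in that list falls outside this set, which is the whole disagreement.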

Quotes in the value when quotes delimit the value? Yeah that seems dangerous to me.

Quotes don't delimit the value.

Per the section 4.1.1 rules quoted in the article, cookie values can be optionally quoted:

> cookie-value = *cookie-octet / ( DQUOTE *cookie-octet DQUOTE )


That is true, but in that case they are part of the value itself, they're not doing anything special:

> Per the grammar above, the cookie-value MAY be wrapped in DQUOTE characters. Note that in this case, the initial and trailing DQUOTE characters are not stripped. They are part of the cookie-value, and will be included in Cookie header fields sent to the server.


Why does the specification specifically mention them, then?

To clarify that by the spec, double quotes are allowed in the cookie value, but only at the beginning and end.

As for why that is, I have no idea.


Ah, thanks for the clarification!

One of the things I’ve always found frustrating about cookies is that you have to do your own encoding instead of the API doing it for you. I’m sure someone somewhere does but too often I’m doing my own urlencode calls.

Encoding is at least solvable, but every browser having its own cookie length limit instead of some standard value makes for some nonsense. Kong actually has a plugin to split (and, of course, recombine) cookies just to work around this

But it's so solvable that I shouldn't have to solve it

>everything behaves differently, and it's a miracle that [it] works at all.

The web in a nutshell.


Browsers: what it would look like if Postel's Law were somehow made manifest in C++ and also essential to modern life

That reminds me of the Frog and Toad story about willpower vs eating cookies. Yes, handling cookies is a mine field!

I read the collected stories with my two year old, though I made sure we skipped the scary ones with the Dark Frog. I think the cookies ending was a little over his head, but we had fun taking turns acting out Toad pulling his blankets over his head when Frog tells him it's spring.


I got a trick.

Just don't make them, and don't accept them.


Everything about the web is a minefield. It's an exercise in "how many unnecessary layers can we put between users and their content"?

I have a solution! I just made one more framework!

What are you implicitly comparing it against?

Native desktop development.

Aka an unindexable, unsearchable, access-violation error mess.

Electron: Eyes Emoji

The article mocks Postel's law, but if the setter of the cookie had been conservative in what they sent, there would have been no need for the article...

> The article mocks Postel's law

As they should. Postel's Law was a terrible idea and has created minefields all over the place.

Sometimes, those mines aren't just bugs, but create gaping security holes.

If your client is sending data that doesn't conform to spec, you have a bug, and you need to fix it. It should never be up to the server to figure out what you meant and accept it.


Following Postel's law does not mean to accept anything. The received data should still be unambiguous.

You can see that in the case where ASN.1 data need to be exchanged. You could decide to always send them in the DER form (conservative) but accept BER (liberal). BER is still an unambiguous encoding for ASN.1 data but allows several representations of the same data.

The problem with BER mainly lies with cryptographic signatures, as a signature will only match one specific encoding; that's why DER is used in certificates. But you can still apply Postel's law: you may still accept BER fields when parsing a file. If a field has been encoded in a varied form which is incompatible with the signature, you will just reject it, as you would reject it for not being standard DER. But still, you lessen the burden of making sure all parties follow the standards in exactly the same way, and things tend to work more reliably across server/client combinations.


I agree that being liberal in what you accept can leave technical debt. But my comment was about the place in the code where they set a cookie with JSON content instead of keeping to a format that is known to pass easily through HTTP header parsing, like base64. They should have been conservative in what they sent.

And yet the html5 syntax variation survived (with all its weird now-codified quirks), and the simpler, stricter xhtml died out. I'm not disagreeing with you; it's just that being flexible, even if it's bad for the ecosystem, is good for surviving in the ecosystem.

There was a lot of pain and suffering along the way to html5, and html5 is the logical end state of postel's law: every possible sequence of bytes is a valid html5 document with a well-defined parsing, so there is no longer any room to be more liberal in what you accept than what the standard permits (at least so far as parsing the document).

Getting slightly off topic, but I think it's hard to find the right terminology to talk about html's complexities. As you point out, it isn't really a syntax anymore now that literally every sequence is valid. Yet the parsing rules are obviously not as simple as a .* regex. It's syntactically simple, but structurally complex? What's the right term for the complexity represented by how the stack of open elements interacts with self-closing or otherwise special elements?

Anyhow, I can't say I'm thrilled that some deeply nested subtree of divs, for instance, might be closed by an opening button tag just because it was itself part of a button, except when... well, lots of exceptions. It's what we have, I guess.

It's also not a (fully) solved problem; just earlier this year I had to work around an issue in the chromium html parser that caused IIRC quadratic parsing behavior in select items with many options. That's probably the most widely used parser in the world, and a really inanely simple repro. I wonder whether stuff like that would slip through as often were the parsing rules at all sane. And of course encapsulation of a document-fragment is tricky due to the context-sensitivity of the parsing rules; many valid DOM trees don't have an HTML serialization.


You could split the difference with a 397 TOLERATING response, which lets you say "okay I'll handle that for now, but here's what you were supposed to do, and I'll expect that in the future". (j/k it's an April Fool's parody)

https://pastebin.com/TPj9RwuZ


I interpret the "liberal" part of Postel's Law to mean "do your best to understand it, but that's less important than accepting it, possibly returning a helpful error message", and thus "The Go standard library couldn't parse the cookie, leading to cascading failures all the way up the stack" should never be a thing that happens.

The problem with Postel's law is exactly that the sender is never conservative, and will tend to use any detail that most receivers accept.

So the problem with Postel's law is that people don't follow Postel's law?

The problem is that it's a prisoner's dilemma. And you can't cooperate on a prisoner's dilemma against the entire world.

So, just be as conservative as possible when you produce data and as liberal as possible when you receive it. Your code will then require the least cooperation from *any* other code to be compatible.

Doing otherwise will require cooperation to adjust to the specificities clients expect, and you fall into the trap of the prisoner's dilemma.


Go and failing to parse http headers correctly should become a meme at some point.

One issue we had was the reverse proxy inserting headers about the origin of the request for the server behind it, like IP, IP city lookup, etc. That passed through a service written in Go that just crashed whenever the city had a Norwegian letter in it; it took ages to understand why some of our (luckily only internal) services didn't work for coworkers working from Røros, for instance. And again, that was not the fault of our Go code, but of how the stdlib handled it.


Wait til you have a legacy system and a newer system and need to, among other things:

- Implement redirects from the old login screen to the new one
- Keep sessions in sync
- Make sure all internal and external users know how to clear cookies
- Remind everyone to update bookmarks on all devices
- Troubleshoot edge cases


Literally everything in IT runs on decades-old principles and technologies. The world simply refuses to fix things because of the "if it ain't broke, don't fix it" philosophy. Look at TCP, HTML, JSON, SMTP... all good tech, but insanely old, outdated, and overtaxed for what it was invented for. When people joke that the entire banking industry runs on Excel sheets, they are really not far from the truth. Things will be shitty until they completely break down and people are forced to fix them. Look at JavaScript: this horribly stinking, steaming pile of green diarrhea that rules over the entire front-end is still being worked on and developed, and billions of dollars and countless work-hours have been wasted to make it somewhat usable, instead of just coming up with entirely new tech suitable for the 21st century. This is the entire internet, and tech in general.

> Apple Support

Are we sure the website wasn't just broken normally? I kid, a bit, but good lord does Apple _suck_ at websites. Apple Developer and, more often, App Store Connect are broken for no good reason, with either no error message or a confusing one.

Note: I'm typing this on a M3 Max MBP (via a Magic Keyboard and Magic Mouse) with an iPhone 16 Pro and iPad Mini (N-1 version) on the desk next to me with an Apple Watch Series 10 on my wrist and AirPods Pro in my pocket. I'm a huge Apple fanboy, but their websites are hot garbage.


But why wouldn't web pages written in ObjC be just awesome and easy to manage?!

https://en.wikipedia.org/wiki/WebObjects

I can still remember when they'd purposefully take down their store page for some godforsaken reason. The mind reels


They still do take the store page offline in the leading hours and during a keynote.

> What servers SHOULD send and what browsers MUST accept are not aligned, a classic example of the tragedy of following Postel's Law.

"Be liberal in what you accept, and conservative in what you send" is precisely the opposite of "SHOULD send MUST accept". This would be an example of the tragedy of not following Postel's Law.

If the specs followed Postel's guidance, it would then have read "Servers MUST send x and browsers SHOULD accept y".


> minefield

Cookies are a bit of a mess, but if you're going to use them, you can follow the standard and all will be well. Not so much a minefield, but a hammer; you just need to take some care not to hit yourself on the thumb.

I guess the confusion here is that the browser is taking on the role of the server in setting the cookie value. In doing so it should follow the same rules any server should in setting a cookie value, which don't generally allow for raw JSON (no double-quote! no comma!).

Either use a decent higher-level API for something like this (which will take care of any necessary encoding/escaping), or learn exactly what low-level encoding/escaping is needed. Pretty much the same thing you face in nearly anything to do with information communication.
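Python's stdlib is one example of such an API: `http.cookies` quotes and escapes on output and reverses it on load. A sketch (the exact escaped wire form is an implementation detail, so only the round trip is shown):

```python
from http.cookies import SimpleCookie

out = SimpleCookie()
out["data"] = 'has "quotes", commas; and spaces'
wire = out["data"].OutputString()  # quoted and backslash/octal-escaped
print(wire)

# The receiving side reverses the escaping transparently.
back = SimpleCookie()
back.load(wire)
print(back["data"].value)
```

The point being: the caller never touches the escaping rules at all, which is how it should be.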


I don’t understand how that’s not a minefield, it’s easy to go astray?

Well, we’re getting into how to choose metaphors here. Not being literal, there’s always room to stretch. Still, you try to choose a metaphor with characteristics congruent with the topic.

With a minefield, you can be doing something perfectly reasonable, with eyes open and even paying attention yet nevertheless it can blow up on you.

Here, though, there’s no special peril. If you just follow the standard everything will be fine.

If this is a minefield, then practically everything in software development is equally a minefield and the metaphor loses its power.

(Later in the article they touch on something that is a minefield — updating dependencies. There’s probably a good article about that to be written.)


Probably just semantics.

[comment intended for a different post, but too old to delete]

None of this explicitly has anything specifically to do with HTML.

It sure doesn't, that was a comment for a completely different post. I have no idea why HN posted this comment on this article instead of the PHP 8.4 article I thought I was commenting on O_o

It’s happened enough that I suspect there’s a rarely-seen race condition somewhere in the Arc code that runs HN.

> Handling cookies is a minefield

I know! You gotta let them cool down first. Learned this the hard way.



