Things not available when someone blocks all cookies (tomayac.com)
538 points by 0xedb on Aug 31, 2022 | 221 comments



I naively assumed from the headline that the author would complain about users blocking cookies. I was very pleasantly surprised to see a post written by someone who appreciates that some users will want to do this and is actively working to support delivering them a useful content experience!


I assumed the same thing and was indeed happy when it turned out to be the better thing.


[flagged]


> Aside from that, what the heck is so important that folks need to cookie that can't be just tied to the session token in the backend?

Literally any functionality that needs to persist state that you want to use offline and between reloads? See also Progressive Web Apps (PWAs).
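For example, a minimal sketch of the kind of state that should survive a reload (assuming a hypothetical #draft input on the page):

    // Restore a draft on load and save it on every edit, so a reload
    // or crash doesn't lose the user's work.
    const input = document.querySelector('#draft');
    input.value = localStorage.getItem('draft') || '';
    input.addEventListener('input', () => {
      localStorage.setItem('draft', input.value);
    });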


Using a username and password, or saving a bookmark to the current page both work.


You can store the state in the URL, can you not? That used to be done anyhow.


I have written a web app that does this so you can share links that carry the same state. You can't actually have URLs that long - especially in some browsers.

Having to base64 encode most/all of it so you can safely use any kind of data makes it even worse.

My use case was very basic and I quickly hit limits that made it so I couldn't "persist" all of the data I wanted.
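A rough sketch of the approach (not my actual code; assuming the state is a small JSON-serializable object):

    // Pack state into the URL hash so links can be shared...
    const state = { level: 3, theme: 'dark' };
    location.hash = btoa(JSON.stringify(state));

    // ...and unpack it on load. Base64 inflates the size by ~33%, and
    // some browsers cap URL length (historically ~2000 chars), so this
    // stops scaling very quickly.
    const restored = location.hash.length > 1
      ? JSON.parse(atob(location.hash.slice(1)))
      : {};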


Er, you'd store the data in your DB of course and only a comparatively small, encrypted index to it in the URL. You wouldn't trust the client to "persist" the data anyhow.


The question was about storing state in the URL. You are referring to storing a session ID in the URL. Quite different.


I really hope y'all are aware that URLs generally aren't treated as secret, but authenticated session state should be treated as secret.


I think the poster is referring to things like "offline Google Docs editing that can survive a browser shutdown or crash" which would be... difficult... to cram into a URL ;)


> what the heck is so important that folks need to cookie that can't be just tied to the session token in the backend?

Storing information on the client makes your site a lot more transparent about what it's keeping. If I have various user preferences in local storage you can tell that's what I'm doing and why, but if I just cookie you with an opaque token you have no idea what I'm tracking with that on the server.


While this can help, it needs to be done alongside network traffic analysis. If you type your name/phone number/address into (for example) a resume template, and you see that information stored in the local client, you don't know whether that information is also stored on the server.


Not disagreeing! This is primarily useful for sites that are trying to make it easy for their users to determine what's happening behind the scenes.


If there were a simple way to examine and pick through localStorage, maybe. Of course, if there were, localStorage would be intentionally obfuscated.


It's relatively straightforward. You open the console, type window.localStorage, and poke around. On the article I see:

    > window.localStorage
    Storage {dark-mode-toggle: 'light', cid: '2722...', length: 2}
Ignore the "length" (implementation detail) and you can see it's storing whether I've turned on dark mode and some id that's likely per-user. If I switch the page to dark mode I see instead:

    > window.localStorage
    Storage {dark-mode-toggle: 'dark', cid: '2722...', length: 2}


You could also just use the Storage tab in the devtools of your browser of choice, Chrome and Firefox at least both provide a GUI for looking at storage. Not sure what all Chrome's shows, but Firefox's shows Cache Storage, Cookies, IndexedDB, LocalStorage, and SessionStorage.


As someone working on a ticketing purchase flow, this is critical... can't exactly just turn people away! I was also surprised about localStorage throwing exceptions.
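A sketch of the guard this forces on you (hypothetical helper names, falling back to in-memory storage):

    // Even reading window.localStorage can throw when all cookies are
    // blocked, so wrap every access and fall back to a plain object.
    const memory = {};
    function safeGet(key) {
      try { return localStorage.getItem(key); }
      catch { return memory[key] ?? null; }
    }
    function safeSet(key, value) {
      try { localStorage.setItem(key, value); }
      catch { memory[key] = value; }
    }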


> can't exactly just turn people away!

Given that many cookie banners still (IMHO illegally, to be verified by courts) give the choices "accept tracking or fuck off", I guess you could. But it speaks well of you that you assume it would be a stupid idea. :)


I wasn't even using localstorage for tracking, it was for functionality.


I often think that instead of completely blocking cookies, it would be better to accept them and then throw them away. Same with localStorage. Just store it temporarily.


I’ve been using Cookie AutoDelete for that purpose for the last few years. It works flawlessly for me and brings me comfort in knowing that I am only being tracked online by my browser fingerprint and IP.

https://addons.mozilla.org/en-US/firefox/addon/cookie-autode...


Same, I absolutely love this extension. You can whitelist the websites you use frequently, and for everything else it's like Groundhog Day every day.

The cookie banners can be super annoying sometimes, but they are easily removed with uBlock Origin. I also frequently have to solve captchas, but it's not so bad. For example, every time I visit amazon.com to order toilet paper or whatever, it thinks I'm a bot, but at least amazon's captchas are less annoying than some of the others.


Pair it with "I don't care about cookies". This one clicks Accept on all cookie banners, and Cookie AutoDelete deletes them when the tab is closed.

https://addons.mozilla.org/en-GB/firefox/addon/i-dont-care-a...


Users who do not auto-delete are better served with Consent-O-Matic, this one clicks Deny on all cookie banners and forms by default, but is configurable.

https://addons.mozilla.org/firefox/addon/consent-o-matic/

https://github.com/cavi-au/Consent-O-Matic


This is great, thanks

P.S. If I'm using a separate Firefox container just for Amazon, it would isolate my Amazon cookie, right? So then I could just whitelist it and avoid captchas?


Yes. Each Firefox container has its own cookies.


If you go to Amazon a lot why not whitelist?


One of the last companies I'd whitelist.


I personally wouldn't whitelist any site that has my card details saved.


A bot with a bowel apparently.


One thing worth noting: at least when I installed it, the auto clean functionality (the bit that actually removes the cookie/etc data) is disabled by default. This means it needs configuration to actually do anything aside from manual cleaning.


The alternative would be that installing the extension immediately wipes all of your cookies, local storage, and other local data. Most users probably don't want this.


Came here to mention this. This is probably the most useful extension for preserving privacy while also not breaking things. Sites can store all the cookies they want now, but a few minutes after the tab goes away, it's like we never met. Very nice.


Isn’t that literally what private mode is? I thought it was a built-in feature.


I do the same! The whitelisting is the best part, since now i can stay logged in to sites that i trust not to track me.


Isn’t that essentially what private browsing is? I tend to just always use private mode in Safari.


How robust is that against server-side storage of user data?


I do this with Firefox's Temporary Containers. Every manually opened tab is a new browsing session, with no cookies etc. Closed tabs' data get deleted after 15 mins. Fantastic addon, and the usage is as seamless as it gets.

https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


I do this too! I pair Temporary Containers with the Containerise add-on, which lets me create persistent containers for a few specific sites that I want to stay logged in to.

This setup works well with one glaring exception... Cloudflare and their stupid boats. Using temporary containers for everything has really shone a light on just how much of the web Cloudflare is gobbling up. Cloudflare throws a captcha at me every time I visit any website they gatekeep for. I'm talking mostly about random sites that turn up in web searches. It's annoying enough that when I encounter a Cloudflare captcha, I just close the tab and try the next site.

Now I'm wondering if there's a way to eliminate sites behind Cloudflare from web search results with something like the uBlacklist add-on.


I do something very similar, but with just the built-in Multi-Account Containers addon. Although its usage is not the most intuitive.


Well, I've reacted to the article with an "of course Google's browser breaks everything if you try to block tracking".

There is absolutely no reason for letting the javascript know that you've blocked some functionality. It just adds new tracking.

Anyway, the sensible thing to do is to store the values for the lifetime of the page. Simply throwing them away can be an option, but it's a bad default. Non-ad-based browsers do get it.


> absolutely no reason

An editor warning you that your work hasn’t been saved?


Or a page settings/preferences dialog reminding you that saving your settings isn't going to work. Sure maybe the user "should know that", but not everyone is totally in command of their own browser's settings.

Or a game warning you that your progress will be lost.

There must be tons of legit use cases.


This is the answer. Blocking the APIs is just asking for a broken internet, and I have very little sympathy.

Furthermore, blocking the API is a detectable characteristic and increases the surface area of your fingerprint. It has exactly the opposite of the intended effect on privacy.


That's basically incognito/private browsing mode.


Are there ways to do this while whitelisting sites where you want to retain and keep login tokens?


Firefox has this when you enable "delete browsing data on close"; next to it is a "Manage Exceptions" button.


Cookie AutoDelete


Just wondering, is there a fully automatic way in e.g. FF to do this? Like right-click 'open in new private tab which automatically accepts all cookie dialogs' ?


"Delete cookies and site data when Firefox is closed"


Yes, this is the way. Only issue is needing to login to everything again on restart/reboot, but it's a small price to pay.


There are extensions that will aggressively delete cookies while letting you maintain a whitelist.


"Manage Exceptions…"


Maybe. I run a word game (https://squareword.org) that uses localStorage to store stats. This allows me to give users statistics without requiring any sort of account or signup. Even so, I often hear from people who have their stats cleared, for example by iOS evicting localStorage after 7 days of not visiting a site.
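The pattern is roughly this sketch (hypothetical key name, not the site's exact code):

    // Stats live only in localStorage: tolerate first visits, iOS
    // eviction, and blocked storage alike by falling back to zeroes.
    function loadStats() {
      try {
        return JSON.parse(localStorage.getItem('stats')) || { played: 0, won: 0 };
      } catch {
        return { played: 0, won: 0 };
      }
    }
    function saveStats(stats) {
      try { localStorage.setItem('stats', JSON.stringify(stats)); }
      catch { /* blocked: stats just won't survive a reload */ }
    }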


I pick between the two. Of late that responsibility has been pretty well taken care of by the Forget Me Not extension on Firefox, although I think it's endangered (like a lot of things that have to do with Firefox and its extensions.)

You can set rules with three clicks, four clicks if you want that rule to be temporary and thrown away on browser restart. You may choose between never deleting, deleting on browser close, deleting on tab close, or just throwing them away. The initial setup for default policy has a few UI issues, but the author put a lot of work into it.

https://addons.mozilla.org/en-US/firefox/addon/forget_me_not...



I agree. Private browsing takes care of this for me. I close the tab, cookies are deleted, and I occasionally confirm this actually happens just to be certain. There's no need to get all OCD or self-righteous about cookies when JavaScript is the scourge. I can't respect anyone who blocks cookies but doesn't surf with JavaScript disabled. Though HTML5 has nearly, but not quite, made JavaScript irrelevant, the scourge now seems to be built into HTML.


Agreed - Firefox has very legit, usable workflows for avoiding tracking while still having the web function for you. Chrome has, IMO, a purposely unusable approach. It's theatre: they give you the option, but that option breaks the web so badly that you're not going to want to use it. Which makes sense - they're an ad sales company; tracking is crucial to their business.


This is how I use Firefox, things only stored for the session, for the given container. Containers are better than first-party isolation, because many sites expect to share data with third-parties.

It is also better to fake API responses than to block access to them. In Firefox the privacy.resistFingerprinting option takes care of this. It was originally developed for the Tor Browser.


Unfortunately, that doesn't signal rejection of tracking by things like fingerprinting.


What if browsers made it so when you turned off cookies, instead of not allowing anything to be written, they instead gave each page you visited its own fresh cookie jar that was cleared when you navigated away?


This is loosely what Firefox's temporary containers [0] extension does. Each tab (with options to control whether a tab spawned from a parent tab should inherit the cookie-jar context of the parent) gets its own temporary context. I don't recall whether it clears the jar on navigating away, but you can have that jar cleared when the tab is closed, and you can configure new jars when opening a new tab to a new site origin (i.e. domain).

[0] https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


I use and love this extension. The main complication that would prevent it from being a mainstream solution to cookie clearing is automating the decision of when to create a new container vs continue to use the existing one when links are clicked. Going by domain name (using Public Suffix List) breaks a lot of SSO implementations, and the occasional payment processor/verification flow, and other situations that redirect to another site, but pass information (or save state to have on return) via cookies.


How well does this extension work together with the Multi-Account Containers addon (https://addons.mozilla.org/en-US/firefox/addon/multi-account...)?


It is designed to work with and supplement the Multi-Account Containers add-on. There are a few things that are annoying and could use better integration. First, all the temporary containers show up in the list of containers you have created. Most of the time you can ignore that list, so it isn't a big deal, but if you need to configure a permanent container (and you have 100s of tabs open) the clutter makes them harder to find in the long list.

When you tell Multi-Account Containers to always open a domain in a specific container, Temporary Containers knows about this, but for some reason it prompts you for additional confirmation. And sometimes this prompt breaks sites the first time through, adding an additional iteration when trying to configure a container to work with a site that uses multiple domains. Other than that they work together fine.

I would also recommend the Cookie Quick Manager extension, which lets you manage cookies on a per-container basis. If you have been using Multi-Account Containers by itself, then any links you open from a page will open in the same container (say, reading news sites from HN), and you likely have a bunch of cross-site cookies stored there. This extension will let you clear out any undesired third-party cookies that have gathered in the container. The UI is a bit unclear at first (which of these three trashcan icons scattered across the page deletes the subset of cookies I want?), so read the tooltips before clicking an icon.


I've been using both for quite some time now. They work very well together and I don't see myself browsing much without them.

edit: remove redundant "together"


I'll give it a go, thanks!


That's effectively what Incognito does, except not at that granularity to make it simpler for users to understand.

It probably can't be every site being separate since that breaks a number of things sites do (like opening 3p windows to complete transactions), but it could probably be done in some kind of logical group manner. Maybe by using different window colors to signify the partitions.


Similar to suggestions that Android offer the option to provide fake location data to apps that require it without good reason: It's a fantastic idea and seems easy to implement, but might make it less painful for users to opt out of all the tracking that makes the internet so friendly to advertisers and other groups who would like to surveil your activity.


This feature is long overdue in both Android and iOS. The amount of location data being harvested is outrageous.


This is absolutely unacceptable. No.

The solution to "software authors routinely collecting more info than they should" is not "accept the behavior as irredeemable, and just normalize it".

The answer is "make ot way more visible to users when it is done, snd make it harder for software authors to do/maintain." Anything else is just a tacit acknowledgement and grant of legitimacy to the behavior in question.


Just the opposite. What this would normalize is app developers no longer being able to depend on location data being accurate, which would destroy the location-data market by turning it into a market for lemons.

What you propose, simply yelling at developers to stop requesting unneeded permissions, would have no useful effect. They won't change. And most customers won't care and will blindly click accept no matter what if they think they have to to access their game.

Never let the perfect be the enemy of the good. Assume that other actors in the world will respond the way they usually do, not the way you think they should. That is the way to get things done.


No it wouldn't, and it'd be nigh impossible for the handset to make that judgement call. That's not a computable problem except by the User reading the code, understanding it, and toggling the fake data switch on.

Besides which, I don't care if you feed me garbo GPS/GLONASS if I've wardriven and reverse-indexed local wireless nodes to GIS coords. There's more than one way to get at coordinates, and there are enough hands in the jar in terms of being honest with location data by default that it'd be an uphill battle to teach users just how many ways a mobile handset can potentially leak location data.

The problem is that the data is retained at all. Until that data is considered toxic/a liability, there will be no respite from it.


Oh no, not the location data sales market.


Not sure what your point is or what your interpretation of my comment was. So just to clarify, I think the location data sales market is a bad thing and that is why I advocate a solution that would destroy that market and give us back location privacy. It has zero benefit to users and a lot of scary downside.

(For those who are unaware, a lot of free apps pay for themselves by secretly recording location history and then selling it. This is why companies and even government agencies can buy mass location data when they want and query it to find out who was where when. This has been going on for years and is well known and documented and in no way secret. Just for anyone who missed it somehow.)


Giving app developers bogus data would make it harder to use* and maintain, so that's a clear win, IMO.

* use as in use for the intended business purposes, not harder to write the code


The problem is not just app developers. Telcos sell it too.


There's an extension called cookie autodelete that does that. https://github.com/Cookie-AutoDelete/Cookie-AutoDelete


Firefox Focus on Android basically does that by default.

If you set it as your default browser, then all links you click will be opened there. Hit the back button when you're done, and everything is deleted.

Another use case is if you quickly want to open some website to look something up: open the website, maybe click a few links, and when you're done everything is wiped.

You can keep a regular Firefox (or Chrome or whatever) for surfing to those websites where you want to keep some state.


Yes. And instead of requiring sites to ask "accept cookies", let it be a browser option when the site attempts to store cookies, like "OK for 10 minutes".


Exactly. The great thing about cookies is that they are a tool, completely in the hand of the user. The site gives you a piece of text and says "show this to me next time if you want me to remember you". And then the browser can choose to continue to use them or not.

Such a weird choice to put the onus on the websites to ask whether to give the cookies, rather than the browser to ask whether to save them. I'm a big supporter of privacy legislation like the GDPR, but this is asinine, as it needs me to trust every website I visit to actually honor my choice.

Really, the original sin of cookies is that they were designed to be transparent to the user. That was a mistake that promptly needs to be rectified.


Cookies might be in the hands of the user but tracking as a whole is not. If cookies become less reliable then there are many other ways, which is why the GDPR requires consent for any tracking not just cookies.


I absolutely agree. I'm arguing for cookies as an inherently consensual form of tracking and legislation to keep all the other ones in check.


A combination of Firefox Enhanced Tracking Protection and Cookie Autodelete works quite well here. Along with I Don't Care About Cookies to hide the inevitable slew of consent banners.

I do use Multi-Account Containers and Temporary Containers too, but typically when I want multiple simultaneous sessions, rather than wanting my current session to be cleaned up.


This is the exact use case for sessionStorage, or for a session cookie (one with no valid Expires/Max-Age, which is what "expires=0" effectively yields). But correct usage depends on the knowledge & goodwill of website authors.
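A sketch of both options (hypothetical key name):

    // Per-tab: survives reloads, gone when the tab closes.
    sessionStorage.setItem('wizardStep', '2');

    // Session cookie: with no valid Expires/Max-Age it lasts until the
    // browser itself is closed.
    document.cookie = 'wizardStep=2; path=/; SameSite=Lax';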

For privacy purposes, Incognito mode achieves the same effect without any of the hassle. Maybe turning off cookies should not even be an option anymore?


What if legislators had targeted the ~3 browsers instead of the countless websites to enforce their policy? Things would actually work on a technical level, and we wouldn't be bombarded with dozens of useless cookie warnings. Would have been nice.


Assuming the website wants to do something on the user's first visit, it would otherwise start doing it on every page load. Letting the website know that the user has disabled cookies can help avoid that and improve the user experience.


Letting the website know anything at all lets it track you, as we learn time and time again.


You cannot realistically prevent the website from knowing that you are using a privacy-conscious browser, which is why e.g. Tor Browser and Firefox's enhanced tracking protection don't attempt to do that, but instead only try to make all their users look the same. Trying to emulate the growing number of next-generation cookie technologies without adding more privacy leaks is a waste of time.


It already has your IP.


Isn't that basically what a session cookie is?


Session cookies usually last until you close your whole browser, rather than just until you navigate away.


> All I am using is some innocent localStorage and IndexedDB to persist user settings like the values of the sliders or the chosen color scheme.

When you turn off cookies you're telling the browser not to let sites persist information. Otherwise, whatever goals you had in disabling cookies would just be worked around through these other technologies.


I totally understand your point and I think I agree, except—well, the setting says “disable cookies”. It should do what it says. If the goal is to disable all persistent information, the setting should be called “disable persistent storage”.

Of course, I also know why it’s not called that: a lot of people know what cookies are at this point, at least relative to the number who'd understand “persistent storage”. A toggle named “disable cookies” is better for usability.

On the other hand, trying to guess what the user actually wants based on a different preference is virtually guaranteed to cause confusion of its own. Should the setting also disable Canvas, since that’s commonly used for fingerprinting? And will Google make the same decision in Chrome v104 as they do in Chrome v110?

I can’t decide whether the primary issue is:

• The name of the setting.

• The undeserved cultural prominence we’ve given “cookies” specifically.

• The modern web in general.


They could call it "disable cookies and other persistent storage (more information)" with more information providing their reasoning why they are bundled in plain English. There is no reason the setting has to have a two word name with no description.


Just like Windows Control Panel has "Printers and Devices" - because I guess people don't think of printers as devices?


I guess that well-behaved devices took offense at being put in the same category as printers. And I understand them.


Yeah if I was a device and someone called me a printer I tell you hwat!


From what I saw, that was indeed one of the more common complaints about Windows 8. Nobody could find printers, and people thought Windows 8 didn't support them, because it merged everything into just "Devices" (along with the short-lived Devices "charm", an intended one-stop print shop/"universal Print button", RIP). It didn't help that Windows 8 simultaneously tried to update the ancient Windows printer driver model and remove some of the worst habits of printer driver vendors (background apps that are always running, bespoke updaters, weird unregulated UI extensions to Win32 common dialogs via backdoor hacks, etc.), so there was some manufacturer-encouraged hysteria that Windows was moving too much cheese and coming to take people's beloved printers away (which did result in Microsoft killing that nicer printer driver stack initiative of Windows 8 and its user-focused experience).


Why can Microsoft seemingly not commit and stand by decisions in the way Apple does?


Many of Microsoft's early successes seemed predicated on listening to user and developer feedback.

It is simple to believe that they've taken that as a strong core principle of the company. The over-reliance on deep telemetry metrics, for instance, seems kind of a natural evolution of a company that cherishes as much feedback as it can get.

It seems reasonable to think that the immensely negative feedback on Windows 8 or the sad market response to Windows Phone sparked so many shifts in priority precisely in the way that any heavily feedback-focused (even slightly neurodivergent) person might over-react to negative feedback and try to do everything "not that" to make up for it, even if those were good ideas and the negative feedback was more concerned about execution of them rather than the ideas themselves.

I've been accused of "fanboying" Microsoft at times because I like pointing out the good parts of ideas that Microsoft has had over the years (like how the Charms bar was a good idea poorly executed) not to blow smoke up Microsoft but to remind them, as a feedback oriented company, of ways they've over-reacted to negative feedback, to wonder where they would be if they didn't just kill such good ideas at the first sign of disinterest/complaint but instead gave them room to grow/evolve. Sometimes it sounds like they need a lot more positive feedback to be a better company because all they seem to hear is the hate of some of the noisier crowds.


> The over-reliance on deep telemetry metrics, for instance, seems kind of a natural evolution of a company that cherishes as much feedback as it can get.

Telemetry is almost the opposite of user feedback, as it completely disregards the human element of the user. You may be able to tell what is used often and where users drop out, but you don't know why, and you don't know what is important to your users. So what telemetry ends up being used for, more often than not, is backing up the developer's own preferences by seemingly supporting them with data without actually doing so.


Microsoft seems to have a crapload of competing teams, with each fiefdom vying for attention and authority over decisions, whereas Apple, at least from an outsider perspective, looks more like an authoritarian, top-down corporate hellscape. Another very important distinction that explains this is that Apple doesn't care much about backwards compatibility, whereas keeping backwards compatibility, even at very high cost, was a core part of Microsoft.

Both methods of corporate behavior have their advantages and disadvantages.


I'd guess it's more likely that there used to be a "Printers" control panel, when the main thing it controlled was the DB-25 parallel printer port on an ancient IBM-compatible PC, and later they added other devices to it. People accustomed to adjusting printer settings might scroll straight to "Printers" and be confused if it got renamed to "Devices." Even more nefariously, it's reasonable that there's some software out there that does a string comparison for "Printers" that ignores extra characters like " and Devices" which would be a reason not to rename it completely.

Similarly here, cookies have been around since Netscape in 1994. IndexedDB is as new as 2015; window.localStorage is from ~2010 IIRC. For backwards compatibility, it's totally reasonable to use "Cookies" or "Cookies and local storage", and expect that to extend to any new developments.


Dumbing down user interfaces has always been the trend on the internet.


“Cookies” is shorthand for “persistent storage” because nobody outside of web developers knows other methods exist. When people, laws, banners, etc. refer to cookies, they mean “any technology that stores information on the client side systems”.

Whatever mechanism is used is irrelevant to the meaning/concept.


> “Cookies” is shorthand for “persistent storage” because nobody outside of web developers knows other methods exist

Most people don't know what "cookies" means either. We shouldn't make the problem worse by giving them false information.


Or we could just recognize that for the general population, "cookies" are any client storage by a website, and for technical people, cookies are a subset of options for client storage by a website.

The public never needs to know the technical distinction because it is both

1) Arbitrary: "cookie" could just as well have been a general term for client storage, and

2) Insignificant: Virtually nothing the public is concerned about hinges on specifically how client data is stored, except for lawyers trying to get around cookie laws or to deceive through the text and UI of cookie consent pop-ups.

> We shouldn't make the problem worse by giving them false information.

So what I'm saying is that it's not a problem. It's very easy and accurate enough to tell users that they have to allow cookies in order to use some webpages offline. They can make all of the informed political decisions and personal decisions they need to make. I'd be happy to even further complicate the situation by referring to localstorage as the type of cookie that you'd need to make a lot of pages work offline.

edit: I mean, you can save your cookie in localstorage. For me, that makes it a superset, and the name "local storage" makes it clear that it's storing things where you are. If the public weren't calling all client storage cookies, I'd recommend that they start calling all client storage localstorage.


> Insignificant: Virtually nothing the public is concerned about hinges on specifically how client data is stored, except for lawyers trying to get around cookie laws or to deceive through the text and UI of cookie consent pop-ups.

IMO, your exception is what makes the distinction significant. Defining a cookie two different ways gives companies a powerful new tool for purposefully misleading and manipulating end users.

Yes, many users are already confused. But if we actually make it acceptable to define cookies more broadly (but only when it's convenient for those in power), we're going to make the situation much worse.


It's a bit late, these things have been called "supercookies" since Flash started to support persisting data outside the browser's control.


Right, if anything, we should campaign for the technical definition of "Cookies" to encompass everything as well and just call the old thing legacy HTTP cookies or whatever when you need to be specific.


As a non-web developer, I remember years ago when disabling cookies meant only cookies -- then learning that there were other forms of persistent storage. It made me angry and I felt betrayed.

Calling all persistent storage "cookies" matches the popular understanding of what "cookie" means. I don't see the problem with accepting that and using the term accordingly.

It may not be technically correct, but this is a point where the technical distinction isn't important. If a user disables cookies, what the user is expecting is that persistent storage won't happen at all.

Renaming it to disabling "persistent storage" would be fine, too, except that it would be necessary to explain what "persistent storage" means.


I disagree, but not in the technical sense. People have been talking about cookies since they were invented, so most people who’ve used a web browser know the word and have a vague sense that they’re used to store information on their computer, and are often used for tracking.

The fact is that “cookies” now has a colloquial meaning that’s different from the technical definition, and both meanings are valid.


Well, there is the distinction that cookies get sent over the network while localStorage is (obviously) local. But of course, a website can work around this by manually sending localStorage data in requests, so it makes sense that if people want privacy, you block both. Sucks though.
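i.e. nothing stops a page from doing something like this sketch (hypothetical endpoint and key), at which point localStorage is a cookie in all but name:

    // "Local" storage, manually replayed to the server on every load.
    fetch('/collect', {            // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ cid: localStorage.getItem('cid') }),
    });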


I think of cookies as a mechanism to send data across the network. That mechanism can be used to simulate persistence on the client, among other things.

But at least some of this conversation revolves around what the public perception of cookies is, and as for that, I really don't know. I wouldn't presume that anyone else knows either unless they've conducted a poll.


>I think of cookies as a mechanism to send data across the network. That mechanism can be used to simulate persistence on the client, among other things.

I can't get with that definition. A server that attempts to set a cookie is very explicitly asking for state persistence on the client in the otherwise stateless HTTP protocol exchange. It literally has no other purpose.


Cookies are a way for clients (and servers) to add data to HTTP requests. It's a header, plus the expectation that the client will add this data to subsequent requests sent within a certain timeframe.

Consider that a similar effect can be achieved by adding an id to every link in the body of a response. But it's still just a link. In fact, before cookies this is how you associated requests with each other into a "session". And indeed, this is still a way to do user tracking across domains without cookies, and in a way that is impossible to block in general.

What a thing is used for is not the thing itself.
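A sketch of that pre-cookie technique (hypothetical parameter name; crypto.randomUUID assumes a secure context):

    // Thread a session id through every link instead of using a cookie.
    const sid = new URLSearchParams(location.search).get('sid')
             || crypto.randomUUID();
    for (const a of document.querySelectorAll('a[href]')) {
      const url = new URL(a.href);
      url.searchParams.set('sid', sid);
      a.href = url.href;
    }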


>plus the expectation that the client will add this data to subsequent requests sent within a certain timeframe

That's the definition of "client-side state". Cookies have no purpose other than maintenance of client-side state.

https://www.rfc-editor.org/rfc/rfc6265

"This document defines the HTTP Cookie and Set-Cookie header fields. These header fields can be used by HTTP servers to store state (called cookies) at HTTP user agents, letting the servers maintain a stateful session over the mostly stateless HTTP protocol."


> “Cookies” is shorthand for “persistent storage”

"Cache" is also “persistent storage” to me.


The current phrasing in Chrome is:

Block all cookies (not recommended)

- Sites can't use cookies to improve your browsing experience, for example, to keep you signed in or to remember items in your shopping cart

- Sites can't use your cookies to see your browsing activity across different sites, for example, to personalize ads

- Features on many sites may not work

That seems long enough that they could put in text about how this is storage in general and not just cookies.

Firefox just has "Cookies: All cookies (will cause websites to break)" so there's not really a place for that sort of text.


How about —

  Allow websites to store data on your computer

  [ ] short-term
  [ ] long-term or permanently

  Some websites need to store data for some features, or to work at all. Storing data can also enable them to track you.
And I'd be inclined to blame the state of the web in general.


I'd be tempted to nuance that:

  [ ] Cookies - stored locally and sent to web-sites automatically
  [ ] Local Storage - not automatically sent to web-sites
In all cases I veer towards sharing explicit details. If users choose not to understand them fully, that is fine. I don't like 'dumbing down' or simplification that takes away explicit details with no easy way to get them in the same place as the 'simple' exposition.


That’s a distinction without a difference IMHO.

Local storage (or Cache API, IndexedDB) + Service Worker = Cookies (simply add a header to every request)

Local storage + DOM manipulation to add search params with the stored content = Cookies (apart from first navigation)

And on the flip side

Cookies set using JS + Ignore cookies server side = Local storage

It’s almost like the object/closure equivalence.
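A sketch of the first equivalence (assumptions: a hypothetical header name, and the page posting its stored id to the worker once, since service workers can't read localStorage directly):

    // sw.js: replay a locally stored id as a header on same-origin GETs.
    let cid = null;

    self.addEventListener('message', (event) => { cid = event.data.cid; });

    self.addEventListener('fetch', (event) => {
      const req = event.request;
      if (!cid || req.method !== 'GET') return;
      if (new URL(req.url).origin !== self.location.origin) return;
      const headers = new Headers(req.headers);
      headers.set('X-Emulated-Cookie', cid); // hypothetical header
      event.respondWith(fetch(req.url, { headers }));
    });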


Especially with single-page applications, I would love for there to be a way for a page to have either access to persistent store or network connections, but not both. A site could load all resources, then announce to the browser that it would like to have access to whatever it stored the previous time. The browser would grant access to the local information, and simultaneously take away access from ever initiating a network connection again. A newly loaded copy of the page would start in the same state, able to pull in new resources, but unable to read or write local information until it again gives up the right to exfiltrate.

It would be a one-way street that way. The page can take any network information with it into the private cave, but nothing from the cave may ever come out, nor may it even know if the cave is empty before taking that irreversible step.


I disagree. For example, a simple todo-list web-app doesn't need any cookies but stores everything in localStorage. Cookies are made for a server, localStorage for a client.


What happens when the web app decides to send the server the entire contents of the localStorage every time it loads? Now we are back to emulating cookies with localStorage.


For regular users "Cookies" is a catch-all term for any persistent identifiers and tracking. The exact API used to persist cookie-equivalent data shouldn't matter. Excluding some tracking methods based on a technicality is a gotcha that erodes users' trust.

I think the real issue here is that Google chose to throw errors instead of turning those APIs into no-ops.


> I think the real issue here is that Google chose to throw errors instead of turning those APIs into no-ops.

This behavior pre-dates Chrome. I get "Uncaught DOMException: The operation is insecure" in Firefox today, and if I'm reading the patch correctly [1] this dates to when it gained localStorage support in 2006. Quickly looking I can't find when this was added to WebKit, though.

[1] "return NS_ERROR_DOM_SECURITY_ERR;" https://bugzilla.mozilla.org/attachment.cgi?id=234539&action... from https://bugzilla.mozilla.org/show_bug.cgi?id=341524


>For regular users "Cookies" is a catch-all term for any persistent identifiers and tracking.

Geez I hope that's not true. Cookies and localStorage serve a very different purpose. localStorage is exactly what it says: local storage. Cookies are sent to the server with every request and are quite wasteful in comparison.

I would expect my browser to be accurate in its labeling in the user settings.


I agree it's a misnomer and it's confusing for developers to find that disabling cookies breaks local storage, but I think it's understandable. When a user disables cookies their intent is presumably for the website to not be able to track their device, and it's quite easy to work around this with local storage (just add a locally stored identifier to each request made).

At the end of the day it's about either surprising the user or developers, and the user wins out (as they should, imo). One could also argue that developers will eventually find out that the functionality they implemented is broken, while a user who thinks they're not being tracked may never realize they really are, just more sneakily and on a technicality.


I can appreciate the argument, but it still bothers me. Don't you think there's an argument to be made for educating the user, rather than reinforcing their misunderstanding?

localStorage is just one of many ways to track users. You could also store data in indexedDB, the new File API, installed service workers, or even the browser cache if you're clever enough.

If we start lumping all forms of tracking (storage) under "cookies", that's going to get real messy, real fast.


They have different primary purposes, but they can both be used to engage in tracking and other privacy-destroying behavior.


I don’t think so, else we should change the name. Cookies are sent to the server on every request so it has tracking implications that locally caching something like dark mode preferences does not.

One issue is that there’s a hysteria over cookies which muddies the water.


It’s trivial to emulate cookies with other Web APIs (storage + service worker, for one). You’re focusing on the label of the toggle and not user intent. If a website can send information about my visit 2 days ago to an upstream server, clearly my expectation of “Disable cookies” is broken.


Then it should be called "disable client persistence" or something clearer, IMO. "Cookies" is already a ridiculous jargon word, especially for the general public.


Firefox calls it "Cookies and site data".

Seems clear and to the point.


Thing is, though, it can be worked around as long as JS is enabled.

Client side fingerprinting plus server side data storage and you get the same functionality in a roundabout way.


So you get to share preferences with everyone else that matches your fingerprint :)


That's the punishment for blocking local storage. :P


s/punishment/reward


/s Even better we could store login information this way. What could go wrong?


You don't even need JS: blocking cookies doesn't (but really should) disable the browser cache, which can reasonably reliably store information.

Note that you can't use the cache to work around browsers blocking third-party cookies because all the major browsers fragment the cache by site.
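The classic sketch of that trick is ETag abuse (a minimal, hypothetical Node example):

    // The ETag the browser cached comes back as If-None-Match on every
    // revalidation: a cookie-free per-client identifier.
    const http = require('http');
    const crypto = require('crypto');

    http.createServer((req, res) => {
      const id = req.headers['if-none-match'] || `"${crypto.randomUUID()}"`;
      res.setHeader('ETag', id);
      res.setHeader('Cache-Control', 'private, max-age=0, must-revalidate');
      if (req.headers['if-none-match']) {
        res.statusCode = 304; // keep the cached copy (and its ETag) alive
        return res.end();
      }
      res.end('pixel');
    }).listen(8080);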


I would actually love to see a demo of this used for comedic effect. "Unlogin: use your browser fingerprint as your password. We already know who you are, so why put up with the hassle of typing a password?"


Some websites already do something similar: if you are logging in from an "unrecognized" (unfingerprinted) browser, they might force you to authenticate with 2FA and then give you the option to "trust this browser" for future logins. You might still need to log in with your password, but not 2FA.

Of course, that fingerprint can break when your browser auto-updates to a new version.


I actually wrote up an April Fool’s parody based on that premise, modeled after Google’s NoCaptcha announcement.

http://blog.tyrannyofthemouse.com/2021/04/leaked-google-init...


I'd argue that being able to write to and read from storage for the lifetime of the session (i.e. until you close the tab) is not "persistence" in the sense that any privacy-conscious user cares about.

If anything, making these features break loudly enables sites to detect that they can't be used for persistence and allows them to find ways to circumvent that. Contrast this with cookies which are silently discarded if the server sends them anyway.

It's not at all surprising that Google's browser would choose to loudly break these features in a way that a) allows sites to detect that they're unavailable and b) discourages users from using this setting.

This reminds me of when third-party Android releases would add a way to fake sensor data (e.g. GPS) when denying permissions to apps so the apps wouldn't be able to lock out users unwilling to agree to those permissions. A feature that of course Google would never add to stock Android as it is important for their business model that apps can trust their tracking data to be genuine.


> It's not at all surprising that Google's browser would choose ...

This is how all browsers have handled it, for as long as localStorage has existed. See, for example, this Firefox discussion from 2006: https://bugzilla.mozilla.org/show_bug.cgi?id=341524


> I'd argue that being able to write to and read from storage for the lifetime of the session

The web API doesn’t cater to that. If you only need storage to persist for the session you can just use memory.


Came here to say this. If anything, a cookie that is not cross-domain is more innocent than localStorage.


What would make it "more innocent"?


Perhaps the comment was about HttpOnly which cannot be read by random ad / tracking scripts on the page. It is still storing data, but now the server needs to actively transmit it to other parties.


A cookie is a tiny plaintext file limited to 4KB. Your browser keeps an inventory of them, lets you read or delete them, and automatically clears them out regularly.

Local storage can contain just about any type of user data without letting users know, and theoretically forever.

If you have to store local data, and you cared about "innocence" (author's word), it seems to me a cookie set to the local domain leaves less room for shenanigans. Hence why Google probably disables it when you disable cookies as well.


Looks like local storage has a max size of 5MB and cookies 180 * 4KB = 0.7MB, so a similar order of magnitude. AFAIK, cookies can be set to expire in hundreds of years by sites and local storage can be cleared the same way as cookies.


I mean, you can make a cookie that is anything you want, but browsers accept what they accept. For Chrome that's a <4kb cookie with a max-expiration of 400 days.

I think localStorage being slightly better or worse than a local cookie is up for discussion. I just think you're weird if you think local cookies are bad but localStorage is good.


I always use a wrapper around local/session storage[1] to avoid this problem. Then you have your app sync settings with storage, never read from it except during startup.

It becomes impossible to implement basic UI features like remembering open panes, etc when storage is disabled though. With the current policies around cookies - no cross-domain reads, Safari's ITP - there is no real need to turn them off for privacy reasons, for the average user at least.

[1] https://www.npmjs.com/package/localstory


Basic UI features shouldn't need storage. In-memory or in the URL is enough. If you put it in storage then it is actually a (cookie) session, with some sort of configuration - that's not "basic UI".


Sure, in-memory works until the page is refreshed. Storing data in the URL is an option, but also messy and cumbersome to manage especially with bookmarks. localStorage / sessionStorage is clean and dead simple, and it actually allows an app to be truly stateful, so it’s quite unfortunate that the trend is to avoid the “evils” of storing any kind of data on the client. What, should we go back to the days of session IDs and server-side storage for even the most trivial data?


This is nothing new: on a webapp, you may not have a session. That's all.


If you define "basic" as not including "this remembers how you had it set last time" then, sure.

"In the URL" works for that, sort of, though not if you want it to still work for users that are just re-finding you through Google or typing in your address.


> "this remembers how you had it set last time"

That is a session.


And? All desktop and mobile apps are able to persist data. The purpose of disabling cookies was privacy protection, not a 'storing local data is a crime' stance.


"(On a tangent, MDN is completely broken with cookies blocked, too. I was about to report this problem (because I care and love MDN), when I discovered a PR is already under way that fixes the Issue. Thanks, @bershanskiy!)"

This would imply that "MDN" is in a state of rapid flux, potentially "breaking" and then being "fixed" (or not) over short periods of time. However, it appears from the edit history that most of it is actually static and has not changed since 2019 or 2020.^1

Perhaps the "completely broken" catchphrase invoked by the author refers to an issue with "cosmetics" (window dressing) not content. I use a text-only browser and have not found MDN to be either partially or completely "broken". I send an HTTP request for a file and I receive the contents of the file. For me, it works. No cookies or Javascript required.

1. https://raw.githubusercontent.com/mdn/content/main/files/en-...

If I want to check browser compatibility, which can change from time to time, I can use Github or the MDN website.

For example,

https://raw.githubusercontent.com/mdn/browser-compat-data/ma...

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cl...


I created an extension that limits the maximum lifetime of cookies; I was surprised to see some have a lifetime of years. https://addons.mozilla.org/en-GB/firefox/addon/fresh-cookies...


Why does the browser pretend to have localstorage but then throw an exception when it's used?

Surely it would be better to simply pretend not to support localStorage? Then all sites built with feature detection would work correctly without needing to special-case this.


I can see it both ways. I think there's an opportunity for the developer to identify that localStorage is unavailable at runtime, and turn off certain features in the UI as a result, or write their own wrapper layer that does the 'throw-away' behavior.


The Atlantic is really annoying in this respect. When you open an article in Firefox Focus, it fully renders for a moment, but then apparently some javascript loads at the end of the cycle which clears the page.


Reader mode fixes this for me


Yeah, adding try/catch around those has been a good practice for a while. I think there was a time when, if the site was running in a private window in Safari, localStorage would also throw exceptions.


I handle cookies on my proxy, where I change them to per-session cookies, and interestingly the sites whose cookies are "necessary for the site to work" are working flawlessly (/s)


> All I am using is some innocent localStorage and IndexedDB to persist user settings

Hrm, isn't this the exact definition of a cookie?


Different APIs, but all in the realm of (client-side) storage.


God bless this man for making this -- but you know what's CRAZY to me? That no one has done this before.

We all visit websites constantly and governments (particularly in EU) talk endlessly and vaguely about cookies and yet almost NO ONE really gets it. I work on this specific problem and it is SUCH a mess.


An interesting facet of this is the implicit trust by the author towards the downstream tooling and libraries. He is not alone.

We talk about how we need to make sure dependencies are secure, but I venture to state, it is often just brushed over. Yes, supply chain security (now to rinse my mouth out).


Weird to throw an exception when localStorage is not available. It would be much more logical to have it undefined or null. Code working with localStorage is more likely to check whether it is available ("not falsey") than to try to use it and fall back if it throws.


No, it's correct. Null checks are for checking that a browser API is supported; a supported API might still be unavailable - blocked or full or whatever.


One thing that annoys me about Firefox's Total Cookie Protection is that I offer some third-party embeds. What I did on those is set a cookie and probe for its existence to check whether the user has third-party cookies disabled. If they do, the embed displays things as if it's unknown whether the user is logged in to the service, rather than as if they're definitely not logged in.

This worked fine, but now Firefox containerizes third-party resources rather than actually blocking the cookies, so there's no longer a way to detect that the actual site cookies just aren't being delivered in a third-party context, short of user-agent sniffing.
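For reference, the probe was along these lines (a sketch, hypothetical cookie name):

    // Inside the embed: try to set a cookie, then see if it stuck.
    document.cookie = 'probe=1; SameSite=None; Secure';
    const blocked = !document.cookie.includes('probe=1');
    // Under Total Cookie Protection the cookie *does* stick, just in a
    // partitioned per-site jar, so this check no longer detects anything.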


Instead of blocking cookies, it would be nice if there were something like "Certified Humane" for websites... and you could stick within an internet of websites created by people who are not dicks.


I think the problem here is "potentially blocked".

How do you know what's potentially blocked? Maybe it's listed clearly somewhere in the browser docs, or maybe it's not. Did they change it between versions? Did you even know about this issue in the first place?

I know people like to think of checked exceptions as a failed experiment from the dark past of object-oriented programming, but this situation is a great example of how statically-typed (or at least statically-checkable) side effects are a huge improvement in code safety.


This feels kinda unscalable though...

Wouldn't it make more sense to change the browser to make cookies and localStorage non-persistent and isolated, but otherwise available programmatically and to XHRs?

i.e. so that they can exist in isolation as long as the tab is open. This would be compatible with anything that doesn't require cross-frame or cross-tab persistence (which is usually all users care about).


Just keep your data in memory in that case, if you don't need persistence.


Looking forward to the day some webmaster sets the background to pictures of cookies when you block them all.


Why is the website failing with unhandled errors, but working when they are try/catch'd? Either way the errors are being thrown, and the functionality isn't available. Is the browser not able to handle the situation itself more gracefully?


Because the site doesn't need the functionality to work, but an exception terminates the javascript execution of things that the site does need.


Thx!


Does anyone know if this happens only in Chrome, or also in the rest of the popular browsers?


Fun fact: the code example with the glow effect was created with [Carbon](https://carbon.now.sh/)


I wonder if instead of blocking cookies we could make a browser extension to share the tracking cookies (and only those) with random people on the web, to confuse trackers?


Not directly related to this article but doesn't aggressively blocking tracking in this way create a tracking monopoly for browsers, extensions and apps?


Why throw an exception instead of providing real working versions of these things that only persist data for as long as the page is open?


I recently read a thread about privacy here. One point was that the one best thing you can do is disable JavaScript. So I decided to try it. I installed Brave on my phone and disabled pretty much everything, including all cookies.

My thinking was, all I do is browse HN, hn.algolia, and lobsters. Those should work, right? Well lobsters works perfectly, including collapsing comments.

HN loses the ability to collapse comments. But algolia is the worst. Not only does it require JS, being an SPA, but it refuses to work until you enable cookies! My theory is that it reads the settings (popular, 24-hour) from a cookie, and plain dies if they're not there.

On another note, and to my pleasant surprise, a lot of the web works perfectly fine, and feels a lot snappier, including even Google search. And many of the annoying cookie and paywall popups never appear, since they appear to be implemented in JS.

So yes, if you haven't tried it, I recommend you do. You can always whitelist sites you trust or really need to use.


I always accept all cookies because they have never had any negative impact on my surfing experience ever. Cookie banners and Privacy banners are much more of a problem than cookies ever were.


Is there a way to use csrf tokens without cookies?


How about, you know, handling error conditions?


I just want to take this opportunity to thank "adtech" and everyone working in it for making local storage way more complex than it otherwise needed to be because you couldn't/can't stop yourselves from abusing users.


A technology which leaves the door open for exploitation will certainly be exploited by someone. In a self-governing society, people would turn their backs on the most exploitable technology; in the actual world, I just don't see that happening.


Where there are humans and an opportunity for exploitation, an exploit will always be used. It's just how we are, how our brains are built, at least until the exploit no longer yields any social benefit.

Meanwhile we see just how effective targeted advertisement can be for both money and political influence.


I’m in adtech and we manage to do ads in a completely user respecting way within the retail space. We monetize on search traffic without user data, cookies, or local storage. The only browser features we leverage are click events and img tags.

Though I appreciate your frustration, your aggression is a little off target. :)


> The only browser feature we leverage are click events

Do the users want their click events fed into an advertising engine? Did you ask them? If you made this opt-in, how many would say, yes, please track my clicks in order to advertise to me? Even if it's anonymized/aggregated.

A huge amount of advertising is enabled by tracking users against their will, exploiting the fact that many users aren't aware of what's going on, don't know how to stop it, or aren't as invested in their preference as the adtech companies are in their revenue. "A man is always right in his own eyes". If you're smart it's easy to justify this stuff to yourself because you're getting paid, but that doesn't make it right.


I should have known my comment would only make you more aggressive. Walked into that I guess.

You are right in that a huge amount of ads leverage user data at the expense of the user.

The point I’m trying to make is that not all involved in the advertising technology are exploitive. We do zero ad targeting based on user data. You make a search for specific products, we take the response and shuffle the order a bit based on vendor campaigns. That’s it. Nothing is associated or even collected from the user. It’s all system metrics.
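
Roughly, the shape of that re-ranking; all names here are illustrative, not our actual code:

    // Illustrative only: nudge items from sponsoring vendors up the
    // list by a campaign-specific boost; no user data is involved.
    function applyCampaignBoosts(results, campaignBoostByVendor) {
      return results
        .map((item, rank) => ({
          item,
          rank: rank - (campaignBoostByVendor.get(item.vendorId) ?? 0),
        }))
        .sort((a, b) => a.rank - b.rank)
        .map(({ item }) => item);
    }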

The greatest harm that my team does is hurting optimized relevancy, which is inherent to advertising but also something we work hard to alleviate.

That misused proverbs quote is a nice touch. Shows a lot of self awareness.


> The point I’m trying to make is that not all involved in the advertising technology are exploitive.

I don't doubt that you're being truthful here. The problem is that the vast majority of adtech is extremely exploitive, and there is no way for a user to tell the "good guys" from the "bad guys". So all adtech must be treated as hostile.


I'm sorry if this hurts your beliefs, but in what way are aggregated and anonymized data exploitative? Every "offline" store does it. How do you expect businesses to make profits if they can't look at what drives their revenue without bothering every single customer with consent requests?


Data claimed to be anonymized can usually be easily deanonymized.


> I’m in adtech and we manage to do ads in a completely user respecting way within the retail space.

Ads by definition try to influence the user to do things they would not have done on their own. They cannot ever be user respecting.


Good point. I'm pretty sure he meant "privacy respecting" when he said "user respecting".


A completely user respecting way is no ads. Try again.


In what way is it more complex than it has to be?

I don't work in front-end, but whenever I've used localStorage for personal projects it's as simple as getting and setting keys and values, like a plain JavaScript object (with the caveat that values are always strings).
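
E.g., with an illustrative key name (objects need JSON encoding since values are stored as strings):

    localStorage.setItem('prefs', JSON.stringify({ theme: 'dark' }));
    const prefs = JSON.parse(localStorage.getItem('prefs') ?? '{}');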


It’s described in the first section of the article.


Care to elaborate? I just re-read the article and honestly still don't know what complexity OP was referring to.

Do you mean the fact that cookies and LocalStorage (or other APIs able to persist data) are often considered the same thing when using the term "cookies"?


This is a good example of how the discourse on this subject seems to have twisted itself around. No one serious ever thought "advertising" was a bad thing, not really. It was the potential abuse of the stored data people were worried about. But that's hard to explain, especially without good examples to which to point[1]. So fast forward a few years and... now it's "adtech" that's the enemy in isolation.

That seems unfortunate. And futile anyway, because the truth is that society as a whole actually wants advertising, in all its forms. We may find individual ads annoying but we still make our purchase decisions based on those ads and use those channels to hawk our own wares.

[1] The truth is that the big ad-driven internet companies have been, on the whole, actually fairly good stewards of this stuff. People don't trust them out of principle (or, just as often, tribalism), but at least so far the dystopia hasn't arrived.


The big ad companies definitely have not been good stewards.

Here's an ongoing example that should get someone put in jail:

Pharmaceutical companies target ads for addictive drugs at the people that are most likely to become addicted to those drugs.

(For the victim of this that I know personally, it wasn't painkillers. As far as I know, there have been no repercussions for the manufacturer of the drug in question or the ad networks.)

See also: The millions of comments on any thread on HN that discusses SEO, dark patterns, cookie consent, or apps that interfere with advertising revenue streams.


"won't somebody think of the drug addicts" is a novel (to me) argument for why all ads are bad


In this case, the drug addicts are people with medical conditions that have been intentionally pushed to abuse their medication by the pharmaceutical company and Google, etc.

The argument is closer to "encouraging and profiting from illegal drug abuse is not 'responsible corporate stewardship'".


That's not true -- many people think advertising as practiced is abusive in its own right.


"Society as a whole" and "many people" are not the same entities. Again, there was a time when serious discourse about internet privacy focused on abuse potential and how to provide data security guarantees. Now it's just "ads bad", and I think that's a real problem with the discourse. To repeat, you can win fights over privacy (c.f. the GDPR, which while pretty flawed was a real and tangible win for users). You'll never win a fight over "ads". Ever.


The vast majority of Americans are concerned about data collection:

https://www.pewresearch.org/fact-tank/2019/11/15/key-takeawa...

There's no point in additional discourse on the issue. We may as well be arguing about whether cigarettes cure lung disease.


But "many people" and "no one serious", which is what you claimed, are also not the same.


> You'll never win a fight over "ads". Ever.

https://en.wikipedia.org/wiki/Cidade_Limpa

Even in cities that do not outright ban it, outdoor advertising is heavily regulated in many parts of the world. Compare your typical European and Asian big-city streets. Clearly, many people do not like ads even without the tracking.


My personal view is that most ads drive materialism and fulfillment through consumption, which is evil. We also have research correlating familiarity and trust, so ads are there to brainwash you into trusting brands you have no reason to trust. If the ad business were boiled down to ads that are good for society, it would be so small there'd be little conversation about it. The dark side of the ad business is what keeps everyone in it; the abusive user tracking is just the newest horrible dimension of it.

> People want ads

Says who? Sounds like rationalization from someone who works in the industry. I could be wrong.


> We may find individual ads annoying

I actually don't really mind ads as long as the volume of them isn't too great. But I greatly mind the tracking that comes along with them.

> at least so far the dystopia hasn't arrived

I disagree. From my point of view, we reached the dystopia stage quite a while ago.


Just don't use Chrome if it's user hostile. Use something else that's not.


Uh... the point of the article is that Chrome is proactively honoring its promise not to store user-identifiable data, even in ways that front-end developers wouldn't normally expect.



