There's something called "Macaroons" that can be used for this.
"Macaroons: Cookies with Contextual Caveats for Decentralized Authorization in the Cloud" Arnar Birgisson, Joe Gibbs Politz, Úlfar Erlingsson, Ankur Taly, Michael Vrable, Mark Lentczner ; Network and Distributed System Security Symposium, Internet Society (2014)
~2003, assuming it's related to the paper[0] of the same name (and sharing an author/diagrams). The paper was very influential and helped change minds on the subject.
Although we don't know who the specific reviewers were, they were from the Usenix Security program committee, and so from the elite of the field. This rejection captures perfectly the tone of dismissal common in academia at the time. The common wisdom was that capabilities were a failed and unworkable idea that we need not bother further discussing. As you can see from the date on that email, when we got this rejection, we immediately posted it publicly.
My sense is that the paper together with this referee rejection, posted and discussed publicly, caused the initial influence. The embarrassment from that rejection was not on the authors. Within two years, many still thought capabilities were wrong. But the sneer was gone. Arguments could be heard. I dare say it marks the beginning of the capability revival in academia.
There are some real issues with implementing revocable capabilities on Unix and Unix-like operating systems.
For example, take access to a file. Let's say you hold a capability that grants file access: while you hold it, you can open the file and read and write to it. The file descriptor can be made to refuse access once the capability is revoked, cutting off reads and writes; you'd really need a new error code for this, since no existing errno explains what happened. But that isn't enough. On Unix systems that support mmap(), your "access" capability now has to be intertwined with the paging system: every page or mapping has to be capability-marked and checked. Paging is asynchronous, so what happens when the capability is revoked? Is there now a hole in the process's address space? How on earth do you communicate that to a process that may hold pointers into the mapped data? Which programming language could support this? And what happens when the process forks: is the capability duplicated or not?
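The errno point is easy to demonstrate. If we simulate revocation by having the kernel stop honoring a descriptor (here crudely approximated with close()), the only thing the process can see is EBADF, which says "bad descriptor", not "your authority was revoked":

```python
import errno, os, tempfile

# Simulate "revocation" of a file capability by closing the descriptor
# out from under the code that holds it. This is only an approximation:
# a real revocation would happen kernel-side, but the error-reporting
# problem is the same, since POSIX defines no errno for "revoked".
fd, path = tempfile.mkstemp()
os.write(fd, b"payload")
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 7) == b"payload"   # capability still valid

os.close(fd)                          # the "revocation"
try:
    os.read(fd, 7)
except OSError as e:
    # The closest the kernel can say is EBADF ("bad file descriptor").
    assert e.errno == errno.EBADF
    print("revoked read failed with:", errno.errorcode[e.errno])
os.unlink(path)
```

And this is the easy case; the mmap() questions above have no equivalent answer at all, because a faulting load has no error-return path.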
Similar problems exist with shared memory and semaphores -- revoking an associated capability could deadlock a system.
In my opinion, to make a system usable, a programmer or user must be able to build a mental model of how things work. Capabilities and resources that can suddenly disappear make for a challenging environment to work in.
Just look at all the issues / problems that plague pluggable devices.
The tone of your post doesn't come across as very professional. "when we got this rejection, we immediately posted it publicly" - were you one of the authors, contributors or involved in writing it?
I have read the paper, and the rejection notes you linked to, and I can't evaluate them (lack of time + lack of experience in this field) but they don't come across as snooty dismissals. Feels more like guys on all sides trying their best.
Kind of. There's still a lot of work to do. OAuth and SAML are quite common, but they cannot federate capabilities; it is not trivial to issue somebody a capability to act on your behalf without an entire infrastructure of assistant services and trusted providers. That said, each decade looks better than the last, but I still encounter the myths covered in the paper, especially incorrect beliefs about revocation, on a regular basis.
The opposite is true. Virtually no web auth systems are capability based. Maybe security SaaS providers are using it, but relatively speaking they're not representative of the web proper.
So I haven't been exposed to capability systems before so this might be a dumb question.
In an OS like KeyKOS, how does the OS protect against privilege escalation using side-channel attacks similar to how encryption keys are extracted via hardware side-channels?
AFAIK, there are two ways to represent a capability. One way would be through a cryptographic token or key, where knowledge of that token or key allows using the capability; in that case, a side channel attack which leaks that token or key would allow privilege escalation. The other way is to represent the capability as an index in a per-process or per-thread table managed by the operating system, similar to how Unix file descriptors work. In this case, a side channel attack does not help, since knowledge of the capability is not enough; you have to convince the operating system to add it to your capability table.
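The second scheme is easy to sketch. The capability objects live in a kernel-side per-process table; the process only ever sees small integer indexes, so a leaked index is useless to any other process (all names here, CapTable/grant/invoke, are illustrative, not from any real OS):

```python
# Hedged sketch of the table-based representation: like Unix file
# descriptors, the index a process holds means nothing outside the
# kernel's table for that specific process.
class CapTable:
    def __init__(self):
        self.tables = {}              # pid -> list of capability objects

    def grant(self, pid, cap):
        slots = self.tables.setdefault(pid, [])
        slots.append(cap)
        return len(slots) - 1         # the index is all the process sees

    def invoke(self, pid, index):
        try:
            return self.tables[pid][index]
        except (KeyError, IndexError):
            raise PermissionError("no such capability for this process")

kernel = CapTable()
idx = kernel.grant(pid=1, cap="write:/tmp/log")
assert kernel.invoke(1, idx) == "write:/tmp/log"

# An attacker in pid 2 who side-channels the index gains nothing:
try:
    kernel.invoke(2, idx)
except PermissionError:
    print("leaked index is useless outside the owning process")
```

This is why the table representation is robust against pure information leaks: the secret isn't in the process's memory at all.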
I'm not the person to answer but let me try. Anyone correct me please.
Side channel attacks are AIUI about information leakage. They extract info on the state, not modify state[0]. I don't think privilege escalation is really related because that needs to modify state you should not have access to[1].
Capabilities at the OS and software level are software enforced, which is built on assumptions about the security (correctness guarantees) of the hardware. If the hardware leaks, the software is vulnerable unless/until patched to try to paper over the leaky cracks in the hardware.
Again, AIUI.
[0] Though such extracted data might be a security token that can subsequently be used to actively attack, such as logging in via someone's newly pwned admin password.
[1] I suppose rowhammer is exactly that, though, an attack that intentionally leads to a handy state change for the attacker.
> I don't think privilege escalation is really related because that needs to modify state you should not have access to.
But in a capability-based OS, privilege is all about the keys you hold, no? So assuming a malicious process can copy keys from other processes, can the OS detect and prevent the malicious process from using the key?
I tried reading up on KeyKOS[1] and there was very little info on what exactly a key is and how it is verified by the OS, at least that I could see.
You are right - see my note [0] but I missed the bleedin obvious. Thanks for pointing it out.
In that case the hardware is effectively broken and all that's left is attempted software mitigation. Which may/may not work, but surely will cost performance. And extra complexity. And consequential possible bugs.
From memory a key can be represented as a random 64-bit integer taken from a sparse set. IOW you'd have bugger all chance of guessing another currently used 64-bit int, and if you tried and guessed wrong, the OS (which would keep a list of caps allocated to each process) would say when you tried to use it "and where exactly did this come from, eh?" and kill you. But that's from memory and guesswork.
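A toy version of that scheme, matching the description above (the issue/use names are made up for illustration, not KeyKOS API, and the "kill" is just an exception here):

```python
import secrets

# Sparse-capability sketch: keys are random 64-bit integers, the kernel
# remembers which key it issued to which process, and any unknown or
# mismatched key is treated as a fatal fault.
issued = {}                         # key -> owning pid

def issue(pid):
    key = secrets.randbits(64)      # 2**64 possibilities, only a few in use
    issued[key] = pid
    return key

def use(pid, key):
    if issued.get(key) != pid:      # "where exactly did this come from, eh?"
        raise PermissionError("bad capability: process gets killed")
    return "ok"

k = issue(pid=7)
assert use(7, k) == "ok"

try:
    use(7, k ^ 1)                   # a blind guess: astronomically unlikely to hit
except PermissionError:
    print("blind guess rejected")
```

With one key live out of 2**64, a random guess succeeds with probability about 5e-20 per attempt, and the kill-on-miss policy stops brute forcing cold.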
The main myth we discussed over coffee and biscuits back in the compsci staffroom was that it's expensive as all hell on the computers we have now (a good handwaving, often used to say "one day, in the future, somebody will make it work").
Looking at it strictly from the outside, it looks like there's a ton of indirection to make it work. People usually do want revocable capabilities, which implies that every capability grant has one or more levels of indirection that go with it.
Indirection's costs haven't gotten cheaper in well over a decade.
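The usual shape of that indirection is a forwarding object: the holder gets a proxy rather than the raw capability, so the grantor can cut the link later, at the cost of one extra hop on every use. A minimal sketch (Revocable is an illustrative name, not any particular system's API):

```python
# Revocation by indirection: the holder's "capability" is really a
# pointer to a cell the grantor controls. Revoking clears the cell;
# every use pays the extra dereference plus a liveness check.
class Revocable:
    def __init__(self, target):
        self._target = target

    def __call__(self, *args, **kwargs):
        if self._target is None:
            raise PermissionError("capability revoked")
        return self._target(*args, **kwargs)

    def revoke(self):
        self._target = None

read_secret = Revocable(lambda: "secret")
assert read_secret() == "secret"   # one extra hop, but it works

read_secret.revoke()
try:
    read_secret()
except PermissionError:
    print("revoked")
```

Chained delegation stacks these up, which is where the "ton of indirection" worry comes from.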
Considering now every time you want to do anything on a computer you're getting interpreted code in a memory heavy gui environment to send a HTTP request over a TLS connection to a remote computer via a flaky wifi connection, the cost of doing an indirection might be relatively much less than it was a decade ago. In those days, you might recall, most of the time you clicked in your software it would be doing direct lookups in the machine's comparatively fast spinning disk using native compiled code in a gui api designed with a considered tradeoff between developer's convenience and memory usage - designed in the days when there were fewer developers and much less memory.
Of course, half the time those indirections will probably be remote to the server you're already waiting on...
Absolutely not. Turing-complete languages are subject to Rice's Theorem, meaning that any non-trivial property of what a program does is undecidable in the general case. Also, if it's Turing-complete then anyone can make a ridiculous mess of an area with little or no tooling. Configuration should be Turing-incomplete unless you have a very good reason to do otherwise.
Undecidability means loading your configuration might never complete, or might consume an unbounded amount of resources. That makes it a security concern: a bad config can DoS your system.
"Macaroons: Cookies with Contextual Caveats for Decentralized Authorization in the Cloud" Arnar Birgisson, Joe Gibbs Politz, Úlfar Erlingsson, Ankur Taly, Michael Vrable, Mark Lentczner ; Network and Distributed System Security Symposium, Internet Society (2014)
https://research.google/pubs/pub41892/
"Google's Macaroons in Five Minutes or Less" https://blog.bren2010.io/2014/12/04/macaroons.html
A Javascript implementation: https://github.com/nitram509/macaroons.js