It wasn't that long ago that I could package silicon with a little quartz window.
Anyone who remembers old-skool EPROMs will know what I mean.
When doing my VLSI course at UCL in the 1980s we all had to make a "chip". Mine was a 16-bit counter with a ripple adder and accumulator.
And we had them fabbed (yes, probably unbelievable in this day and age) and had to test them on the lab bench!
BTW I'm not talking about FPGAs here. We actually had a class visit to the fab plant and put on bunny suits to go into a level-3 clean area and look at our UV masks in the projector room.
So, obviously to save money, everyone in the class had a little bit of silicon real estate, plus some shared tristate registers through which we could access our own designs, each in its own address space.
Now my point is: when the chips were delivered we were all super excited. We looked at our silicon through a high-power bench microscope. Everyone had signed their name (initials) in silicon next to their plot on the die. I remember marvelling at the smallest writing I had ever done in my life!
So, here's part of how you make semi-secure hardware.
Hardware you can actually look at using a simple lab microscope, no need for X-rays etc.
Just sample randomly from the wafers and have those parts packaged in quartz, while the other 99% are packaged normally in opaque resin/ceramic. Make the randomly sampled parts available for public inspection.
If you sign up for TinyTapeout [1], you can design a small digital circuit and have it manufactured on actual silicon for about 100 dollars. It's much like your scenario, where everyone gets a tiny patch on the larger chip. You get the chip at the end.
Even on older nodes this is harder than you would expect, because you can't see past the first layer with visible light. There is a more recent blog post from the same author looking at using infrared microscopy to do some verification of chips, though. You can't see the low-level details, alas, but at least it makes it harder to hide something when it has to blend in with what the chip is supposed to look like.
Even if your process is destructive, you can still use statistics to get a probabilistic verification. For example, buy 10 chips and destructively verify 9 of them, chosen at random; if all are clean, you have roughly 90% confidence that your remaining chip is clean as well. More realistically, you would sample a certain percentage of a large batch, check that none are compromised, and then take corrective measures to secure your supply chain. This kind of technique might be significant in highly sensitive applications like electronic voting.
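A minimal sketch of that sampling arithmetic, assuming a simple hypergeometric model where an attacker has tampered with a fixed number of units in a batch (the batch sizes and tamper counts below are made up for illustration):

```python
from math import comb

def detection_probability(batch_size: int, bad_units: int, sample_size: int) -> float:
    """Probability that a random destructive sample of `sample_size` units,
    drawn from a batch of `batch_size` containing `bad_units` compromised
    units, contains at least one compromised unit (hypergeometric model)."""
    clean = batch_size - bad_units
    # P(sample is entirely clean) = C(clean, k) / C(N, k)
    p_all_clean = comb(clean, sample_size) / comb(batch_size, sample_size)
    return 1.0 - p_all_clean

# One tampered chip hidden in a batch of 10, destructively verify 9:
print(detection_probability(10, 1, 9))      # 0.9 -- the attack is caught 90% of the time
# A larger batch with a 1% tamper rate, sampling 5% of it:
print(detection_probability(1000, 10, 50))  # ~0.40
```

As the second call suggests, small sample fractions over large batches catch tampering much less reliably, which is why the corrective measures and repeated sampling matter.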
The fairly obvious solution here is to stop integration at the point where a truth table can be made for a single chip. That would allow you to exhaustively verify your design. It would run slower, but there is no way I can imagine that would allow a lowly logic circuit to suddenly become something else and simultaneously know its place in the whole circuit, have access to an interesting datastream, and be able to exfiltrate it.
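As a toy illustration of that exhaustive, truth-table style check, here is a sketch in Python. The 4-bit adder and the `drive_and_read` harness are hypothetical stand-ins for whatever simple part and bench interface you actually have:

```python
from itertools import product

def golden_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Reference model of a hypothetical 4-bit adder: returns (sum, carry_out)."""
    total = a + b + cin
    return total & 0xF, (total >> 4) & 0x1

def exhaustive_verify(drive_and_read) -> bool:
    """Exhaustively compare the physical part against the golden model.
    `drive_and_read(a, b, cin)` is a placeholder for your bench harness:
    it should apply the inputs to the chip and return (sum, carry_out)."""
    for a, b, cin in product(range(16), range(16), range(2)):
        if drive_and_read(a, b, cin) != golden_adder(a, b, cin):
            print(f"MISMATCH at a={a} b={b} cin={cin}")
            return False
    return True  # all 512 input combinations matched

# Simulated "chip" for illustration; on a real bench this would toggle pins.
print(exhaustive_verify(lambda a, b, cin: golden_adder(a, b, cin)))  # True
```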
But once circuitry gets more complicated and chips become more integrated, you can do just that, because the only thing that needs to change is the contents of one single chip.
There was a big scare around small components used to insert new code into target machinery:
But that - if true - mostly hinged on being able to dynamically alter the software loaded into a high-level design whose purpose, wiring diagram and target were intimately known to the attackers. All they had to do was camouflage their part as something innocent; any component-level audit would show that this part - which apparently wasn't on the circuit diagram in the first place - performed to its normal specifications.
Initially I was quite skeptical but then a while later this appeared:
That makes the attack sound more plausible. The practical upshot is that if you outsource your fabrication to an unknown third party, you can never be sure what you get unless your skills are orders of magnitude better than theirs. This goes for normal hardware, and for high-value targets and networking components you will have to be extra careful. But I'd be just as careful with, for instance, large battery packs or anything that can be turned on or off remotely. (But that's getting away from component-level bad stuff and into the realm of 'normal' attack vectors into embedded hardware.)
Terrifying on the same level as Ken Thompson's "Reflections on Trusting Trust".
In theory even a capacitor could be listening for a serial activation code and then go short or open circuit to modulate a new signal onto a wire.
It's a real problem, to put it mildly. I've done enough work with electronics to know I'd have a hard time identifying such a device (or something more up to date), and I have at least a basic understanding of what it would take to do this. Someone who is unwary doesn't stand a chance.
Just to add a point of view, most of what we do with computers is easily verifiable (or of no concern). So the part that actually needs to be trustworthy is relatively small.
But I have no proposal of architecture that includes an external verification processor.
I mean, his Stanford work (when he was a professor) was the basis for the series of graphics engines SGI released. It's interesting to think about the Times Before, when people didn't appreciate graphics acceleration or matrix calculations nearly as much as we do today.
What to do was understood as early as the Evans and Sutherland Line Drawing System 1 in the early 1970s, which had 4x4 transform hardware. But the GPU was a cabinet the size of a mainframe CPU. There was a series of mainframe-sized GPUs from E&S. Jim Clark worked for E&S. Affordability was a few decades away.
> at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed. Even if we published the complete mask set for a modern billion-transistor CPU, this “source code” is meaningless without a practical method to verify an equivalence between the mask set and the chip in your possession down to a near-atomic level without simultaneously destroying the CPU.
Sounds about right to me. And if you can't trust the hardware, it goes without saying that you can't trust the software running on it.
They go on to explain/promote their "Betrusted" verifiable hardware approach, which is good and all, but ultimately I feel suffers from the same fundamental problem (hard to verify the actual hardware you're using without destroying it). Some mitigation strategies are offered, though. I'm glad they aren't over-promising, and are clear that these are steps in the right direction rather than full solutions.
I think most people (myself included) don't have the technical expertise to know if a given verification is valid, either. Such people would need to offload that task to someone else — someone they must trust, yet again.
The notion of trust is so profound and slippery, even without involving computers. It's fascinating to see it play out in a more rigorously-observed fashion through modern technology.
I would be very happy with a method of verifying hardware even if it included destruction. I think my biggest risk is that the hardware is compromised before I become its owner. I would feel comfortable buying 5 of everything, picking one at random to keep and verifying/destroying the other 4.
Depending on the results of this process, the cost of the hardware, and the sensitivity of the applications, I could see myself increasing the number of items I would need to purchase. I can also see "verification" companies that test a statistically significant sample of devices before reselling them to individuals.
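A rough sketch of that sample-size arithmetic, assuming tampered units show up independently at some rate (both the tamper rate and the confidence target below are made-up inputs):

```python
from math import ceil, log

def sample_size_for_confidence(tamper_rate: float, confidence: float) -> int:
    """Number of devices to destructively test so that, if a fraction
    `tamper_rate` of the supply is compromised, at least one compromised
    device shows up in the sample with probability >= `confidence`
    (independent-draws approximation)."""
    # Solve (1 - tamper_rate)^n <= 1 - confidence for n.
    return ceil(log(1.0 - confidence) / log(1.0 - tamper_rate))

print(sample_size_for_confidence(0.01, 0.95))  # 299 devices to test if 1% of supply is tampered
print(sample_size_for_confidence(0.10, 0.95))  # 29 devices if 10% is tampered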
Assume this was true. Now, for the sake of argument, say someone makes a mask with an error. How would you identify where in the manufacturing system the fault lies?
I wonder if you only need to be able to verify a small portion. The stuff that deals with passwords and encryption and transferring money between bank accounts and sending and editing secure messages.
I’m not sure I care if the code that helps me view 3d graphics has been compromised or not.
A point that often goes unmentioned with "trustable" hardware devices is the abuse potential. As soon as there is a practical avenue for increasing "trust" in any kind of device, it will be immediately abused by DRM and other noxious actors. Try rooting your Android phone: not only will you not be able to lock the verified-boot keys to your own (to prevent others from flashing malicious firmware), you'll get greeted at boot with a "can't trust this device" message (great, I already know I'm rooted; I'd love to know, though, if someone like the US CBP tries to mess with my device), and not only will Netflix severely degrade playback, but other applications (Google Pay, most banking apps and a ton of games) will refuse to run entirely.
I'm not sure which world was better, the old PC world where rootkits and other similar malware had free rein, or the modern "trusted" world.
> I'm not sure which world was better, the old PC world where rootkits and other similar malware had free rein, or the modern "trusted" world.
Indeed, I would choose the old PC world, though I concede that most users would not. The saddest part is that they aren't mutually exclusive; we shouldn't have to sacrifice one for the other. You do lose a little bit of "security" by trusting the user (and therefore anyone with physical access), but this is something I think the owner of the device should get to decide. If it's not configurable, we should be very clear about the fact that when you "buy" it you're merely renting, with no specified return date.
> I'm not sure which world was better, the old PC world where rootkits and other similar malware had free rein, or the modern "trusted" world
I'll take the old world over the new one, no contest. Sure, I had to actively work to protect myself from bad actors in the old one. But in the new one, I also have to protect myself from the devices themselves. That's a much more difficult prospect.
>not be able to lock the verified-boot keys to your own (to prevent others from flashing malicious firmware)
Am I mistaken in believing that installing GrapheneOS on a Google Pixel phone completely takes over the phone including taking over the phone's verified-boot subsystem with the result that GrapheneOS can tell if the boot process has been tampered with, e.g., by an evil maid?
A post like this is why I'm excited about Oxide (https://oxide.computer/). I know it is a huge undertaking, but imagine a 100% open-source server, from the BIOS, to the network drivers, to the OS. I know this doesn't completely answer the trustable-hardware question, but it at least answers the concerns about how trustworthy the drivers and BIOS are.
The best approach is to assume it is compromised and take steps to limit the potential damage.
The Germans assumed Enigma was secure, and as a result lost their U-boat fleet and the Afrika Korps. This was despite a lot of evidence that it was compromised, but they kept using it for critical information.
Code-breaking also cost Japan the battle of Midway.
People should just assume their phone is infected with Pegasus.
Own a fab, design and verify your own chips, produce them under a strict control and cross-checking, same for all other parts, assembly, transportation, storage, and operation. Background-check everyone you employ. This is basically how high-assurance military hardware is handled.
The question for the rest of us is: can we buy trustable hardware? Even more specifically, can we buy it for the peanuts we're used to paying, instead of buying the whole wad of production chains as the military does?
Here the answer is negative in the strict sense: we can't be certain. But the answer is mostly-somehow-positive in many cases, because nobody cares enough to add subtle behaviors to MCUs that go for cents and control light switches and kids' toys. The more advanced your machine is, though, the more reason there is to bug it, e.g. to have the ability to remotely disable affected devices, or worse.
In theory it should be possible to manufacture ICs at home, similar to 3D printing. And a (very!) few people have even done so.
A whole lot of progress in IC manufacturing is making things smaller. No reason one couldn't include the fabrication machinery itself in that quest.
Another approach is to fabricate ICs in a blank state and have end users 'burn' in their function. FPGAs are a thing, and iirc even some fuse-programmed ones exist. Some gate arrays consist(ed?) of bulk prefabricated logic with a layer of customer-specified routing on top (not sure if those are still made). Something like that could lend itself well to 'DIY at home'.
All these would sidestep much of the "trust 3rd party" problem.
So yeah it's possible. Technology (or more precisely: equipment on the market) just isn't quite there yet.
Trust is a human problem, not a technical one, and it all comes down to the root. Provenance matters more than all else. You could provide me with detailed schematics of your entire board, and the literal VHDL files for every chip it runs, and I still wouldn't "trust" you if that trust hasn't been earned elsewise. You could be giving me absolutely anything. And sure, I could verify it... but then that's no longer trust.
> I still wouldn't "trust" you if that trust hasn't been earned elsewise.
And since a large swath of the tech industry (and most of the consumer tech industry) has spent years actively demonstrating that they can't be trusted, even if by some miracle they all suddenly decided to start being decent, it will take decades to even get back to the neutral point on the trust scale.
> And sure, I could verify it... but then that's no longer trust
That is the common form of trust in the open-source software community, but AFAIK it's not practical for hardware, hence impossible there. In software we can build from source and verify the checksum to make sure the binaries aren't manipulated, but we can't do that for hardware.
We know we can't trust humans so we use software to protect us.
The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.
The “tamper with product, then return product to e-commerce site” is an interesting attack vector. It seems impractical for any platform to defend against such an attack, short of recycling returned products instead of reselling them. Has anyone written more about this from the defender’s perspective?
For software, I don't think it's all that impractical. You could cryptographically verify that the factory firmware is signed and valid, and if it isn't then you re-flash it before reselling. This should be (and often is) a standard practice for "refurbished" goods. Even at a small scale you should already have the means of verifying the software/firmware loaded on the device because you don't want to ship a broken product. Using that to verify should generally speaking be fairly easy to do.
For hardware it does get a little more difficult, but if the attacker isn't allowed to run their own (modified) version of the firmware/software, then such attacks are very, very difficult to pull off. You would want to design your product in such a way that you can check the storage/flash contents without having to trust the product to self-report, otherwise a modified software build could easily lie to the verifier about its hash.
So for most use cases, both hardware and software are mitigated substantially by verifying the software build.
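A minimal sketch of that kind of check, assuming you can dump the flash with an external programmer rather than asking the device to self-report; the manifest, version names and digest are placeholders, not any real product's values:

```python
import hashlib

# Hypothetical manifest of known-good firmware builds: version -> SHA-256 digest.
KNOWN_GOOD = {
    "fw-1.4.2": "<sha-256 of the signed factory image>",  # placeholder digest
}

def verify_flash_dump(path: str) -> bool:
    """Hash an externally read flash dump and check it against the manifest.
    The dump must come from an independent reader (e.g. a flash programmer),
    not from the device's own firmware, which could lie about its contents."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    matches = [version for version, good in KNOWN_GOOD.items() if good == digest]
    if matches:
        print(f"Flash dump matches known-good build {matches[0]}")
        return True
    print("Flash dump does not match any known-good build")
    return False
```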
An important note here is that it doesn't need to require distrusting the user. Of course modern tech companies love to distrust the user because it gives them substantially more control over how their product is used, so the "security" justification of using anti-tamper technologies is highly attractive to them.
I don't really see that being a feasible attack vector for any attacker - you can't really control who's gonna buy the tampered-with thing, so it can't be a targeted attack (unless the buyers are all high-value victims, in which case, don't resell; otherwise, add a random delay on reselling), and it's gonna be much more expensive than alternative methods for a non-targeted attack.
I always thought this would be a good attack vector for hardware bitcoin wallets.
You make a subtly backdoored version (for example, tamper with the RNG so it only has a few bits of entropy) and you either return it, or sign up as a third-party seller and ship it to Amazon yourself, to be commingled with existing legitimate inventory.
Then a year or two later, because you know all but a few bits of the private key, you grab the money and run.
No need to control who gets the backdoored devices, just ship enough that you luck into some rich buyers.
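To make the "few bits of entropy" point concrete, here's a toy sketch; the key-derivation function and fingerprint scheme are invented for illustration and are not how any real wallet works:

```python
import hashlib

ENTROPY_BITS = 20  # a "few bits": ~1 million candidates, seconds to enumerate

def derive_key(seed: int) -> bytes:
    """Toy stand-in for the wallet's key derivation from its RNG output."""
    return hashlib.sha256(seed.to_bytes(8, "big")).digest()

def brute_force(target_fingerprint: bytes):
    """Attacker side: enumerate every seed the crippled RNG could emit and
    check which one reproduces the victim's (publicly visible) fingerprint."""
    for seed in range(1 << ENTROPY_BITS):
        if derive_key(seed)[:8] == target_fingerprint:
            return seed  # private key material recovered
    return None

# Victim unknowingly uses the backdoored RNG:
victim_seed = 123_456
fingerprint = derive_key(victim_seed)[:8]
assert brute_force(fingerprint) == victim_seed  # attacker recovers the key offline
```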
Does diverse "compilation" not address this problem in the hardware space?
There are multiple sources of 555-timer equivalent and replacement chips, for example: multiple fabs implementing the same spec should produce equivalent results.
Though I suppose you can't really verify the absence of unintended extra function, which is really how you hide malicious behavior (unless you did something like space missions do, where you have multiple systems running in lock-step and having their results cross-checked).
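A quick sketch of that lock-step cross-checking idea in software terms; the three "vendors" and the hidden trigger are of course contrived:

```python
from collections import Counter

def lockstep_vote(implementations, *args):
    """Run the same operation on independently sourced parts/implementations;
    return the result only if all agree, otherwise flag the suspect units."""
    results = [impl(*args) for impl in implementations]
    winner, votes = Counter(results).most_common(1)[0]
    if votes == len(results):
        return winner  # unanimous: all sources agree
    suspects = [i for i, r in enumerate(results) if r != winner]
    raise RuntimeError(f"Cross-check mismatch, suspect unit(s): {suspects}")

# Toy stand-ins for the same function bought from three different vendors:
vendor_a = lambda a, b: a + b
vendor_b = lambda a, b: a + b
vendor_c = lambda a, b: a + b + (1 if (a, b) == (13, 37) else 0)  # hidden trigger

print(lockstep_vote([vendor_a, vendor_b, vendor_c], 2, 3))  # 5
try:
    lockstep_vote([vendor_a, vendor_b, vendor_c], 13, 37)
except RuntimeError as e:
    print(e)  # Cross-check mismatch, suspect unit(s): [2]
```

As the comment above notes, this only catches a compromised source when the others disagree at the moment of the trigger; it doesn't prove the absence of extra, unexercised function.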
Maybe we can, but I don't think we should. So called "trusted hardware" is and will be used against users (DRM, WEI, forced subscriptions, making it impossible to repair the device and so on).
that's a pretty heavy misrepresentation of this post.
actually one of those rare moments when i have to ask someone "did you even read the _title_", because you'll note it doesn't even say "trusted hardware" but "trustable hardware" and i've never heard that latter term used in the context you're alluding to.
I did not read the article; I've only read the title, and from the title it wasn't clear that the article doesn't talk about tamper resistance. I have no problem with trustable user-owned hardware as long as it doesn't contain any tamper resistance.
Yes, assuming you have sole control of every single step of the process from procuring raw materials to the final product. That could even fail if you're drunk or high.
Although it may be impossible to build fully trustable hardware with 100% consistency for every possible case, I think it's still a worthy cause to build hardware that is trustable for the majority of people. I think having one's machine tampered with during transportation is a risk most people can accept, as it would be too expensive and obvious for any entity to carry out chip substitution indiscriminately on a large scale. Most people would not be worth the trouble. What we should be worried about are manufacturers which build malware into their products as part of the manufacturing process.
Maybe a better area of focus than malware prevention would be malware detection and filtering. We could have hardware chips which detect anomalous activity, block it as much as possible and notify the user. You could source multiple such chips from a range of providers to reduce the risk of tampering of any specific chip.