Bypassing disk encryption on systems with automatic TPM2 unlock
Posted by odd_lama@reddit | linux | View on Reddit | 26 comments
Weekly-Salamander155@reddit
It seems like really bad security design to choose static PCR values, which remain public in the TPM and are unrelated to the thing you are unlocking, and then compare against them before retrieving a secret. But I guess it's been 10 years since TPM 2.0 came out, which is eons in computer security terms.
msuhanov@reddit
These file system UUID, partition UUID, file system label collision attacks against LUKS and LVM-on-LUKS are being constantly rediscovered, again and again, but with a slightly different exploitation path in almost every case.
See, for example, QSB-021-2015. And exactly this issue (the one in this Reddit post) has been mitigated in BitLocker since Windows Vista.
msuhanov@reddit
Also, even if you protect the root file system against such collisions, you need to implement similar protections for the swap space (if it's not stored as a file in the root file system) and for the hibernation images in the swap space; both cases involve different execution paths in the initramfs scripts (e.g., an unencrypted swap space could be activated instead of the encrypted one due to a similar collision).
Nebucatnetzer@reddit
This was a good article. Technical but still quite possible to follow along when you’re not too familiar with all moving parts.
AntLive9218@reddit
What's the advantage of "bite the bullet and add a TPM PIN" over just using a password for LUKS? I suspect it's somewhat more resistant to evil maid attacks, but I have doubts about complete protection, and there have been way too many issues with TPMs to completely trust them.
It's silly how securing the boot process keeps failing with all kinds of implementation issues. I wouldn't compromise on needing a user-provided secret for storage containing sensitive data, but the bootloader / pre-kernel environment should really get some security for safe secret handling.
Something like https://gitlab.com/cryptographic_id/cryptographic-id-rs would be useful at least for attestation, but I believe the whole TPM usage approach relies on the BIOS being protected, which is definitely not the case for a whole lot of the junk being sold.
ElvishJerricco@reddit
It's absolutely about evil maid attacks. If the system has been tampered with, you want a security module that lets you know the system has been tampered with, as well as helps reduce the usefulness of your potentially leaked password (that's why Apple makes it so hard to replace their MacBooks' keyboards; they encrypt the connection so that I can't just install a keylogger to steal your credentials).
The point is not to trust the TPM2 as a hard security boundary; after all, the secret seed for all of its cryptographic functions is literally inside the machine, so it's always going to be physically possible to extract the secrets. The point is to add a significant barrier. Like, it shouldn't be excessively easy to crack it. A TPM2 adds the protection of requiring secure boot to be honored, and the protection of rate-limiting brute-force attacks. Of course these things can and will be defeated, but it's a substantial barrier to attackers that could very well convince someone who's just trying to pawn a laptop they stole to stop trying to steal the data.
Of course, a much more secure option is to just have a very strong password, or something like a YubiKey that you make sure is always on your person. But these things are notably inconvenient, so a TPM2 at least provides a soft protection for the people who don't want to do any of that.
the_abortionat0r@reddit
Bro did you just spread propaganda for Apple?
They want their products to only be serviceable by them alone FOR MONEY. If they gave a sh*t about your security they wouldn't have let it be possible to get malware from simply updating your OS (before you even try to reply, look it up).
If you think Apple's anti-consumer practices are somehow a plus, you aren't to be taken seriously, period.
ElvishJerricco@reddit
I mean, it's both. Apple cares about security, and they don't want their products to be serviceable. And bugs happen; it makes sense that Apple slips up on security from time to time, as unfortunate as it is. But I get the impression that many of Apple's designs were made with good intentions, and then they're just negligent about serviceability. When they realize that it's not serviceable, they just think "oh, that's a nice bonus; let's keep it that way."
So yea, I think Apple is indeed really bad about serviceability on purpose. But I think they also care about security. Macs are undoubtedly more secure than Windows and the majority of desktop Linux systems. It is possible to make a system as secure as a Mac or an iPhone without hindering serviceability, but I think Apple just doesn't care, and I don't think there's a mainstream OS / hardware combination that does security as well as Apple. I would very much love to see a good Linux based alternative someday that's just as secure; it just doesn't really exist right now (though, there are certainly people working on improving that right now).
AntLive9218@reddit
It's a mistake to mix in Apple's security through obscurity here; it just hinders discussion of security that can be verified by the user.
While most users seem to be happy with just being sold a sense of security, the discussion here is about objective improvements, not subjective ones. The lack of source code automatically makes security-through-obscurity solutions inferior, which makes Apple's claims dubious. Comparing solutions based on claims of what they are capable of, rather than on observable security mechanisms, is a fool's errand.
ElvishJerricco@reddit
There's truth to what you say, but it's not exactly security through obscurity. Apple has detailed documents about the design of their security mechanisms. You're right that it would be more secure if it could be audited as FOSS, but Apple's claims are often verified pretty effectively through reverse engineering. External auditors are also given special variants of their devices that allow much more privileged access to the devices' functions.
Don't get me wrong, making more of it FOSS would be an improvement. But I don't agree with the characterization that it's entirely security through obscurity.
aperson1054@reddit
It has much higher entropy than a password, and it's bound to your device, so only a trusted OS with your PIN (which can actually be a full password, and is protected against brute force) can access your data.
odd_lama@reddit (OP)
I agree with what you say; we are definitely not quite there yet with TPMs. I also will probably never trust them completely, especially since a lot of boards still communicate with an external TPM without encrypting the traffic. So you can certainly abuse many boards with the right equipment, but at least it does require special equipment.
A short PIN is reasonably safe against brute force attacks, while a short password is not. Other than that, no real difference I suppose.
draeath@reddit
I'll call you out on that. What do you mean by 'short' for both of these cases?
A 4-digit PIN has an entropy of only 13.29 bits. An 8-digit PIN has an entropy of 26.58 bits. Replacing a single digit in that 8-digit PIN with a lowercase letter almost doubles that, at 41.36 bits. Using 2 lower, 2 upper, 2 digits, and 2 special characters is... only 47.63 bits.
(using this calculator)
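(For a quick check of those figures: entropy in bits is just length × log2(alphabet size), assuming every position is drawn from the full character set:)

    # bits = length * log2(alphabet size); awk's log() is natural log, so divide by log(2)
    awk 'BEGIN {
      printf "4-digit PIN:                %.2f bits\n", 4*log(10)/log(2)
      printf "8-digit PIN:                %.2f bits\n", 8*log(10)/log(2)
      printf "8 chars, digits+lowercase:  %.2f bits\n", 8*log(36)/log(2)
    }'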
odd_lama@reddit (OP)
Despite its name, a TPM PIN doesn't necessarily need to use numbers only - you can use a normal password if you like. But the point here is that you only have a very limited amount of tries to correctly enter your TPM PIN before the hardware will lock you out for 24 hours (typically). So you cannot brute-force a short TPM PIN, while you can brute-force a short LUKS password.
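For reference, with systemd-cryptenroll a PIN-protected TPM2 slot is enrolled with something like the following (a rough sketch; the device path is just an example):

    # Enroll a TPM2 key slot that additionally requires a PIN/passphrase;
    # the TPM2 enforces its own lockout on repeated failed attempts
    systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p2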
Hafnon@reddit
TPMs can be configured to enforce rate limits for failed attempts at the hardware level, if you believe that they can be trusted that is.
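With tpm2-tools that looks roughly like the following (a sketch; it needs lockout authorization, and vendors ship different defaults):

    # Allow 32 failed attempts, add a 2-hour recovery period per failure,
    # and make a tripped lockout hierarchy recover after 24 hours
    tpm2_dictionarylockout --setup-parameters \
        --max-tries=32 --recovery-time=7200 --lockout-recovery-time=86400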
AntLive9218@reddit
Yeah, trust is a significant problem these days, with users trying to defend themselves with regularly broken and backdoored hardware against a commercialized black-hat industry selling apparently quite affordable weapons.
A hardware rate limit would be superior if it were guaranteed to work, but given the doubts, Argon2(i)(d) with cruel parameters could make even a weak password more appealing.
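For example, something along these lines when creating a LUKS2 volume (the numbers are only illustrative; memory is in KiB):

    # ~1 GiB of KDF memory, 4 lanes, and a ~5 second unlock target
    cryptsetup luksFormat --type luks2 \
        --pbkdf argon2id --pbkdf-memory 1048576 --pbkdf-parallel 4 --iter-time 5000 \
        /dev/nvme0n1p2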
zappleberry@reddit
Why not use full disk encryption with LUKS (encrypt root and use a keyfile to automatically unlock other encrypted volumes, or whatever other flavor of FDE you want) with a long diceware password?
I'm not familiar with TPM2 so is it a convenience thing?
IchVerstehNurBahnhof@reddit
It's interesting for enterprise environments because ideally it's completely transparent to the end user, while having to enter a long device specific password before entering your user password is not. It's hard enough to convince non technical users not to reuse passwords.
For consumers it's probably not something you want most of the time. Either you don't need (and don't want) disk encryption at all, or you really need it and then you don't want to take the risk on stuff like this.
nightblackdragon@reddit
Yes, it is a convenience thing. You can store your LUKS key in the TPM2, and during boot the system state (things like the kernel, boot options, Secure Boot state, etc.) is measured and compared with the state saved in the TPM2. If those things match, the TPM2 releases the stored key and the system decrypts the volumes without asking for a password. If somebody tampered with the system in some way (like disabling Secure Boot, or changing the boot options, kernel, etc.), the verification fails, the TPM2 won't release the key, and the system falls back to asking for a password.
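With systemd this is typically set up along these lines (a rough sketch; the device path and PCR selection are only examples, not the one true choice):

    # Seal a LUKS2 key slot against the TPM2, bound to firmware/option-ROM/Secure Boot state
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+7 /dev/nvme0n1p2

    # /etc/crypttab: try the TPM2 first, fall back to a passphrase prompt if unsealing fails
    luks-root  UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  none  tpm2-device=auto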
Hafnon@reddit
Is this something that a PCR policy bound to PCR 11 could fix? If something is measured into PCR 11 before the initrd is exited, then the attacker won't have access to the decryption key, right?
(https://old.reddit.com/r/Fedora/comments/13ff4hh/deleted_by_user/jjwtsm1/?context=3)
odd_lama@reddit (OP)
I think that should work, yes. You just have to ensure at least one PCR is scrambled before handing control flow to the user code.
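One way to do that scrambling from the initrd, sketched with tpm2-tools (the PCR index and digest are arbitrary; the point is that an extend can't be undone without a reboot):

    # Extend the PCR with an arbitrary value so the sealed policy can no longer be satisfied
    tpm2_pcrextend 15:sha256=ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff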
Majiir@reddit
Along a similar vein, you can bind to PCR 15 with a value of all zeroes, and invalidate that before executing init by setting tpm2-measure-pcr=yes in crypttab. You can find a discussion here where /u/elvishjerricco describes this approach.
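A matching crypttab entry would look roughly like this (UUID and volume name are placeholders):

    # tpm2-measure-pcr=yes measures the volume key into PCR 15 at unlock time,
    # so code running later cannot reproduce the state the secret was sealed against
    luks-root  UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  none  tpm2-device=auto,tpm2-measure-pcr=yes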
ElvishJerricco@reddit
Unfortunately PCR 15 isn't actually what you ought to use. It's just very convenient and easy to do if your OS doesn't have support for systemd-pcrlock, which NixOS does not. systemd-pcrlock can and should be integrated in lanzaboote at some point, but I and the other maintainers just haven't gotten around to it yet. There's an additional alternative called systemd-pcrphase, and technically it's fine, but I don't prefer it over pcrlock because pcrlock actually binds the disk to specific boot configurations, including things like the specific kernel's and boot loader's content hashes. And it's enrolled in TPM2 NVRAM instead of on the disk, making it easy to enroll many boot configurations.
Of course, as mentioned in my post that you linked, actually doing some kind of root FS verification would be another way to combat this category of attacks, but it would only be an effective mitigation if you're doing a self-signed secure boot method. For something signed by the vendor, like Windows and macOS are, the attack described in my post is still possible because the attacker can just use a stock OS as their replacement of your root FS. So IMO both the TPM2 and secure boot should be used to protect the encryption.
I know I'm mostly just repeating what I said in your link, but I thought it was worth having directly readable here. And I wanted to mention that pcrlock is IMO the best way to deal with all of this :)
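As a rough sketch (assuming systemd ≥ 255 with pcrlock support; the exact lock-* verbs and paths are best checked against the systemd-pcrlock man page):

    # Record the components that should be considered trusted...
    /usr/lib/systemd/systemd-pcrlock lock-firmware-code
    /usr/lib/systemd/systemd-pcrlock lock-secureboot-policy
    /usr/lib/systemd/systemd-pcrlock lock-uki /boot/EFI/Linux/my-uki.efi
    # ...generate the access policy (stored in a TPM2 NV index)...
    /usr/lib/systemd/systemd-pcrlock make-policy
    # ...and bind the LUKS volume to that policy instead of to literal PCR values
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrlock=/var/lib/systemd/pcrlock.json /dev/nvme0n1p2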
odd_lama@reddit (OP)
Do you just have a single disk? If I'm not mistaken then there still would be a possible attack when you have multiple disks that are unlocked in the initrd, since we can overwrite the last one without causing an error.
As long as the last disk allows us to gain control of execution (e.g. it is the Nix store or contains some other executables called by the system at some point), we gain control over all the other disks which were decrypted previously. We of course cannot unseal the secret from the TPM anymore, but if we are lucky the other disks already contain some sensitive data or executables which we can modify. Worst case, we have to overwrite an executable with something malicious and later undo our changes to read any encrypted data.
Also I'm curious, how do you ensure the disks are mounted in the correct order? Currently it seems like the order is random by default.
ElvishJerricco@reddit
Hm that's a very interesting problem, which I have actually already inadvertently solved :P In two different ways. I have three systems where I'm using this TPM2+Secure Boot stuff, and two of them do indeed have multiple disks. I think the solution is basically to have only one TPM2-bound LUKS volume.
On one of these two systems, the TPM2-bound file system contains the keyfile for all the other disks. So if the root LUKS volume doesn't get decrypted by my initrd, nothing else is possible to decrypt.
On my other system, I have a different but similar approach. On this one, I'm using ZFS's native encryption feature. So all the disks are just in a single pool, and only one dataset is unencrypted. It's actually a very small zvol (ZFS's built-in virtual block device feature) encrypted with LUKS that contains the key material for the encryptionroot dataset. So if you can't unlock this zvol (which is only possible in initrd, and if PCR 0+2+7 is correct), then nothing else is possible to decrypt. And for added measure, this LUKS device is closed before transitioning out of initrd.
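For the first setup, the crypttab side looks roughly like this (names, paths, and UUIDs are placeholders):

    # Root volume: unlocked via the TPM2 in the initrd (with passphrase fallback)
    luks-root   UUID=<root-uuid>   none                  tpm2-device=auto
    # Other volumes: keyfiles live on the already-unlocked root, so they can only be
    # opened if the TPM2-bound root volume was decrypted first
    luks-data1  UUID=<data1-uuid>  /etc/luks/data1.key   luks
    luks-data2  UUID=<data2-uuid>  /etc/luks/data2.key   luks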
odd_lama@reddit (OP)
Nice setups, I might steal this since then I don't have to store the final PCR 15 value in the initrd to compare against :D