Without cryptographic pairing as a hard requirement, and secure channels that aren't optional, none of these setups can really be secure. SDCP tried to do this, but with so much being optional and scoped smaller than the actual attack surface (pun intended), it's about as good as TPM-over-LPC.
As soon as it's optional or can be exchanged/replaced with no side-effects, you're screwed.
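That "optional means screwed" failure mode fits in a few lines. This is a toy sketch of the negotiation logic only, not the actual SDCP API; `Device` and `connect` are invented for illustration:

```python
class Device:
    def __init__(self, supports_sdcp: bool):
        self.supports_sdcp = supports_sdcp


def connect(device: Device, require_sdcp: bool) -> str:
    """Return which kind of channel the host ends up using."""
    if device.supports_sdcp:
        return "secure"  # paired, authenticated channel
    if require_sdcp:
        # Hard requirement: refuse to talk to an unpaired device at all.
        raise ConnectionError("device refused secure pairing")
    # The fatal fallback: a malicious replacement simply claims
    # "no SDCP support" and gets an unauthenticated channel,
    # with no side-effects the user would ever notice.
    return "plain"


assert connect(Device(True), require_sdcp=True) == "secure"
# Optional security is no security: the attacker picks the branch.
assert connect(Device(False), require_sdcp=False) == "plain"
```

As long as the `require_sdcp=False` branch exists anywhere in the stack, the attacker gets to choose it.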
Wait, the match-on-sensor devices don’t have a token (hash of fingerprint or whatever) associated with the actual enrolled fingerprint that the host stores? If there was one, then a fingerprint enrollment would only work if the host had the token.
IMO this should work a lot more like U2F: the host stores an opaque blob containing the encrypted biometric data, and the device would decrypt the blob and verify the fingerprint. Heck, this plus a proof that the device was able to decrypt the blob would defeat all three attacks.
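A sketch of that U2F-style flow, with a toy stream cipher and HMAC standing in for real crypto (every name here is invented for illustration): the host stores an opaque blob plus a proof key it learned once at enrollment, so a swapped sensor can neither decrypt the blob nor forge the per-challenge proof, and a bare "OK" from the wire is worthless.

```python
import hashlib
import hmac
import os


def kdf(key: bytes, info: bytes) -> bytes:
    # Toy key derivation for the sketch (a real design would use HKDF).
    return hmac.new(key, info, hashlib.sha256).digest()


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


class Sensor:
    """Match-on-sensor device with a device-unique secret key."""

    def __init__(self):
        self._device_key = os.urandom(32)  # never leaves the sensor

    def enroll(self, template: bytes):
        # "Encrypt" the template under the device key (toy stream cipher).
        blob = xor(template.ljust(32, b"\0"), kdf(self._device_key, b"blob"))
        # The proof key binds this sensor to this specific blob.
        proof_key = kdf(self._device_key, b"proof" + blob)
        return blob, proof_key  # host stores both; the blob is opaque to it

    def verify(self, blob: bytes, live_scan: bytes, challenge: bytes):
        template = xor(blob, kdf(self._device_key, b"blob")).rstrip(b"\0")
        if template != live_scan:  # stand-in for real fuzzy matching
            return None
        proof_key = kdf(self._device_key, b"proof" + blob)
        return hmac.new(proof_key, challenge, hashlib.sha256).digest()


class Host:
    def __init__(self, sensor: Sensor, template: bytes):
        self.sensor = sensor
        self.blob, self._proof_key = sensor.enroll(template)

    def authenticate(self, live_scan: bytes) -> bool:
        challenge = os.urandom(16)  # fresh per attempt: no replayable "OK"
        proof = self.sensor.verify(self.blob, live_scan, challenge)
        return proof is not None and hmac.compare_digest(
            proof,
            hmac.new(self._proof_key, challenge, hashlib.sha256).digest())
```

A replacement sensor decrypts the blob to garbage (wrong device key), and even one that blindly claims success can't produce a valid HMAC over the fresh challenge.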
IMO the root cause is that the fingerprint sensor is a black box and just sends an "OK" to the OS. So you have to take extra care to authenticate the sensor or you are open to attacks like this. The developers were trying to be extra clever and to make sure the OS never sees the fingerprint. I think I would have preferred the KISS solution of the sensor sending the fingerprint (or maybe just a digest) to the OS and having the OS compare.
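The KISS host-side matching idea looks like this in miniature (illustrative only: real fingerprint matching is fuzzy, so a stable exact digest like this doesn't exist in practice, which is a big part of why vendors went match-on-sensor in the first place):

```python
import hashlib
import hmac


class DumbSensor:
    """Sensor that just reports a digest of whatever it scanned.

    It never gets to say "OK" on its own; the host decides.
    """

    def scan(self, finger: bytes) -> bytes:
        return hashlib.sha256(finger).digest()


class HostMatcher:
    """Host stores the enrolled digest and does the comparison itself."""

    def __init__(self, enrolled_finger: bytes):
        self.enrolled = hashlib.sha256(enrolled_finger).digest()

    def check(self, reported_digest: bytes) -> bool:
        # Constant-time compare on the host side.
        return hmac.compare_digest(self.enrolled, reported_digest)
```

With this split, a swapped sensor can at worst report digests, not mint successful authentications; the trade-off is that the host now handles biometric-derived data.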
I'll take a solution that is in principle a bit less secure, but easier to understand, write drivers for, and not a cryptographic lockdownfest, over a solution that is theoretically safer but fails when not implemented 100% correctly.
Hmmm. That doesn't make sense to me. If your threat model is someone swapping parts, surely someone will sell you a genuine or counterfeit part. The buses aren't actually encrypted, so you can still just hijack them.
Sorry, but Apple's stance on this is not security, just $$$ in disguise. The same can be said about all their "privacy" things, they just want the data themselves so they can sell you more stuff and their competitors can't.
It's not a matter of "genuine or counterfeit". The two components (secure enclave and touch id/face id sensor IIRC) are cryptographically linked.
If you switch the touch/face sensor to a counterfeit or even a genuine part, the secure enclave won't recognise it any more and won't unlock.
You need to be able to re-link them cryptographically and Apple won't do that because it's a gaping security hole. The damn FBI couldn't crack it on the iPhone 7 and had to go another route. If it was just "$$$ in disguise", why was it so effective half a decade ago?
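The linking the parent describes boils down to a factory-fused shared secret plus a challenge-response check on every unlock, something like this toy sketch (not Apple's actual protocol; the class names and key handling are invented for illustration):

```python
import hashlib
import hmac
import os


class TouchSensor:
    def __init__(self, pairing_key: bytes):
        self._pairing_key = pairing_key  # fused into the part at the factory

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self._pairing_key, nonce, hashlib.sha256).digest()


class SecureEnclave:
    def __init__(self, pairing_key: bytes):
        self._pairing_key = pairing_key  # same key, fused at the factory

    def unlock_with(self, sensor: TouchSensor) -> bool:
        nonce = os.urandom(16)  # fresh nonce: responses can't be replayed
        tag = sensor.respond(nonce)
        expected = hmac.new(self._pairing_key, nonce,
                            hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)
```

A replacement part, genuine or counterfeit, holds a different key and fails the check; the only fix is re-provisioning the shared key, which is exactly the "re-link" step the vendor controls.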
It's understandable though... 'supports linux' tends to mean secure boot isn't enforced, and you have a much higher chance of being able to just boot from a USB and dump the disk...
I was disappointed when I installed linux on my laptop to find that a default ubuntu install could mount and decrypt a bitlocker encrypted windows drive with no trouble - while I was under the impression that the encryption keys were locked deep in the TPM and couldn't be extracted by anything but a securely booted windows installation.
When I installed Windows 10 Bitlocker was turned on from the get go. I didn't choose it. Then it grumbled that it was "waiting for activation", which led me to the superuser post I linked to.
Yep, Microsoft has started trying to do bitlocker-by-default; it sits at "waiting for activation" until the user signs into windows with a microsoft account, and then it ties the bitlocker recovery key to the person's microsoft account.
Then an errant bios update can brick the computer.
Indeed, and it's always good practice to keep the key backed up somewhere safe. Oftentimes you need to manually enter the bitlocker key after a hardware change. This is only slightly inconvenient and exactly what should happen.
Popular linux distros being behind Windows on this front is one of the key reasons I still daily Windows.
Windows 11 (and maybe Windows 10?) default to Bitlocker activated if it realizes it's installed on a laptop or other mobile device.
This means it's an opt-out situation, and the user probably isn't aware that a boot failure or hardware failure can make all their data unrecoverable, data they really should be keeping backups of anyway.
> a default ubuntu install could mount and decrypt a bitlocker encrypted windows drive with no trouble
This means BitLocker was off or "Pending Activation," so the Volume Master Key (VMK) was available in plaintext rather than sealed.
When BitLocker is "On," the default is to seal the VMK using TPM PCRs 0, 2, 4, 7, and 11, so tampering with the Firmware (PCR0), UEFI Extensions (PCR2), UEFI Boot (PCR4), Secure Boot State (PCR7), or the BitLocker state itself (PCR11) will result in a failure to decrypt the key. Of course, there are vulnerabilities at every stage (especially sniffing key material as it transits the TPM), but the concept is reasonably sound.
Is this why people elsewhere in this thread are saying that a bios update can brick a machine which uses bitlocker? I guess if a user doesn't have the key saved, a bios update would result in a request for the decryption key that can't be satisfied?
Right, a BIOS upgrade will break the hash chain, because the system has gone and replaced the current trusted early boot software with new early boot software.
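The hash-chain mechanics behind that can be simulated in a few lines (a toy model: real TPMs extend per-bank PCRs and seal via policy sessions, and picking just PCR0 and PCR4 here is purely illustrative):

```python
import hashlib

ZERO = bytes(32)  # PCRs start at all-zero on reset


def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend: new = H(old || H(measurement)); order-dependent, one-way.
    return hashlib.sha256(
        pcr + hashlib.sha256(measurement).digest()).digest()


def boot(firmware: bytes, bootloader: bytes) -> dict:
    # Measured boot populates the PCRs from what actually ran.
    return {
        0: extend(ZERO, firmware),    # PCR0: platform firmware
        4: extend(ZERO, bootloader),  # PCR4: UEFI boot manager
    }


def seal_policy(pcrs: dict) -> bytes:
    # Stand-in for the policy digest the VMK gets sealed against.
    return hashlib.sha256(
        b"".join(pcrs[i] for i in sorted(pcrs))).digest()


def unseal(policy: bytes, pcrs: dict) -> bool:
    # The TPM only releases the key if current PCRs match the policy.
    return seal_policy(pcrs) == policy


policy = seal_policy(boot(b"bios v1.0", b"bootmgfw"))
assert unseal(policy, boot(b"bios v1.0", b"bootmgfw"))        # same boot: ok
assert not unseal(policy, boot(b"bios v1.1", b"bootmgfw"))    # BIOS update
```

The update changes what gets measured into PCR0, the chain produces a different digest, and the sealed key stays sealed until someone types in the recovery key and re-seals.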
malicious and/or willfully ignorant rumor -- "supports linux" is only prevented by manipulative and secretly negotiated boot locking from Redmond and friends
you could also say that "supports linux" allowed them to discover and remedy this vulnerability which would likely have still existed without linux support, thus making the device stronger for both windows and linux users.
This is pretty much a certainty when the laptop, OS and security peripherals are all made by different companies. The only real security comes from tight vertical integration.
Coming from being used to apple laptops, I was shocked at how bad even microsoft-branded surface devices were at sw/hw integration and useful quality of components in general.
I had the original Surface Laptop at a previous job. If I remember correctly, I had to RMA it four times for different assorted catastrophic hardware failures. I lined my cubicle wall with their RMA receipts.
> only real security comes from tight vertical integration.
You're not describing security, you're describing theater.
Tight vertical integration doesn't prevent security defects; it makes them harder or impossible for independent researchers to discover and make publicly known. Pick your favorite TLA: they'll figure it out and tell no one.
Better security actually comes from public scrutiny, at least when the issues are actually fixed by companies.
I am not an expert in this, but as far as I can tell from what I read, Linux doesn’t even try to implement a safe protocol here.
Apple is the only company that enforces a safe pairing that is - to the best of my knowledge - unbroken as of today.
That state of affairs directly contradicts your assertion, at least in this case. Which naturally doesn’t make the opposite unconditionally true, but things seem to be a little more complicated than you assert.
> I am not an expert in this, but as far as I can tell from what I read, Linux doesn’t even try to implement a safe protocol here.
Define safe? Safe meaning someone can take apart the device, replace the fingerprint sensor, and return it to you without your knowledge? Have you heard of DMA?
You also would never trust this responsibility to the kernel, the hardware itself would need to be responsible for this. A good TPM chip that actively wants to support Linux would be nice.
> Apple is the only company that enforces a safe pairing that is - to the best of my knowledge - unbroken as of today.
lol, that's my whole point though. Apple doesn't publish any information and actively obstructs free, open reverse engineering, so a flaw isn't going to be found by someone who wants you to know; it will still be found by people who don't want you to know.
> That state of affairs directly contradicts your assertion, at least in this case. Which naturally doesn’t make the opposite unconditionally true, but things seem to be a little more complicated than you assert.
No, you misunderstand my assertion. Perhaps that's colored by a lack of understanding of the threat model here.
If someone is able to separate you from your device and perform any modification to the device hardware itself, you've lost; no amount of vertical integration is going to save you from that. And if they're able to separate you from your device and replace the biometric sensor in a way that's undetectable to you, you lost a long time ago, because you leave your fingerprints and face scan everywhere you go.
Attacks on biometric devices, while interesting and useful against some threats, are merely interesting theoretical attacks.
Does Apple's tight integration and obscene control over their biometric devices increase the cost of an attack, and by proxy, the security of the system? Yes, absolutely. Does it do so meaningfully? Not even close.
Does Apple's tight control over the biometric systems increase the cost and difficulty of a repair? Yes, absolutely. Does Apple's tight integration of biometric systems increase the cost of repair meaningfully, such that it is in Apple's financial interest to do so? Yes, that's the point. For an additional few bucks, Apple gets to claim security, and if your screen breaks you now need a whole new iPhone, or you pay Apple to replace it for you.
Saying tight vertical integration increases security is like saying $1,000,001 is more than $1,000,000. The added security is effectively a rounding error until you solve all the other attacks available.
Tight integration is necessary - (in)security does not respect boundaries - but it is not sufficient. if your vendors are not doing it right, it's still your problem.
> The only real security comes from tight vertical integration.
Yep, because when I think of Microsoft I think security. It's not like they ever did something mind-numbingly stupid like defaulting to the administrative user if you called the Azure API without an AuthZ header...