Stripe does this in a cool way. Their REST API is versioned by date, and each time they change it they add a stackable compatibility layer, so your decade-old code will still work.
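For illustration, pinning a request to an old version is just a header; the endpoint, key, and version date here are arbitrary examples:

```
# Request an older, pinned API version via the Stripe-Version header
curl https://api.stripe.com/v1/charges \
  -u "sk_test_YOUR_KEY:" \
  -H "Stripe-Version: 2019-02-19"
```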
Ceph's overheads aren't that large for a small cluster, but they grow as you add more hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times on different machines, which leads to a large overhead compared with local storage.
Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab-sized.
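To make the write amplification concrete, a sketch with a made-up pool name (replicated pools default to three copies):

```
# Inspect/set the replica count: with size=3, 10 TB of data
# consumes ~30 TB of raw capacity before any other overhead
ceph osd pool get mypool size
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2   # stay writable with one copy down
```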
Arguably, "comma as a separator" is close enough to the comma's usage in (many) written languages that it makes it easier for less technical users to interact with CSV.
https://www.openssh.com/legacy.html - Legacy algorithms in OpenSSH, which explains a little about what they do. Then there is also your identity key, which you authenticate yourself with; its public half is placed in the server's authorized_keys.
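The usual flow, with placeholder names:

```
# Generate an identity key pair (ed25519 is the modern default)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Install the *public* half into the server's ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server.example.com
```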
Usually smooth. But if you're running a production workload, definitely do your prep work: working and tested backups, upgrading one node at a time and testing, reading the release notes, waiting for a week after major releases, etc. If you don't have a second node I highly recommend getting one; Proxmox can do ZFS replication for fast live migrations without shared storage.
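Roughly, per node, something like this (assuming the pveXtoY checklist script Proxmox ships for each major jump, e.g. pve8to9 for 8-to-9):

```
# Run the upgrade checklist first and fix anything it flags
pve8to9 --full

# Migrate guests off this node, then upgrade and reboot it
apt update && apt full-upgrade

# Verify guests start and migrate back before touching the next node
```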
Unfortunately clustered storage is just a hard problem, and there is a lack of good implementations. OCFS2 and GFS2 exist, but IIRC there are challenges to using them for VM storage, especially for snapshots. Proxmox 9 added a new feature that uses multiple QCOW2 files as a volume chain, which may improve this, but for now it's only used for LVM (making Proxmox 9 much more viable on a shared iSCSI/FC LUN).
If your requirements are flexible, Proxmox does have one nice alternative though: local ZFS plus scheduled replication. This feature performs a ZFS snapshot + ZFS send every few minutes, giving you snapshots of each guest on your other nodes. Those snapshots can be used for manual failover, automatic HA, and even fast live migration. Not great for databases, but a decent alternative for a homelab or small business.
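Setting it up is basically one command per guest; the VM ID, target node, and schedule below are made up:

```
# Replicate VM 100 to node pve2 every 15 minutes via ZFS snapshot + send
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check when each guest last replicated
pvesr status
```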
> IP does indeed have broadcast/multicast capabilities that cause the sender's egress traffic to remain independent of the number of recipients rather than being equal to the sum of recipients' ingress traffic, right?
Yes, multicast; however, you can't do multicast over the public internet. In practice the technology is mainly used in production and enterprise scenarios (broadcast, signage, hotels, stadiums, etc.).
Instead, big streaming platforms like Netflix or Twitch use CDN boxes installed locally at major ISPs. Also, with so much hardware acceleration on modern NICs these days, it's surprisingly easy to handle gigabits of throughput for audio/video streaming.
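To illustrate the sender-side property on a LAN (the tools and multicast address here are just an example; this won't cross the internet):

```
# Sender: one stream to a multicast group; egress stays constant
# no matter how many receivers join
ffmpeg -re -i input.mp4 -c copy -f mpegts "udp://239.1.1.1:5000?ttl=4"

# Any number of receivers join the group independently
ffplay "udp://239.1.1.1:5000"
```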
They're probably referring to podman.socket, which isn't quite a daemon mode but lets podman emulate one pretty well. Unless there is some daemon mode I missed that got added, but I'd be rather surprised by that.
In places where you're doing a `dnf install podman`, all you typically need to do is start the service and then point either the podman CLI or the docker CLI directly at it. In Fedora, for example, it's podman.service.
I honestly prefer using the official docker CLI when talking to podman.
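On Fedora that looks something like this (the rootless socket path depends on your UID):

```
# Enable the Docker-compatible API socket (drop --user for rootful)
systemctl --user enable --now podman.socket

# Point the stock docker CLI at podman
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
docker run --rm alpine echo "hello from podman"
```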
You can mitigate this by binding to PCRs that measure the kernel and initrd; however, it means that whenever you update, you need to unlock manually. On Red Hat-based distros this can be done with PCRs 8 and 9, though IIRC this may differ on other distros.
Also, AFAIK there is no standard way to predict the new PCR values before a reboot, so you can't pre-seal against them. So you either need to unlock manually or use network decryption like dracut-sshd.
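With clevis the binding looks roughly like this; the device path is a placeholder:

```
# Seal a LUKS volume against PCRs 8 and 9 (cmdline + files read at boot)
clevis luks bind -d /dev/nvme0n1p3 tpm2 '{"pcr_bank":"sha256","pcr_ids":"8,9"}'
```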
> You can mitigate this by binding to PCRs that measure the kernel and initrd.
No, that's not an effective mitigation. The measured kernel+initrd would still boot into the impersonated root.
> You can mitigate this by binding to PCRs that measure the kernel and initrd
nope! the trick the article is describing works even if the kernel and initrd are measured. it uses the same kernel, initrd, and command line.
the reason this trick works is that initrds usually fall back to password unlock if the key from the tpm doesn't work. so the hack replaces the encrypted volume, not the kernel, with a compromised one. that is:
1. (temporarily) replace the encrypted volume with our own, encrypted with a known password.
2. boot the device.
3. the automated tpm unlock fails, prompting for a password.
4. type in our password. now we're in, using the original kernel and initrd, but it's our special filesystem, not the one we're trying to decrypt.
5. ask the tpm again for the key. since we're still using the original kernel, initrd, and command line, we should now get the key to unlock the original encrypted volume.
the way to fix this is to somehow also measure the encrypted volume itself. the article points to suggestions of deriving a value from the encryption key.
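fwiw, IIRC newer systemd grew an option in this direction: a crypttab flag that measures the unlocked volume's key into a PCR (15 by default), so later policies can depend on which volume was actually opened. a rough sketch, option names from memory:

```
# /etc/crypttab -- measure the root volume's key into PCR 15 at unlock;
# swapping in an attacker-controlled volume then yields a different PCR 15
root  /dev/nvme0n1p3  none  tpm2-device=auto,tpm2-measure-pcr=yes
```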
It's ridiculous that there's no software implementation to do this; it's a huge problem.
Auto-update should be able to include the kernel, initrd, and grub cmdline from the running system. I have no idea what's holding this back, since evidently code already exists somewhere to do exactly that.
1 - Core system firmware data/host platform configuration; typically contains serial and model numbers
2 - Extended or pluggable executable code; includes option ROMs on pluggable hardware
3 - Extended or pluggable firmware data; includes information about pluggable hardware
4 - Boot loader and additional drivers; binaries and extensions loaded by the boot loader
7 - SecureBoot state
8 - Commands and kernel command line
9 - All files read (including kernel image)
Now the problem is, 8 and 9 are, I would argue, the most important (since technically 7 probably covers everything else in that list?), whereas my kernel and initrd are not encrypted and my command line can just be edited (though it normally wouldn't need to be). But I can't find any way to get grub, from a booted system, to simulate the output of those values so I can pre-seal the LUKS volume with the new values.
So in practice, I just always need to remember my password (bad), which means there's no way to make a reasonable assessment of system integrity on boot if I get prompted. (I'd also argue the UI experience here isn't good: if I'm being prompted for a password, that clevis boot script should output what changed at what level - i.e. if Secure Boot got turned off, or my UEFI firmware changed on me while I was staying in a hotel, maybe I shouldn't unlock that disk.)
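You can at least inspect the current values with tpm2-tools, even if predicting the post-update ones is the hard part:

```
# Read the current PCR values the LUKS policy is sealed against
tpm2_pcrread sha256:7,8,9
```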
At least for PCR 7, it's well specified and documented how the digest is generated. You can dump the component digests of a PCR using `tpm2_eventlog`, and I've written a tool that can be used to populate the requisite data structures for hashing.
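e.g. against the kernel's copy of the measured-boot log:

```
# Dump the event log of everything extended into the PCRs during boot
tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements
```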
https://stripe.com/blog/api-versioning