Supply chain attacks really worry me. I do most of my work in docker containers partly as a small attempt to mitigate this. I run the full stack in the container, including Claude Code, Neovim, Postgres, etc.
I do have a fair number of Neovim plugins on my host machine, and a number of Arch packages that I probably could do without.
I’ve considered keeping my host’s Neovim vanilla, but telescope is hard to live without.
Supply chain attacks mean you need to trust your choice of suppliers, trust their security posture and their choice of suppliers, and so on. Even Docker itself has a FROM line and often a few `apt-get` (or similar) commands to build the image. And even with no file access, a compromised dependency can still exfiltrate data.
Between this, MCP, IoT-everything, vibe coding, AI impersonation for social-engineering attacks, and cryptocurrency payoffs, it's a golden age for criminal hackers!
You can say the same about the vast majority of distribution methods we have. There's no difference between `curl | sh` and executing a binary you download from the internet.
Checksums and signatures make it slightly better. At least you can't go from OK to vulnerable by downloading the same thing as an hour ago. But if you upgrade, then yeah.
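To illustrate what pinning a digest buys you (file name and contents here are stand-ins for a real release artifact):

```shell
#!/bin/sh
set -eu
# Stand-in for a downloaded release artifact.
artifact=tool.tar.gz
printf 'pretend release contents\n' > "$artifact"
# In practice the expected digest is pinned ahead of time (e.g. committed
# alongside your install script); here we compute it from the stand-in file.
expected=$(sha256sum "$artifact" | cut -d' ' -f1)
# Re-fetching "the same thing as an hour ago" can now be verified:
# a swapped artifact fails this check, an identical one passes.
echo "$expected  $artifact" | sha256sum -c - && echo "digest OK"
```

The limit is exactly the one above: the pin only protects the artifact you already vetted. The moment you upgrade, you're trusting a new digest.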
The number of dependencies that require inordinate amounts of effort to build from a clean repository without network access is truly alarming. Even many core tools can't be bootstrapped (at least easily or in a manner supported by the developers) without downloading opaque binary blobs. It's like the entire software ecosystem is underpinned by sketchy characters hanging out in dark alleys who clandestinely slip you the required binaries if you ask nicely.
Same worries and setup here, with the only difference that I use Nix to either spawn a QEMU VM or build an LXC container that runs on a Chromebook (through Crostini).
I started using throwaway environments, one per project. I try keeping the stuff installed in the host OS to the bare minimum.
For the things I need to run on the host, I try to heavily sandbox them (mostly through the opaque macOS sandbox) so that they cannot access the network and can only access a whitelist of directories. Sandboxing is painful and requires trial and error, so I wish there were a better (UX-wise) way to do it.
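For reference, a rough sketch of the kind of profile I mean (paths and rules are made-up placeholders; real profiles take a lot of iteration, and `sandbox-exec` is technically deprecated though still present):

```
; run as: sandbox-exec -f dev.sb ./some-tool
(version 1)
(deny default)                              ; start from nothing
; no (allow network*) rule, so network access stays denied
(allow process-exec (subpath "/usr/bin"))   ; let the tool itself execute
(allow file-read* (subpath "/usr/lib"))     ; system libraries
(allow file-read* file-write*
       (subpath "/Users/me/projects/demo")) ; the one whitelisted directory
```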
Do you use devcontainers or a custom-built solution? Would you mind sharing how you do your dev work using containers? I've been looking to try it out, and this attack might be the tipping point to where I actually do that.
Custom. I have a little script, `dev sh`, which creates a new container for whatever folder I'm in. The container has full access to that folder, but nothing else. If there's a `.podman/env` file, the script uses that to configure things like ports, etc.
From what I saw of devcontainers, they basically grant access to your entire system (.ssh, etc). I may be wrong, though; that's just my recollection.
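The core of my script is something like this (a simplified sketch; the wrapper name, image name, and `PORTS` variable are just how I happen to do it):

```shell
#!/bin/sh
set -eu

# Build the podman invocation for a throwaway dev container that can see
# only the given project directory, mounted at /work.
dev_cmd() {
    dir=$1
    extra=""
    # Per-project overrides, e.g. a .podman/env containing: PORTS="-p 8080:8080"
    if [ -f "$dir/.podman/env" ]; then
        . "$dir/.podman/env"
        extra="${PORTS:-}"
    fi
    # --rm: throwaway container; only $dir is bind-mounted, nothing else
    # (no $HOME, no ~/.ssh) is visible inside.
    printf 'podman run --rm -it %s --volume %s:/work --workdir /work dev-image sh\n' \
        "$extra" "$dir"
}

# Print (rather than exec) the command for the current directory.
dev_cmd "$(pwd)"
```

In the real script I exec the command instead of printing it, but the point is the mount: the project folder is the only thing the container is given.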