Hacker News | dangoodmanUT's comments

This is similar to what Rivet (1) does, though perhaps with more of a focus on stateless workloads than Rivet has

(1) https://www.rivet.dev/docs/actors/


A tool I made a while back that has become a staple in everything I build. It started with me wanting kustomize-like functionality for environments.

The original use case was a simple overlay of S3 environment variables for testing, so that tests would use a shared test bucket to provide agents with files, while otherwise we use a local S3-compatible store (Garage) when building.

That turned into "overlay environments": you can overlay them on top of each other just like an overlayfs, but for environment variables :)
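The merge semantics are simple enough to sketch (a toy illustration, not EpicEnv's actual code; the layer contents here are made up):

```python
from functools import reduce

def overlay_env(*layers: dict) -> dict:
    """Merge env layers left to right: later layers win for the keys
    they declare, everything else shows through, like an overlayfs."""
    return reduce(lambda base, layer: {**base, **layer}, layers, {})

# hypothetical layers: local Garage endpoint by default,
# a test overlay that only swaps the bucket
base = {"S3_ENDPOINT": "http://localhost:3900", "S3_BUCKET": "dev"}
test_overlay = {"S3_BUCKET": "shared-test-bucket"}

merged = overlay_env(base, test_overlay)
# S3_BUCKET comes from the overlay; S3_ENDPOINT shows through from base
```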

HN link seems to strip the URL fragments, here's the direct link to the feature: https://github.com/danthegoodman1/EpicEnv/tree/main?tab=read...


dwight shrute detected

At least spell my name correctly ffs, it's right there in the org chart

Gemini does this a lot, getting stuck generating the same tokens over and over indefinitely

Because these ai machines aren’t replacing old machines, they’re replacing old humans

Yes, but there's a hidden benefit taken for granted: machines do not make human errors.

Sadly, machines not needing human treatment might be reason enough.


I’m praying you have backups. That last paragraph gives me anxiety

Well, I haven't lost them in over a decade... So I have a pretty good track record. The system it's hosted on has had at least one, maybe two hardware failures over that time. A system isn't done being set up until it has backups up and running.

When the server goes up in smoke, you won’t be able to restore from your track record.

A simple ‘scp remote local’ once a month will save you from years of “damn… if only I had backed up”


Ah. I see below you’re using rsync. Phew!

...and tested.

I only had to see one machine "that was being backed up" unable to restore from backup. Wasn't mine, but was enough to teach me to test them.


Absolutely! You're preaching to the choir here.

However, that said, over ~3 decades I've found that checking for a successful rsync exit code and alerting when that is not true, along with periodic "full" rsync checksum runs, is effectively a failsafe way of ensuring a good backup.
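The failure-detection part can be as small as checking the exit code; a rough sketch (paths and alerting are illustrative, not our actual setup):

```python
import subprocess

def build_rsync_cmd(src: str, dest: str, checksum: bool = False) -> list:
    """Build the rsync command line; --checksum forces a full content
    verification instead of the usual mtime/size quick check."""
    cmd = ["rsync", "-a", "--delete"]
    if checksum:
        cmd.append("--checksum")
    return cmd + [src, dest]

def backup_ok(src: str, dest: str, checksum: bool = False) -> bool:
    """Run rsync and report whether it exited cleanly; alert on False."""
    return subprocess.run(build_rsync_cmd(src, dest, checksum)).returncode == 0

# e.g. nightly: backup_ok("/srv/data/", "backup-host:/srv/data/")
# weekly:      backup_ok("/srv/data/", "backup-host:/srv/data/", checksum=True)
```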

For our less critical systems, this plus "spot checking" by regularly going in and looking at "what did this file look like a few weeks ago" (something we commonly use backups for), has proven pretty effective while also being low work.

For critical systems, definitely do test recoveries. Our database server, for example, every week recovers the production database into our staging and dev environments, so backup problems tend to get noticed pretty quickly.


The problem with not requiring consent is that it's easy to track who is using your service. Because there's no consent step, they can redirect you to the login page and back and grab your identity without you doing anything other than loading the page.

I was just sufficiently nerd sniped by this, so let me know if I’m close:

Based on what the commenter below found about sshpiper, I believe you use the ssh identity + the ip from the slot to resolve the vm target. sshpiper knows how to route the ssh identity + slot ip to the correct VM. I suspect you have a custom sshpiper plugin to do that routing.

You use the slot record indirection so you can change the ip of a slot without having to update everyone’s A records across the customer base. It also makes it easy to shuffle around vm-slot mappings within a customer. I haven’t tested, but I’m guessing this dns server is internal (coredns?), and the ips too.
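The lookup I'm imagining is tiny; entirely speculative sketch, every name, fingerprint, and address below is invented:

```python
# Speculative (ssh identity, slot IP) -> VM routing table;
# nothing here reflects their real system.
SLOT_TABLE = {
    # (client key fingerprint, slot IP the client dialed) -> VM target
    ("SHA256:abc123", "10.0.0.11"): "vm-42.internal:22",
    ("SHA256:abc123", "10.0.0.12"): "vm-43.internal:22",
}

def resolve_target(fingerprint, slot_ip):
    """Routing is a table lookup; remapping a slot to another VM is one
    table edit, so customer-facing DNS never has to change."""
    return SLOT_TABLE.get((fingerprint, slot_ip))
```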

I did something similar (ip + identity routing) for a project a few weeks ago. Yours is a lot more elegant with the dns indirection.

I’m no ssh expert, but in theory you should be able to ssh -J exe.dev myvm.exe.xyz for a one-liner? Or maybe you don't even need it, if the DNS server inside ssh exe.dev is the same as the public DNS. Pardon me for not testing it yet!


> Zero-copy deserialization

Just a nit on this section: zero-copy deserialization is not Rust-specific (see flatbuffers). rkyv, as a crate for doing it in Rust, is though
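For anyone unfamiliar, the language-agnostic idea is just interpreting bytes where they sit instead of parsing them into freshly allocated objects; a minimal illustration (the wire format here is made up):

```python
import struct

# A tiny hypothetical wire format: two little-endian ints + an 8-byte name.
buf = struct.pack("<ii8s", 7, 42, b"hello")

view = memoryview(buf)                     # wraps buf without copying it
x, y = struct.unpack_from("<ii", view, 0)  # read fixed-offset fields in place
name = view[8:13]                          # slicing a memoryview is still a view
```

This is the core trick behind flatbuffers and rkyv: accessors compute offsets into the original buffer rather than building a parsed object tree.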


This script makes it easy to copy an NPM package into a `vendor/` dir.

Helps against supply-chain attacks, and also makes it easier for LLMs to investigate how packages work.
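Conceptually it's just a copy out of node_modules into the repo; a rough sketch of the idea (not the actual script):

```python
import shutil
from pathlib import Path

def vendor_package(name: str, project: str = ".") -> Path:
    """Copy an installed npm package out of node_modules into vendor/,
    pinning its exact contents in your repo."""
    src = Path(project) / "node_modules" / name
    dest = Path(project) / "vendor" / name
    if not src.is_dir():
        raise FileNotFoundError(f"{name} is not installed in node_modules")
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest
```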

Warning: Opus 4.5 did most of the work (but we use this in prod)

