Hacker News | raggi's comments

At runtime rather than install time, yes. I did some prototyping on this back in the day; one of the issues is that the language lacks an efficient data structure to store that information, and you can't (easily) build an efficient one yourself because instances are too heavy.
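For a concrete sense of what "instances are too heavy" means, here's a rough illustration (the Entry struct, entry count, and data are hypothetical, not anything from RubyGems): wrapping every index entry in its own object costs far more memory than keeping the same data in one flat, packed blob.

    require "objspace"

    # Hypothetical example: 100k index entries as individual objects vs.
    # one packed string. Entry is an illustrative stand-in, not a real
    # RubyGems type.
    Entry = Struct.new(:name, :version)

    entries = Array.new(100_000) { |i| Entry.new("gem#{i}", "1.0.#{i}") }
    packed  = Array.new(100_000) { |i| "gem#{i} 1.0.#{i}" }.join("\n")

    # memsize_of_all(Entry) counts only the struct shells (not the strings
    # they point at); memsize_of(packed) is the entire flat blob.
    puts ObjectSpace.memsize_of_all(Entry)
    puts ObjectSpace.memsize_of(packed)

    # Keep `entries` referenced so GC doesn't reclaim it before measurement.
    entries.size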

What I would do to really squeeze the rest out in pure Ruby (bear in mind I've been away about a decade, so there _might be_ new bits but nothing meaningful as far as I know):

- Use a cheaper-to-parse index format (the gists I wrote years ago cover this: https://gist.github.com/raggi/4957402)
- Use threads for the initial archive downloads (this is just IO, and you want to reuse some caches like the index)
- Use a few forks for the unpacking and post-install steps (because these have unpredictable concurrency behaviors)
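A minimal sketch of that split, with hypothetical gem names and a placeholder URL (none of this is real Bundler/RubyGems code): threads handle the IO-bound downloads, a few forked workers handle the unpack/post-install work.

    require "net/http"
    require "uri"

    gems = %w[rack rake json]  # hypothetical gem names

    # IO-bound: plain threads are enough, the GVL is released during IO.
    downloads = gems.map { |name|
      Thread.new {
        uri = URI("https://rubygems.org/gems/#{name}")  # placeholder URL
        [name, Net::HTTP.get(uri)]
      }
    }.map(&:value)

    # CPU-bound / unpredictable (native extensions, shelling out): use a
    # few forked workers instead of threads.
    downloads.each_slice(2) do |batch|
      fork do
        batch.each do |name, data|
          File.binwrite("#{name}.download", data)  # stand-in for unpack + post-install
        end
      end
    end
    Process.waitall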

> there _might be_ new bits but nothing meaningful as far as I know

If you didn't need backwards compatibility with older Rubies, you could use Ractors in lieu of forks, skip the IPC between processes, and have cleaner communication channels. I can peg all the cores on my machine with a simple Ractor pool doing simple computation, which feels like a miracle as a Ruby old head. Bundler could get away with creating its own Ractor-safe installer pool, which would be cool, as it'd be the first large-scale use of Ractors that I know of.
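A rough sketch of what such a pool could look like (Ruby 3.x Ractors; the per-job work here is stand-in computation, not actual install logic):

    # Four worker Ractors, each looping on Ractor.receive; the main Ractor
    # sends jobs and collects results via Ractor.select. The CPU work runs
    # in parallel across cores, unlike plain threads under the GVL.
    workers = 4.times.map do
      Ractor.new do
        loop do
          job = Ractor.receive
          break if job == :done
          Ractor.yield([job, (1..500_000).sum])  # stand-in for install work
        end
      end
    end

    jobs = %w[rack rake json minitest]  # hypothetical gem names
    jobs.each_with_index { |job, i| workers[i % workers.size].send(job) }

    results = jobs.size.times.map do
      _worker, value = Ractor.select(*workers)
      value
    end
    workers.each { |w| w.send(:done) }
    p results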


It’s definitely possible; I wrote a prototype many, many years ago: https://ra66i.org/tmp/how_gems_should_be.mov

I don't want the observability of my applications to be bound to the applications themselves; it's kind of a real pain. I'm all for microvm images without excess dependencies, but coupling the kernel and diagnostic tools to rapidly developing application code can be a real nightmare as soon as the sun stops shining.

I found it very opaquely worded the whole way through. I think the work being presented is simply an implementation of the technique in eduos, but short of going and reading the paper I don’t know.

We are doing this, and it’s terrible. Having done both at scale, this one is worse.

For 20 years I have been self-hosting a product on Postgres that serves GIS applications, and it has been upgraded through all of the various versions during that time. It has a near-perfect uptime record, modulo two hardware failures and short maintenance periods for final upgrade cutovers. The application has real traffic; the database is bigger than those at my day job.

It may not matter for clouds with massive margins, but there are substantial opportunities for optimizing wear.

I would think hyperscalers stand to benefit the most from optimizing wear!

We care about wear to the extent we can get the expected 5 years out of SSDs as a capital asset, but below that threshold it doesn't really matter to us.

I think this is a fair take. Local security for Linux is nightmare fuel already, we don't need more.


Do you have actual evidence of this? What ASN operates this way?

