
Well shit. I think you're right.


Oh another thing, I'm not a fan of the premise:

"As a general-purpose memory allocator, though, we can't get away with having no free implementation."

I believe the future of software is short-lived programs that never free memory. Programs allocate and terminate. Short-lived programs communicate with each other via blocking CSP-style channels (see Reppy's Concurrent Programming in ML).

If you could also educate me on why this is a bad idea, I would appreciate it.


My first large-scale web application was a webmail service built in C++ (!) where I decided early on that we'd ditch nearly all freeing of memory, since it ran as a CGI and it was much faster to just let the OS free memory on termination. The exception was any particularly large buffers. Coupled with statically linking it, this reduced the overhead enough that running it as a CGI performed well enough to save us the massive pain of guaranteeing sufficient isolation and ensuring we were free of memory leaks.

Especially in a request/reply style environment, long-running application servers are largely a workaround for high startup costs, and it's only a "bad idea" in the cases where removing that high startup cost is too difficult to be practical. Overall I love avoiding long-running programs.


I agree with your point but disagree with your reasoning. I think programs should always free memory at some point, because that makes memory leaks easier to reason about and debug.

Practically speaking, though, there are arena allocators that do exactly this: you allocate a bunch of memory at once, assign like-typed instances to "slots" in that memory region, and then deallocate everything all at once. Thus, the individual per-instance `free()` is a no-op.
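To make that concrete, here's a rough sketch of the pattern (a bump-style arena; all the names are made up rather than taken from any particular library):

    #include <stdlib.h>
    #include <stddef.h>

    /* Hypothetical minimal arena: one big block, a bump pointer,
       and a single deallocation point for everything at once. */
    typedef struct {
        char  *base;
        size_t used;
        size_t cap;
    } arena_t;

    static int arena_init(arena_t *a, size_t cap) {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = cap;
        return a->base != NULL;
    }

    static void *arena_alloc(arena_t *a, size_t n) {
        n = (n + 15) & ~(size_t)15;              /* keep allocations 16-byte aligned */
        if (a->used + n > a->cap) return NULL;   /* out of arena space */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* Per-instance "free" is a no-op; the whole region goes at once. */
    static void arena_free(arena_t *a, void *p) { (void)a; (void)p; }

    static void arena_destroy(arena_t *a) { free(a->base); a->base = NULL; }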


It's not general purpose, and lots of programs that were designed to be short-lived end up not being so in the future. People used to point to compilers as a typical example of this kind of thing; well, now we have compilers as libraries sitting resident in every popular developer tool.


With a simple enough allocator, freeing things could even be beneficial for short-lived programs, purely from the freed memory already being in cache, instead of needing to ask the OS for more (including page faulting and zeroing, besides being limited by RAM throughput). For a buddy allocator without coalescing, a free() that just pushes its argument onto the corresponding freelist can be as simple as five or so x86-64 instructions (the fast path of the allocator being around eight instructions; certainly more than a bump allocator, but not by much, and the reuse benefits can easily be pretty big).
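Something like this is the free fast path I have in mind, as a sketch (per-size-class freelists threaded through the freed blocks themselves, no coalescing; made up for illustration, not lifted from any real allocator):

    #include <stddef.h>

    /* Hypothetical size-class allocator state: one singly linked
       freelist per class, stored inside the freed blocks themselves. */
    typedef struct free_node { struct free_node *next; } free_node;

    #define NUM_CLASSES 32
    static free_node *freelists[NUM_CLASSES];

    /* free() without coalescing: push the block onto its class's list.
       This compiles down to a handful of instructions. */
    static void fast_free(void *p, int size_class) {
        free_node *n = p;
        n->next = freelists[size_class];
        freelists[size_class] = n;
    }

    /* Matching allocation fast path: pop the head if the list is non-empty;
       a NULL result means fall back to the slow path for more memory. */
    static void *fast_alloc(int size_class) {
        free_node *n = freelists[size_class];
        if (n) freelists[size_class] = n->next;
        return n;
    }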


It's funny, I saw and retweeted this while writing this post: https://twitter.com/samwhoo/status/1650572915770036225?s=20

Not sure the future you describe is where we'll end up; I haven't given it a huge amount of thought. Would be interesting to see, though.

Things like web servers could probably get away with doing some sort of arena allocation per request (I'd be surprised if some don't already do this).


Apache does this! And I do this in my own C web framework:

https://github.com/williamcotton/express-c/blob/master/deps/...
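The per-request pool pattern looks roughly like this (a generic sketch in the spirit of Apache's apr_pool_t; the request handler and pool helpers here are made up, not the actual APR or express-c API):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical per-request pool: everything allocated while handling
       a request comes out of one region, torn down after the response. */
    typedef struct { char *buf; size_t used, cap; } pool_t;

    static void *pool_alloc(pool_t *p, size_t n) {
        n = (n + 15) & ~(size_t)15;
        if (p->used + n > p->cap) return NULL;
        void *out = p->buf + p->used;
        p->used += n;
        return out;
    }

    static void handle_request(const char *path) {
        pool_t pool = { malloc(1 << 20), 0, 1 << 20 };   /* 1 MiB per request */

        /* Allocate freely during the request; no individual frees needed. */
        char *copy = pool_alloc(&pool, strlen(path) + 1);
        if (copy) strcpy(copy, path);
        /* ... build and send the response using pool_alloc() ... */

        free(pool.buf);   /* one deallocation drops the whole request's memory */
    }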


This would probably be something closer to actors (https://en.wikipedia.org/wiki/Actor_model) than to programs, since programs are traditionally implemented as OS processes, which are relatively expensive to spin up and terminate. At some level, though, somebody has to deal with freeing the memory, and they may do it less efficiently than you can.


My take on this is that code should always match up malloc and free, but your application may use an allocator where free is a no-op, if that's appropriate for the application you're writing. This way your code is more generic and can be reused in another application with different constraints.

And as soon as you are replacing free, you can replace malloc as well to be optimized for your use case. No need to build difficult bookkeeping hierarchies when they will never get used.
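As a sketch of what that could look like in C (a hypothetical pluggable-allocator interface, not from any real library):

    #include <stdlib.h>

    /* Hypothetical allocator interface: code always pairs alloc with free,
       but the application decides what those calls actually do. */
    typedef struct allocator {
        void *(*alloc)(struct allocator *self, size_t n);
        void  (*free)(struct allocator *self, void *p);
    } allocator;

    /* General-purpose policy: forward to malloc/free. */
    static void *std_alloc(allocator *a, size_t n) { (void)a; return malloc(n); }
    static void  std_free(allocator *a, void *p)   { (void)a; free(p); }

    /* "Never free, let process exit clean up" policy: free is a deliberate no-op. */
    static void  noop_free(allocator *a, void *p)  { (void)a; (void)p; }

    /* Library code stays generic: it matches every alloc with a free
       and works unchanged under either policy. */
    static void do_work(allocator *a) {
        void *buf = a->alloc(a, 4096);
        /* ... use buf ... */
        a->free(a, buf);
    }

    int main(void) {
        allocator general = { std_alloc, std_free  };
        allocator oneshot = { std_alloc, noop_free };
        do_work(&general);
        do_work(&oneshot);
        return 0;
    }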


I suspect this would cause the same performance issues as a long-running application that constantly malloc'd and free'd memory? Many application runtimes allocate but don't free memory back to the OS (they just reuse it internally) for this reason. For example, in a Ruby application you'll see memory usage climb after boot and eventually level off once it has all it will need for its lifetime, but never go down.


Even if the programs don't free memory, something has to allocate and free memory for the programs and channels.


So basically garbage collection via just terminating and letting the OS handle it?


Yes, this is what Ur/Web does, albeit limited to web server requests. I'd argue all programs could be short-lived, and memory management becomes a matter of sizing a program's scope/role to the amount of memory you can greedily consume. Certainly many sorting programs (for example) can leak until they terminate. Then it's just cheap instantiation and communication between programs.


That's no different than writing a traditional program and using big arena allocators for everything instead of individual allocations, except it's more complicated for no apparent reason.


Other than that, I learned a lot, and the basics are much simpler than I expected. Thank you for the article.



