
A thing I've experienced is people buying a bigger fridge, and then just leaving it when they move out because they're moving to a place with a fridge that is fine (for example, moving in with someone who has a nice fridge). Everyone walks away from that basically fine.

Especially the landlord, who got a free fridge upgrade.

One thing I think worth considering for systems languages on this point: if you don't want to solve every expressiveness issue downstream of Result/Option/etc from the outset, look at Swift, which has nullable types.

MyObject can't be null. MyObject? can be null. Handling nullability as a special thing might help with the billion-dollar mistake without generating pressure to have a fully fleshed out ADT solution and everything downstream of that.

To people who would dismiss ADTs as a hard problem in terms of ergonomics: Rust makes it less miserable thanks to things like the question-mark shorthand and a bazillion trait methods. Languages like Haskell solve it with monads + do syntax + operator overloading galore. Languages like Scala _don't_ solve it for Result/Option in any fun way and thus are miserable on this point, IMHO.
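For a concrete sense of what that shorthand buys you, here's a minimal Rust sketch (the lookup functions are made up purely for illustration):

  // Hypothetical lookups that return Option instead of a nullable pointer.
  fn find_user(id: u32) -> Option<String> {
      if id == 1 { Some("alice".to_string()) } else { None }
  }

  fn first_initial(id: u32) -> Option<char> {
      // `?` returns None early if the lookup came up empty,
      // so the happy path reads almost like code with no nullability at all.
      let name = find_user(id)?;
      name.chars().next()
  }

  fn main() {
      assert_eq!(first_initial(1), Some('a'));
      assert_eq!(first_initial(2), None);
  }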


I like to think about how many problems a feature solves to judge whether it's "worth it". I believe sum types solve enough different problems that they're worth it, whereas nullability solves only one problem (the C-style or Java-style null object). Sum types can cover that with Option<T>, and also provide error handling with Result<T, Err> and control flow with ControlFlow<Continue, Break>, among others, so that's already a better deal.
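As a rough Rust sketch of that "one mechanism, several problems" point (the toy functions here are invented for illustration):

  use std::ops::ControlFlow;

  // One mechanism (enums / sum types), three different jobs:
  fn parse_digit(c: char) -> Option<u32> {           // absence
      c.to_digit(10)
  }

  fn parse_all(s: &str) -> Result<Vec<u32>, char> {  // error handling
      s.chars()
          .map(|c| parse_digit(c).ok_or(c))
          .collect()
  }

  fn first_non_digit(s: &str) -> ControlFlow<char> { // early-exit control flow
      for c in s.chars() {
          if !c.is_ascii_digit() {
              return ControlFlow::Break(c);
          }
      }
      ControlFlow::Continue(())
  }

  fn main() {
      assert_eq!(parse_all("123"), Ok(vec![1, 2, 3]));
      assert_eq!(parse_all("1x3"), Err('x'));
      assert_eq!(first_non_digit("12a"), ControlFlow::Break('a'));
  }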

Nullability is a good retro-fit, like Java's type erased generics, or the DSL technology to cram a reasonable short-distance network protocol onto the existing copper lines for telephones. But in the same way that you probably wouldn't start with type erased generics, or build a new city with copper telephone cables, nullability isn't worth it for a new language IMO.


I'm an advocate for "both".

- `Option<T>` and `Result<T,E>` at core;

- `?T` and `T!E` as type declaration syntax that desugars to them;

- and `.?` and `.!` operators so chains like `foo()?.bar()!.baz()` can be written and all the relevant possible return branches are inserted without a fuss.

Having `Option` and `Result` be simply normal types (and not special-casing "nullable") has benefits that are... obvious, I'd say. They're just _normal_. Not being special cases is great. Then, having syntactic sugar to make the very, _very_ common cases easy to describe is just a huge win that makes correct typing more accessible to many more people by simply minimizing keystrokes.

The type declaration sugar is perhaps merely nice to have, but I think it really does change the way the average programmer is willing to write. The chaining operators, though... I would say I borderline can't live without those, anymore.

Chaining operators can change the SLOC count of some functions by as much as... say, 75%, if we consider a language like Go with its infamous `if err != nil` clause that is mandated to spread across three lines.
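To make that comparison concrete, here's roughly what a single `?` stands in for in Rust (the expansion shown is approximate, and the fallible steps are made up):

  // Hypothetical fallible steps.
  fn step_a() -> Result<u32, String> { Ok(1) }
  fn step_b(x: u32) -> Result<u32, String> { Ok(x + 1) }

  // With the shorthand: one line per step.
  fn chained() -> Result<u32, String> {
      let x = step_a()?;
      step_b(x)
  }

  // Roughly what each `?` expands to: the Go-style three lines per call.
  fn expanded() -> Result<u32, String> {
      let x = match step_a() {
          Ok(v) => v,
          Err(e) => return Err(e),
      };
      step_b(x)
  }

  fn main() {
      assert_eq!(chained(), Ok(2));
      assert_eq!(chained(), expanded());
  }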


Erased generics give parametricity, which most PL people think is fairly important. See https://en.wikipedia.org/wiki/Parametricity or https://www.cl.cam.ac.uk/teaching/1617/L28/parametricity.pdf

I mean, yeah, type erasure does give parametricity, but you can instead design your language so that you monomorphize but insist on parametricity anyway. If you write stable Rust, your implementations get monomorphized but you aren't allowed to specialize them - the stable language doesn't provide a way to write two distinct versions of a polymorphic function.

And if you only regard parametricity as valuable rather than essential then you can choose to relax that and say OK, you're allowed to specialize but if you do then you're no longer parametric and the resulting lovely consequences go away, leaving it to the programmers to decide whether parametricity is worth it here.
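A small Rust example of that distinction, in case it helps (the function here is invented for illustration): the compiler emits separate code per concrete type, but the source stays parametric because the body can only use what the bound provides.

  // Monomorphized: separate machine code is generated for u8 and for String.
  // Parametric anyway: the body can only do what `Clone` allows, and stable
  // Rust gives no way to write a second, type-specific body for this function.
  fn first_or<T: Clone>(items: &[T], fallback: T) -> T {
      items.first().cloned().unwrap_or(fallback)
  }

  fn main() {
      assert_eq!(first_or(&[1u8, 2], 0), 1);
      assert_eq!(first_or::<String>(&[], "empty".into()), "empty");
  }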


I don't understand your first paragraph. Monomorphization and parametricity are not in conflict; the compiler has access to information that the language may hide from the programmer. As an existence proof, MLTon monomorphizes arrays while Standard ML is very definitely parametric: http://www.mlton.org/Features

I agree that maintaining parametricity or not is a design decision. However, recent languages that break it (e.g. Zig) don't seem to understand what they're doing in this regard. At least I've never seen a design justification for this, but I have seen criticism of their approach. Given that type classes and their ilk (implicit parameters; modular implicits) give the benefits of ad-hoc polymorphism while maintaining parametricity, and are well established enough that Java is considering adding them (https://www.youtube.com/watch?v=Gz7Or9C0TpM), I don't see any compelling reason to drop parametricity.


My point was that you don't need to erase types to get parametricity. It may be that my terminology is faulty, and that in fact what Rust is doing here does constitute "erasing" the types; in that case, what describes the distinction between, say, a Rust function which is polymorphic over a function to be invoked, and a Rust function which merely takes a function pointer as a parameter and then invokes it? I would say the latter is type erased.

The Scala solution is the same as Haskell. for comprehensions are the same thing as do notation. The future is probably effect systems, so writing direct style code instead of using monads.

It's interesting that effect system-ish ideas are in Zig and Odin as well. Odin has "context". There was a blog post saying it's basically for passing around a memory allocator (IIRC), which I think is a failure of imagination. Zig's new IO model is essentially pass around the IO implementation. Both capture some of the core ideas of effect systems, without the type system work that make effect systems extensible and more pleasant to use.
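A rough sketch of that "pass the capability around as a value" idea, written in Rust rather than Zig or Odin (so the specific shape here is mine, not either language's actual API):

  use std::io::Write;

  // The "effect" (output, here) is an ordinary parameter instead of ambient
  // global state, so callers decide whether it's real stdout, a test buffer, etc.
  fn greet(out: &mut dyn Write, name: &str) -> std::io::Result<()> {
      writeln!(out, "hello, {name}")
  }

  fn main() -> std::io::Result<()> {
      // Production: real stdout.
      greet(&mut std::io::stdout(), "world")?;

      // Test-ish: capture into a buffer instead.
      let mut buf: Vec<u8> = Vec::new();
      greet(&mut buf, "test")?;
      assert_eq!(buf, b"hello, test\n".to_vec());
      Ok(())
  }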


> for comprehensions are the same thing as do notation

The ergonomics of Scala for comprehensions are, in my mind, needlessly gnarly and unpleasant to use, despite the semantics being the same.


I personally don't enjoy the MyObject? typing, because it leads to edge cases where you'd like to have MyObject??, but it's indistinguishable from MyObject?.

E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.

It's still obviously way better than having all object types include the null value.
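For contrast, a small Rust sketch of how a proper Option type keeps those two outcomes apart (using a plain Vec and iterator find):

  fn main() {
      let items: Vec<Option<u32>> = vec![Some(1), None, Some(3)];

      // find() returns Option<&Option<u32>>, so "found a None element"
      // and "found nothing" remain different values.
      let found_none = items.iter().find(|x| x.is_none());
      let found_nothing = items.iter().find(|_| false);

      assert_eq!(found_none, Some(&None));
      assert_eq!(found_nothing, None);
  }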


When you want to distinguish `MyObj??`, you'll have to distinguish the optionality of one piece of code (wherever your `MyObj?` in the list came from) from the other (the list find) before "mixing" them, e.g. by first mapping `MyObj?` to `MyObj | NotFoundInMyMap` (or similar polymorphic variant/anonymous sum types) and then putting it in a list. This could be easily optimized away or be a safe no-op cast.

Common sum types allow you to get around this, because they always do this "mapping" intrinsically by their structure/constructors when you use `Either/Maybe/Option` instead of `|`. However, it still doesn't always allow you to distinguish after "mixing" various optionalities - if find for Maps, Lists, etc all return `Option<MyObj>` and you have a bunch of them, you also don't know which of those it came from. This is often what one wants, but if you don't, you will still have to map to another sum type like above. In addition, when you don't care about null/not found, you'll have the dual problem and you will need to flatten nested sum types as the List find would return `Option<Option<MyObj>>` - `flatten`/`flat_map`/similar need to be used regularly and aren't necessary with anonymous sum types that do this implicitly.

Both communicate similar but slightly different intent in the types of an API. Anonymous sum types are great for errors for example to avoid global definitions of all error cases, precisely specify which can happen for a function and accumulate multiple cases without wrapping/mapping/reordering. Sadly, most programming languages do not support both.


> E.g. if you have a list finding function that returns X?, then if you give it a list of MyObject?, you don't know if you found a null element or if you found nothing.

This is a problem with the signature of the function in the first place. If it's:

  template <typename T>
  T* FindObject(ListType<T> items, std::function<bool(const T&)> predicate)
Whether T is MyObject or MyObject?, you're still using null pointers as a sentinel value:

  MyObject* Result = FindObject(items, predicate);
The solution is for FindObject to return a result type:

  template <typename T>
  Result<T&> FindObject(ListType<T> items, std::function<bool(const T&)> predicate)
where the _result_ is responsible for wrapping the return value. Making this not copy is a more advanced exercise that borders on impossible (safely) in C++, but Rust and newer languages have no excuse for it.
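For what it's worth, a minimal sketch of that non-copying version in Rust, where returning a reference is routine (the function name mirrors the C++ above but is otherwise made up):

  // Returns a reference into the list, so nothing is copied and "not found"
  // is an explicit variant rather than a sentinel null pointer.
  fn find_object<'a, T>(items: &'a [T], predicate: impl Fn(&T) -> bool) -> Option<&'a T> {
      items.iter().find(|&item| predicate(item))
  }

  fn main() {
      let items = vec![String::from("a"), String::from("bb")];
      let long = find_object(&items, |s| s.len() > 1);
      assert_eq!(long, Some(&items[1]));
      assert_eq!(find_object(&items, |s| s.is_empty()), None);
  }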

Different language, but I find this Kotlin RFC proposing union types has a nice canonical example (https://youtrack.jetbrains.com/projects/KT/issues/KT-68296/U...)

    inline fun <T> Sequence<T>.last(predicate: (T) -> Boolean): T {
        var last: T? = null
        var found = false
        for (element in this) {
            if (predicate(element)) {
                last = element
                found = true
            }
        }
        if (!found) throw NoSuchElementException("Sequence contains no element matching the predicate.")
        @Suppress("UNCHECKED_CAST")
        return last as T
    }
A proper option type like Swift's or Rust's cleans up this function nicely.
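As a rough illustration of that claim, here's approximately the same function in Rust returning an Option instead of throwing (a sketch, not the standard library's actual implementation):

  // "Last matching element" with an Option return: no sentinel null,
  // no `found` flag, no unchecked cast.
  fn last_matching<T>(items: impl IntoIterator<Item = T>, predicate: impl Fn(&T) -> bool) -> Option<T> {
      let mut last = None;
      for element in items {
          if predicate(&element) {
              last = Some(element);
          }
      }
      last
  }

  fn main() {
      assert_eq!(last_matching(vec![1, 2, 3, 4], |n| *n % 2 == 0), Some(4));
      assert_eq!(last_matching(Vec::<i32>::new(), |_| true), None);
  }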

Your example produces very distinguishable results: e.g. if Array.first finds a nil value it returns Optional<Type?>.some(.none), and if it doesn't find any value it returns Optional<Type?>.none.

The two are not equal, and only the second one evaluates to true when compared to a naked nil.


What language is this? I'd expect a language with a ?-type would not use an Optional type at all.

In languages such as OCaml, Haskell and Rust this of course works as you say.


This is Swift, where Type? is syntax sugar for Optional<Type>. Swift's Optional is a standard sum type, with a lot of syntax sugar and compiler niceties to make common cases easier and nicer to work with.

Right, so it's not like a union type Type | Null. Then naturally it works the same way as in the languages I listed.

Well, in a language with nullable reference types, you could use something like

  fn find<T>(self: List<T>) -> (T, bool)
to express what you want.

But exactly like Go's error handling via a (fake) unnamed tuple, it's very much error-prone (and the return value might contain absurd values like `(someInstanceOfT, false)`). So yeah, I also prefer a language w/ ADTs which solves it via a sum type rather than being stuck with product types forever.


How does this work if it is given an empty list as a parameter?

I guess if one is always able to construct default values of T then this is not a problem.


> I guess if one is always able to construct default values of T then this is not a problem.

this is how Go handles it:

  func do_thing(val string) (string, error)
is expected to return `"", errors.New("invalid state")` which... sucks for performance and for actually coding.

I like go’s approach on having default value, which for struct is nil. I don’t think I’ve ever cared between null result and no result, as they’re semantically the same thing (what I’m looking for doesn’t exist)

In Go, the default (zero) value for a struct is an empty struct.

Eh, it’s not uncommon to need this distinction. The Go convention is to return (res *MyStruct, ok bool).

An Option type is a cleaner representation.


You don't even need to end the file in `.go` or the like when using shebangs, and any self-respecting editor will be good at parsing out shebangs to identify file types (... well, Emacs seems to do it well enough for me)

no need to name your program foo.go when you could just name it foo


The `go run` tool will not execute (or even recognize) a file that does not end in .go, so this is not good advice.

Unfortunate! Yet another little Go design decision that makes things worse for, in my opinion, no reason.

I have the same sound issues with a lot of stuff. My current theory at this point is that TVs have gotten bigger and we're further away from them, but speakers have stayed kinda shitty... while things are being mixed by people using headphones or otherwise good sound equipment.

It's very funny how, when watching a movie on my MacBook Pro, it's better for me to just use HDMI to send the video to my TV but keep using the MBP speakers for the audio, since they're just much better.


If anything I'd say speakers have only gotten shittier as screens have thinned out. And it used to be fairly common for people to have dedicated speakers, but not anymore.

Just anecdotally, I can tell speaker tech has progressed slowly. Stepping into a car from 20 years ago, it sounds... pretty good, actually.


I agree that speaker tech has progressed slowly, but cars from 20 years ago? Most car audio systems from every era have sounded kinda mediocre at best.

IMO, half the issue with audio is that stereo systems used to be a kind of status symbol, and you used to see more tower speakers or big cabinets at friends' houses. We had good speakers 20 years ago and good speakers today, but sound bars aren't good.


On the other side, I needed to make some compromises with my life partner, and we ended up buying a pair of HomePod minis (because stereo was a hard line for me).

They do sound pretty much OK for very discreet objects compared to tower speakers. I only occasionally rant when the sound skips a beat because of WiFi or other smart-assery. (NB: of course I never ever activated the smart assistant; I use them purely as speakers.)


A high end amp+speaker system from 50 years ago will still sound good. The tradeoffs back then were size, price, and power consumption. Same as now.

Lower spec speakers have become good enough, and DSP has improved to the point that tiny speakers can now output mediocre/acceptable sound. The effect of this is that the midrange market is kind of gone, replaced with neat but still worse products such as soundbars (for AV use) or even portable speakers instead of hi-fi systems.

On the high end, I think amplified multi-way speakers with active crossovers are much more common now thanks to advances in Class-D amplifiers.


I feel like an Apple TV plus 2 HomePod minis works well enough for 90% of people's viewing situations, and an Apple TV plus 2 HomePods for 98% of situations. That would cost $330 to $750 plus tax and less than 5 minutes of setup/research time.

The time and money cost of going further than that is not going to provide a sufficient return on investment except to a very small proportion of people.


Speakers haven't gotten a lot cheaper either. Almost every other kind of technology has fallen in price a lot. A good (single) speaker, though, costs a few hundred euros, which is about what it has always cost. You'd think that the scale of manufacturing (good) speakers would bring costs down, but apparently this hasn't happened, for whatever reason.

Sure, but it's the job of whoever is mastering the audio to take such constraints into account.

Bass is the only thing that counts.

Doesn't matter if it makes vocals part of the background at all times.


True. I can tell when my neighbor is watching an action film, due to the rumbling every few minutes. And silence in between.

I have a relatively high end speaker setup (Focal Chora bookshelves and a Rotel stereo receiver all connected to the PC and AppleTV via optical cable) and I suffer from the muffled dialogue situation. I end up with subtitles, and I thought I was going deaf.

I strongly recommend you try adding a center channel to your viewing setup, also a subwoofer if you have the space. I had issues with clarity until I did that.

I legit would recommend you try the "macbook pro speaker" test if you have one... it was really night and day for me

It is a well known issue: https://zvox.com/blogs/news/why-can-t-i-hear-dialogue-on-tv-...

I can't find the source anymore, but I think I saw that it was even a kind of small conspiracy in TV streaming, so that you set your speakers louder and then, when the advertisement break arrives, you hear the ads louder than your movie.

Officially it is just that they switch to a better encoding for ads (like MPEG-2 to MPEG-4 for DVB), but unofficially it's for the money, as always...


I feel like the Occam's Razor explanation would be that the way TVs are advertised makes it really easy to understand picture quality and far less so to understand audio. In stores, they'll be next to a bunch of others playing the same thing, such that really only visual differences will stand out. The specs that will stand out online will be things like the resolution, brightness, color accuracy, etc.

I have a dedicated multi-channel speaker system and still have the problem

I think the issue is dynamic range rather than a minor conspiracy.

Film makers want to preserve dynamic range so they can render sounds both subtle and with a lot of punch, preserving detail, whereas ads just want to be heard as much as possible.

Ads will compress sound so it sounds uniform, colorless and as clear and loud as possible for a given volume.


Even leaving ads out, the dynamic range is really an issue for anyone not living alone in a house with no other homes very close.

> I don't find the source anymore but I think that I saw that it was even a kind of small conspiracy on tv streaming so that you set your speakers louder and then the advertisement time arrive you will hear them louder than your movie.

It's not just that. It's the obsession with "cinematic" mixing, where dialogue is not only quieter than it could be (so that explosions and other effects can be much louder than it), but also not far enough above the background effects.

This all works in a cinema, where you have good-quality speakers playing much louder than most people have at home.

But at home you just end up with muddled dialogue that's too quiet.


> And a few days ago a security vulnerability was found in the Rust Linux kernel code.

was it a security vulnerability? I'm pretty sure it was "just" a crash. Though maybe someone smarter than me could have turned that into something more.

I have no dog in this race. I really like the idea of Rust drivers but can very much understand reticence about having Rust handle more core parts of the kernel, just because Rust's value seems to pay off way more in higher-level code where you have these invariants to maintain across large code paths (meanwhile, writing a bunch of doubly linked lists in unsafe Rust seems a bit like busywork, modulo the niceties Rust itself can give you).


> was it a security vulnerability? I'm pretty sure it was "just" a crash.

It's a race condition resulting in memory corruption.[1][2] That corruption is shown to result in a crash. I don't think the implication is that it can result only in crashes, but this is not mentioned in the CVE.

Whether it is a vulnerability that an attacker can crash a system depends on your security model, I guess. In general it is not expected to happen and it stops other software from running, and can be controlled by entities or software who should not have that level of control, so it's considered a vulnerability.

[1] https://www.cve.org/CVERecord/?id=CVE-2025-68260 [2] https://lore.kernel.org/linux-cve-announce/2025121614-CVE-20...


It is entertaining to observe how - after the bullshit and propaganda phase - Rust now slowly enters reality, and the excuses for problems that did not magically disappear are now exactly the same as what we saw before from C programmers, excuses which Rust proponents would have completely dismissed as unacceptable in the past ("this CVE is not exploitable", "all programmers make mistakes", "unwrap should never be used in production", "this really is an example of how fantastic Rust is").


You have a wild amount of confirmation bias going on here, though.

Of course, this bug was in an `unsafe` block, which is exactly what you would expect given Rust's promises.

The promise of Rust was never that it is magical. The promise is that it is significantly easier to manage these types of problems.


There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs." Of course, not everybody made such claims, but some did.

Whether it is "significantly easier" to manage these types of problems and at what cost remains to be seen.

I do not understand your comment about "confirmation bias", as I did not make a quantitative prediction that could have bias.


> There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs."

Safe Rust does do this. Dropping into unsafe Rust is the prerogative of the programmer who wants to take on the burden of preventing bugs themselves. Part of the technique of Rust programming is minimising the unsafe part so memory errors are eliminated as much as possible.

If the kernel could be written in 100% safe Rust, then any memory error would be a compiler bug.
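A toy illustration of that "contain the unsafe part" technique (nothing to do with the kernel code in question, just the general shape):

  // The `unsafe` is confined to one audited spot, and the public function
  // upholds the invariant (the index is bounds-checked) that makes it sound.
  fn get_or_default(values: &[u32], index: usize) -> u32 {
      if index < values.len() {
          // SAFETY: `index < values.len()` was just checked above.
          unsafe { *values.get_unchecked(index) }
      } else {
          0
      }
  }

  fn main() {
      let v = [10, 20, 30];
      assert_eq!(get_or_default(&v, 1), 20);
      assert_eq!(get_or_default(&v, 99), 0);
  }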


Yes, but this is the marketing bullshit I am calling out. "Safe Rust" != "Rust" and it is not "Safe Rust" which is competing with C it is "Rust".


> it is not "Safe Rust" which is competing with C it is "Rust".

It is intended that Safe Rust be the main competitor to C. You are not meant to write your whole program in unsafe Rust using raw pointers - that would indicate a significant failure of Rust’s expressive power.

It's true that many Rust programs involve some element of unsafe Rust, but that unsafety is meant to be contained and abstracted, not pervasive throughout the program. That's a significant difference from how C's unsafety works.


But there are more than 2000 uses of "unsafe" even in the tiny amount of Rust use in the Linux kernel. And you would need to compare to C code where an equal amount of effort was put into developing safe abstractions. So essentially this is part of the fallacy Rust marketing exploits: comparing an idealized "Safe Rust" scenario to real-world, resource-constrained usage of C by overworked maintainers.


The C code comparison exists because people have written DRM drivers in Rust that were of exceedingly high quality and safety compared to the C equivalents.


This is just so obtuse. Be serious.

Even if you somehow manage to ignore the very obvious theoretical argument for why it works, the amount of quantitative evidence at this point is staggering: Rust, unsafe warts and all, substantially improves the ability of any competent team to deliver working software. By a huge margin.

This is the programming equivalent of vaccine denialism.


There is a lot of science showing vaccines work. For Rust, the science showing that it is better is still lacking. And no, Google's blog posts are not science.


So kernel devs claiming Rust works isn't good enough? CloudFlare? Mozilla? You're raising the bar to a place where no software will be good enough for you.


Safe Rust absolutely eliminates entire categories of bugs


> Of course, this bug was in an `unsafe` block, which is exactly what you would expect given Rust's promises.

The fix was outside of any Rust unsafe blocks. Which confused a lot of Rust developers on Reddit and elsewhere. Since fans of Rust have often repeated that only unsafe blocks have to be checked. Despite the Rustonomicon clearly spelling out that much more than the unsafe blocks might need to be checked in order to avoid UB.


The unsafe code relied on an assumption that was not true; the chosen fix was to make that assumption be true. Makes perfect sense to me.


Rust fanboys on Reddit are not contributing to the Linux kernel. What matters here is that Rust helps serious people deliver great code.


Is it any more or less amusing, or perhaps tedious, watching the first Rust Linux kernel CVE be pounced on as evidence that "problems .. did not magically disappear"?

Does anyone involved in any of this work believe that a CVE in an unsafe block could not happen?


In case anyone is keen for an explanation of the vulnerability, LowLevelTV has done a video on this:

https://youtu.be/dgPI7NfKCiQ?si=BVBQ0MxuDpsbCvOk

The TLDR is that this race condition happened with unsafe code, which was needed to interact with existing C code. This was not a vulnerability with Rust's model.

That said, you can absolutely use bad coding practices in Rust that can cause issues, even for a regular programmer.

Using unwrap without dealing with all return cases is one example. Of course, there is a right way to deal with those return values, but it's up to the programmer to follow it.
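A toy contrast of what that looks like in practice (the config map here is made up):

  use std::collections::HashMap;

  fn main() {
      let config: HashMap<&str, &str> = HashMap::from([("host", "localhost")]);

      // Risky: panics at runtime if the key is missing.
      // let port = config.get("port").unwrap();

      // Handling the missing case explicitly instead:
      let port = match config.get("port") {
          Some(p) => *p,
          None => "8080", // fall back to a default rather than crashing
      };
      assert_eq!(port, "8080");

      // Or, more tersely, with a combinator:
      let port = config.get("port").copied().unwrap_or("8080");
      assert_eq!(port, "8080");
  }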


Thunderbird has succeeded at doing this and is in a somewhat similar spot (though huge asterisk there given the existence of Chrome)


My magical ideal is to be able to do this even without the sender's consent. Let me, as a reviewer, chop up half of someone's PR and then get that sent in, CI'd, approved.


Very interesting historical document, though I don't have that much confidence in the precision of the explanation of the terms.

Related to this: does anyone know if there's any document that delves into how Church landed on Church numerals in particular? I get how they work, etc, but at least the papers I saw from him seem to just drop the definition out of thin air.

Were Church numerals capturing some canonical representation of naturals in logic that was just known in the domain at the time? Are there any notes or the like that provide more insight?


Before Church there was Peano, and before Peano there was Grassmann

> It is rather well-known, through Peano's own acknowledgement, that Peano […] made extensive use of Grassmann's work in his development of the axioms. It is not so well-known that Grassmann had essentially the characterization of the set of all integers, now customary in texts of modern algebra, that it forms an ordered integral domain in which each set of positive elements has a least member. […] [Grassmann's book] was probably the first serious and rather successful attempt to put numbers on a more or less axiomatic basis.


While I don't know much about Church numerals or the theory of how lambda calculus works, glancing at the definitions on Wikipedia, they seem to be the math idea of how numbers work (at the meta level).

I forget the name of this, but they seem to be the equivalent of successors in math. In the low-level math theory you represent numbers as sequences of successors from 0 (or 1, I forget).

Basically you have one, then the successor of one which is two, the successor of two, and so on. So a number n is n successor operations from one.

To me it seems Church numerals replace this successor operation with a function, but it's the same idea.


Church ends up defining zero as the identity function, and N as "apply a function to a zero-unit N times"

While defining numbers in terms of their successors is decently doable, this logical jump (that works super well all things considered!) to making numbers take _both_ the successor _and_ the zero just feels like a great idea, and it's a shame to me that the papers I read from Church don't convey the intuition of how to get there.

After the fact, with all the CS reflexes we have, it might be ... easier to reach this definition if you start off "knowing" you could implement everything using just functions and with some idea of not having access to a zero, but even then I think most people would expect these objects to be some sort of structure rather than a process.

There is, of course, the other possibility which is just that I, personally, lack imagination and am not as smart as Alonzo Church. That's why I want to know the thought process!


> Church ends up defining zero as the identity function

Zero is not the identity function. Zero takes a function and calls it zero times on a second function. The end result of this is that it returns the identity function. In Haskell it would be `const id` instead of `id`.

    zero := λf.λx.x
    one  := λf.λx.fx
    two  := λf.λx.f(fx)

    id   := λx.x
I suspect that this minor misconception may lead you to an answer to your original question!

Why isn't the identity function zero? Given that everything in lambda calculus is a function, and the identity function is the simplest function possible, it would make sense to at least try!

If you try, I suspect you'll quickly find that it starts to break down, particularly when you start trying to treat your numerals as functions (which is, after all, their intended purpose).

Church numerals are a minimal encoding. They are as simple as it possibly gets. This may not speak to Church's exact thought process, but I think it does highlight that there exists a clear process that anyone might follow in order to get Church's results. In other words, I suspect that his discovery was largely mechanical, rather than a moment of particularly deep insight. (And I don't think this detracts from Church's brilliance at all!)
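Spelling out that breakdown in the same notation as above: if you tried zero := id, applying it to f and x would hand back f x, which is the encoding of one, not zero.

    id f x   = (λy.y) f x    = f x    (i.e. one)
    zero f x = (λf.λx.x) f x = x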


Their structural properties are similar to Peano's definition in terms of 0 and successor operation. ChatGPT does a pretty good job of spelling out the formal structural connection¹ but I doubt anyone knows how exactly he came up with the definition other than Church.

¹https://chatgpt.com/share/693f575d-0824-8009-bdca-bf3440a195...


Yeah, I've been meaning to send a request to Princeton's libraries about his notes but don't know what a good request looks like.

The jump from "there is a successor operator" to "numbers take a successor operator" is interesting to me. I wonder if it was the first computer science-y "oh I can use this single thing for two things" moment! Obviously not the first in all of science/math/whatever but it's a very good idea


The idea of Church numerals is quite similar to induction. An induction proof extends a method of treating the zero case and the successor case, to a treatment of all naturals. Or one can see it as defining the naturals as the numbers reachable by this process. The leap to Church numerals is not too big from this.


Probably not possible unless you have academic credentials to back up your request like being a historian writing a book on the history of logic & computability.


I am _not_ a microservices guy (like... at all) but reading this the "monorepo"/"microservices" false dichotomy stands out to me.

I think way too much tooling assumes 1:1 pairings between services and repos (_especially_ CI work). In huge orgs Git/whatever VCS you're using would have problems with everything in one repo, but I do think that there's loads of value in having everything in one spot even if it's all deployed more or less independently.

But so many settings and workflows couple repos together so it's hard to even have a frontend and backend in the same place if both teams manage those differently. So you end up having to mess around with N repos and can't send the one cross-cutting pull request very easily.

I would very much like to see improvements on this front, where one repo could still be split up on the forge side (or the CI side) in interesting ways, so review friction and local dev work friction can go down.

(shorter: github and friends should let me point to a folder and say that this is a different thing, without me having to interact with git submodules. I think this is easier than it used to be _but_)


I worked on building this at $PREV_EMPLOYER. We used a single repo for many services, so that you could run tests on all affected binaries/downstream libraries when a library changed.

We used Bazel to maintain the dependency tree, and then triggered builds based on a custom GitHub Actions hook that would use `bazel query` to find the transitive closure of affected targets. Then, if anything in a directory was affected, we'd trigger the set of tests defined in a config file in that directory (defaulting to :...), each as its own workflow run that would block PR submission. That worked really well, with the only real limiting factor being the ultimate upper limit of a repo in GitHub, but of course it took a fair amount of work (a few SWE-months) to build all the tooling.


We’re in the middle of this right now. Go makes this easier: there’s a go CLI command that you can use to list a package’s dependencies, which can be cross-referenced with recent git changes. (duplicating the dependency graph in another build tool is a non-starter for me) But there are corner cases that we’re currently working through.

This, and if you want build + deploy that’s faster than doing it manually from your dev machine, you pay $$$ for either something like Depot, or a beefy VM to host CI.

A bit more work on those dependency corner cases, along with an auto-sleeping VM, should let us achieve nirvana. But it’s not like we have a lot of spare time on our small team.


Go with Bazel gives you a couple options:

* You can use gazelle to auto-generate Bazel rules across many modules - I think the most up to date usage guide is https://github.com/bazel-contrib/rules_go/blob/master/docs/g....

* In addition, you can make your life a lot easier by just making the whole repo a single Go module. Having done the alternate path - trying to keep go.mod and Bazel build files in sync - I would definitely recommend only one module per repo unless you have a very high pain tolerance or actually need to be able to import pieces of the repo with standard Go tooling.

> a beefy VM to host CI

Unless you really need to self-host, Github Actions or GCP Cloud Build can be set up to reference a shared Bazel cache server, which lets builds be quite snappy since it doesn't have to rebuild any leaves that haven't changed.


I've heard horror stories about Bazel, but a lot of them involve either not getting full buy in from the developer team or not investing in building out Bazel correctly. A few months of developer time upfront does seem like a steep ask.


You're pointing out exactly what bothered me with this post in the first place: "we moved from microservices to a monolith and our problems went away"... ... except the problems had not much to do with the service architecture but all to do with operational mistakes and insufficient tooling: bad CI, bad autoscaling, bad oncall.


The thing is that some section of the right has convinced itself that Calibri is some DEI font. Meanwhile the rest of the world is just living life and having to deal with people getting this worked up about the default font of Microsoft Office since what, 2008?

Parallel universes

