I think it's a bit naive to believe that the original coreutils developers only used what is now in the public test suite. Over that length of development, a lot of people probably tested a lot of things, even if those tests never made it into the official CI suite. If you're doing a rewrite, just writing to the existing tests really isn't enough.




It's not "naive". That is the nature of Open Source software. Everything is in the open.

Especially because many people will not use a pre-compiled binary but compile the software themselves (e.g., Gentoo users). So there must be no 'secret' tests: whoever compiles the software, as long as the dependencies are met, must be able to produce a binary with the exact same behavior.

In fact, since coreutils is Open Source, its test suite is part of the source package. It's in the maintainers' interest to have the software tested against known edge cases, because one day their project will be picked up by "some lone developer in Iowa" who will add new features. If there were 'secret' test cases, the new developer's additions might break things.

This incident is merely an edge case where coreutils happens to produce correct results and uutils does not.
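
One way a rewrite can go beyond the public suite is differential testing: feed the same input to the GNU binary and to the rewrite, and compare outputs byte for byte. A minimal sketch in Rust; the binary paths and the float-step edge case are illustrative assumptions, not details from this incident:

    use std::process::Command;

    // Run a binary with the given args and return its stdout.
    fn run(bin: &str, args: &[&str]) -> Vec<u8> {
        Command::new(bin)
            .args(args)
            .output()
            .unwrap_or_else(|e| panic!("failed to run {bin}: {e}"))
            .stdout
    }

    fn main() {
        // Hypothetical edge case: floating-point stepping in `seq`.
        let args = ["0", "0.1", "1"];
        let gnu = run("/usr/bin/seq", &args);        // assumed GNU path
        let uu = run("./target/release/seq", &args); // assumed uutils build
        assert_eq!(gnu, uu, "outputs diverge for seq {:?}", args);
        println!("outputs match for seq {:?}", args);
    }

Looping this over many generated inputs is how such mismatches are usually found without any 'secret' tests.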


"Secret" tests have existed forever and will continue to exist. That's the nature of software. What gets pushed is only what the developer wants to maintain, not everything they did in the process of constructing and maintaining that software.

In practice "some lone developer in Iowa" will be held to the standard of quality of the original project if they want to add to it or replace it despite the support they get from the public package. Open-source software is also often not open to being pushed by any random person.


While you develop a feature, you do a lot of tests, including white-box tests in the debugger, that would be annoying to automate and might not even survive the next commit. You also "test" by executing the code in your head, modelling all the execution states. Automated tests often only cover single cases, while reasoning across all states is hard to automate. The original developers likely also had a specification next to their editor, referencing all the nitpicks in there and proving in their head that each one was indeed addressed.

These kind of "tests" often are enforced as the codebase evolved, by having old guys e.g. named Torvalds that yell at new guys. They are hard to formalize short of writing a proof.



