On systems with a single floppy, drives A: and B: were two logical drives mapped to the same physical drive. This enabled you to (tediously) copy files from one diskette to another.
I'm pretty sure "jsonl" was the earlier term, but "ndjson" is now the more prominent one. I've been using this approach for years, though: when I first started using Mongo/Elastic for denormalized data, I'd also back up that same data to S3 as .jsonl.gz. Leaps and bounds better than XML, at least.
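For anyone unfamiliar with the format, a minimal sketch of the .jsonl.gz round trip (the file path and records here are made up; any path and JSON-serializable records work the same way):

```python
import gzip
import json
import os
import tempfile

# Hypothetical sample data and a throwaway path for illustration
records = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]
path = os.path.join(tempfile.gettempdir(), "backup.jsonl.gz")

# Write one JSON object per line, gzip-compressed -- that's all .jsonl.gz is
with gzip.open(path, "wt", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read it back: decompress, then parse line by line
with gzip.open(path, "rt", encoding="utf-8") as f:
    restored = [json.loads(line) for line in f]

print(restored == records)  # → True
```

The line-per-record layout is what makes it nicer than XML for this: you can stream it, append to it, and split it with plain Unix tools.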
I went through the LFS process back in 2003 or 2004. I didn't have a Linux system handy, so I built it with Fedora 3 running under VMware on a Windows XP laptop. It was not speedy.
I tried to automate the process as much as possible. There's a separate Automated Linux From Scratch project, but I chose to roll my own. This was my first experience with Bash scripts -- I developed skills (and bad habits) that I continue to use to this day.
Back in the day (1991 or so) during my brief time as a UI person we called the first mode "H for Hawaii" and the second "HAW for Hawaii". We never attempted the "why not both" option.
I'd like to see a fully distributed version. All you need is 4B hosts (IPv6 FTW) named N.domain.com (where N varies from 0 to 4B-1). The driver app sends N to the first host (0.domain.com). Each host compares the incoming N with its own number; if they match, it returns the host's true/false value. If they don't match, it forwards the request to (N+1).domain.com and returns the result.
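A minimal in-process sketch of that forwarding chain, simulating the hosts with a function instead of actual DNS, and assuming (as I read the joke) that each host's stored value is whether its own number is even:

```python
HOSTS = 8  # stand-in for 4B; host n plays the role of n.domain.com

def host_value(n):
    # Assumed: the true/false value host n holds is "is n even?"
    return n % 2 == 0

def handle(host_n, incoming_n):
    # Each host compares the incoming N with its own number
    if incoming_n == host_n:
        return host_value(host_n)
    # No match: "forward" to (N+1).domain.com, i.e. the next host
    return handle((host_n + 1) % HOSTS, incoming_n)

def driver(n):
    # The driver always starts at the first host, 0.domain.com
    return handle(0, n)

print(driver(4))  # → True
print(driver(5))  # → False
```

The answer for N takes N+1 hops, which is very much in the spirit of the original.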
Also, I think this can easily be implemented: you don’t need 4B hosts, just 4B DNS entries, and those you can create with a single wildcard entry (https://en.wikipedia.org/wiki/Wildcard_DNS_record).
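For concreteness, a hypothetical zone-file fragment showing the single wildcard record (the IP is a documentation address, not a real host):

```
; One wildcard A record answers for every N.domain.com
*.domain.com.   3600  IN  A  203.0.113.10
```

Every query for 0.domain.com, 1.domain.com, ... 4294967295.domain.com resolves to the same address, so one server can play all 4B hosts.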