The full maxim I was taught being, “it’s either DNS or permissions”.
The fatal design flaw for the Domain Name System was failure to learn from SCSI, viz. that it should always be possible to sacrifice a goat to whatever gods are necessary to receive a blessing of stability. It hardly remains to observe that animal sacrifice is non-normative for IETF standards-track documents and the consequences for distributed systems everywhere are plainly evident.
Goats notwithstanding, I think it is splitting hairs to suggest that the phrase “it’s always DNS” is erroneously reductive, merely because it does not explicitly convey that an adjacent control-plane mechanism updating the records may also be implicated. I don’t believe this aphorism drives a misconception that DNS itself is an inherently unreliable design. We’re not laughing it off to the extent of terminating further investigation, root-cause analysis, or subsequent reliability and consistency improvement.
More constructively, also observe that the industry standard joke book has another one covering us for this circumstance, viz. “There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of processing 2. Exactly-once delivery”
SCSI had a reputation for being very stable and yet very finicky. Stable in the sense that not using the CPU for transfers yielded good performance and reliability. The finicky part was the quality of equipment (connectors, adapters, cables and terminators), something that led to users having to figure out the best order of connecting their devices in a chain that actually worked. “Hard drive into burner and always the scanner last.”
JSON unmarshalling often has to consider separately whether an attribute is absent, false, zero, null, or the empty string, but this was never quite semantically ambiguous enough for my tastes, so adding that void-ish values may also now be serialised as a tuple of length [0] seems to me an excellent additional obfuscation.
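For anyone keeping score at home, here's roughly what that five-way check looks like in Go (a minimal sketch; the key name v and the map-of-RawMessage approach are just one way to keep the raw token visible after decoding):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // describe reports which void-ish case a key falls into. Decoding into
    // map[string]json.RawMessage keeps the raw token visible, since Go's
    // zero values would otherwise conflate several of these cases.
    func describe(doc []byte, key string) string {
        var m map[string]json.RawMessage
        if err := json.Unmarshal(doc, &m); err != nil {
            return "invalid JSON"
        }
        raw, present := m[key]
        switch {
        case !present:
            return "absent"
        case string(raw) == "null":
            return "null"
        case string(raw) == "false":
            return "false"
        case string(raw) == "0":
            return "zero"
        case string(raw) == `""`:
            return "empty string"
        default:
            return "something else: " + string(raw)
        }
    }

    func main() {
        for _, doc := range []string{`{}`, `{"v":null}`, `{"v":false}`, `{"v":0}`, `{"v":""}`} {
            fmt.Printf("%-12s -> %s\n", doc, describe([]byte(doc), "v"))
        }
    }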
The use case here is to reduce token usage with LLMs, e.g. an agent that outputs a list of commands, such as tuples with files to write and their new contents.
Supporting this use case doesn’t require perfectly marshaling every data structure ever.
But to your point the tool could have wider use cases without the limitations.
If one trains a model to understand it then that model will inevitably emit it, which means in turn one shall have to parse it, and now the application supports TOON for anything, and good luck telling the users/customers any different.
Would he by any chance refer to it as Zulu or Zebra time? The Z-suffix shorthand for UTC/GMT standardisation has nautical roots IIRC and the nomenclature was adopted in civil aviation also. I sometimes say Zulu time and my own dad, whose naval aspirations were crushed by poor eyesight, is amongst the few that don’t double-take.
Enterprise DBAs will nevertheless provision separate /dev/null0 and /dev/null1 devices due to corporate policy. In the event of an outage, the symlink from null will be updated manually following an approved runbook. Please note that this runbook must be revalidated annually as part of the sarbox audit, without which the null device is no longer authorised for production use and must be deleted.
Neither is nominating a third party for your parking fine.
The point is to get away from centralized gatekeepers, not establish more of them. A hierarchy of disavowal. It’s like cache invalidation for accountability.
If you don’t wanna be held responsible for something, you’d better be prepared to point the finger at someone whois.
I was running millions of accounts using Postfix/Dovecot on shared-nothing storage with a single MUA-facing endpoint and complex policy options, and that was over a decade ago.
Fastmail today would be much bigger again, and they’re on CMU Cyrus.
150k is rookie numbers. Perhaps that was meant ironically to satirise mediocre enterprise thinking?
FWIW, GSuite seems to do fewer things, but at least does them better (think nested groups and calendar invitations for parent groups: adding/removing people does not update future events with MS tools).
But at the same time, within an org of 150k people, we have separate people to support our Teams usage, our Outlook usage, our AD/Entra usage: with the same number of "sysadmins", could we do the same with an open-source stack?
I don't know, but I know the bugs I see with MS365.
Cool, you got a blog article detailing how that works with Postfix/Dovecot? All the clustering articles I'm seeing for those involve shared storage. Fastmail is not very specific about how that works.
In any case, Exchange is not just email, it has Calendaring/Contacts stuff going on as well.
Why should DAV be integrated into any SMTPd? DAV is a protocol over HTTP: another service, another port. Why would any architect want it in the same binary, or even deployed on the same server? And even if some "cal" or "address" part arrives as content in an email, processing it is still a totally different software layer from plain "sending mail" and storing it.
But no, people get themselves backdoored by using Exchange... or cloud :) Or AI hosted by someone else...
Well, you can if the signed URL is signed for the CDN's verification instead of the underlying storage.
Generalising this: you don't need stateful logged-in authentication to defeat IDOR. You can include an appropriately salted HMAC in the construction of a shared identifier, optionally incorporating time or other scoping semantics as necessary, and verify that at your application's trust boundary.
This tends to make identifiers somewhat longer, but they still fit well inside a reasonable emailed URL to download your phone bill without having to dig up what your telco password was.
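For illustration, a minimal Go sketch of that kind of construction, assuming a dot-separated token layout and a record id containing no dots; the secret, id, and token format here are invented for the example, not any particular vendor's scheme:

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // serverKey stands in for a real secret loaded from configuration.
    var serverKey = []byte("example-only-secret")

    // mint builds an identifier of the form id.expiry.mac, binding the
    // record id to an expiry under an HMAC only the server can produce.
    // Assumes the record id itself contains no dots.
    func mint(recordID string, ttl time.Duration) string {
        exp := strconv.FormatInt(time.Now().Add(ttl).Unix(), 10)
        mac := hmac.New(sha256.New, serverKey)
        mac.Write([]byte(recordID + "|" + exp))
        sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
        return recordID + "." + exp + "." + sig
    }

    // verify checks the MAC and expiry at the trust boundary and, on
    // success, returns the embedded record id. No session state needed.
    func verify(token string) (string, bool) {
        parts := strings.SplitN(token, ".", 3)
        if len(parts) != 3 {
            return "", false
        }
        id, exp, sig := parts[0], parts[1], parts[2]
        expUnix, err := strconv.ParseInt(exp, 10, 64)
        if err != nil || time.Now().Unix() > expUnix {
            return "", false // malformed or expired
        }
        mac := hmac.New(sha256.New, serverKey)
        mac.Write([]byte(id + "|" + exp))
        want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
        if !hmac.Equal([]byte(sig), []byte(want)) {
            return "", false // tampered or forged
        }
        return id, true
    }

    func main() {
        token := mint("invoice-2024-07", 24*time.Hour)
        fmt.Println(token)
        fmt.Println(verify(token))
    }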
However, note that one of the baseline requirements of privacy-oriented data access is issuing different and opaque identifiers for the same underlying thing to each identifiable principal that asks for it. Whether that's achieved cryptographically or by a lookup table is a big can of engineering worms.
The actual experts I was paying attention to said that wearing a K/N-94/95 type mask lowers the statistical rate of transmission, that is, infection of others by your deadly virus.
The subsequent findings are that cloth-type masks are less effective (but not wholly ineffective) compared to clinical/surgical masks at limiting the aerosolized viral shedding from those already infected. So if a cloth mask was all you had, the advice became "please wear it".
Turns out, many people assume advice is only relevant when given for their own direct & immediate personal benefit, so they hear what they want to hear, and even the idea of giving a shit about externalities is sheer anathema. That gets boiled down further for idiot-grade TV and bad-faith social media troll engagement and we wind up with reductive and snarky soundbites, like the remark above, that help nobody at all.
Back on topic, the choice of so-called "experts" in the Guardian's coverage of the AWS matter seems to be a classic matchup of journalistic expediency with self-promoting interests to pad an article that otherwise has little to say beyond paraphrasing Amazon's operational updates.
It's unclear what you're arguing. The leading experts (Fauci/CDC) who most Americans were paying attention to were not providing this shading of meaning which you are trying to impute to them. That would be the case if they had said something like "N95 masks will provide excellent protection for you from the virus if worn correctly, but we have a shortage, so please make do with alternatives so that health care workers have access to them." That is not what they said. Instead they sacrificed credibility at the altar of expediency, to the detriment of future trust.
What's reductive is assuming that people are motivated exclusively by self-interest instead of trusting them to make good decisions when told the truth.
> When you’re in the middle of an outbreak, wearing a mask might make people feel a little bit better and it might even block a droplet, but it’s not providing the perfect protection that people think that it is. And, often, there are unintended consequences — people keep fiddling with the mask and they keep touching their face.
> But, when you think masks, you should think of health care providers needing them and people who are ill... It could lead to a shortage of masks for the people who really need it.
He said that there's a shortage, and that he didn't trust that people would wear the masks correctly. I remember that most of the early anti-mask guidance I heard claimed that masks weren't likely to prevent you from getting infected, because the mask would become an infectious surface and people wouldn't handle it as infectious.
> It is mainly to prevent those people who have the virus — and might not know it — from spreading the infection to others.
> U.S. health authorities have long maintained that face masks should be reserved only for medical professionals and patients suffering from COVID-19, the deadly disease caused by the coronavirus. The CDC had based this recommendation on the fact that such coverings offer little protection for wearers, and the need to conserve the country's alarmingly sparse supplies of personal protective equipment.
Sounds more like you chose to ignore it. My family was wearing medical-grade disposable facemasks and socially distancing from February 2020 on the basis of healthcare advice.
Hunting for a bogeyman in retrospect is the bad-faith narrative of the mediocre culture warrior. Good luck with your undifferentiated rage or whatever.