> I don't think that anyone actually believes that writing code is only for junior developers.
That is, unquestionably, how it ought to be. However, the mainstream – regrettably – has settled into a well-worn and intellectually stagnant trajectory, wherein senior developers are not merely encouraged but expected to abandon coding altogether, ascending instead into roles such as engineering managers (no offence – good engineering managers are important; it is their quality that has been diluted across the board), platform overseers (a new term for stage-gate keepers), or so-called solution architects (the ones steeped in compliance and governance who do not venture out past that).
In this model, none of these roles is expected – and in some lamentable cases is explicitly forbidden[0] – to engage directly with code. The result is a sterile detachment from the very systems they are charged with overseeing.
Worse still, the industry actively incentivises ill-considered career leaps – for instance, elevating a developer with limited engineering depth into the position of a solution designer or architect. The outcome is as predictable as it is corrosive: individuals who can neither design nor architect.
The number of organisations in which expert-level coding proficiency remains the norm at senior or very senior levels has dwindled substantially over the past couple of decades – job ads now explicitly call out management experience and knowledge of vacuous architectural frameworks of limited usefulness (TOGAF and the like). There remain rare islands in an ever-expanding ocean of managerial abstraction where architects who write code – not incessantly, but when the need arises – are still recognised as invaluable. Yet their presence is scarce.
The lamentable state of affairs has led to a piquant situation in the job market. In recent years, headhunters have started complaining about being unable to find an actually highly proficient, experienced and, most importantly, technical architect. One's loss is another's gain, or at least an opportunity, of course.
[0] Speaking from firsthand experience of watching a solution architect quit their job to run a bakery (yes) because the head of architecture they reported to explicitly demanded that the architect quit coding. The architect did quit, albeit in a different way.
> One reason is that using static binaries greatly simplifies the problem of establishing Binary Provenance, upon which security claims and many other important things rely.
It depends.
If it is a vulnerability stemming from libc, then every single binary has to be re-linked and redeployed, which can lead to a situation where something has been accidentally left out due to an unaccounted-for artefact.
One solution could be bundling the binary, or the related set of binaries, with the operating system image, but that would incur a multidimensional overhead that would be unacceptable to most people – and then we would be talking about «an application binary statically linked into the operating system», so to speak.
> If it is a vulnerability stemming from libc, then every single binary has to be re-linked and redeployed, which can lead to a situation where something has been accidentally left out due to an unaccounted-for artefact.
The whole point of Binary Provenance is that there are no unaccounted-for artifacts: Every build should produce binary provenance describing exactly how a given binary artifact was built: the inputs, the transformation, and the entity that performed the build. So, to use your example, you'll always know which artefacts were linked against that bad version of libc.
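As a rough sketch of what that lookup amounts to once such records exist (the field names and the "glibc-2.38-r1" string below are made up for illustration, not any particular provenance format such as SLSA):

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical, simplified provenance record: one per built artifact.
    struct ProvenanceRecord {
        std::string artifact;             // e.g. "payments-service v1.4.2"
        std::string builder;              // the entity that performed the build
        std::vector<std::string> inputs;  // pinned inputs, e.g. "glibc-2.38-r1"
    };

    // Answer "which artifacts were built against this known-bad input?".
    std::vector<std::string> affected(const std::vector<ProvenanceRecord>& records,
                                      const std::string& badInput) {
        std::vector<std::string> out;
        for (const auto& r : records)
            for (const auto& in : r.inputs)
                if (in == badInput) { out.push_back(r.artifact); break; }
        return out;
    }

    int main() {
        std::vector<ProvenanceRecord> records = {
            {"payments-service v1.4.2", "ci-builder-7", {"glibc-2.38-r1", "openssl-3.0.13"}},
            {"ledger-batch v0.9.0",     "ci-builder-2", {"musl-1.2.5"}},
        };
        for (const auto& name : affected(records, "glibc-2.38-r1"))
            std::cout << name << " must be re-linked and redeployed\n";
    }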
> […] which artefacts were linked against that bad version of libc.
There is one libc for the entire system (a physical server, a virtual one, etc.), including the application(s) that have been deployed into an operating environment.
In the case of the entire operating environment (the OS + applications) being statically linked against a libc, the entire operating environment has to be re-linked and redeployed as a single concerted effort.
In dynamically linked operating environments, only the libc needs to be updated.
The former is a substantially more laborious and inherently riskier effort unless the organisation has achieved a sufficiently large scale where such deployment artefacts are fully disposable and the deployment process is fully automated. Not many organisations practically operate at that level of maturity and scale, FAANG or similar being a notable exception. It is often cited as an aspiration, yet the road to that level of maturity is winding and fraught with shortcuts in real life, which result in binary provenance being ignored or rendered irrelevant. The expected aftermath is, of course, a security incident.
I claimed that Binary Provenance was important to organizations such as Google where it is essential to know exactly what has gone into the artefacts that have been deployed into production. You then replied "it depends" but, when pressed, defended your claim by saying, in effect, that binary provenance doesn't work in organizations with immature engineering practices that don't actually follow the practice of enforcing Binary Provenance.
But I feel like we already knew that practices don't work unless organizations actually follow them.
My point is that static linking by itself does not meaningfully improve binary provenance and is mostly expensive security theatre from a provenance standpoint, because a statically linked binary is more opaque from a component-attribution perspective – unless an inseparable SBOM (one cryptographically tied to the binary) plus signed build attestations are present.
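As a minimal sketch of what «cryptographically tied to the binary» means in practice, assuming OpenSSL is available and that the expected digest comes from a signed attestation rather than the hard-coded command-line argument used here:

    #include <openssl/evp.h>
    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <vector>

    // Compute the SHA-256 of a file as a lowercase hex string.
    static std::string sha256_hex(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        std::vector<char> buf(1 << 16);
        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);
        while (in.read(buf.data(), buf.size()) || in.gcount() > 0)
            EVP_DigestUpdate(ctx, buf.data(), static_cast<size_t>(in.gcount()));
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        EVP_DigestFinal_ex(ctx, md, &len);
        EVP_MD_CTX_free(ctx);
        std::string hex;
        char tmp[3];
        for (unsigned int i = 0; i < len; ++i) {
            std::snprintf(tmp, sizeof tmp, "%02x", md[i]);
            hex += tmp;
        }
        return hex;
    }

    int main(int argc, char** argv) {
        if (argc != 3) {
            std::fprintf(stderr, "usage: %s <binary> <expected-sha256>\n", argv[0]);
            return 2;
        }
        // The expected digest is the subject digest named by the SBOM/attestation;
        // if it does not match the deployed binary, the SBOM says nothing about it.
        const bool ok = (sha256_hex(argv[1]) == argv[2]);
        std::puts(ok ? "SBOM is bound to this artefact"
                     : "digest mismatch: the SBOM does not describe this artefact");
        return ok ? 0 : 1;
    }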
Static linking actually destroys the boundaries that a provenance consumer would normally want: global code optimisation, (sometimes heavy) inlining, LTO, dead-code elimination and the like erase the dependency identities and render them irrecoverable from the binary in a trustworthy way. It is harder to reason about and audit a single opaque blob than a set of separately versioned shared libraries.
Static linking, however, is very good at avoiding «shared/dynamic library dependency hell», which is a reliability and operability win. From a binary provenance standpoint, it is largely orthogonal.
Static linking can improve one narrow provenance-adjacent property: fewer moving parts at deploy and run time.
The «it depends» part of the comment concerned the FAANG-scale level of infrastructure and operational maturity where the organisation can reliably enforce hermetic builds and dependency pinning across teams, produce and retain attestations and SBOMs bound to release artefacts, rebuild the world quickly on demand, and roll out safely with strong observability and rollback. Many organisations choose dynamic linking plus image sealing because it gives them similar provenance and incident-response properties with less rebuild pressure and at a substantially smaller cost.
So static linking mainly changes operational risk and deployment ergonomics, not the evidentiary quality of where the code came from and how it was produced; dynamic linking, on the other hand, may yield better provenance properties when the shared libraries themselves have strong identity and distribution provenance.
NB Please do note that the diatribe is not directed at you in any way; it is an off-hand remark and a reference to people who ascribe purported benefits to static linking because «Google does it», without taking into account the overall context, maturity and scale of the operating environment Google et al. operate in.
… on the x86 ISA because it encodes the 32-bit jump/call offset directly in the instruction.
Whilst most RISC architectures do allow PC-relative branches, the offset is relatively small, as the fixed 32-bit instruction words do not have enough room to squeeze a large offset in.
«Long» jumps and calls are indirect branches / calls done via registers, where the entire 64 bits is available (address-alignment rules apply in RISC architectures). The target address has to be loaded or calculated beforehand, though. This is available in both RISC and 64-bit x86 architectures.
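A small sketch of the distinction, assuming GCC or Clang at a low optimisation level: the direct call below can be encoded as a single PC-relative instruction (call rel32 on x86-64, bl on AArch64 with a ±128 MiB reach), whereas the call through the pointer needs the full 64-bit address materialised into a register first and then branched to indirectly (call rax, blr xN, jalr and the like):

    #include <cstdio>

    void target() { std::puts("reached"); }

    int main() {
        // Direct call: the displacement to `target` is encoded in the
        // call instruction itself, so its reach is limited by the
        // immediate field the ISA provides.
        target();

        // Indirect call: the 64-bit address is first placed into a
        // register (on RISC ISAs typically via movz/movk sequences or a
        // literal load), and the branch then goes through that register.
        void (*fp)() = &target;
        fp();
        return 0;
    }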
They are after the personal-use VPN clients, but corporate users will follow soon.
Using the corporate VPN for personal purposes, including social media, is generally against corporate policy and is frowned upon (at least officially) in most businesses and organisations. It is also fraught with complications and could lead to disciplinary action or other unpleasant consequences. Just because the policy is not enforced does not mean it won’t be in the future.
If governments start targeting personal VPNs, it is only a matter of time before businesses crack down on unauthorised corporate VPN use, as it will increase their risk of legal action stemming from employees’ missteps or misdeeds.
> A subset of an ISA will be incompatible with the full ISA and therefore be a new ISA. No existing software will run on it. So this won't really help anyone.
This isn't an issue in any way. Vendors have been routinely removing rarely used instructions from hardware and emulating them in software for decades, as part of ongoing ISA revisions.
Unimplemented instruction opcodes cause a CPU trap, and the missing instruction is then emulated in the kernel's emulation layer.
In fact, this is what was frequently done for «budget» 80[34]86 systems that lacked the FPU – it was emulated. It was slow as a dog but worked.
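A user-space sketch of the same trap-and-emulate idea, assuming Linux/x86-64 and glibc; the kernel does this in its own trap handler rather than via signals, and a real emulation layer would decode and execute the instruction instead of merely skipping it as done here:

    // Requires the Linux-specific register indices (REG_RIP); g++ defines
    // _GNU_SOURCE by default on glibc, but make it explicit just in case.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <csignal>
    #include <cstdio>
    #include <ucontext.h>
    #include <unistd.h>

    // SIGILL handler: look at the faulting instruction, "emulate" it, and
    // resume execution right after it. The unimplemented instruction here is
    // the two-byte x86-64 ud2 (0x0F 0x0B); the "emulation" is simply skipping it.
    static void on_sigill(int, siginfo_t*, void* uctx) {
        ucontext_t* uc = static_cast<ucontext_t*>(uctx);
        unsigned char* pc =
            reinterpret_cast<unsigned char*>(uc->uc_mcontext.gregs[REG_RIP]);
        if (pc[0] == 0x0F && pc[1] == 0x0B) {
            uc->uc_mcontext.gregs[REG_RIP] += 2;  // step past the emulated instruction
            return;
        }
        _exit(1);  // genuinely unknown instruction: give up
    }

    int main() {
        struct sigaction sa = {};
        sa.sa_sigaction = on_sigill;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGILL, &sa, nullptr);

        asm volatile("ud2");  // the "missing" instruction traps here
        std::puts("resumed after the trap, as if the instruction had executed");
        return 0;
    }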
Finnish and Hungarian, despite being geographically far apart, belong to the same Uralic language family.
Both (and other languages in the family) share one distinctive feature – an excessively large number of noun cases (by Indo-European language family standards).
However, these languages do not have prepositions; the 16-20-odd noun cases replace them, which makes it somewhat easier for a new learner.
The noun cases can also be thought of as postpositions – they obviously are not postpositions, but it is a good and simple mental model.
The real outlier is Icelandic, which has a notoriously irregular grammar, multiple noun declension and verb conjugation groups, prepositions and postpositions despite a small number of noun cases.
For instance, Japanese and Vietnamese do not differentiate between blue and green and require context-specific clarification, e.g. «traffic-light blue-green».
Celtic languages, and I believe Mayan, had a similar thing going on with blue and green. A lot of languages never really distinguished orange from yellow either.
WhatsApp did not have a dedicated Watch app until a month or two ago – it was not even possible to respond to WhatsApp messages on the Watch; only viewing the mirrored notifications was possible.
You can blame Apple for other things if that is the intention, but this particular one was a decision made by Meta and by Meta only.
Write to your regulator and complain that it is Meta keeping the stage gate on WhatsApp.
> I have seen pushback on this kind of behavior because "users don't like error codes" or other such nonsense […]
There are two dimensions to it: UX and security.
Displaying excessive technical information on an end-user interface will complicate support and likely reveal too much about the internal system design, making it vulnerable to external attacks.
The latter is particularly concerning for any design facing the public internet. A frequently recommended approach is exception shielding: upon encountering a problem, produce two messages – a nondescript user-facing message (potentially including a reference ID pinpointing the problem in space and time) and a detailed internal message, logged with the problem’s details and context for L3 support / engineering.
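A minimal sketch of that shielding pattern; the names and the error text are illustrative, and a real service would send the internal record to a structured log rather than stderr:

    #include <chrono>
    #include <exception>
    #include <iostream>
    #include <random>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    // Generate a reference ID that pins the incident down in space and time.
    static std::string make_reference_id() {
        auto now = std::chrono::system_clock::now().time_since_epoch();
        auto ms  = std::chrono::duration_cast<std::chrono::milliseconds>(now).count();
        std::mt19937_64 rng(std::random_device{}());
        std::ostringstream id;
        id << std::hex << ms << "-" << (rng() & 0xffffff);
        return id.str();
    }

    // The operation that may fail: stands in for a handler touching a DB, etc.
    static std::string handle_request(const std::string& input) {
        if (input.empty()) throw std::runtime_error("FK violation in table payments_ledger");
        return "ok";
    }

    // Exception shielding: detailed record for engineering, bland message for the user.
    static std::string shielded(const std::string& input) {
        try {
            return handle_request(input);
        } catch (const std::exception& e) {
            const std::string ref = make_reference_id();
            // Internal message: full detail and context, for L3 support / engineering.
            std::cerr << "[internal] ref=" << ref << " input_size=" << input.size()
                      << " error=" << e.what() << "\n";
            // User-facing message: nondescript, but carries the reference ID.
            return "Something went wrong. Please quote reference " + ref + " to support.";
        }
    }

    int main() {
        std::cout << shielded("") << "\n";
    }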
I used «powermetrics» bundled with macOS with «bandwidth» as one of the samplers (--samplers / -s set to «cpu_power,gpu_power,thermal,bandwidth»).
Unfortunately, Apple has removed the «bandwidth» sampler from «powermetrics», and it is no longer possible to measure memory bandwidth as easily.
Solaris, AIX, *BSD and others do not offer overcommit – a Linux construct – and they all require enough swap space to be available. Installation manuals provide explicit guidelines on swap partition sizing, the rule of thumb being «at least double the RAM size», but in practice almost always more.
That is the conservative design used by several traditional UNIX systems for anonymous memory and MAP_PRIVATE mappings: the kernel accounts for, and may reserve, enough swap up front to back the potential private pages. Tools and docs in the Solaris and BSD families talk explicitly in those terms. An easy way to test this on a BSD is to disable the swap partition and try to launch a large process – it will be killed at startup, and it is not possible to modify this behaviour.
Linux’s default policy is the opposite end of that spectrum: optimistic memory allocation, where allocations and private mappings can succeed without guaranteeing backing store (i.e. swap), with failure deferred to fault time and handled by the OOM killer – that is what Linux calls overcommit.
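A small sketch that makes the difference observable, assuming a 64-bit POSIX system; the 256 GiB ceiling is arbitrary, and the outcome on Linux also depends on the vm.overcommit_memory setting:

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const size_t chunk = 1ULL << 30;        // 1 GiB per mapping
        size_t mapped = 0;

        // Keep requesting anonymous, private mappings without touching them.
        for (int i = 0; i < 256; ++i) {         // up to 256 GiB of address space
            void* p = mmap(nullptr, chunk, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) break;         // strict accounting ran out of reservable swap
            mapped += chunk;
        }

        // Linux's default (optimistic) policy typically lets this run far past
        // RAM + swap, because no backing store is reserved at mmap() time; the
        // cost is only paid when pages are actually touched, at which point an
        // overcommitted system falls back on the OOM killer. A strict-reservation
        // system (e.g. Solaris, or Linux with vm.overcommit_memory=2) instead
        // fails the mmap() calls once swap is exhausted.
        std::printf("reserved %zu GiB of private anonymous memory\n", mapped >> 30);
        return 0;
    }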