This question reminds me of the first time I met a blind programmer.
I asked him how he managed to code, and he replied with something that stayed with me: a good programmer should organize software in such a way that every piece of code has a clear and logical place. The organization should be so intuitive that anyone could build a mental model of the structure and navigate it easily, even without seeing it.
It felt like something out of a Yoda or Mr. Miyagi lesson. Skeptical, I asked his colleagues if he was truly able to code or if he was just exaggerating. To my surprise, they told me not only was he capable, but he was the best programmer they had ever worked with. They said no one else came close to writing code as organized as his.
That conversation changed my perspective. Ever since, whenever I’m unsure where to place new code, I don’t think about DDD or any specific methodology. Instead, I try to follow the logic and structure of the project in a way that feels natural and easy to follow later.
Later in life, I met two other blind programmers and heard similar stories about their ability to produce well-organized code.
To bring this back to the original question: I view LSP/IDE features the same way those programmers view "visual aids." Code should be organized according to a clear and logical structure that makes it easy to navigate.
Relying on features like Ctrl+Click to find where things are located worries me. Why? Because it can mask structural flaws in the codebase. If we can't intuitively figure out where something belongs, that’s a sign the codebase lacks structure—and that should motivate us to refactor it.
Not only do I avoid using LSP features, but I’m also opposed to their use. While they can help with navigation, they may prevent developers from experiencing and addressing the underlying structural issues in their code.
> It really feels like the employer is trying to screw over the potential employee before they’ve even joined.
I mean, on the flip side, the employee wants to get as much salary as possible. You say this is a "dumb game" that we've developed, but nearly all negotiations work this way, and a lot of it is fundamentally dependent on leverage: how much does the company really want to hire the person, and how much does the person need the job.
I will say, having hired a lot of software engineers in my time, that I never see it as a bad thing if a potential employee gives a very high number. Similarly, I think it's totally reasonable for an employee to ask "what's the salary range for this position" and to expect an honest answer. But I have seen employees "negotiate themselves out of a job" because they've read too many "principles of negotiating" books and somehow act like we're negotiating over the end to the Ukraine War. Basically, if folks are going to be a total pain in the ass before the job has even started, I'm pretty sure I don't want to work with them (and every single time I've "overridden my gut" and thought "well, maybe this person won't be so bad, after all they're great technically", I've come to regret it). As you point out, the employee is obviously free to walk as well - in my opinion, when things reach a "hmm, someone is going to be unhappy with this decision" moment, it's probably better for everyone if folks walk away.
This is only partially true in my experience as a Chinese person who grew up in Northern China. I think a greater factor is deliberately causing pain or embarrassment.
Like the initiation rites for fraternities or sororities, the pain is the point. The reason behind this is that a commonly shared embarrassing experience creates intimacy and loyalty.
I'm more inclined to accept this theory over others including the one from the article, because it can explain many rules observable in the drinking culture (at least in Northern China).
1. It's rude to reject an invitation to drink together from a person with higher status.
2. When many people drink together, the more you drink, the greater respect you show to others.
3. As a consequence, the person with higher status can choose to drink less than others, or they can choose to drink the same to show greater respect.
4. People with lower status should proactively invite people with higher status to drink, in the order of their status.
Therefore, in a typical drinking party (饭局), the people with lower status usually drink the most, often to the point of being unconscious or causing a scene, and that would show their loyalty.
This was part of the job for government officials and those who worked with them. People could genuinely get quite sick from the alcohol consumption, and death from drinking is not uncommon. There was even a debate on how to divvy up the legal liability among the people at the same table when this occurred, and it's not a theoretical question at all.
Luckily, Xi has banned this as part of his anti-corruption campaign (八项规定, the Eight Rules), so it's less of an issue now.
Furthermore, young people in China have moved past the drinking culture as well. Back in university, one could choose whether to drink or not, and when one did drink, any amount according to personal preference was acceptable.
I should add that I don't drink at all, but I was often in situations where I was expected to drink and was asked about it a lot. That's why I sought an explanation for the drinking culture; I suspect that many people who participate in it can't explain it explicitly.
For extremely low latency, Clojure can be an awkward fit. By default there's a lot of sugar in the syntax, so if you need to be precise about the underlying types of your data and the exact data structures, you have to toss out most of the standard datatypes and data structures. It's small details like: "I definitely have an array of unboxed integers, and I need to be sure this one operation won't accidentally allocate a new list of boxed integers, and that this function I'm calling doesn't dispatch dynamically." You start using `deftype` everywhere, sometimes dipping into Java, etc. You may as well just write Java at that point. That said, from a pure Clojure standpoint, the author of Neanderthal has done great work on making Clojure viable for high-performance numerical computing.
A smart man once said, "I'd rather be a hypocrite than the same person forever." It was Adam Horovitz, of Beastie Boys, when confronted about going from sexism in their early content to defending gender equality later in life.
I think that hooks and Reagent developed somewhat independently; or at least, they are not as related as one might think at first glance.
The primary difference is where the data _flows from_. As another commenter pointed out, your examples can be done just as trivially with Hooks. The difference is that in Reagent the state is stored in a global mutable atom, while in React the state is stored in the React tree and is tied to the render lifecycle.
This doesn't seem like a huge difference on its face, but in practice it makes for very different experiences navigating a code base.
In Reagent, your data is flowing down via props but also from the side via any number of atoms/subscriptions/etc. In React, you know that your data is always either state local to the component or passed via props.
This isn't even to mention the other various hooks like useEffect, which lets you coordinate side effects with changes in data in a composable, locally-scoped way that Reagent does not. To coordinate side effects, Reagent requires you to keep the description of those effects completely separate from your components, which makes them less composable.
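To make the contrast concrete, here's a deliberately toy sketch in plain JavaScript. None of this is real React or Reagent API; the names (`swap`, `makeCounter`) are made up for illustration. The point is that in the atom model any code anywhere can mutate shared state "from the side", while in the component-local model the only path for data is props you explicitly pass down:

```javascript
// Reagent-style: one global mutable atom; any component (or any other code)
// may subscribe to it and swap it, so data flows in "from the side".
const atom = { value: 0, watchers: [] };
function swap(fn) {
  atom.value = fn(atom.value);
  atom.watchers.forEach((w) => w(atom.value));
}

// React-style: state lives with the component instance; the only way data
// reaches the render output is local state or the props you pass in.
function makeCounter() {
  let state = 0; // local to this component instance
  return {
    increment() { state += 1; },
    render(props) { return `count=${state} label=${props.label}`; },
  };
}

const counter = makeCounter();
counter.increment();
console.log(counter.render({ label: 'local' })); // "count=1 label=local"

swap((n) => n + 1); // any code anywhere could have done this
console.log(atom.value); // 1
```

Navigating a codebase built on the second model is easier precisely because every piece of state is traceable to one component or one prop chain.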
For people who enjoy the way that React Hooks makes you think, and are also interested in ClojureScript (immutable data structures + amazing standard library + hot reloading + macros built into the language), check out https://github.com/Lokeh/hx or its WIP successor, https://github.com/Lokeh/helix (I am the author of these libraries).
This was just taking the final React code in the article and rewriting it in vanilla as close as possible.
Your comment is fully correct, but I would like to point out:
- with such a tiny DOM to rerender, this approach is equivalent, probably even faster
- a custom element encapsulates its DOM, and it is fairly trivial and extremely fast to pick DOM nodes (you can even use ids everywhere) in the 'old jQuery' way, with simple library functions
- updating the DOM with a simple render() call is more elegant, but if you keep a custom element's DOM small (and one should), this way is not bad at all
Like this:
<!doctype html>
<body>
  <custom-element></custom-element>
  <script>
    const CHECKBOX_ID = "my-checkbox";
    const beforeDiscountText = "You have not availed the discount";
    const afterDiscountText = "Discount Availed!";
    const beforeLabelText = "Click on me to remove fake discount";
    const afterLabelText = "Click me to apply fake discount!";
    const state = new WeakMap();

    // 'library' code
    function qs(selector) {
      return this.shadowRoot.querySelector(selector);
    }
    function getElem(nodeOrSelector) {
      return typeof nodeOrSelector === 'string' ? qs.call(this, nodeOrSelector) : nodeOrSelector;
    }
    function replaceText(nodeOrSelector, text) {
      let elem = getElem.call(this, nodeOrSelector);
      if (elem) elem.textContent = text;
    }
    function updateAttribute(nodeOrSelector, name, value) {
      let elem = getElem.call(this, nodeOrSelector);
      if (!elem) return;
      if (value) elem.setAttribute(name, value);
      else elem.removeAttribute(name);
    }
    // end of 'library' code

    class CustomElement extends HTMLElement {
      connectedCallback() {
        this.attachShadow({ mode: 'open' });
        this.shadowRoot.innerHTML = `
          <input type="checkbox" id="${CHECKBOX_ID}">
          <label for="${CHECKBOX_ID}"></label>
          <div></div>
        `;
        this.shadowRoot.addEventListener('change', (event) => {
          state.set(this, event.target.checked);
          this.update();
        });
        this.update();
      }
      update() {
        let isChecked = state.get(this);
        updateAttribute.call(this, 'input', 'checked', isChecked);
        replaceText.call(this, 'label', isChecked ? beforeLabelText : afterLabelText);
        replaceText.call(this, 'div', isChecked ? afterDiscountText : beforeDiscountText);
      }
    }
    customElements.define('custom-element', CustomElement);
  </script>
</body>
Well, let's look at what exactly Bill Gates said. I don't see a direct transcript; the most explicit description I can find is:
Asked directly by Sky’s Sophy Ridge if he thought changing patent restrictions “would be helpful,” Gates answered with a quick and curt “no,” before continuing:
“Well, there’s only so many vaccine factories in the world, and people are very serious about the safety of vaccines. And so moving something that had never been done — moving a vaccine from, say, a [Johnson & Johnson] factory into a factory in India — it’s novel. It’s only because of our grants and our expertise that can happen at all. The thing that’s holding things back in this case is not intellectual property. There’s not like some idle vaccine factory, with regulatory approval, that makes magically safe vaccines. You know, you’ve got to do the trials on these things, and every manufacturing process has to be looked at in a very careful way.”
So, first of all, I think it's a mistake to take this and say Bill Gates is "opposed". He's just saying he doesn't think it would be helpful. Secondly, I think this rationale is very clear. He does not think waiving patents would be helpful because the intellectual property is not a bottleneck on developing vaccines, because it isn't straightforward to develop these vaccines in new factories.
One fact that I think is really important to this debate: months ago, when their vaccine got approved, Moderna announced they are not enforcing their patents.
However, there aren't any new factories springing up making the Moderna vaccine without Moderna being involved. In all this news coverage discussing whether waiving patent rights is urgent, it seems pretty important that Moderna is already waiving these rights! And it isn't making any difference.
I am not a vaccine expert but to me the most obvious conclusion is that Bill Gates believes what he said, that waiving patent rights won't really help that much. He might be wrong but his position really isn't obviously malicious or anything like that.
UUIDs are ... interesting. They offer both protections and some potentially unexpected affordances.
My comments here are informed by experience with three systems in particular:
- A 1990s era online publication's discussion forum which used sequential post IDs for its content.
- Google+, which utilised a form of UUID for both user IDs and submissions to the site.
- The somewhat notorious recent Parler web-scraping incident.
During the 1990s I was one of several participants in a forum that was being decommissioned, and which would be taken off-line entirely. Several years of what had seemed to be critically significant discussion history at the time (and in fairness, there are still bits I'd like to be able to call up now) would be lost.
The content management system (CMS) assigned a sequential post ID to each post on the site. Scraping the content was (mostly) as simple as running a bash loop of the form:
for i in $(seq 1 "$MAX_POST_ID"); do wget "$BASE_URL/post_$i"; done
... which occupied a modest-for-the time laptop on a dial-up connection overnight.
The recent Parler content archival used a similar characteristic:
donk_enby managed to exploit weaknesses in the website’s design to pull the URL’s of every single public post on Parler in sequential order, from the very first to the very last, allowing her to then capture and archive the contents.
Google+, by contrast, assigned 20- or 21-digit numeric sequences to users and content on the site (along with "vanity" text-based user identifiers for a small but generally active set of users, which turned out to confound ultimate archival efforts). This was larger than the actual populated target space by a factor of billions to quadrillions. Exhaustive search of either user or content space was simply infeasible.
But there was an interesting side effect.
As a search company, Google is (or at least was) remarkably conscientious about providing comprehensive sitemaps for many of its properties. This included Google+, and (when I accessed them) some 25 gigabytes of sitemaps covering every single one of the then 2.2 billion user IDs assigned at Google+. These were broken out into files of about 50,000 records each (the maximum permitted under the sitemap protocol). Yes, roughly 45,000 sitemap files.
Because reasons (possibly the algorithm used to generate the UUIDs), the contents of any one sitemap appeared, and tested via several methods, to be a random sampling of user IDs. So when, after one too many fruitless arguments with someone, I got the bright idea of checking my sense that Google+ activity was nowhere near the levels the company was claiming, I realised that I could simply grab any arbitrary sitemap, sequentially scrape user profile pages, and look for the date of the most recent public posting activity, if any.
The key to valid sample-based statistical inference is having a random sample. And Google had just handed this to me. After only about 100 profiles, it became clear to me that at best about 9% of accounts had ever posted to the site. Another Very Simple Bash Script chugged through all ~50k profiles in the file I'd chosen, again on a Laptop of Very Modest Proportions, though over a rather nice broadband connection of the time, and again, a night's data pull and some crude awk scripts for reporting revealed the hard truth about Google+ activity: https://ello.co/dredmorbius/post/naya9wqdemiovuvwvoyquq
But I totally punted on the sampling, trusting (and yes, doing some rough checks) that any one sitemap file would actually be a validly random sample.
Eric Enge of (then) Stone Temple Consulting replicated the methodology on a much larger sample of 500k profiles and, as I understand it, did some more robust resampling of the data, validating my own rough estimates while providing additional detail. (A larger sample does not increase the accuracy of a statistical analysis by much, though it can increase the resolution of that analysis, especially for rarely-occurring phenomena, such as, in this case, people actually using Google+: roughly 0.3% of the total registered profiles.) https://blogs.perficient.com/2015/04/14/real-numbers-for-the...
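The point that a larger sample buys resolution rather than accuracy can be sketched with the standard 95% margin-of-error formula for a proportion. This is my own illustration using round numbers from the discussion above, not a figure from either analysis:

```javascript
// 95% margin of error for an estimated proportion p from a simple random
// sample of size n: roughly 1.96 * sqrt(p * (1 - p) / n).
function marginOfError(p, n) {
  return 1.96 * Math.sqrt((p * (1 - p)) / n);
}

// "~9% of profiles had ever posted":
console.log(marginOfError(0.09, 100));    // ~0.056 -- crude, but already informative
console.log(marginOfError(0.09, 50000));  // ~0.0025
console.log(marginOfError(0.09, 500000)); // ~0.0008

// For a rare phenomenon (~0.3% actively posting), the tiny sample is useless --
// the error band is wider than the estimate itself -- but the large one resolves it:
console.log(marginOfError(0.003, 100));    // ~0.011
console.log(marginOfError(0.003, 500000)); // ~0.00015
```

Which is why ~100 profiles was enough to demolish the headline activity claims, while 500k was needed to pin down the truly rare behaviour.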
(I was impressed by the work, unaware of it prior to publishing, and somewhat relieved I hadn't made any stupid huge errors in my own effort. A key point in publishing my own findings was to note that pretty much anyone could quickly come up with a rough assessment of true G+ activity, a fact that made Google's own increasingly ludicrous statements, openly mocked in the trade and business press, easily disprovable and ultimately hurting the company's overall credibility.)
When G+ was in the process of shutting down, I accessed those sitemaps again, this time to find the list of categories Google had used to classify data (interestingly, these existed for English, as one might expect, and Spanish. Only.) and for the Communities (groups) feature of the site. In that case, an immediate interesting find was that rather than the 5 million or so groups generally claimed for G+, there were nearly 8 million when I first pulled the data, with that count growing to over 8 million by the time new Community creation was finally disabled in January of 2019. Again, this meant drawing samples of the full population while being aware that I was looking for rare phenomena (here, large or active communities) in a population still undergoing large amounts of both creation and deletion activity. In the few hours between pulling a new communities listing and being able to scan a portion of it for size and activity data, a substantial fraction no longer existed. This suggested some interesting dynamics going on, likely around spam or disinformation. Another interesting finding of this research was that Google had managed to police G+ communities pretty effectively against extremist groups, with little apparent collateral damage. But that's another story.
The methods aren't universally applicable. I haven't spent much time digging into, say, Facebook or Twitter numbers, though the simple approaches effective on G+ don't seem applicable to them. There might be other avenues to assessing activity or other characteristics independent of published statistics.
At the same time, the UUIDs also meant that the naive approach of archiving site content by simply iterating (or randomly poking) through the namespace would result in only one successful request for every few billions, or quadrillions, or worse, of requests. Without some means of identifying populated values, that approach was useless.
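Back-of-the-envelope, using the round figures above (roughly 2.2 billion populated IDs in a 20-digit numeric space), the futility of blind enumeration looks like this:

```javascript
// Chance that a random probe into the ID space hits a real profile,
// and the expected number of requests per successful hit.
const populated = 2.2e9; // assigned user IDs
const space = 1e20;      // 20-digit numeric ID space
const hitRate = populated / space;        // ~2.2e-11
const requestsPerHit = space / populated; // ~4.5e10 requests per success
console.log(hitRate, requestsPerHit);
```

At even thousands of requests per second, tens of billions of requests per hit puts exhaustive scraping far beyond reach, which is exactly the protection the sequential-ID sites lacked.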
The upshot: UUIDs may protect user privacy. But depending on how they are deployed or exposed, they can also provide a powerful tool for imputing true characteristics of site activity or behaviour.
> If you’re too boring it gets hard to recruit. You don’t want trend chasers joining, but good candidates have a healthy want and interest in new technology.
I think this is worth examining in a lot more detail.
I accept that candidates are going to be put off by obsolete or terrible technology. I wouldn't take a job working on a Java 1.7 codebase, for example. Or ATG (now Oracle Commerce) - I did three years on that, and will never touch it again.
But if a candidate is motivated to take a job because they get to use shiny new technology, I would suggest that they are not, in fact, a good candidate. In my experience, magpieism is associated with weaker programmers; not the very worst, but second-quartile, say - people who get very engaged with superficial details, but don't think about deeper things.
The roadrunner is achieving prosperity. He goes far, fast. He keeps winning because he is obeying his animal spirits[1] -- literally running on roads. The coyote pursues prosperity, but rather than using his animal spirits, which would tell him to hunt like a coyote does, he instead repeatedly requisitions absurd contraptions manufactured by "Acme." We presume (because the cartoon is on TVs in America, and America's mythos is pervasive and ever-present) that Acme is a private company which uses the free market to drive its innovation and development cycles. Acme's advertising is presumably based on their mail-in order business, which at the time was heavily oriented toward homemakers and other "normal people." These contraptions invariably have negative unintended consequences, and yet Wile E. Coyote turns to them in the next episode for the next great promise in technology, never learning his lesson.
The metaphorical business cycle plays out in every episode: Coyote puts his trust in the Solution To His Problems; he spends money and time buying and setting up the contraption that will Solve His Problem; he deploys it, but because of external (usually environmental) conditions it fails; Coyote is worse off than before because he over-extended himself. Coyote in this case would be a small business owner or similar everyman who is trying to figure out and out-play the game to get rich.
I guess the lesson here is to create an economy that is amenable to the localized, instinctive decisions made by each of its participants, and not to spend time on manufacturing conditions for success by setting up extremely efficient but otherwise very finicky systems, but rather to pursue it step by 3-toed step. The fancy contraptions (quantitative easing being an obvious and recent example) never end up working as intended and instead make life harder for workers in aggregate.
Having worked for 3 years designing 757 flight controls, it's a special plane for me, too. I always enjoy finding that the bird I'm booked on is a 757. Some of the guys I worked with on it were engineers' engineers. I lost my fear of flying through working on the 757.
tl;dr: watch this fantastic intro-to-Svelte talk by its creator: https://www.youtube.com/watch?v=AdNJ3fydeao - it covers some of the growing pains of React that Svelte addresses.
While it might look like the frontend is going around in circles, there are major & minor differences between the technologies, and they have each introduced novel (to JS at least) things... Off the top of my head (this timeline might not be right but it's how I went through it at least):
- Backbone got you away from jQuery spaghetti and helped you actually manage your data & models. Often paired with Marionette to handle views (if you squint, this is the precursor to components) and bring some order to that part of the app.
- Angular bundles Backbone's data handling (services) and Marionette's orderly view separation (views, directives), but made a fatal mistake in the apply/digest cycle and maybe encouraged a bit too much complexity. Because Angular is everything bundled together, with consistent usage/documentation/semantics, a familiar programming pattern (MVC), and a large corporate sponsor in Google, it caught on like wildfire.
- Knockout sees Angular's MVC, laments its complexity, and focuses on MVVM -- simple binding of a model and a relatively simple reactive-where-necessary controller.
- React comes along and suggests an even simpler world where the only first-class citizen is the component and everything else comes separately (this isn't necessarily new; there is an article out there comparing it to COM subsystems for Windows). React is almost always used with react-router and some flux-pattern data management lib -- these are also departures from how Angular, Backbone, and Knockout structured in-application communication (Backbone was a pure event bus, Angular had the usual MVC methods, Knockout was just callbacks -- if you have a handle to an observable when you change it, things reload).
- Vue sees the complexity that React grew into (case in point: shouldComponentUpdate) and devises a much simpler subset along with a new way to handle reactivity -- using the `data` method of a Vue component. Vue made a lot of decisions that helped it stay simple and small yet very productive, and this is in large part thanks to React existing beforehand.
- Svelte comes on the scene and realizes that truly optimal rendering can be achieved by just compiling away the excess and eschewing the virtual DOM altogether in most cases. No need to track all the structure if you compile in the code with the updates necessary. Don't quote me on this, but I think a whole host of things influenced this project -- smarter compilers, better type systems, the ideas around zero-cost abstractions and doing more work at build time.
- Ember (formerly SproutCore) is actually an anomaly, because it tries its best to be both Angular-like (so large, and usable by large teams) and to keep up with the tech as it evolves (see: Glimmer, Octane). Ember also has some innovations, like Glimmer's VM-based approach -- it turns out you can just ship binary data representing how to draw your components to the browser and skip a bunch of JS parsing, if you bring your own virtual machine that is optimized to draw the components.
As all this moves forward, there are ecosystem pieces like typescript that have gained tons of steam and changed how people are writing JS code these days.
For those that are interested, my Dad was a Diplomatic Courier and published 10 years' worth of personal letters from traveling the world, in the Navy and then as a Courier, from 1956-1966.
“DEAR MOM” is a book told through 591 letters to my parents while living throughout the world.
The letters seen here are transcriptions of all letters I wrote to my parents for ten years following college graduation. They record my daily life in surprising detail from Navy Officer Candidate School until resigning from the U.S. Foreign Service in Viet Nam. This was a decade of practically non-stop travel throughout most of the world – some 4 million miles.
I've always sort of had this question that continues to feel naive - but I'm not sure I know the answer: why do so many companies feel like they have to grow perpetually? Why can't Twitter just be happy being Twitter, knowing its limits and making a stable profit? Instead it's more users, more VC money, more staff... constantly burning as quickly as possible. There's a ceiling on every business; it's all bound to come crashing down eventually if you don't stop somewhere. Either you do it gracefully or hundreds of folks have to eventually lose their job unexpectedly (very sad).
Is the answer simply that earlier VCs put pressure on the executives to keep growing so they can multiply their investment?
Personally I dream of making a living establishing a patio11-type software business. Something where I can do a high quality job and own all of the decision-making. The ceiling doesn't have to be very high for one guy to sustain himself, and software is appealing because you can automate away nearly all of the "work".
Right, but companies like Sabre [1] solved this in the 60s. If it's gotten progressively easier over ~55 years, why should we be wowed now that anyone can spin up a travel aggregator that scales? (Just as I'm not impressed that Simple/BankSimple exists when they ride on top of a real bank, Bancorp; I'm actually deeply disappointed, as one of the first Simple customers, that I ended up leaving because it took 5 years for them to implement joint checking accounts. At a bank.) Amazon, airlines, Ticketmaster - all online companies that have to maintain "truth" about shared inventory and its pricing up to the second.
If you're breaking ground, awesome, you're doing something truly revolutionary. Would you be wowed if I built a Shopify clone off of Stripe and Squarespace? Or an app and site that performed ridesharing while simply talking to Uber or Tesla's backend? Probably not.
Ahh! There! Perfect example. I eagerly await the video, with bated breath, of a presentation from the team at Tesla rolling out autonomous driving using Nvidia's deep learning chipset. But if you build another Kayak, Hipmunk, etc., I do give you credit for grinding away on it if it's a successful business. Grinding away on a business day after day for years is fucking hard. I'd just argue it's not revolutionary or breaking new ground.