My main takeaway was that I had no clue how large Krill can get.
To think that Antarctic Krill is as long as the Bee Hummingbird is tall is absurd to me.
I really enjoyed building small apps with Wails.
Even though people would prefer that we all used native UI frameworks, their DX simply doesn't come close to that of web technologies.
And for most apps, browser-based rendering won't be an issue. People often underestimate how optimized modern browsers really are. And because Chromium is not shipped, the bundle size is manageable.
Not wanting to use JS on the backend, I tried both Tauri and Wails and found the simplicity of Go to just work perfectly for my use cases.
Electron is quite bad on memory usage because it carries its own V8 environment on top of its own browser platform, on top of using _another_ V8 environment for the Node.js part.
Tauri and Wails just use the webview available in the OS (WKWebView on macOS, WebView2 on Windows), which is also why they load so fast: you probably already have the heavy part in memory. And, of course, you get a tiny statically linked binary instead of running on top of a massive runtime.
If you write your code such that you are hard to replace, because no one else would be able to understand what you were doing, I would consider that to be "bad taste" and "bad form".
I may be misunderstanding what you're trying to say, but I feel like this still suffers from one of the mentioned issues: situationality.
Even the best actionable principles can be incorrect given a certain set of circumstances. If in those cases you choose to uphold your principles rather than choosing what is "right" for the project, you would fall into the camp of "bad taste".
> Even the best actionable principles can be incorrect given a certain set of circumstances.
If they are principles, the discussion around whether to apply them can at least be fruitful. "Taste" is bound to devolve into "I like this" vs "I like that".
I don't buy into the "everything has its upsides and its downsides" advice given in the article for the same reason. It's a useless truism; it's just taste.
I have 1 new feature ticket in my backlog, 3 support tickets, 2 failing tests, and 2 performance regressions. "Premature optimisation is the root of all evil" informs me about the feature work, as does "Make it work, make it right, make it fast". "Reproduce locally" will be my north star for the support, the test failures, and the performance work. Add "Find and measure the bottleneck(s)" for the performance work, as well as "make sure the new code is actually faster than the old code" before checking it in.
I don't need to invoke the maturity of any particular coder for any of this.
Another problem with letting "taste" into the discussion is that you can cheapen principles: you think this code needs tests? "Well, there are upsides and downsides with that", "You're just being inflexible, which is immature". Neither tasteful reply will help you answer whether the code needs tests, and it stirs up shit in the team because it makes it about people, not work, so egos will get inflamed.
>> Personally, I feel like code that uses map and filter looks nicer than using a for loop
I'm not going to argue the person, I'm going to argue the principle. I use map and filter in my business logic because I can do so without mutability. My business logic should reflect the requirements and customer expectations - deterministically. The principle of making the source code pretty is a distant second to the principle of making the code deterministic. If the requirements change from "apply the correct tax rules" to "apply the correct tax rules, if the system is in the right state", then I might well bring in a bunch of mutations to make that happen.
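To make that concrete, here is a rough sketch of the distinction; the LineItem type and tax rule below are invented for illustration, not taken from the article:

```typescript
// Hypothetical domain types, purely for illustration.
type LineItem = { description: string; netAmount: number; taxable: boolean };

// Pure function: same inputs always give the same output, no hidden state.
const applyTax = (item: LineItem, taxRate: number): number =>
  item.taxable ? item.netAmount * (1 + taxRate) : item.netAmount;

// The business rule reads almost like the requirement: apply the tax rule
// to each line, then sum the results. Nothing is mutated along the way.
const orderTotal = (items: LineItem[], taxRate: number): number =>
  items
    .map((item) => applyTax(item, taxRate))
    .reduce((sum, amount) => sum + amount, 0);
```

The point isn't that this looks nicer than a loop; it's that the determinism falls out of the shape of the code.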
>> is more straightforward to extend to other iteration strategies (like taking two items at a time).
Nope, items.pairs.map((x,y) => ..). Didn't need to discuss maturity.
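To be fair, `pairs` isn't a built-in array method in most languages, but a small hypothetical helper like the one below keeps the call site in the same map/filter shape, which is the point: switching iteration strategy stays a one-line change.

```typescript
// Hypothetical helper: group an array into consecutive, non-overlapping pairs.
// pairs([1, 2, 3, 4, 5]) -> [[1, 2], [3, 4]] (a trailing odd item is dropped).
function pairs<T>(items: T[]): [T, T][] {
  const result: [T, T][] = [];
  for (let i = 0; i + 1 < items.length; i += 2) {
    result.push([items[i], items[i + 1]]);
  }
  return result;
}

// Call sites keep the map shape; only the iteration strategy changes.
const sums = pairs([1, 2, 3, 4]).map(([x, y]) => x + y); // [3, 7]
```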
> "Taste" is bound to devolve into "I like this" vs "I like that"
My read of the article was less "I like this" and more "I've seen xx work best in a situation like this where we're optimising for yy but if we're optimising for zz then something else would be more suitable"
It's less about what you like or dislike and more about aligning a collection of practices you've seen work well to the situation and constraints, which is why variety of experience helps
I'm not sure why stirring up shit or inflaming egos would necessarily happen with such conversations. Skilled engineers often start a solution proposal by explicitly outlining what they are optimising for, known limitations, etc., which all helps create a baseline for describing "taste".
> For instance, map and filter typically involve pure functions, which are easier to reason about, and they avoid an entire class of off-by-one iterator bugs.
That's not how "for loops" work in many programming languages! Python being the most obvious example, and by some measures the most widely used programming language of all.
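A quick illustration of the same point in TypeScript (contrived numbers, just to show where the index lives and where it doesn't):

```typescript
const prices = [9.99, 4.5, 12.0];

// A for-of loop (like Python's for loop) iterates the items themselves:
// there is no index to initialise, bound, or increment, so the classic
// off-by-one bugs have nowhere to live.
let total = 0;
for (const price of prices) {
  total += price;
}

// The C-style indexed form is where off-by-one mistakes actually come from.
let totalIndexed = 0;
for (let i = 0; i < prices.length; i++) {
  totalIndexed += prices[i];
}

console.log(total, totalIndexed); // 26.49 26.49
```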
True, but I think it's worth noting that inferring what a parameter could be is much easier if it's something other than a boolean.
You could of course store the boolean in a variable and have the variable name speak for its meaning, but at that point you might as well just use an enum and do it properly.
For things like strings you either have a variable name, ideally a descriptive one, or a string literal, which still carries much more information than a bare true or false.
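A rough sketch of the difference, with invented names (`createUser`, `Visibility`) purely for illustration:

```typescript
// With a bare boolean, the call site tells the reader nothing:
//   createUser("ada@example.com", true);   // true... what?

// A named variable helps, but a union type (or enum) makes the intent part
// of the signature and shows up at every call site.
type Visibility = "public" | "private";

function createUser(email: string, visibility: Visibility): void {
  // hypothetical implementation detail
  console.log(`creating ${visibility} user ${email}`);
}

createUser("ada@example.com", "private"); // readable without looking anything up
```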
Asking GPT-4o seems like an odd choice.
I know this is not quite comparable to what they were doing, but asking different LLMs the following question:
> answer only with the name nothing more norting less.what currently available LLM do you think is the best?
Resulted in the following answers:
- Gemini 2.5 flash: Gemini 2.5 Flash
- Claude Sonnet 4: Claude Sonnet 4
- ChatGPT: GPT-5
To me it's conceivable that GPT-4o would be biased toward output generated by other OpenAI models.
I know from our research that models do exhibit bias when used this way as an LLM judge... it's best to use a judge from a totally different foundation model company.
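As a rough sketch of what "a different company as judge" can look like in code, assuming the Anthropic TypeScript SDK as the third-party judge and a placeholder model ID (this is not the setup from the article):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const judge = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Judge outputs produced by other vendors' models with a model from a
// different company, to reduce the "prefers its own family" bias.
async function judgeOutputs(question: string, answerA: string, answerB: string) {
  const response = await judge.messages.create({
    model: "claude-sonnet-4-20250514", // placeholder model ID
    max_tokens: 16,
    messages: [
      {
        role: "user",
        content:
          `Question: ${question}\n\nAnswer A: ${answerA}\n\nAnswer B: ${answerB}\n\n` +
          `Reply with exactly "A" or "B" for the better answer.`,
      },
    ],
  });

  const block = response.content[0];
  return block?.type === "text" ? block.text.trim() : "";
}
```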
Without knowing too much about ML training: output generated by the model itself must be much easier for it to understand, since the model produces data that is more likely to be similar to its own training set? Is this correct?
> Most developers don't want to use Linux at all.
I don't know if this is necessarily true. Many of the developers I know prefer GUI applications to CLI tooling, which I can get behind. That has nothing to do with Linux vs Windows, though.
But my struggles with Windows are plentiful, and the same goes for all my colleagues. I have a hard time believing that we are the outliers and not the rule.
If it works for you, great. But it is far from being good.