random internet users are not promising salvation, nor are they taking profit.
they are saying: i made this and it worked for me in my specific case. you can look at it (or have a trusted knowledgeable friend look at it) and use it, for zero payment, if you want to; if the paid solutions offered on the market are insufficient for your specific case.
they would never need to come up with 1.1 billion dollars because they're not making 10x that from selling things that still harm people despite the resources that that profit makes available.
Except logs are always written, but almost never read. Something that is fast and non-resource-intensive to write is, by definition, a better design for logging.
What metadata? The raw template? That's data in this case, data for the later rendering of logs. Yes, the template plus the params is going to be slightly bigger than a rendered string, but that's the speed/size trade-off inherent almost everywhere. It may even keep separate things like the subsystem, event type, log level, etc., which trades off size (again) for speed/ease of filtering. It's all trade-offs, and to blanket-declare one method (the Windows method in this case) as just bad design only displays your own ignorance, or bias.
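For a rough analogue outside of Windows (a minimal Python sketch, not the ETW mechanism itself, with a hypothetical logger name and values): the stdlib logging module makes the same deferred-rendering trade-off, carrying the template and the params together and only building the string if a handler actually emits the record.

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("orders")    # hypothetical logger name

    order_id, elapsed_ms = 12345, 87.2      # hypothetical values

    # deferred: template + args are stored on the record; rendering happens
    # only when a handler actually formats/emits it
    logger.info("order %s processed in %.1f ms", order_id, elapsed_ms)

    # eager: the string is built up front, even though DEBUG is filtered
    # out here and the record is never emitted
    logger.debug("order %s processed in %.1f ms" % (order_id, elapsed_ms))

Same speed/size trade-off as above: carrying the template plus params costs a little more, but the expensive rendering and any filtering by level/subsystem can happen later, or never.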
"“We’re using confman for configuration management, which feeds into climan for the CLI, and then wsconn handles our WebSocket connections, permgr manages permissions, all through jqueue for our job queue”"
This is better? Is this Highlander, where there can only be one of each thing? What about variations of those tools... cman2? confman? cfigmgr? Naming projects, and hence tools, is often just as much about namespacing as about meaning. There _will_ be more than one of most non-trivial tools/projects, and not every configuration manager can be called "confman" (if that's even really a "good" name).
And part of it _is_ connecting utility to an "appellation": calling "your MIT-licensed file parser with 45 GitHub stars" just "parser" practically guarantees you'll never get that 50th star, because there are already a bunch of "parser" projects and there is no reason for someone to ever find yours.
"Each one demands tribute: a few seconds of mental processing to decode the semantic cipher. Those seconds accumulate into minutes and effort, then career-spanning mountains of wasted cognitive effort."
No they don't, because you're _not_ doing that processing every time. Just like "grep" makes perfect sense _now_ because you've used it forever, once you're working on a project, something like "cobra" immediately maps to "the cli library". It might take a second the first couple of times, but humans are good at internalizing this kind of abstraction, and programmers are damn amazing at it.
The unix tools example is really terrible. "I used grep to examine the confs in etc and then cat to combine them before processing with sed and awk and tarballed the output to be curl'ed to the webdav server." Those are only intuitive because you know them already. "sed" for "stream editor"? Come on, it's not called that because it's a good name. Why not strmed, or even streameditor? Simple, actually intuitive. It's because 'sed' was the bare minimum to be as short as possible while being unique and just memorable enough. Awk is an even better counter-example to the article's claim: it's just the authors' names, and it makes no sense as a description! It has literally _nothing_ to do with what it does.
"the Golden Gate Bridge tells you it spans the Golden Gate strait."
Umm, no it doesn't tell you that. Does the Brooklyn Bridge span the Brooklyn strait? The George Washington Bridge? Bridges are not exclusively named by that which they span, and software is not exclusively named after exactly what it does.
'In some iterations, coding agent put on a hat of security engineer. For instance - it created a hasMinimalEntropy function meant to "detect obviously fake keys with low character variety". I don't know why.'
Yes, you do know why. Because somewhere in its training, that functionality was linked to "quality" or "improvement". Remember what these things do at their core: really good auto-complete.
'The prompt, in all its versions, always focuses on us improving the codebase quality. It was disappointing to see how that metric is perceived by AI agent.'
Really? It's disappointing to see how that metric is perceived by humans, and the AIs are trained on things humans made. If people can't agree on "codebase quality", especially the ones who write loudly about it on the internet, it's going to be impossible for AI agents to agree. A prompt actually specifying what _you_ consider to be improvements would have been so much better: perhaps minimize 3rd-party deps, or minimize local utils reimplementing existing 3rd-party libs, or add quality typechecks.
'The leading principle was to define a few vanity metrics and push for "more is better".'
Yeah, because this is probably the most common thing it saw in training. Programmers actually making codebase quality improvements are just quietly doing it, while the ones shouting on the internet (hence into the training data) about how their [bad] techniques [appear to] improve quality are also the ones picking vanity metrics and pushing for "more is better".
'I've prompted Claude Code to failure here'
Not really a failure: it did exactly what you asked and improved "codebase quality" according to its training data. If you _required_ a human engineer to do the same thing 200 times, you'd get similar results as they run out of real improvements and start scouring the web for anything that anybody ever considered an "improvement", which very definitely includes vanity metrics and "more is better" regarding test count and coverage. You just showed that these AIs aren't much more than their training data. It's not actually thinking about quality, it's just barfing up things it has seen called "codebase quality improvements", regardless of the actual quality of those improvements.
You just did the same thing you were complaining about. 'I've learned to be suspicious of anyone with a job title of "architect"' vs 'The last year has seen several BDFLs act like Mad Kings'. Arguably yours is worse because it's a blanket statement about _most_ "architects", while the article simply points out that _some_ BDFLs aren't the best.
Because why bother if you're keeping the C? Part of the reason for moving to Go was safety by replacing the C, not just moving away from Python. I'd say the mistake was thinking Python programmers would enjoy moving to Go. I've done it, and it was not enjoyable. I wouldn't mind doing just the tight performance-critical things in Go instead of C... But using Go for the high-level things that Python is great at, and where the performance is not an issue, is just silly.
That's just because they're using a different scheme to fund development. SQLite has its paid support and consortium, while Turso is leaning on cloud hosting and the paid support that comes with that. Both can still be used standalone and unsupported, completely for free.
Turso is arguably positioned slightly better as a standalone product seeing as it's using a more traditional open source "bazaar" model, as opposed to SQLite's source available "cathedral" model.
This is a big reason. Apple tunes their devices to not push the extreme edges of possible performance, so they don't fall off that cliff of inefficiency. Combined with really great perf/watt, they can run at "90%" and stay nice and cool while (relatively) sipping power. Most Intel/AMD machines, on the other hand, are allowed to push their parts to "110%" much more often, which might give them a leg up in raw performance for some workloads, but it runs into the gross inefficiency of pushing the envelope: that marginal performance increase takes 2-3x more power.
If you manually go in and limit a modern Windows laptop's max performance to just under what the spec sheet indicates, it'll be fairly quiet and cool. In fact, most have a setting to do this, but it's rarely on by default because the manufacturers want to show off performance benchmarks. Of course, that's while also touting battery life that is not possible when in the mode that allows the best performance...
This doesn't cover other stupid battery-life eaters like Modern Standby (it's still possible to disable it with registry tweaks! do it!), but if you don't need absolute max perf for renders or compiling or whatever, put your Windows or Linux laptop into "cool & quiet" mode and enjoy some decent extra battery.
It would also be really interesting to see what Apple Silicon could do under some extreme overclocking fun with sub-zero cooling or such. It would require a firmware & OS that allow more tuning and tweaking, so it's not going to happen anytime soon, but it could actually be a nice brag for Apple if they did let it happen.
Perhaps because you should be using some kind of low/mid-level graphics engine. That's part of the difference between OpenGL and Vulkan: OpenGL did more for you, even in immediate mode. Vulkan came about because getting more performance out of OpenGL was becoming difficult: it didn't directly expose enough of modern hardware, or only exposed it in a specific, less performant way.
Yes, it takes way more code to start from scratch on vulkan, but that's a trade-off against being able to better optimize exactly to your use case, if you need to.
I am burned out by frameworks in my day job, which is basically working in full-stack web land, and I want to get away from that. So using anything other than libraries is out of the question as far as I am concerned. I could have used Unity / Unreal and been further along, but it wouldn't have been enjoyable.
I understand there are situations where more performance is desirable and that vulkan fills that niche. However if you are building simpler games, are you really going to need it? If I am building say a puzzle game, do I really need maximum 3D graphics fidelity and performance? I would argue probably not.
I am using OpenGL in my projects for now, and if I feel the need to learn Vulkan I will. Almost all the materials online for OpenGL 3.3 are still relevant, and unlike the web world (where things are deprecated every 6 months) the code still works fine. The C++ linter / analysis tools I am using with CLion throw up warnings, but these are normally fairly easy to fix.
It's not recommended as much anymore because of unit tests. Instead of peppering the code with asserts, you build tests based on those assertions. You don't have to worry about turning it off in production because the tests are separate, and you also don't have to worry about manually triggering all the various asserts in a dev build, because the test runs are doing that for you even before a build is published.
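A minimal sketch of that split (hypothetical parse_port function, pytest assumed as the test runner): the invariant that would have been an inline assert lives in the test file instead, so it runs on every test/CI pass and there's nothing to strip out of a production build.

    # library code: no scattered asserts; bad input is a normal error path
    def parse_port(value: str) -> int:
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    # test file: the assertions live here and run before anything ships
    import pytest

    def test_parse_port_accepts_valid_values():
        assert parse_port("8080") == 8080

    def test_parse_port_rejects_out_of_range():
        with pytest.raises(ValueError):
            parse_port("70000")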