It's the most representative sample if you're interested in your own performance, though. I really don't care whether other people are more productive with AI; if I'm the outlier who isn't, I'd want to know.
Yes, doing so at a construction site is considerably less dangerous to society in general than doing so on public roads. Not to mention that the individual can be presumed to be operating at some baseline level of function (observed by the foreman and others at the work site), which is not the case when a private individual is just driving on public roads.
Quite some years ago I created a Python FUSE filesystem[1] to interact with dokuwiki (a wiki system).
It's built on llfuse[2]. But that required implementing a bunch of low-level APIs that were not really related to dokuwiki. So I created easyfuse[3][4] as a wrapper, which implements the parts that are unrelated to the dokuwiki-specific logic. If you're interested in building a FUSE filesystem it might be worth looking at.
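If it helps to see the shape of the boilerplate such a wrapper absorbs: here's a minimal read-only sketch using fusepy rather than llfuse/easyfuse (so not the actual easyfuse API), with a hard-coded page dict standing in for the wiki backend and a made-up mountpoint.

```python
# Minimal read-only FUSE sketch using fusepy (not easyfuse/llfuse).
# `pages` stands in for whatever actually fetches page text from the wiki.
import errno
import stat
from fuse import FUSE, FuseOSError, Operations

class WikiFS(Operations):
    def __init__(self, pages):
        self.pages = pages  # page name -> page text

    def getattr(self, path, fh=None):
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        name = path.lstrip('/')
        if name in self.pages:
            return {'st_mode': stat.S_IFREG | 0o444, 'st_nlink': 1,
                    'st_size': len(self.pages[name].encode())}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return ['.', '..'] + list(self.pages)

    def read(self, path, size, offset, fh):
        data = self.pages[path.lstrip('/')].encode()
        return data[offset:offset + size]

if __name__ == '__main__':
    # The mountpoint must already exist.
    FUSE(WikiFS({'start': 'Hello from the wiki\n'}), '/tmp/wiki', foreground=True)
```

With that mounted, `ls /tmp/wiki` and `cat /tmp/wiki/start` behave like ordinary files; a real implementation would fetch and write pages through the wiki instead of a dict, plus handle attributes, caching, and error mapping, which is exactly the part a wrapper saves you from repeating.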
Can you say what the use case is for creating a FUSE filesystem for dokuwiki? Dokuwiki is basically just a bunch of text files, so wouldn't it be simpler and more efficient to e.g. mount them over NFS or share them via Dropbox/Syncthing?
I was forced to use dokuwiki, but I very much disliked editing stuff in the web interface. Having a filesystem interface to the wiki allowed me to create and edit pages using vim, which I like to use for writing.
I actually think the wall-clock time used here is the more useful metric. Using as few keystrokes as possible is not necessarily the most "efficient" in time if you have to think longer about which ones to press.
The most effective way I've found to get other people to "write" good commit messages is by changing the "Default commit message" for squash merges on the GitHub repo to "Pull request title and description". [1]
That fixes the "Squashing, when you have 100 crap commits, and then not re-editing the message is a crime" item, because suddenly not re-editing will give you a fairly useful message. This of course assumes the PR description is useful, but I've found it much easier to convince people to write a decent PR description than to write decent commit messages.
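For what it's worth, that default can also be flipped without clicking through the settings page. A rough sketch against GitHub's "update a repository" REST endpoint — OWNER/REPO and the token variable are placeholders, and the field names are from memory, so double-check them against the current API docs:

```python
import os
import requests

# Map the UI option "Pull request title and description" onto the
# repository settings fields (placeholder owner/repo and token).
resp = requests.patch(
    "https://api.github.com/repos/OWNER/REPO",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "squash_merge_commit_title": "PR_TITLE",
        "squash_merge_commit_message": "PR_BODY",
    },
)
resp.raise_for_status()
```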
I love giving examples and context in a PR description. Squash-merged PRs can snowball into something cumbersome, but the final commit message should elaborate in proportion.
This is what we have resorted to in my team. It was just too difficult to get everyone to keep good commit hygiene and follow a best practice like conventional commits.
Having been through the pain of getting teams to adopt conventional commits a few times, I found that integrating a wizard like commitizen helped folks who were annoyed by commit linting learn and get comfortable with the rules and format so there was less friction when their commit was rejected by the linter.
It also really helps if you can wire up some continuous deployment to automate something tedious like properly incrementing the semantic version (sketched below), updating the changelog, and pushing a new `latest` or `next` tag to the package registry.
Even the most reticent users are often inspired to follow conventional commits once they see the possibilities that open up.
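The version-bump piece of that automation is less magic than it sounds: commitizen's `cz bump` (and tools like semantic-release) essentially scan the conventional-commit subjects since the last tag and pick the increment. A toy sketch of the idea, assuming the subjects come from something like `git log --format=%s <last-tag>..HEAD`:

```python
import re

# Toy version-bump logic from conventional-commit subjects.
# (Real tools also look at commit bodies for "BREAKING CHANGE" footers.)
def next_version(version: str, subjects: list[str]) -> str:
    major, minor, patch = map(int, version.split("."))
    if any(re.match(r"^\w+(\(.+\))?!:", s) for s in subjects):   # feat!: / fix(api)!:
        return f"{major + 1}.0.0"
    if any(re.match(r"^feat(\(.+\))?:", s) for s in subjects):
        return f"{major}.{minor + 1}.0"
    if any(re.match(r"^fix(\(.+\))?:", s) for s in subjects):
        return f"{major}.{minor}.{patch + 1}"
    return version

print(next_version("1.4.2", ["fix(parser): handle empty input",
                             "feat: add --json flag"]))   # -> 1.5.0
```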
But in this scenario, wouldn't you want to break those tools precisely because they are going around the centralized config from which .conf is supposed to be generated?
But that file gets read at startup, so editing it is a valid way of making changes.
What you really want is to prevent postgres from writing to that file.
That’s more complicated than just making it read-only for everyone. Adding an option to stop postgres from doing what you don’t want it to do makes sense to me.