I used to be a heavy user of i3. It's very flexible and configurable, and you can do much more than just move windows. But after I switched to a Mac, I couldn't find a tiling window manager that was both feature-rich and stable. After trying several options, I just use Rectangle[1]. It's not a window manager; it only provides shortcuts for window placement, like moving windows to the left/right/top/bottom or splitting the screen into 3/4/6 sections and placing windows there. It covers 80% of my needs with no pitfalls or unexpected behavior, so I'm happily using it. Another reason is that I'm getting old and tired of using very flexible software with tons of custom configs.
Look for kitchen printers. They're dot-matrix / ink ribbon receipt printers for use in restaurant kitchens, where the plate warmers and other sources of heat will turn thermal paper completely black. So, instead, they use rolls of ordinary bond paper.
The fact that they make a loud noise every time an order comes through is useful for a restaurant kitchen, too.
I've used Scalene at various times in the past few years, and always liked using it when I want to dig deeper compared to cProfile/profile. You might also want to look at:
Prior to using Atuin, I had some fun fish plugins that used fzf to search my history. I still find that I use those most often (they even search my atuin history too), but when that fails, or becomes overly complicated, that's where atuin's native search comes in. It really is a game changer for working on the console and I can't recommend it enough. Here are some of the things that are really great about it:
1. As mentioned above, scope awareness when searching history. This can be exceptionally helpful when you know you’re in the same directory where you previously ran a command.
2. Sync - this is why I started with atuin. It's pretty easy to run your own sync server if you're not big on sending your commands to some random server somewhere.
3. Persistence - similar to sync, I love having my whole command history available when I stand up a new machine.
4. Secrets hidden - you can even set it so secrets are not persisted in your history. This is useful if you haven't yet migrated to using something like 1Password to inject secrets. Also, as an aside, it makes it really easy to find secret references you've used before too.
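Both the self-hosted sync and the secrets filtering are driven by atuin's client config file. A sketch of the relevant keys (from memory of atuin's docs — double-check exact names against the documentation; the server URL is a placeholder):

```toml
# ~/.config/atuin/config.toml
sync_address = "https://atuin.example.com"  # point the client at your own sync server
secrets_filter = true                       # don't record commands that look like they contain secrets
```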
The C for this function (without includes and export directives):
char *
func_equals (const char *func_name, unsigned int argc, char **argv)
{
    char *result = NULL;
    if (strcmp (argv[0], argv[1]) == 0) {
        result = gmk_alloc (strlen (argv[0]) + 1); /* not handling failure for simplicity */
        strcpy (result, argv[0]);
    }
    return result;
}
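Once loaded (GNU make's `load` directive), a function like this is called with the same syntax as a built-in. A hypothetical usage sketch, assuming the function is registered under the name `equals` and the object file is called `mk_funcs.so`:

```make
load ./mk_funcs.so

A := foo
B := foo

# $(equals x,y) expands to x when the arguments match, else to the empty string
result := $(if $(equals $(A),$(B)),same,different)

$(info $(result))
```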
This can be done with a macro, but it's ugly and verbose. Macros also slow makefile parsing a lot, and for a large build - e.g. an operating system - this makes a big difference: it's a penalty you pay every time you run "make", even if you only changed one file.
There are plenty of things you cannot do with macros at all. $(shell) is a get-out-of-jail card, but it drastically slows down large makefiles.
Your module has a setup function which gets called when it's loaded and this adds the function into gmake:
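A sketch of what that setup function looks like, based on GNU make's loaded-object API (the module name `mk_funcs` and the argument counts are illustrative):

```c
/* GNU make requires this symbol to be present in a loaded object. */
int plugin_is_GPL_compatible;

/* Called by make when the object is loaded via the `load` directive;
   the name must be <object-name>_gmk_setup. */
int
mk_funcs_gmk_setup (const gmk_floc *flocp)
{
    /* name, function pointer, min args, max args, flags */
    gmk_add_function ("equals", func_equals, 2, 2, GMK_FUNC_DEFAULT);
    return 1; /* nonzero = success */
}
```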
Things that are hard or slow to do with macros - arithmetic, comparisons, and so on - are even better candidates. A hash function is great for generating intermediate target names that aren't too long for the filesystem.
My favorite one that I've done is embedding a python interpreter into make - this is very convenient as it's MUCH faster than running a process from $(shell) and it keeps state between uses which can be useful.
Stats is great! The only issue I've had with it is that each time it updates itself, the binary is flagged (since it's unsigned) and can't be opened until you run xattr -rc /Applications/Stats.app on it.
Headless mode skips the visual rendering meant for humans, but the DOM structure and layout still exist, allowing the model to parse elements programmatically (e.g. button locations). Instead of 'seeing' an image, the model interacts with the page's underlying structure, which is faster and more efficient. Our browser removes the rendering engine as well, so it won't handle 100% of automation use cases, but it's also what allows us to be faster and lighter than Chrome in headless mode.
I had a somewhat similar experience trying to use LLMs to do OCR.
All the models I've tried (Sonnet 3.5, GPT-4o, Llama 3.2, Qwen2-VL) have been pretty good at extracting text, but they failed miserably at finding bounding boxes, usually just making up random coordinates. I thought this might have been due to internal resizing of images, so I tried to get them to use relative %-based coordinates, but no luck there either.
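For what it's worth, the relative-coordinate scheme itself is trivial to apply once a model emits fractions; a small sketch (the helper name is hypothetical, not from any model's API):

```python
def to_pixels(bbox_rel, width, height):
    """Convert a bounding box given as relative fractions in [0, 1]
    (x0, y0, x1, y1) into absolute pixel coordinates."""
    x0, y0, x1, y1 = bbox_rel
    return (round(x0 * width), round(y0 * height),
            round(x1 * width), round(y1 * height))

# e.g. a box spanning 25-75% of the width, 10-20% of the height on a 1920x1080 page
print(to_pixels((0.25, 0.1, 0.75, 0.2), 1920, 1080))  # (480, 108, 1440, 216)
```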
Eventually I gave up and went back to good old PP-OCR models (are these still state of the art? I'd love to try out some better ones). The actual extraction feels a bit less accurate than the best LLMs, but bounding box detection is pretty much spot on all the time, and it's literally several orders of magnitude more efficient in terms of memory and overall energy use.
My conclusion was that current gen models still just aren't capable enough yet, but I can't help but feel like I might be missing something. How the heck did Anthropic and OpenAI manage to build computer use if their models can't give them accurate coordinates of objects in screenshots?
This kind of device is called a "compressor", and they are ubiquitous in recording studios. They can get very expensive and complicated, but for your purpose something like this may suffice:
Specifically what you want is a "stereo compressor" or "compressor/limiter"; if you want something more sophisticated than the device above, there are many 1U rack options available for ~$200 (dbx is a good choice), or used on reverb.com more like $70-$100.
1 - Get an amplifier that has a "Night Mode" function. It has been a basic feature on most AV receivers in the past decade+, assuming you don't buy the most pedestrian model. It compresses the dynamic range of the sound so the loud parts don't wake the neighbors while you can still hear the conversations. Here you're looking at an investment of between $350 and $inf; buying secondhand can save big bucks.
2 - Use a PC for your video needs. Most video players support the same function (VLC, GOM Player, Kodi... look for "dynamic range compression" and similar options). A 10-year-old mid-tier machine will play everything including UHD, so this solution is fairly cheap. If you get a cheap IR-USB remote, you won't even have to mess with a keyboard and mouse.
I am using Actual Budget (which is free and open source) for this exact use case. It won't notify you, but it has a dashboard that can show you how you are spending.
The unique feature of Zasper is that the Jupyter kernel handling is built with goroutines and is far superior to how it's done by JupyterLab in Python.
Zasper uses about a quarter of the RAM and a quarter of the CPU that JupyterLab does: while JupyterLab uses around 104.8 MB of RAM and 0.8 CPUs, Zasper uses 26.7 MB of RAM and 0.2 CPUs.
Other features, like search, are slow because they haven't been refined yet.
I am building it alone full-time and this is just the first draft. Improvements will come for sure in the near future.
To bring things full circle: the cross-entropy loss is the KL divergence plus the entropy of the true distribution, and since that entropy is constant with respect to the model, minimizing cross-entropy is equivalent to minimizing the KL divergence. So intuitively, when you're minimizing cross-entropy loss, you're trying to minimize the "divergence" between the true distribution and your model distribution.
This intuition really helped me understand CE loss.
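The decomposition H(p, q) = H(p) + KL(p ‖ q) is easy to check numerically; a small sketch with made-up distributions:

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]  # "true" distribution
q = [0.5, 0.3, 0.2]  # model distribution

# cross-entropy splits into the (model-independent) entropy of p plus KL(p || q)
print(cross_entropy(p, q))               # ~0.8869
print(entropy(p) + kl_divergence(p, q))  # ~0.8869, same value
```

Since entropy(p) doesn't depend on q, any q that minimizes one quantity minimizes the other.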
- Keep your eyes open at all times. E.g. I always pop into the careers page linked from an HN post, and I keep a list of companies I am curious about. I do this even though I am not looking for a job.
- Negotiate - ask a full-time job if they will do part-time or contract.
The Qwen family of models is REALLY impressive. I would encourage anyone who hasn't paid them any attention to at least add them to your mental list of LLMs worth knowing about.
QwQ is the Qwen team's exploration of the o1-style of model that has built in chain-of-thought. It's absolutely fascinating, partly because if you ask it a question in English it will often think in Chinese before spitting out an answer in English. My notes on that one here: https://simonwillison.net/2024/Nov/27/qwq/
Most of the Qwen models are Apache 2 licensed, which makes them more open than many of the other open weights models (Llama etc).
(Unsurprisingly they all get quite stubborn if you ask them about topics like Tiananmen Square)
It's also the foundation of a very good estimator for global illumination! See https://en.wikipedia.org/wiki/Metropolis_light_transport . It used to achieve state-of-the-art quality on scenes with caustics and/or tough indirect-light-dominated scenes (e.g. the Veach door).
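For readers who haven't seen it, the core Metropolis-Hastings loop that MLT builds on is tiny; a toy 1-D sketch (the target density and step size here are arbitrary choices for illustration, nothing to do with light transport specifically):

```python
import math
import random

def metropolis(log_density, x0, steps, step_size=1.0):
    """Random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, p(proposal) / p(current))."""
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Sample from a standard normal (log-density known only up to a constant,
# which is exactly the situation Metropolis sampling is designed for).
random.seed(42)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1 respectively
```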
I think it’s more likely we can preserve our ego rather than our consciousness. For instance, create an AI replica of yourself that accurately behaves the same way you do. Although you would be dead, your ego can carry on living and responding to changes in the external world, and people could have interactions with you that accurately simulate how you would respond to them long after you’re gone. And as your ego learns about the world, it develops opinions closely similar to opinions you would have based on your life experience. Perhaps in this way people in power could remain in power indefinitely.
Been working on markwhen for a few years now, originally inspired by cheeaun's life timeline that another commenter posted about.
At this point markwhen is available as a VS Code extension, Obsidian plugin, CLI tool, and web editor in Meridiem.
Some recent markwhen developments:
- Dial, a fork of bolt.new (Stackblitz's very cool tool that leverages AI to help quickly scaffold web projects): an in-browser editor that lets you edit existing markwhen visualizations like the timeline or calendar or make your own. I just released that yesterday so it's still rough but I have big plans for it (it's one of the visualizations in meridiem)
- Event properties: each entry can have its own "frontmatter" in the form of `key: value` pairs. I wanted this as I'm aiming for more iCal interoperability in the future, so each event could theoretically have things like "attendees" or Google Calendar ids or other metadata. This was released in the last month or two.
- remark.ing: this one isn't ready yet by any means but it's like a twitter/bluesky/mastodon-esque aggregated blog site. So you write markwhen and each entry is a post. In this way "scheduling" a post is just writing a future date next to it, and you have all your blog in one file. This one is a major WIP
[1] https://rectangleapp.com/