You just copy-paste, as in you copy-paste all the necessary context and the results? You don't give it access to your codebase for read or write, correct?
> You don't give it access to your codebase for read or write, correct?
I'm sure you can derive some benefit without doing that, but you're not going to see much of a speedup if you're still copy/pasting and manually prompting after each change. If anybody is copy/pasting and saying "I don't get it", yeah you don't.
Durability always has an asterisk, i.e. it's guaranteed only up to some number N of devices failing. Once that N is set, your durability is gone the moment those N devices all fail together, whether N counts local disks or remote servers.
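To make the asterisk concrete, here's a toy calculation, assuming failures are independent (numbers are illustrative, not vendor figures):

```python
# Toy durability math under an *independence* assumption: if each of the
# n replicas fails with probability p over some window, all n fail
# together with probability p**n. Coordinated failures break this.
def replica_loss_probability(p: float, n: int) -> float:
    """Probability that all n independent replicas fail in the same window."""
    return p ** n

# Three replicas on disks with a 1% failure probability each gives a loss
# probability of roughly 1e-6, i.e. "six nines" of durability.
replica_loss_probability(0.01, 3)
```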
Interestingly, on bare metal or old-school VMs, durability of local storage was pretty good. If the rack power failed, your data was probably still there. Sure, maybe it was only 99% or 99.9%, but that’s not bad if power failures are rare.
AWS etc, in contrast, have really quite abysmal durability for local disk storage. If you want the performance and cost benefits of using local storage (as opposed to S3, EBS, etc), there are plenty of server failure scenarios where the probability that your data is still there hovers around 0%.
This is about not even attempting durability before returning a result ("Commit-to-disk on a single system is [...] unnecessary"); it's hoping that servers won't crash and restart together: some might fail, but others will eventually commit. However, that assumes hardware failures are random and uncoordinated, e.g. a cosmic ray blasts one SSD controller. That's fine, but it fails to account for coordinated failure, where a particular workload triggers the same overflow scenario on every server at once: they all acknowledge the writes to the client, then all crash and restart.
To some extent the only way around that is to use non-uniform hardware though.
Suppose you have each server commit the data "to disk" but it's really a RAID controller with a battery-backed write cache or enterprise SSD with a DRAM cache and an internal capacitor to flush the cache on power failure. If they're all the same model and you find a usage pattern that will crash the firmware before it does the write, you lose the data. It's little different than having the storage node do it. If the code has a bug and they all run the same code then they all run the same bug.
Yeah, good point. At least if you wait until you get an acknowledgement for the fsync on N nodes, you're already in a much better position. Maybe overkill, but you can also read back the data and re-verify the checksum. In general you make a good point though; I think that's why some folks deliberately mix drive models and/or RAID controllers, to avoid cases like that.
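A minimal sketch of that write path (local files stand in for the N nodes; in a real system each would be an RPC to a separate server): fsync every replica, then read back and verify a checksum before acknowledging.

```python
import hashlib
import os

def durable_write(paths, data: bytes) -> bool:
    """Write `data` to every replica path, fsync each, then read back and
    verify the checksum before acknowledging to the client."""
    checksum = hashlib.sha256(data).hexdigest()
    for path in paths:
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # wait for the disk, not just the page cache
    # Read-back verification: maybe overkill, but catches torn/corrupt writes.
    for path in paths:
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != checksum:
                return False
    return True  # only now acknowledge the write
```

Note this still doesn't help against the coordinated-firmware-bug case above: all N fsyncs can lie the same way if all N devices run the same buggy code.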
I’ve been actively using the first tier paid version of:
- GPT
- Claude
- Gemini
Usually it's via the CLI tools (Codex, Claude Code, Gemini CLI).
I have a bunch of scripts set up that write to the tmux pane that has these chats open, so I'll visually highlight something in nvim and pipe that into whichever pane has one of these tools open and start a discussion.
If I want it to read the full file, I'll just use the TUI's search (they all use the @ prefix to search for files) and then discuss. If I want to pipe a few files, I'll add them to the nvim quickfix list or literally pipe the files I want into a markdown file (with full paths) and discuss.
So yes - the chat interface in these cli tools mostly. I’m one of those devs that don’t leave the terminal much lol
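For the curious, the tmux plumbing described above can be approximated with tmux's buffer commands. This sketch just composes the commands; the pane target, file path, and prompt are hypothetical stand-ins for whatever your script uses:

```python
import shlex

def tmux_pipe_commands(pane: str, snippet_file: str, prompt: str) -> list[str]:
    """Build the tmux commands that type a prompt into the target pane,
    then paste the highlighted snippet from a file via a tmux buffer."""
    return [
        f"tmux send-keys -t {shlex.quote(pane)} {shlex.quote(prompt)} Enter",
        f"tmux load-buffer {shlex.quote(snippet_file)}",
        f"tmux paste-buffer -t {shlex.quote(pane)}",
    ]

# e.g. tmux_pipe_commands(":1.1", "/tmp/selection.txt", "What does this do?")
```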
Is there a way to pass compiler switches to disable specific C++ features? Or other static analysis tools that break the build upon using prohibited features?
There is -fno-rtti, -fno-exceptions, -Wmultiple-inheritance, -Wvirtual-inheritance, -Wnamespaces, -Wsuggest-final-types, -Wsuggest-final-methods, -Wsuggest-override, -Wtemplates, -Woverloaded-virtual, -Weffc++, -fpermissive, -fno-operator-names and probably many more. The warnings can be turned into errors, e.g. -Werror=namespaces.
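One way to wire those up, sketched in Python: compose a g++ syntax-check invocation that promotes the feature warnings above to hard errors. The specific subset here is just an example policy, not a recommendation:

```python
def strict_cxx_command(source: str) -> list[str]:
    """Compose a g++ syntax-check command with a restrictive feature policy.
    Flags are drawn from the list above; pick the subset your team wants."""
    flags = [
        "-fno-rtti",
        "-fno-exceptions",
        "-Werror=multiple-inheritance",
        "-Werror=virtual-inheritance",
        "-Werror=namespaces",
        "-Werror=templates",
    ]
    return ["g++", "-fsyntax-only", *flags, source]

# subprocess.run(strict_cxx_command("widget.cpp")) would then fail the build
# on any namespace, template, multiple/virtual inheritance, or RTTI use.
```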
No two development groups agree on the desired features, so it would have to be a custom compiler plugin.
You could start with a Perl script that looks at the output of “clang++ -Xclang -ast-dump” and verifies that only permitted AST nodes are present in files that are part of the project sources.
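The same idea sketched in Python rather than Perl (the prohibited node set is an illustrative example; a real policy would differ):

```python
import re
import subprocess

PROHIBITED = {"CXXThrowExpr", "CXXTryStmt", "LambdaExpr"}  # example policy

def find_prohibited(ast_dump: str) -> set[str]:
    """Scan a textual clang AST dump for prohibited node kinds. Node kind
    names precede a hex address in the dump, e.g. '|-CXXThrowExpr 0x...'."""
    kinds = set(re.findall(r"(\w+) 0x[0-9a-f]+", ast_dump))
    return kinds & PROHIBITED

def check_source(path: str) -> set[str]:
    """Dump the AST for one translation unit and return any violations;
    a build script would fail the build if this set is non-empty."""
    dump = subprocess.run(
        ["clang++", "-fsyntax-only", "-Xclang", "-ast-dump", path],
        capture_output=True, text=True,
    ).stdout
    return find_prohibited(dump)
```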
For sure, no two groups want the same subset, but is there no "standard way" to opt in/out in the ecosystem? It's strange that large orgs like Google enforce style guidelines yet manual code reviews are required to enforce them (or maybe my understanding of what's enforced is wrong).