Hacker News | past | comments | ask | show | jobs | submit | hawk_'s comments

You just copy paste as in you copy paste all the necessary context and the results. You don't give it access to your codebase for read or write, correct?


> You don't give it access to your codebase for read or write, correct?

I'm sure you can derive some benefit without doing that, but you're not going to see much of a speedup if you're still copy/pasting and manually prompting after each change. If anybody is copy/pasting and saying "I don't get it", yeah you don't.


Exactly, copy paste related code and files. Give it some background context on what it is you're doing and then tell it what you'd like it to do.


"I am seeing" as in do you use CO2 batteries at home or something?


If you tested this on macOS, be careful. fsync there lies: it doesn't force the write through the drive's cache.
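For what it's worth, on macOS a real flush takes fcntl's F_FULLFSYNC; plain fsync stops at the drive's cache. A minimal sketch (the path is made up, and the branch falls back to fsync on other platforms):

```python
import fcntl
import os

path = "/tmp/fullfsync_demo.bin"  # made-up path for illustration
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello")
if hasattr(fcntl, "F_FULLFSYNC"):       # only defined on macOS
    fcntl.fcntl(fd, fcntl.F_FULLFSYNC)  # push the data through the drive's cache
else:
    os.fsync(fd)                        # elsewhere, fsync is the usual call
os.close(fd)
```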


Nope, a Linux Python script that writes a little data and calls os.fsync.
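Roughly along these lines, I'd guess (the path, write size, and timing are my assumptions, not the original script):

```python
import os
import time

path = "/tmp/fsync_timing.bin"  # made-up path
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"x" * 4096)       # "a little data": one 4 KiB block
t0 = time.perf_counter()
os.fsync(fd)                    # returns once the kernel reports the data durable
elapsed_ms = (time.perf_counter() - t0) * 1000
os.close(fd)
```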


What's a little data?

In many situations, fsync flushes everything, including totally uncorrelated stuff that might be running on your system.


fsync on most OSes lies to some degree


Durability always has an asterisk, i.e. it's guaranteed up to N devices failing. Once that N is set, your durability is gone the moment those N devices all fail together, whether N counts local disks or remote servers.
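The arithmetic behind that asterisk, with toy numbers I made up: if failures really are independent, all N copies have to die together to lose data.

```python
p_fail = 0.01        # assumed failure probability of a single device
N = 3                # replication factor
p_loss = p_fail**N   # all N fail together, *if* failures are independent
# Coordinated failures break the independence assumption, so the real
# loss probability can be far higher than this estimate.
```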


Interestingly, on bare metal or old-school VMs, durability of local storage was pretty good. If the rack power failed, your data was probably still there. Sure, maybe it was only 99% or 99.9%, but that’s not bad if power failures are rare.

AWS etc, in contrast, have really quite abysmal durability for local disk storage. If you want the performance and cost benefits of using local storage (as opposed to S3, EBS, etc), there are plenty of server failure scenarios where the probability that your data is still there hovers around 0%.


This is about not even attempting durability before returning a result ("Commit-to-disk on a single system is [...] unnecessary"); it's hoping that servers won't crash and restart together: some might fail, but others will eventually commit. That only holds for random (uncoordinated) hardware failures, say a cosmic ray blasting the SSD controller. Fine as far as it goes, but it fails to account for coordinated failure, where a particular workload triggers the same overflow scenario on every server: they all acknowledge the writes to the client, then all crash and restart.


To some extent the only way around that is to use non-uniform hardware though.

Suppose you have each server commit the data "to disk" but it's really a RAID controller with a battery-backed write cache or enterprise SSD with a DRAM cache and an internal capacitor to flush the cache on power failure. If they're all the same model and you find a usage pattern that will crash the firmware before it does the write, you lose the data. It's little different than having the storage node do it. If the code has a bug and they all run the same code then they all run the same bug.


Yeah, good point. At least if you wait until you get an acknowledgement for the fsync on N nodes you're already in a much better position. Maybe overkill, but you can also read back the data and reverify the checksum. In general though you make a good point; I think that's why some folks deliberately use different drive models and/or RAID controllers, to avoid cases like that.
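A sketch of the read-back-and-reverify idea (path and function name are made up). One caveat the code only notes in a comment: an ordinary read may be served from the page cache rather than the disk, so a stricter check would bypass the cache, e.g. with O_DIRECT.

```python
import hashlib
import os

def write_fsync_verify(path, data):
    """Write data, fsync it, then read it back and compare checksums."""
    expect = hashlib.sha256(data).hexdigest()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
    # NOTE: this read can be satisfied by the page cache, not the platter;
    # a real verifier would reopen with O_DIRECT or drop caches first.
    with open(path, "rb") as f:
        got = hashlib.sha256(f.read()).hexdigest()
    return got == expect
```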


How much impact do the various compression formats have on query performance?


Yes but try putting that on your CV.


On a related note, how does it compare to SeaStar?


Maybe the ones that are dead are poor imitations of more successful bots operating in our midst, whose comments aren't dead?


I always wonder how much of HN is just bots debating bots


Social media is becoming less a place to communicate with people and more just a place to navigate the hierarchy of potential conversations.


That's totally something a bot would say to blend in ;-)


I swear I make all my captchas


Yes, down with bots.


Or asking baiting questions just to appear intellectually connected?


nice try clanker


Sounds a lot like X these days.


In that vein - if they are upvoted by real people, does it matter?


Of course it does. People are using forums and social media to connect, however flimsily, with other people.

Replacing that with bots and thinking that's equivalent is actually really sad


For what it’s worth, I agree - I’m just no longer certain I could tell.


For those discussions with the LLM, do you just use Gemini chat or ChatGPT etc., i.e. the chat interface?


Depends.

I’ve been actively using the first tier paid version of:

- GPT
- Claude
- Gemini

Usually it’s via the CLI tool (Codex, Claude Code, Gemini CLI).

I have a bunch of scripts set up that write to the tmux pane that has these chats open, so I’ll visually highlight something in nvim and pipe that into whichever pane has one of these tools open and start a discussion.

If I want it to read the full file, I’ll just use the TUI’s search (they all use the @ prefix to search for files) and then discuss. If I want to pipe a few files, I’ll add the files I want to the nvim quickfix list, or literally pipe the files I want into a markdown file (with full paths) and discuss.
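The core of such a script is just `tmux send-keys`, which is a real tmux command; everything else here (the pane name, the helper functions) is made up for illustration:

```python
import subprocess

def build_send_keys(pane, text):
    """Build the tmux command that types `text` into the given pane."""
    return ["tmux", "send-keys", "-t", pane, text, "Enter"]

def send_to_pane(pane, text):
    """Push text (e.g. a highlighted snippet from nvim) into a chat pane."""
    subprocess.run(build_send_keys(pane, text), check=True)
```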

So yes - the chat interface in these cli tools mostly. I’m one of those devs that don’t leave the terminal much lol


Is there a way to pass compiler switches to disable specific C++ features? Or other static analysis tools that break the build upon using prohibited features?


There is -fno-rtti, -fno-exceptions, -Wmultiple-inheritance, -Wvirtual-inheritance, -Wnamespaces, -Wsuggest-final-types, -Wsuggest-final-methods, -Wsuggest-override, -Wtemplates, -Woverloaded-virtual, -Weffc++, -fpermissive, -fno-operator-names and probably many more. The warnings can be turned into errors, e.g. -Werror=namespaces.


No two development groups agree on the desired features, so it would have to be a custom compiler plugin.

You could start with a Perl script that looks at the output of “clang++ -Xclang -ast-dump” and verifies that only permitted AST nodes are present in files that are part of the project sources.
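The same idea sketched in Python rather than Perl (the banned node kinds and file paths are arbitrary examples; the AST-dump invocation is the one quoted above):

```python
import re
import subprocess

# Example ban list: node kinds whose presence should fail the build.
BANNED = {"CXXThrowExpr", "CXXTryStmt", "GotoStmt"}

def find_banned_nodes(ast_dump, banned=BANNED):
    """Scan a clang -ast-dump text for prohibited AST node kinds."""
    found = set()
    for line in ast_dump.splitlines():
        m = re.search(r"([A-Za-z]+(?:Expr|Stmt|Decl))\b", line)
        if m and m.group(1) in banned:
            found.add(m.group(1))
    return found

def check_file(path):
    """Run clang over one source file and return any banned nodes."""
    out = subprocess.run(
        ["clang++", "-fsyntax-only", "-Xclang", "-ast-dump", path],
        capture_output=True, text=True,
    ).stdout
    return find_banned_nodes(out)
```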


For sure no two groups want the same subset, but is there no "standard way" to opt in/out in the ecosystem? It's strange that large orgs like Google enforce style guidelines, yet manual code review is required to do it. (Or maybe my understanding of what's enforced is wrong.)


Yes, via static analysis tools it is possible.

As usual with additional tooling, there must exist some willingness to adopt them.

