
Hmm, but how many examples of that exist on the internet practically verbatim?


Good point.

Turns out OpenAI's LLMs are pretty decent at coding x86_64 BIOS bootloaders in assembly, but as soon as you go off script from the two main examples online, they fall apart really quickly. It becomes crystal clear they have no idea what is actually going on, or how bootloaders (including two- and three-stage bootloaders) work and what their limitations are.
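For a sense of scale, the kind of single-stage example they can reproduce is roughly this (a minimal BIOS boot sector sketch in NASM syntax; the message, labels and build commands are my own placeholders, not anything from the comment above):

  ; minimal single-stage BIOS boot sector sketch (NASM)
  ; assemble: nasm -f bin boot.asm -o boot.bin
  ; run:      qemu-system-x86_64 -drive format=raw,file=boot.bin
  [bits 16]
  [org 0x7c00]            ; the BIOS loads this sector at 0000:7C00
  start:
      xor ax, ax
      mov ds, ax          ; DS = 0 so the org directive above holds
      mov si, msg
  .print:
      lodsb               ; next byte of the string into AL
      test al, al
      jz .halt            ; zero terminator reached
      mov ah, 0x0e        ; BIOS teletype output
      int 0x10
      jmp .print
  .halt:
      hlt
      jmp .halt
  msg: db "hello from stage 1", 0
  times 510-($-$$) db 0   ; pad to 510 bytes
  dw 0xaa55               ; boot signature

A second or third stage has to do the rest itself (read more sectors via BIOS int 0x13, enable the A20 line, switch CPU modes), which is roughly where the well-known tutorials stop.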


A while back I wanted to learn more about how direct threaded Forth implementations work. Just explanations, not code. I went back and forth with Claude for a while, and I noticed that its ability to give me a coherent explanation was terrible. However, it was able to give me perfectly fine x86 assembler to implement portions of it -- I knew it was perfectly fine because it was reproducing code from jonesforth, which I had open in another tab.
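For anyone wondering what "direct threaded" means here: each cell of a compiled word holds the address of machine code, and the inner interpreter (NEXT) just fetches the next cell and jumps to it. A minimal 32-bit x86 sketch (NASM syntax; the register conventions are choices for this sketch, and it is not jonesforth's actual code, which is written for GNU as and, if I remember correctly, is indirect threaded):

  ; direct-threaded inner interpreter sketch (32-bit x86, NASM)
  ; ESI is the Forth instruction pointer; a compiled definition is an
  ; array of machine-code addresses, and every primitive ends in NEXT.
  %macro NEXT 0
      lodsd               ; EAX = [ESI], ESI += 4: fetch next code address
      jmp eax             ; direct threading: jump straight to that code
  %endmacro

  section .text
  code_dup:               ; DUP ( x -- x x ), data stack on ESP in this sketch
      mov eax, [esp]
      push eax
      NEXT
  code_drop:              ; DROP ( x -- )
      add esp, 4
      NEXT

  section .data
  example_body:           ; a compiled word body is just a list of addresses
      dd code_dup
      dd code_drop
      ; a real definition ends with an EXIT primitive that restores the
      ; previous instruction pointer from the return stack

(The indirect threaded variant adds one level of indirection: lodsd followed by jmp [eax], so each cell points at a codeword that in turn points at the code.)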


Not sure that's relevant. Obviously an LLM has to learn from something, but it's not a database. I could also program this myself, and I don't think it's an argument against my coding abilities that I have read the source code of many existing interpreters. I can only do it because I not only read them but also understood and internalised them.


It’s not a database, it’s a prediction engine. It’s going to be very good at predicting things it was trained on.


But it's almost a database. See the full glass of wine conundrum.


That seems to have only affected standalone diffusion models. E.g. GPT-4o's native image generation can easily generate a full glass of wine.


Not a satisfactory version, judging from what I saw posted here yesterday.

https://news.ycombinator.com/item?id=43475314



