Hacker News | kissgyorgy's comments

Scott Hanselman has a good blog post about this, suggesting you should detach yourself from your code: https://www.hanselman.com/blog/you-are-not-your-code

Especially true when working as an employee where you don't own your code.


This prompt: "What do you have in User Interaction Metadata about me?"

reveals that your approximate location is included in the system prompt.


I asked it this in a conversation where it referenced my city (I never mentioned it) and it conveniently left out the location in the metadata response, which was shrewd. I started a new conversation and asked the same thing and this time it did include approximate location as "United States" (no mention of city though).

I just tried it out and docling finished in 20s (with pretty good results) the same document which in Tensorlake is still pending for 10 minutes. I won't even wait for the results.

There was an unusual traffic spike around that time; if you try now it should be a lot faster. We were scaling up, but there was not enough GPU capacity at that time.

There is also the llm tool written by Simon Willison: https://github.com/simonw/llm

I personally use "claude -p" for this


Compared to the llm tool, qqqa is as lightweight as it gets. In the Ruby world it would be Sinatra, not Rails.

I have no interest in adding too many complex features. It is supposed to be fast and get out of your way.

Different philosophies.


I was excited because it looks really good. Then I looked into the backend code, and it's vibe coded with Claude. All the terrible exception handling patterns, all the useless comments and sycophancy are left in there. I can't trust this codebase. :(

You're absolutely right and I appreciate the honest feedback.

Yes, a lot of this was AI-assisted coding (Claude/Cursor), and I didn't clean up all the patterns. The exception handling is inconsistent, there are useless comments, and the code quality varies.

I'm the first to admit the codebase needs a lot of work. I was learning and experimenting, and it shows.

If you (or anyone) wants to improve it, I'd welcome PRs! The architecture/approach might be useful even if the implementation is rough.

Thanks for looking at the code and giving honest feedback - this is exactly the kind of thing I needed to hear.


It's MIT licensed, so it can easily be picked up by someone else.

If you don't want to upgrade constantly and follow model development so closely, I would just pay one provider and stick with them.

This model is worth knowing about, because it's 3x cheaper and 2x faster than the previous Claude model.


I think clean code is more important than ever. LLMs work better with good code (no surprise), yet they are trained on so much shit code that they produce garbage in terms of clean code. They also don't have good taste or a deeper architectural understanding of big codebases, where it matters even more.

What you learned over the years, you can just scale up with agents.


If you want to keep up with AI progress and model updates, simonw is the one to follow!


The huge advantage of SQLite is not that it's on the same machine, but that it's in-process, which makes deployment and everything else just simpler.
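For context, "in-process" means the database engine is just a library linked into your program, so there is no server to install, configure, or keep running. A minimal sketch with Python's built-in sqlite3 module (table and column names are made up for illustration):

```python
import sqlite3

# The whole "database server" is this one library call running
# inside your own process. Deployment is: ship your program.
conn = sqlite3.connect(":memory:")  # or a file path for persistence

conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))

rows = conn.execute("SELECT body FROM notes").fetchall()
print(rows)  # [('hello',)]
```

No connection strings, no network, no credentials to manage, which is exactly why deployment gets simpler.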


Yeah, totally agreed. An embedded Postgres would be sweet (see pglite elsewhere here in the comments, looks interesting).

