
I'd love to hear from folks who mainly use Claude Code on why they prefer it and how the two compare. It seems to be the most popular option here on HN, or at least the most frequently mentioned, and I never quite got why.

I always preferred the deep IDE integration that Cursor offers. I do use AI extensively for coding, but as one tool in the toolbox: it's not always the best in every context, and I often find myself switching between vibe coding and regular coding, with various levels of hand-holding. I also like having access to other AI providers. I have used various Claude models quite a lot, but they are not the be-all and end-all; I often got better results with o3 and now GPT-5 Thinking. Even if they are slower, it's good to be able to switch and test.

I always felt that the UX of tools like Claude Code encourages you to do everything blindly through AI; it's not as seamless to dig in and take more control when it makes sense to do so. That said, the tools are very similar now, since they all constantly copy each other. I suppose for many it's also just inertia: which one they tried first and what they are subscribed to. To an extent that is the case for me too.



I don't think we are in a phase where we can confidently state that there's a correct answer on how to do development; productivity self-reports are notoriously unreliable.

At least personally, the reason I prefer CLI tools like Claude Code and Codex is precisely that they feel like yet another tool in my toolbox, more so than AI integrated into the editor. As a matter of fact, I dislike almost all AI integrations, and Claude Code was when AI really "clicked" for me. I'd rather start a session on a fresh branch, work on something else while I wait for the task to be done, and then look at the diff with git difftool or an IDE-integrated equivalent. I'd argue you have just as much control with this workflow!
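For concreteness, the branch-per-task review flow described above can be sketched in plain git. This is a self-contained toy example: the branch and file names are illustrative, and the agent's work is stood in for by an ordinary commit.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -qb main
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "init"

# Start the agent on a fresh branch; you do something else meanwhile.
git switch -qc agent/task
echo "refactored" > module.py            # placeholder for the agent's edits
git add module.py
git -c user.name=me -c user.email=me@example.com commit -qm "agent's change"

# Review everything the branch introduced, relative to where it forked:
git diff main...agent/task --stat
```

The three-dot form `main...agent/task` shows only what the branch added since it forked from main, which is usually exactly the diff you want to review (swap in `git difftool` for a side-by-side view).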

A final note on the models: I'm a fan of Claude models, but I have to begrudgingly admit that gpt-5-codex high is very good. I wouldn't have subscribed just for the gpt-5 family, but Codex is worth it.


It's primarily the simplicity with which I can work on multiple things. Claude Code is also very good at using tools in the background, so I just give it a browser MCP and it does things by itself. I hook it up to staging BigQuery and it uses test data. I don't need to see all these things: I want to look at a diff, polish it up in my IDE, and then git commit. The intermediate stuff is not that interesting to me.

This suddenly reminded me that I have a Cursor subscription so I'm going to drop it.

But of course, if someone says that Cursor's flow suddenly 2x'd in speed or quality, I would switch to it. I do like the agent tool being model-hotpluggable, so we're not stuck on someone's model just because their agent is better, but in the end CC is good at both things and Codex is similar enough that I'm fine with it. But I have little loyalty here.


That makes sense. Personally, I have rarely gotten truly satisfactory results with such a hands-off approach; I cannot really stop babysitting it, so facilities to run it in the background or do multiple things at once are rather irrelevant to me.

But I can see how it might make sense for you. It does depend a lot on how mainstream the thing you are working on is: I have definitely seen it be more than capable enough to be left to do its thing for webdev with standard stacks or conventional backend coding. I tend to switch a lot between that and somewhat more exotic stuff, so I need to be able to fluidly navigate the spectrum between fully manual coding and pure vibe coding.


Personally I really like Claude, but our company has standardized on Cursor. Both are very good, and I do like the tab completion. The "accept/undo" flow of Cursor is really annoying for me, though. I get why it's there, but it just seems like a second layer on top of Git. I usually get everything into a completely committed state first, so I can already see all my changes through VS Code's standard Git management features.

I think Claude's latest VS Code plugin is really great, and it makes me question why Cursor decided to fork VS Code instead of making a plugin. I'd rather have it be a plugin so I don't have to wipe out my entire Python extension stack.


I like the "accept/undo" feature because it allows much more granular control. You can accept some files or lines, and give feedback or intervene manually in other parts. I don't like building up technical debt by accepting everything by default.


As in chess, stock trading, and combat aviation, people at first believed humans ought to curate computer-generated strategies. Then it became obvious the humans were unnecessary.


No doubt; I am simply being pragmatic. I will keep hand-holding AI when needed, and it is increasingly less needed, which is good. I am not a skeptic: I will keep using AI to the limits of its ability the whole way, but one quickly learns those limits when trying to do professional work with it.

It's still plenty useful, of course, but it absolutely needs constant babysitting for now, which is fine. I like AI coding tools that acknowledge those limits and help you work around them, rather than pretending it's magic and hiding its workings from you in an autonomous background process. Maybe soon such need for control will become obsolete; awesome, I will be the first one on board.

PS:

Chess AI is definitely superhuman now, but Stockfish is a small NN surrounded by tons of carefully human-engineered heuristics and rules. Training an LLM (or any end-to-end self-supervised model) to be superhuman at chess is still surprisingly hard; I did some serious R&D on this a while back. Maybe we've gotten there in the last few years, I'm not sure, but it's very new and still not that much better than the best players.

Most real-world stock trading is still carefully supervised and managed by human traders. Even for automated high-frequency trading, what really works is having an army of mathematicians devising lots of trading scripts; trading with proper deep learning or reinforcement learning is still surprisingly niche and unsuccessful.

Combat aviation is also far from automated: sure, drones can bomb, but they cannot dogfight, and most are remote-controlled dumb puppets.

I do agree with your general point, but any good engineer needs to understand the details of where we actually are, so we can make real progress.


You can just use the official Claude Code, OpenAI Codex, and Gemini extensions in VS Code. You get diffs just like in Cursor now. The performance of these models can vary wildly depending on the agent harness they're running in.

The official tools won't necessarily give you the best performance, but they're a safer bet for now. This is merely anecdotal, as I haven't bothered to check rigorously, but I and others online have found that GPT-5-Codex performs worse in Cursor than in the official CLI/extension/web UI.


Maximalists who find value in "deep IDE integration" and go on about it also enjoy meetings.


That is a bit uncalled for; I like to be lean and technically precise as much as the next guy.

I am not talking about "deep IDE integration" in a wishy-washy sense. What I care about as a professional engineer is that such an integration lets me seamlessly intervene and control the AI when necessary, while still benefiting from its advantages when it does work well on its own.

Blindly trusting the AI while it does things in the background has rarely worked well for me, so a UX optimized for that is less useful to me than one designed to put the AI right where I can seamlessly interleave it with normal coding and avoid context-switching.



