Hacker News | stitched2gethr's comments

I'm surprised there was no mention of shorts, which take attention away from long-form content. It's all they are pushing these days. In my feed I regularly see 6 shorts, then a video, then 6 more shorts. TikTok is not the ideal model for everyone.

If you're on a web browser, there are extensions that hide them (for now). That and members only and such.

Soon, assuming my setup continues to work, my youtube experience will be completely unlike the default: no shorts, no autoplay, no ads, no sidebar recommendations, homepage is subscriptions, no premium.

I'm bleeding them dry!


May I recommend DeArrow and SponsorBlock as well?

https://unhook.app/ - removes the addiction from the tube

(i'm not affiliated except as a user)


I've seen it many times. And then every task takes longer than the last one, which is what pushes teams to start rewrites. "There's never enough time to do it right, but always time to do it again."


Nah. Consenting adults and all that.


They were developed around the same time, so maybe that accounts for some of the comparisons, but at least in this case I think it matters that they are both relatively new languages with modern tooling.


If they were being compared, shouldn't we also see the inverse? There is a discussion about Rust on the front page right now, with 230 comments at time of writing, and not a single mention of Go.

In fairness, the next Rust discussion a few pages deep does mention Go, but in the context of:

1. Someone claiming that GC languages are inherently slow, where it was pointed out that it doesn't have to be that way, using Go as an example. It wasn't said in comparison with Rust.

2. A couple of instances of the same behaviour as above: extolling the virtues of Rust and randomly deriding Go. As strange as that behaviour is, at least Go was already introduced into the discussion. In these cases Go came completely out of left field, having nothing to do with the original article and no relation to the thread, showing up seemingly only because someone felt the need to put it down.



This contains some specific data with pretty graphs: https://youtu.be/tbDDYKRFjhk?t=623

But if you do professional development and use something like Claude Code (the current standard, IMO) you'll quickly get a handle on what it's good at and what it isn't. I think it took me about 3-4 weeks of working with it at an overall 0x gain to realize what it's going to help me with and what it will make take longer.


Thank you for sharing the video, it's great! Puts into words a lot of things I was thinking.


This is a great conference talk, thanks for sharing!

To summarize, the authors enlisted a panel of expert developers to review the quality of various pull requests in terms of architecture, readability, maintainability, etc. (see 8:27 in the video for a partial list of criteria), and then somehow aggregated these criteria into an overall "productivity score." They then trained a model on the judgments of the expert developers and found that the model's scores had a high correlation with the experts' judgments. Finally, they applied this model to PRs across thousands of codebases, with knowledge of whether each PR was AI-assisted or not.

They found a 35-40% productivity gain for easy/greenfield tasks, 10-15% for hard/greenfield tasks, 15-20% for easy/brownfield tasks, and 0-10% for hard/brownfield tasks. Most productivity gains went towards "reworked" code, i.e. refactoring of recent code.

All in all, this is a great attempt at rigorously quantifying AI impact. However, I do take one major issue with it. Let's assume that their "productivity score" does indeed capture the overall quality of a PR (big assumption). I'm not sure this measures the overall net positive/negative impact to the codebase. Just because a PR is well-written according to a panel of expert engineers doesn't mean it's valuable to the project as a whole. Plenty of well-written code is utterly superfluous (trivial object setters/getters come to mind). Conversely, code that might appear poorly written to an outsider expert engineer could be essential to the project (the highly optimized C/assembly code of ffmpeg comes to mind, or to use an extreme example, anything from Arthur Whitney). "Reworking" that code to be "better written" would be hugely detrimental, even though the judgment of an outside observer (and an AI trained on it) might conclude that said code is terrible.


> I think you've actually lost that time in the first case. And in the second case, as opposed to the middle ground.


For those who want an easy button, here ya go.

```
review () {
    if [[ -n $(git status -s) ]]; then
        echo 'must start with clean tree!'
        return 1
    fi

    git checkout pristine           # a branch that I never commit to
    git rebase origin/master

    branch="$1"
    git branch -D "$branch"
    git checkout "$branch"

    git rebase origin/master
    git reset --soft origin/master
    git reset

    nvim -c ':G'                    # opens neovim with the fugitive plugin - replace with your favorite editor

    git reset --hard
    git status -s | awk '{ print $2 }' | xargs rm
    git checkout pristine
    git branch -D "$branch"
}
```
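For the curious, the cleanup step near the end works because `git status -s` prints one line per changed file (status code, then path) and awk pulls out the path column for xargs to delete. A quick way to see what that pipeline feeds to xargs, using sample input rather than real git output (note it would break on paths containing spaces):

```shell
# Simulate two lines of `git status -s` output: an untracked
# file ("??") and a modified one (" M"). awk's default
# whitespace splitting makes the path field $2 in both cases.
printf '?? a.txt\n M b.txt\n' | awk '{ print $2 }'
```

This prints `a.txt` and `b.txt`, one per line. `git clean -fd` would be a more robust way to drop untracked files, but the awk version makes it obvious exactly what gets removed.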


Having a PR worktree is good with this kind of workflow.


I'll keep using markdown.


This take rings true for me after admittedly only a couple of hours of use of gpt-5. I had an issue I had been working through with Claude, but it was difficult to give it real-time feedback, so it floundered. gpt-5 struggled in the same areas, but after about $2 of tokens it did fix the issue. It was far from the one-shot I might have expected from the hype, but it did get the job done in about an hour where Claude could not in 3.

For reference my Claude usage was mostly Sonnet, but with consulting from Opus.


Would you be comfortable sharing a brief description of what the issue was?

