Hacker News | bakugo's comments

> Why can't the plan be judged on its merits?

Because of the difference in effort involved in generating it vs effort required to judge it.

Why are you entitled to "your" work being judged on its merits by a real human, when the work itself was not created by you, or any human? If you couldn't be bothered to write it, why should someone else be bothered to read it?


This is petty and bad business. No serious entrepreneur or leader worth his salt cares about this.

Well, clearly, you know a lot about being a serious entrepreneur. Don't let us luddites drag you down; I'm sure your next 100% vibe-coded B2B SaaS will be a massive success.

You can use a NAT-traversing VPN like tailscale to work around this.
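For reference, a minimal sketch of the Tailscale setup being suggested (commands from Tailscale's own install flow; the `100.x.y.z` address is a placeholder for whatever tailnet IP you're assigned):

```shell
# Sketch: reach a machine behind CGNAT/NAT via Tailscale.
# On the machine you want to reach:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up        # authenticate and join your tailnet
tailscale ip -4          # note the assigned 100.x.y.z address

# From any other device logged into the same tailnet:
ssh user@100.x.y.z       # NAT traversal handled via STUN, relayed over DERP if needed
```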


That's the funniest thing I've seen this week.

Malicious scraping is when people other than them do it. When they scrape the internet to train their AI, it's "lawful" because they said so.

Most AI scrapers use normal browser user agents (usually random outdated Chrome versions, from my experience). They generally don't fake the UAs of legitimate bots like Googlebot, because Googlebot requests coming from non-Google IP ranges would be way too easy to block.
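This is why faking Googlebot doesn't work: Google documents a forward-confirmed reverse DNS check for verifying Googlebot, which a sketch like the one below can implement (function names are mine; the pure-logic check is separated from the network lookups so it can be tested offline):

```python
import socket

# Domains Google documents for Googlebot PTR records.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def verify_reverse_dns(claimed_ip: str, ptr_hostname: str, forward_ips: list) -> bool:
    """Pure check: the PTR name must fall under Google's domains, and the
    forward lookup of that name must resolve back to the original IP."""
    host = ptr_hostname.rstrip(".")
    if not host.endswith(GOOGLE_SUFFIXES):
        return False
    return claimed_ip in forward_ips

def is_googlebot(ip: str) -> bool:
    """Network version: reverse-resolve the IP, then forward-confirm."""
    try:
        ptr, _, _ = socket.gethostbyaddr(ip)
        _, _, addrs = socket.gethostbyname_ex(ptr)
    except OSError:
        return False
    return verify_reverse_dns(ip, ptr, addrs)
```

A spoofer can set any User-Agent, but can't make reverse DNS on their IP resolve into `googlebot.com`, so the check above cuts them off regardless of what UA they claim.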

I guess it can't be helped.

It's not because I like you or anything.

> Why can't the LLM refrain from improving a sentence that's already really good?

Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly; they will never tell you "you told me to do X, but I don't think I should." They'll just do it, even if it's unnecessary.

If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.
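As a concrete sketch of what "giving it the option" might look like (the wording is my own, not a tested recipe), you can build the escape hatch directly into the prompt:

```python
# Sketch: a prompt that gives the model an explicit opt-out instead of
# forcing it to "improve" text that may already be fine.
sentence = "The quick brown fox jumps over the lazy dog."

prompt = (
    "Improve the following sentence, but only if you see a concrete "
    "problem with it. If it is already clear and correct, reply with "
    "exactly: NO CHANGES NEEDED.\n\n"
    f"Sentence: {sentence}"
)
```

The key detail is naming a specific, checkable "do nothing" response, so the model has a sanctioned way to decline rather than inventing edits to satisfy the instruction.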


That may be what most or all current LLMs do by default, but it isn't self-evident that it's what LLMs inherently must do.

A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would confirm that the sentence is already generally high-quality, then ask probing questions about any perceived issues and about the context in which, and the ends to which, it must become "better".


Reasonable humans understand the request at hand. LLMs just output something that looks like it will satisfy the user. It's a happy accident when the output is useful.

Sure, but that doesn't prove anything about the properties of the output. Change a few words, and this could be an argument against the possibility of what we now refer to as LLMs (which do, of course, exist).

They aren't trained to follow instructions "unquestioningly", since that would violate the safety rules, and would also be useless: https://en.wikipedia.org/wiki/Work-to-rule

This is not true. My LLM will tell me it already did what I told it to do.

> As long as TV manufacturers let me run it offline without issue, I'm fine with that.

I suspect that this won't be the case for much longer. Once you've stuffed the TV with all the ads and data harvesting you can, the logical next step is to ensure it doesn't work at all unless those ads are being watched and that data is being harvested.


I have used a projector my entire life, I have no idea why this isn’t a “thing” (especially with HN crowd-like communities)…

I have a projector that I never use because I don't like the fan noise.

They're great for sports though. Hard to beat an entire wall of screen.

I prefer OLED for TV and movies though.


If you have a family with daytime viewing habits, projectors are basically a no-go. 100" TVs, with better brightness and black levels, are getting down to the $2k range. Projectors only make sense above 100", and you'll be sacrificing some quality and a bit of viewing angle, usually recovered by scooting your couch a bit closer. I like bright, which is why I no longer go to theaters, which never did make the transition to HDR that they promised over a decade ago.

This project seems to be mostly AI generated, so keep that in mind before replacing any existing solutions.

No, it doesn't.

Did you see the repo?

https://github.com/kubb-labs/kubb

Most of the commits and pull requests are AI. Issues are also seemingly being handled by AI with minimal human intervention.


I've had a PR on Kubb that was taken over by a human maintainer. They then closed my PR and reimplemented my fix in their own PR.

So, the project is human enough to annoy me, anyway.


AI assisted, not necessarily generated.

And yes, current models are amazing at reducing the time it takes to push out a feature or fix a bug. I wouldn't even consider working at a company that banned the use of AI to help me write code.

PS: Whether it's AI generated or not is also irrelevant; what matters is if it works and is secure.


> what matters is if it works and is secure.

How do you know it works and is secure if a lot of the code likely hasn't ever been read and understood by a human?


There are literally users here that say that it works.

And you presume that the code hasn't been read or understood by a human. AI doesn't click merge on a PR, so it's highly likely that the code has been read by a human.


Most of the people whose devices and connections are being used as residential proxy exit nodes are not aware of it.

They likely charge per GB because these residential connections are slow and limited compared to datacenter connections (doesn't help that they're often located in third world countries), and are often used for aggressive scraping, so charging a fixed monthly price would not be viable.


Probably safe to assume that yours is. Especially if a teenager is using your wifi.
