refactor_master's comments

But that’s more of a theoretical truth than a practical one, isn’t it? High-quality novels are easily found. TikTok videos of equal quality and depth? Perhaps not, or only exceedingly rarely.

Infinite monkeys with infinite time could surely also produce something spectacular and eye-opening, statistically speaking. But, um, you’d have to wait an infinite amount of time for it to be done, so it’s not really efficient when time is a finite resource.


So now every fraudster with $5 appears legitimate?

Remember blue check marks? The EU is not happy about those.

https://ec.europa.eu/commission/presscorner/detail/en/ip_25_...


"On X, anyone can pay to obtain the ‘verified' status without the company meaningfully verifying who is behind the account, making it difficult for users to judge the authenticity of accounts and content they engage with."

As stated in your source, the EU is (among other things) not happy about Twitter calling users 'verified' while the meaning of 'verified' switched from "we did something to make sure the account owner is indeed the thing/person they say they are" to "the account owner is paying a monthly fee".


They would appear no less legitimate than they do now?


When has the EU been happy about anything, ever?


I’m interested in whether earnings correlate with feature releases. Maybe you’re pushing 100% more bugs, but if you can sell twice as many buggy features as your neighbor in the same amount of time, you might land more contracts.

It’s definitely a race-to-the-bottom scenario, but that was already the scenario we lived in before LLMs.


I second the persistence point. Some of the most persistent code we own persists precisely because it’s untested and poorly written, yet managed to become critical infrastructure early on. Most new tests are best-effort black-box tests and guesswork, since the original authors left a long time ago.

Of course, feeding that code to an LLM makes it really go to town, breaking every test in the process. Then you start babying it into smaller and smaller changes, but at that point it’s faster to just do it manually.


The crusade against gluten probably did it. Tofu lives on as unrefrigerated grey blobs, and tempeh never even made it to the shelf, probably because of the "hormone-disrupting soybeans" scare. But hyper-engineered single-cell meat? Now that’ll sell.


Tempeh is pretty common at health food stores. More common than seitan, less common than tofu.


Gemini routinely makes up stuff about BigQuery’s workings. “It’s poorly documented.” Well, read the open-source code, reason it out.

Makes you wonder what 97% is worth. Would we accept a different service with only 97% availability, and all of its downtime during the lunch break?
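
For scale, a back-of-envelope sketch (my own numbers, assuming a 30-day month; nothing here is from the thread):

    # 97% availability over a 30-day month, back of the envelope:
    hours_per_month = 30 * 24                 # 720 hours
    downtime_hours = 0.03 * hours_per_month
    print(f"{downtime_hours:.1f} hours down per month")  # 21.6 hours

Nearly a full day per month of being wrong or unavailable, and you don’t get to pick when the 3% happens.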


I.e., like most restaurants and food delivery? :) Though a 3% problem rate is optimistic.


I think group 3 is a bit of a reach. Most people just treat it as a commodity. You need a break after shopping? Coffee. Meeting someone to talk over something for 30 minutes? Coffee. Need a cozy place to sit and get some work done? Coffee. For none of these do people have to engage with the community or be caffeine addicts.


> We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years

We still haven’t gotten rid of the idea that work for work’s sake is a virtue, which explains everything else. Welfare? You don’t “deserve” it. Until we solve this problem, we’re more or less heading straight for feudalism.


> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

What do you mean? Several Asian cities have housing crises far worse than the US’s in terms of local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of far lower quality than a "cheap" home in the US.


If you've ever read a trading blog from when LSTMs came out, you'll have seen all sorts of weird stuff with predicting the price at t+1 on a very bad train/test split, where the author would usually say "it predicts t+1 with 99% accuracy compared to t", and the graph would be an exact copy of the series with a t+1 offset.

So eyeballing the graph, it looks great, almost perfect even, until you realize that in real time the model would have predicted yesterday's high on the day of today's market crash, and you'd have lost everything.
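
A minimal sketch of the illusion (the random-walk series and the lag-1 "model" below are my own stand-ins, not anything from those blogs):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in price series: a random walk, so genuinely unpredictable.
    prices = 280 + np.cumsum(rng.normal(0, 1, 1000))

    # "Model": predict tomorrow's price as today's price (lag-1 persistence).
    pred, actual = prices[:-1], prices[1:]

    mape = np.mean(np.abs(pred - actual) / np.abs(actual)) * 100
    corr = np.corrcoef(pred, actual)[0, 1]
    print(f"MAPE {mape:.2f}%, correlation {corr:.4f}")
    # Near-zero error and ~0.999 correlation -- and the plot is an exact
    # copy of the series shifted one step right, just like those blogs.

An LSTM trained on raw prices with a leaky split converges to roughly this persistence solution, which is why the headline "accuracy" means nothing.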


If you feed in prices, e.g. 280.1, 281.5, 281.9, ..., you are going to get some pretty good-looking results when it comes to predicting the next day's price (t+1) to within a margin of +/- a percent or so.
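
Scoring that same persistence trick on what actually matters, the direction of the next move, exposes it (a sketch using the same hypothetical random-walk series as above):

    import numpy as np

    rng = np.random.default_rng(0)
    prices = 280 + np.cumsum(rng.normal(0, 1, 1000))  # same stand-in series

    # Evaluate on returns (day-over-day moves), not raw price levels.
    returns = np.diff(prices)
    pred_dir = np.sign(returns[:-1])     # "yesterday's direction repeats"
    actual_dir = np.sign(returns[1:])
    print(f"directional accuracy: {np.mean(pred_dir == actual_dir):.1%}")
    # ~50% -- a coin flip, despite the 99%-looking fit on price levels.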

