
If I were that kid, I'd be suing the school, the AI company, the police, and anyone and everyone who had a hand in subjecting me to that mistake.

It bears the same hallmarks as any other addiction: the next hit has to be even bigger than the last, and everyday enjoyments in life become practically invisible. The drug of choice may be different, but the effect on their life, relationships and society is largely the same.

The absolute worst place to be right now is in a B2B tech startup. Not only do you need to build some kind of app or product, you also need to build some kind of AI feature into it. The users don't want it and never asked for it. It sucks all the resources out of the actual product you should be focusing on, and it either doesn't work or works non-deterministically, yet you are held to the same standards as if it were any other kind of software. And the only lever you have to pull is a lengthy model re-training or fine-tuning/development cycle. The suits don't understand AI or what it takes to make it successful. They were sold on the hype that AI is going to save money, and forgot to budget for the team of AI engineers you'll need, the training infrastructure, the extensive data annotation, and the reams of data that most startups don't have.

Tell me again how this isn't pure hell and the cuck chair?


> And the only lever you have to pull is a lengthy model re-training or fine tuning/development cycle.

Is this really how professionals work on such a problem today?

The times I've had to tune the responses, we'd gather bad/good examples, chuck them into a .csv/directory, then create an automated pipeline that reports a success rate against what we expect, then start tuning the prompt, the inference parameters, and other knobs in an automated manner. As we discover more bad cases, we add them to the testing pipeline.
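
A minimal sketch of what that loop can look like (the cases.csv layout, the run_model stub, and the naive substring match are all placeholders for whatever your stack actually uses):

    import csv

    def run_model(prompt_template: str, temperature: float, text: str) -> str:
        """Stub for your inference call (OpenAI, a local model, etc.); wire in your own."""
        raise NotImplementedError

    def success_rate(cases: list[dict], prompt_template: str, temperature: float) -> float:
        """Fraction of cases where the model output contains the expected answer."""
        hits = sum(
            1
            for case in cases
            if case["expected"].strip().lower()
            in run_model(prompt_template, temperature, case["input"]).strip().lower()
        )
        return hits / len(cases)

    # cases.csv has two columns, input and expected; new bad cases get appended as found.
    with open("cases.csv", newline="") as f:
        cases = list(csv.DictReader(f))

    # Grid-search prompt variants and inference parameters, keep the best combination.
    prompts = ["Classify: {text}", "You are a strict classifier. Classify: {text}"]
    temperatures = [0.0, 0.2, 0.7]
    best = max(
        ((p, t, success_rate(cases, p, t)) for p in prompts for t in temperatures),
        key=lambda r: r[2],
    )
    print(f"best prompt={best[0]!r} temp={best[1]} success={best[2]:.1%}")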

Only if something was very wrong would you reach for model re-training or fine-tuning, or when you know up front that the model won't be up to the exact task you have in mind.


Got it, professionals don't fine-tune their models, and you can do everything via prompt engineering and some script called optimize.py that fiddles with API parameters for your call to OpenAI. So simple!

It depends. Fine-tuning is a significant productivity drag over in-context learning, so you shouldn't attempt it lightly. If you are working on low-latency tasks or need lower marginal costs, then fine-tuning a small model might be the only way to achieve your goals.

Agree for the most part, but at the SaaS company I'm at, we've built a feature that uses LLMs to extract structured data from large unstructured documents. That's not something that's been done well in this domain, and this solution works better than any other we've tried.

We've kept the LLM constrained to just extracting values with context, and we show the values to end users in a review UI that displays the source doc and lets them navigate to exactly the place in the doc where a given value was extracted. These are mostly numbers, but occasionally the LLM needs to do a bit of reasoning to determine a value (e.g., is this an X, Y or Z type of transaction, where the exact words X, Y or Z will not necessarily appear). Any calculations that can be performed deterministically are done in a later step using a very detailed, domain-specific financial model.
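
Roughly the shape of it, as a hedged sketch rather than our actual code (the field names, prompt, and JSON contract are illustrative, and the model call itself is left out):

    import json
    from dataclasses import dataclass

    @dataclass
    class ExtractedValue:
        field: str   # e.g. "net_revenue" (illustrative name)
        value: str   # the raw value as written in the document
        quote: str   # verbatim surrounding text, so the review UI can deep-link to it
        page: int    # where the reviewer lands when they click through

    EXTRACTION_PROMPT = """Extract the fields listed below from the document.
    Return a JSON array of objects with keys: field, value, quote, page.
    The quote must be copied verbatim from the document so it can be located later.
    Fields: {fields}

    Document:
    {document}"""

    def parse_extractions(llm_json: str) -> list[ExtractedValue]:
        """Parse the LLM's JSON; an error raised here routes the doc to human review."""
        return [ExtractedValue(**item) for item in json.loads(llm_json)]

    def locate(doc_text: str, ev: ExtractedValue) -> int:
        """Character offset of the quote in the source; -1 means the citation didn't verify."""
        return doc_text.find(ev.quote)

The locate() check is what lets the review UI deep-link to the exact spot, and it doubles as a cheap citation verifier: if the quote can't be found verbatim, the value gets flagged.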

This is not a chatbot or other crap shoehorned into the app. Users are very excited about this - it automates painful data entry and allows them to check the source - which they actually do, because they understand the cost of getting the numbers wrong.


It's only a matter of time before ICE starts cracking down on Amazon Fulfillment Centers...

There was another comment that pointed out how unlikely this is to happen because Amazon is just too big to bring the law down on now.

When you're the, what, second? third? largest employer in the US, enforcing the law now becomes a meaningful hit to economic velocity. And as much as Trump hates brown people, his administration has begrudgingly revealed that there are moves his billionaire buddies Will Not Allow.

I'm no fan of ICE or this administration's deportation strategy, but it's a serious problem that enforcing the law on Amazon is now such an economic liability that nobody dares to try.


And that's exactly why they threw the book at her. It's paradoxical: all judges say they want you to come clean and show remorse, but in every case I've ever seen, they go much harder on those who admit their wrongdoing. I'm not agreeing either way, it's just an observation.

The best legal strategy is to show no remorse and to never admit any guilt even if you have been convicted.


I always just assumed it was Game Theory Optimal to simply deny, deny, deny, before, during, and after any trial or legal action. Never admit to any wrongdoing. Always "vigorously defend" from and "strenuously object" to anything damning. You never see a company's counsel advising admitting to anything bad whatsoever.

$500 a month for two environments with 4 CPUs and 8 GB of memory is diabolical. The only thing more expensive and with worse performance than AWS is Azure.

How is it slop? If you look closely and get over yourself for a moment, it has a powerful political message.

The conflict between the Skibidi Toilets (with human heads sticking out of toilets) and the Camera/TV/Speaker-headed humans can be seen as a metaphor for how people consume and spread media. The toilets constantly repeat a hypnotic song ("Skibidi dop dop yes yes"), representing mindless media repetition and viral trends. The Camera Men symbolize those who "watch" or document reality: observers trying to preserve truth amid absurdity.

It has themes of media control, surveillance, and propaganda: a battle over who shapes what people see and believe.


The choice of making the “good guys” camera heads and such does give one pause about wholeheartedly rooting for them. Intended or not, it really did have that effect on me.

And one does wonder whether that has anything to do with their enemies being, basically, clever, organized zombies…


I think this is a retcon. The thing started as dumb fun, and people tacked on episodes and meaning after the fact. It's just pareidolia, but with symbolism instead of faces, plus the meta-game of explaining the deep meaning of things that are not that deep.

The symmetry of the heads without bodies being on one side, and the bodies without heads on the other side is nice too.


This is a really easy problem to solve. You simply fetch those documents and add them to the context, or use another LLM to summarize them if they are too large. Then have another fact-checking LLM be the judge and review the citations.
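
Something like this, say (a sketch under obvious assumptions: judge_llm stands in for whatever prompt-in, completion-out call you use, and the verdict labels are invented):

    import urllib.request

    def fetch(url: str, limit: int = 20_000) -> str:
        """Pull the cited document; truncate (or summarize with a cheaper model) if huge."""
        with urllib.request.urlopen(url) as resp:
            return resp.read(limit).decode("utf-8", errors="replace")

    JUDGE_PROMPT = """You are a fact checker. Given a claim and the cited source text,
    answer SUPPORTED, CONTRADICTED, or NOT_FOUND, plus a one-line justification.

    Claim: {claim}

    Source:
    {source}"""

    def check_citation(claim: str, url: str, judge_llm) -> str:
        """judge_llm: any callable str -> str; swap in your provider of choice."""
        return judge_llm(JUDGE_PROMPT.format(claim=claim, source=fetch(url)))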


Anyone who claims something is easy to solve should be held responsible for providing the working solution.


A child of 5 could do this! Now fetch me a child of 5!


I know you’re trying to be funny, but:

You got the Groucho Marx gag wrong. He said “a child of five could understand this” and requested a child of five in order to help him understand it (implying he was stupid).


That latter part is a bit harder than you think.


There are more issues than just the discomfort and disorientation of experiencing visual movement without physical movement.

Most people who play video games do so as a leisure activity. It works because it puts you into a world that requires little physical exertion, just eye/hand coordination. You don't need to use your legs to jump in a video game. You don't even need to have legs.

In the vision of VR you're selling, you have to get up and do a lot of physical movement. The bulk of gamers just want to play their game after work or school and relax. Most of my friends feel the same way about VR as they did when I wanted them to play with the NES Power Pad when we were younger.

There are better, more entertaining options available on the 2D screen that don't require much physical movement. In fact, I'd say that anything other than eye/hand movement distracts from gameplay. "Jumping" in a video game isn't fun because you actually have to jump; it's fun because you don't. And that's a feature that appeals to exactly the people VR will never appeal to.


How does it make sense to trade one group of labor (humans), who are generally loosely connected and have little collective power, for another (AI)? What you're really doing isn't making work more "efficient"; you're just outsourcing work to another party, one you have very little control over. A party that is very well capitalized, and probably interested in taking more and more of your margin once it figures out how your business works (which will be easy, because you're helping it train AI models to do your business).


It’s the same as robots in a factory.


Except that the people who make robots for factories aren't interested in making whatever that factory is making.


That's not required. All that is required is becoming the sole source of labor, or the only economically realistic source.

If you ask me, that's the real long game on AI, and exactly why all these billionaires keep pouring money in. They know the only way to continue growth is to start taking over large sections of the economy.


Yes, that's the difference between robot makers (tool makers for others) and AI, which is not only trying to be a tool for other companies but also to take over their businesses: acquiring their knowledge, then using a combination of capture through lack of visibility and (mis)use of the information gathered to compete directly.

Classic enshittification, combined with embedding itself in a company's internal operations to become indispensable.

