They will hire anyone who can produce a model better than GPT5, which is the bar for fine tuning
Otherwise, you should just use gpt5
Preparing a few thousand training examples and pressing fine-tune can improve the base LLM in a few situations, but it can also make the LLM worse at other tasks in hard-to-understand ways that only show up in production, because the evals you built weren't good enough to catch them. It also has all of the failure modes of deep learning. There is a reason deep learning training never took off the way LLMs did, despite many attempts at building startups around it.
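One cheap guard against those silent regressions is to keep a held-out eval on tasks *outside* the fine-tuning domain and diff the tuned model against the base model. A minimal sketch; the two model functions here are stubs standing in for real model calls, and the prompts are made up for illustration:

```python
# Regression check: compare base vs. fine-tuned model on tasks outside
# the fine-tuning domain. The model functions are stubs; in practice
# they would call the actual models.
def base_model(prompt):
    return {"capital of France?": "Paris", "2 + 2 = ?": "4"}[prompt]

def tuned_model(prompt):
    # Pretend fine-tuning silently broke basic arithmetic.
    return {"capital of France?": "Paris", "2 + 2 = ?": "5"}[prompt]

held_out = [("capital of France?", "Paris"), ("2 + 2 = ?", "4")]

# A regression is a case the base model got right but the tuned model now misses.
regressions = [q for q, gold in held_out
               if base_model(q) == gold and tuned_model(q) != gold]

print(regressions)  # the silently broken cases
```

If that list is ever non-empty, you find out before production does.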
> They will hire anyone who can produce a model better than GPT5, which is the bar for fine tuning
Depends on what you want to achieve, of course, but I see fine-tuning at the current point in time primarily as a cost-saving measure: transfer GPT-5 levels of skill onto a smaller model, where inference is then faster and cheaper to run.
This of course slows down your innovation cycle, which is why, imo, it's generally not advisable.
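The usual mechanics of that transfer are distillation-style: have the large model answer your task inputs, then fine-tune the small model on those (prompt, answer) pairs. A sketch of the data-prep half, assuming the chat-style JSONL format most fine-tuning APIs accept; the teacher outputs are hard-coded stand-ins for real big-model calls:

```python
import json

# Hypothetical teacher outputs: in practice these would come from the
# large model's API; here they are hard-coded stand-ins.
teacher_examples = [
    {"prompt": "Classify sentiment: 'great product'", "completion": "positive"},
    {"prompt": "Classify sentiment: 'never again'", "completion": "negative"},
]

def to_chat_record(example):
    """Convert a (prompt, teacher completion) pair into the chat-style
    record that fine-tuning endpoints commonly expect."""
    return {
        "messages": [
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["completion"]},
        ]
    }

records = [to_chat_record(e) for e in teacher_examples]

# One JSON object per line, the usual fine-tuning upload format.
with open("distill_train.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```

The small model then gets fine-tuned on that file, and you eval it against the teacher before switching traffic over.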
I agree this is the main case where it makes sense.
But a recent trend that cuts into the cost savings is that foundation model companies have started releasing small models themselves. So you can build a use case with Qwen 235B, then shrink down to 30B, or even all the way down to 0.6B if you really want to.
The smaller models lose some accuracy, but some use cases are solvable even by these smaller and much more efficient models.
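Whether a smaller model is "good enough" is an empirical question, so the shrinking step is usually a sweep: run every candidate size against the same labeled eval set and pick the smallest one that clears your accuracy bar. A toy harness where the `run_model` stub and the model names are illustrative stand-ins for real inference calls:

```python
# Stubbed inference results: in practice run_model would call each
# candidate model's API on the prompt.
FAKE_ANSWERS = {
    ("qwen-235b", "2+2"): "4", ("qwen-235b", "3*3"): "9",
    ("qwen-30b",  "2+2"): "4", ("qwen-30b",  "3*3"): "9",
    ("qwen-0.6b", "2+2"): "4", ("qwen-0.6b", "3*3"): "6",
}

def run_model(model, prompt):
    return FAKE_ANSWERS[(model, prompt)]

eval_set = [("2+2", "4"), ("3*3", "9")]

def accuracy(model):
    return sum(run_model(model, q) == gold for q, gold in eval_set) / len(eval_set)

def smallest_passing(candidates, bar):
    """Candidates ordered largest -> smallest; return the smallest
    model whose eval accuracy still clears the bar."""
    passing = [m for m in candidates if accuracy(m) >= bar]
    return passing[-1] if passing else None

print(smallest_passing(["qwen-235b", "qwen-30b", "qwen-0.6b"], bar=0.9))
```

Here the 0.6B stub misses a case, so the sweep settles on the 30B as the cheapest model that still clears the bar.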
There is also a reason why you don’t have general purpose applications. Most users understand that Excel is for data tables and Paint is for images even though some people have fun playing with the boundary and creating Excel paintings.
Interesting you bring up Excel. ChatGPT's chat interface is going to be Excel for the AI era. Everyone knows there's a better interface to be had, but it just works.
It’s quite easy to produce a model that’s better than GPT-5 at arbitrarily small tasks. As of right now, GPT-5 can’t classify a dog by breed based on good photos for all but the most common breeds, which is like an AI-101 project.
Try doing a head-to-head comparison using all the LLM tricks available: prompt engineering, RAG, reasoning, inference-time compute, multiple agents, tools, etc.
Then try the same thing using fine-tuning and see which one wins. In ML class we have labeled datasets with dog breeds hand-labeled by experts like Andrej; in real life, users don't have specific, clearly defined, high-quality labeled data like that.
I’d be interested to be proven wrong
I think it is easy for strong ML teams to fall into this trap because they themselves can get fine-tuning to work well. Trying to scale it to a broader market is where it fell apart for us.
This is not to say that no one can do it. There were users who produced good models. The problem we had was where to consistently find these users who were willing to pay for infrastructure.
I’m glad we tried it, but I personally think it is beating a dead horse/llama to try it today
I mean, at the point where you're writing tools to assist it, we're no longer comparing the performance of two LLMs. You're taking a solution that requires a small amount of expertise and replacing it with another solution that requires more expertise and costs more. The question is not "can fine-tuning alone do better than every other trick in the book plus a SOTA LLM plus infinite time and money?" The question is: "is fine-tuning useful?"
> How can you hire enough people to scale that while making the economics work?
Once you (as in you the person) have the expertise, what do you need all the people for, exactly? To fine-tune, you need to figure out the architecture, how to train, how to infer, put together the dataset, and then run the training (optionally set up a pipeline so the customer can run the "add more data -> train" process themselves). What in this process do you need to hire so many people for?
> Why would they join you rather than founding their own company?
Same as always, in any industry, not everyone wants to lead and not everyone wants to follow.
The problem is that it doesn’t always work and when it does fail it fails silently.
Debugging requires knowing some small detail about your data distribution or how you did gradient clipping, which takes time and painstakingly detailed experiments to uncover.
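For reference, the gradient-clipping knob mentioned here is conceptually tiny, which is part of why misconfiguring it fails so quietly: the update still happens, just rescaled. A pure-Python illustration of global-norm clipping (the same scheme as e.g. `torch.nn.utils.clip_grad_norm_`), with made-up gradient values:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradient values so their L2 norm is at
    most max_norm; gradients already within bounds pass through unchanged."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm:
        return grads
    scale = max_norm / total_norm
    return [g * scale for g in grads]

spiky = clip_by_global_norm([3.0, 4.0], max_norm=1.0)  # norm 5 -> rescaled to norm 1
calm = clip_by_global_norm([0.1, 0.2], max_norm=1.0)   # already small -> untouched

print(spiky, calm)
```

Set `max_norm` too low and every step gets quietly crushed; too high and spikes pass through. Either way the run "works", just badly, which is exactly the silent failure mode being described.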
> The problem is that it doesn’t always work and when it does fail it fails silently.
Right, but why does that mean you need more employees? You need to figure out how to surface failures, rather than just adding more meat to the problem.
I think you misunderstand what they are saying - doing a good job of fine-tuning is difficult.
Training an LLM from scratch is trivial - training a good one is difficult. Fine-tuning is trivial - doing a good job is difficult. Hitting a golf ball is trivial - hitting a 300-yard drive down the middle of the fairway is difficult.