>One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of
This is precisely what led me to realize that while they have some use for code review and analyzing docs, for coding purposes they are fairly useless.
The hypesters' responses to this assertion fall exclusively into 5 categories. I've never heard a 6th.
>tax reform needs to be progressive, meaning it hits rich people more, and it hits them when they spend not when they earn.
This misses the point. It needs to hit them when they own not when they spend.
>A progressive tax will tax them more when they buy luxury items
Which still won't be progressive. It's the yield-bearing assets Jeff Bezos buys that are the main problem, not the fancy candlesticks he decorates his house with.
This was already true before LLMs. "Artisanal software" was never the norm. The tsunami of crap just got a bit bigger.
Unlike clothing, software always scaled. So, it's a bit wrongheaded to assume that the new economics would be more like the economics of clothing after mass production. An "artisanal" dress still only fits one person. "Artisanal" software has always served anywhere between zero people and millions.
LLMs are not the spinning jenny. They are not an industrial revolution, even if the stock market valuations assume that they are.
Agreed, software was always kind of mediocre. This is expected given the massive first mover advantage effect. Quality is irrelevant when speed to market is everything.
Unlike speed to market, quality doesn't manifest in an obvious way, but I've watched several companies lose significant market share because they didn't appreciate software quality.
The thing I find interesting is that there is trillions of dollars in valuations hinging upon this question and yet the appetite to spend a little bit of money to repeat this study and then release the results publicly is apparently very low.
It reminds me of global warming, where on one side of the debate there were some scientists with very little money running experiments, and on the other side there were some ridiculously wealthy corporations publicly poking holes in those experiments while secretly knowing, since the 1960s, that they were valid.
Yeah, it's kind of a Bayesian probability thing, where the impressiveness of either outcome depends on what we expected to happen by default.
1. There are bajillions of dollars in incentives for a study declaring "Insane Improvements", so we should expect a bunch to finish being funded, launched, and released... Yet we don't see many.
2. There is comparatively no money (and little fame) behind a study saying "This Is Hot Air", so even a few seem significant.