> Insurance that is maximally responsive to patient health changes in terms of cost (ie making healthier people pay less) ends up being an inefficient way of just having people pay for their healthcare directly.
That's true for predictable costs, but not for unpredictable ones - which are what most insurance (housing, car, etc.) exists for. The point of insurance is to move risk to entities that can bear it.
Utility is non-linear in money, so you can easily have situations where spending X times more on something "costs" you more than X times as much, measured in how useful the money is to you.
Typically, as you have more money, each further dollar provides less benefit than the last (sometimes things are lumpy: the difference between "not quite enough to pay rent" and "just enough to pay rent" is huge, but broadly this holds). Going from $1,000 to $10,000 is more impactful than going from $1,001,000 to $1,010,000.
That means that, moving the other way, each additional dollar spent has a greater personal cost to you than the one before.
Therefore, sharing unlikely but large expenses can mean that your expected dollar cost is the same (if there's no profit or middleman) or a bit higher, while your expected personal cost, in utility terms, is lower.
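A rough sketch of that argument in Python, assuming log utility as a stand-in for diminishing marginal utility (all the numbers are made up for illustration):

```python
import math

wealth = 50_000        # starting wealth
loss = 40_000          # rare, large expense (e.g. a big hospital bill)
p = 0.01               # chance of the loss in a given year
premium = p * loss     # actuarially fair premium: no profit/middleman

def utility(w):
    return math.log(w)  # log utility: each extra dollar helps less

# Expected *dollar* cost is identical either way: p * loss == premium.
eu_uninsured = (1 - p) * utility(wealth) + p * utility(wealth - loss)
eu_insured = utility(wealth - premium)

print(f"uninsured: {eu_uninsured:.5f}")  # ~10.80368
print(f"insured:   {eu_insured:.5f}")    # ~10.81175
```

Same expected spend, higher expected utility when insured - that gap is what you're actually buying.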
Ideally, IMO, this would be Accept headers: you're asking for the same semantic content in a different format. I'm not sure there's a nice way of specifying HTML in a minimal sense (we could keep quality with images, perhaps linked), but these could mostly be text/plain or text/markdown (and it'd be nice if the browser then formatted that properly).
This often makes a really nice API if you can do other formats too - the main page of CNN could respond to an RSS Accept header and give me a feed, for example.
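As a hedged sketch of the idea (the content, port, and format choices are all made up, and a real implementation would parse q-values properly instead of substring-matching):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ARTICLE_MD = "# Hello\n\nSame content, negotiated format.\n"
ARTICLE_HTML = "<h1>Hello</h1><p>Same content, negotiated format.</p>"
FEED_RSS = "<?xml version='1.0'?><rss version='2.0'><channel></channel></rss>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Naive content negotiation: pick a representation by Accept header.
        accept = self.headers.get("Accept", "")
        if "text/markdown" in accept or "text/plain" in accept:
            body, ctype = ARTICLE_MD, "text/markdown"
        elif "application/rss+xml" in accept:
            body, ctype = FEED_RSS, "application/rss+xml"
        else:
            body, ctype = ARTICLE_HTML, "text/html"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 8000), Handler).serve_forever()
```

Then `curl -H 'Accept: text/markdown' http://localhost:8000/` gets the markdown, `curl -H 'Accept: application/rss+xml' http://localhost:8000/` gets a feed, and a browser gets HTML - one URL, several representations.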
It is: the content loads first, then the JS for the cookie banner, then the favicon. If the JS fails to load (I blocked the request as a test), the page loads just fine; it isn't blocked by that.
I’ve often wanted a setup where it became physically harder to send an email to me the more unread ones I have to deal with. Like having to cram an extra letter into a pigeon hole that’s already full.
Or maybe make the Slack send button extra hard to press when sending messages after hours: you’d need to apply pressure and sustain it for some time before the message goes. The urgency and anxiety are built into the interface X-D
It doesn’t need to do all of a job to reduce total jobs in an area. Remove the programming part, and you can reduce the number of people needed for the same output and/or bring people who can’t program but can do the other parts into the fold.
> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?
Because believing you can replace some or even most engineers still leaves room for hiring the best; it actually increases the value of the best. And that’s only considering right now - they could believe they have tools coming in two years that will replace many more engineers, yet still hire them today.
> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.
These are all things that LLMs are doing, with varying degrees of success, though. They’re reviewing code; they can push back on certain approaches (I know because I had this with 5.1); they absolutely can decide which parts of a codebase to change.
And as for turning vague problems into clearer features? Is that not something they’re unbelievably suited for?
Agreed. I’ll grant many of the article’s points, but not its conclusion.
> “You got way more productive, so we’re letting you go” is not a sentence that makes a lot of sense.
Actually, this sentence makes perfect sense if you tweak it slightly:
> You and your teammate got way more productive, so we’re letting (just) you go
This literally happens all the time with automation. Does anyone think the number of people employed in accounting would be the same today, rather than far higher, without calculators and computers?
> And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?
I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.
I see AI turning many other folks’ thoughts into garbage, because it so easily heads in the wrong direction and they don’t understand how to build self-checking into their thinking.
It’s about what you want to tie to which system. Let’s say you keep some data in memory in your backend: would you forbid engineers from putting code there too and force it a layer out to the frontend, or make up a new layer between the frontend and this backend, just because some blogs tell you to?
If not, why would you then avoid putting code alongside your data at the database layer?
There are definitely valid reasons not to do it in some cases, but as a blanket statement it feels odd.
Stored procedures can do things like smooth over transitions, by letting a query not know or care about the underlying structure. They can cut down on duplication and round trips to the database. They can also be a nightmare, as in most cases where logic lives in the wrong place.
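A hedged sketch of the "smooth over transitions" point, assuming Postgres with psycopg2; the table, function, and column names here are all hypothetical:

```python
import psycopg2

conn = psycopg2.connect("dbname=shop")  # assumed connection string
cur = conn.cursor()

# Suppose balances used to live as a column on a customers table, and
# were later split out into a ledger. The procedure keeps the old
# contract, so no application code has to change with the schema.
cur.execute("""
    CREATE OR REPLACE FUNCTION get_customer_balance(cust_id integer)
    RETURNS numeric AS $$
        SELECT COALESCE(SUM(amount), 0)
        FROM ledger_entries
        WHERE customer_id = cust_id;
    $$ LANGUAGE sql STABLE;
""")
conn.commit()

# Callers are insulated from the underlying structure:
cur.execute("SELECT get_customer_balance(%s)", (42,))
print(cur.fetchone()[0])
```

The application only ever calls `get_customer_balance()`; the migration happens entirely behind that name, which is the same encapsulation argument people happily accept one layer up.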