The subject is productivity. Time to merge is about as useful a metric as Lines of Code for determining productivity.
I can merge hundreds of changes, but if they're low quality or introduce bugs, that's not really more productive.
LLMs excel at producing work that shouldn't be done in the first place. So one can definitely write a superfluous comment in the style of a superfluous comment.
I keep testing LLMs, and they're remarkably bad at writing convincingly in the style of a good writer. That's because good writing is driven by intent, and LLMs have no intent.
I'm baffled by AI fans who seem sceptical that writing styles exist, and that discerning styles is just part of reading any text at all.
AI fans seem to be people who literally can't tell good from bad, and get upset when you maintain that you in fact can. They think you're having them on.
I can't give much more detail than this, but I understand Tim O'Reilly has caught the AI virus badly and makes his reports tell him, every single day, what they've done with AI that day. So I've got a first guess.
I'm an AI Moderate, with much more sensible opinions - you can tell, because I used a sensible-sounding name for myself! - and it's clear that the moderate position is to set it on fire. The only reason it seems useful for search is that AI vendor Google deliberately made its search worse to increase time spent on the search page. That's not actually a use case for "AI" in any way.
True, but at this point you're basically doing Windows-on-Linux-on-Windows. Then again, why not... the applications will run far faster than on the hardware they were originally designed for anyway.
Because despite being more reliable and energy efficient, the other costs associated with it were higher. It's one thing to dunk a 14.3 m L x 12.7 m D vessel in the ocean for a 240 kW setup. It's another to scale that up to a "full scale" data center that is about 200x larger, which brings additional electrical supply challenges.
Let's say each pod needs to be serviced once every two years. At that scale, that means having a ship that services one pod every ~3 days.
From the standpoint of a single-pod data center and "does this work?", the answer is "yes, it works better than we thought it would." From the standpoint of "can we scale this to a full data center?", the answer is "we'd need a ship servicing a pod twice a week, with all the logistics that entails for the ship (and a backup ship)." That second part is far less practical than building a data center on terra firma, where it's much easier to walk into a building to service it and hook up the power.
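The servicing cadence above is simple back-of-the-envelope arithmetic; a quick sketch (assuming the comment's figures of ~200 pods and a two-year service interval per pod):

```python
# Back-of-the-envelope servicing cadence for a scaled-up underwater data center.
PODS = 200                        # "full scale" is ~200x a single pod
SERVICE_INTERVAL_DAYS = 2 * 365   # each pod serviced once every two years

# On average, one pod somewhere in the fleet comes due this often:
days_between_service_trips = SERVICE_INTERVAL_DAYS / PODS
trips_per_week = 7 / days_between_service_trips

print(f"{days_between_service_trips:.2f} days between trips")  # ≈ 3.65
print(f"{trips_per_week:.1f} trips per week")                  # ≈ 1.9
```

So a rare, cheap-sounding maintenance event per pod turns into a ship on near-continuous rotation once the fleet is large enough.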
it is hard to make a man understand something when his giant cash incinerator that just sets billions of dollars on fire depends on not understanding it
then they put the vibes on a graph, which presumably transforms them into data