zahlman's comments | Hacker News

From my recollection, this is quite a common issue with studies in this topic area.

> 1.92 exabytes of total data transferred

That's something like triple the amount from 2023, yes?


Making predictions is useful even when they turn out to be very wrong. Consider also giving confidence levels, so that you can calibrate going forward.
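
To make the calibration part concrete, here's a minimal sketch (Python, with made-up predictions) of scoring your stated confidences after the fact via a Brier score:

    # Brier score: mean squared error between stated confidence and outcome.
    # 0.0 is perfect; always saying "50%" scores 0.25. Lower is better.
    # The predictions below are made-up examples, not anyone's real forecasts.
    predictions = [
        (0.9, True),   # "90% confident X happens" -- and it did
        (0.6, False),  # "60% confident Y happens" -- it didn't
        (0.8, True),
    ]

    brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")

Track that number across prediction rounds and you can see whether your 90%s actually behave like 90%s.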

I use predictions to prepare rather than to plan.

Planning depends on a deterministic view of the future. I used to plan (especially annual plans) until about 5 years ago. Now I scan for trends and prepare myself for the different scenarios that could unfold. Even if you get it only approximately right, you stand apart.

For tech trends, I read Simon, Benedict Evans, Mary Meeker, etc. Simon is in a better position to make these predictions than anyone else, having closely analyzed these trends over the last few years.

Here I wrote about my approach: https://www.jjude.com/shape-the-future/


I'm not really convinced that anywhere leans heavily towards anything; it depends which thread you're in etc.

It's polarizing because it represents a more radical shift in expected workflows. Seeing that range of opinions doesn't really give me a reason to update, no. I'm evaluating based on what makes sense when I hear it.


Speaking of which, we never found out the details (strike price/expiration) of Michael Burry's puts, did we? It seems he could have made bank if he'd waited one more month...

I think they expire in March 2026, paying off if NVIDIA stock drops below $140 a share? Something close to that, anyway.
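
For anyone unfamiliar with how the payoff works, a minimal sketch; the $140 figure is just this thread's guess (the actual strike was never disclosed), and the spot price is hypothetical:

    # Put payoff at expiry, ignoring the premium paid for the contract.
    def put_payoff(strike: float, spot: float) -> float:
        return max(strike - spot, 0.0)

    print(put_payoff(140.0, 120.0))  # 20.0 per share if NVDA fell to $120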

As much as I side with you on this one, I really don't think this submission is the right place to rant about it.

> leverage the ocean by gathering the seaweed and dumping it on marginal or desert land

Why would it gather more CO2 over there than it does where it already is?


They're suggesting harvesting it and sequestering it elsewhere, making room for more to grow.

Presumably relocating it makes room for more?

This should be in Show HN.

> I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.

My usual thought is that boilerplate tells me, by existing, where the system is most flawed.

I do like the idea of having a tool that quickly patches the problem while also forcing me to think about its presence.

> There isn't a binary between having an LLM generate all the code and writing it all myself. I still do most of the design work because LLMs often make questionable design decisions.

One workflow that makes sense to me: have the LLM commit on a branch; fix simple issues by hand rather than prompting it until things work (with all the attendant worry of context poisoning); refactor on the same branch; merge; and then repeat for the next feature, starting more or less from scratch except for the agent config (CLAUDE.md etc.). Does that sound about right? Maybe you do something less formal?
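
Something like this, concretely? (A minimal sketch assuming git; the branch name and the agent step are placeholders, not a claim about anyone's actual tooling.)

    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)  # fail loudly on any git error

    def feature_loop(branch: str) -> None:
        run("git", "switch", "-c", branch)   # the LLM works and commits here
        # ...hand-fix small issues and refactor on the same branch,
        # committing as you go, instead of re-prompting the agent...
        run("git", "switch", "main")
        run("git", "merge", "--no-ff", branch)
        # Next feature starts fresh; only CLAUDE.md etc. carry over.

    feature_loop("feature/llm-draft")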

> Sometimes I simply want a program to solve a purpose (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.

Yeah, that sounds about right.


> deep / fast / craft-and-decomposition-loving vs black box / outcome-only

As a (self-reported) craft-and-decomposition lover, I wouldn't call the process "fast".

Certainly it's much faster than if I were trying to take the same approach without the same skills, and certainly I could slow it down with over-engineering. (And "deep" absolutely fits.) But the people I've known whom I'd characterize as strongly "outcome-only" were capable of sustaining some pretty high delta-LoC per day.

