I think the parent comment is referring to its parent's question "Is it unreasonable to think Framework should be able to make a laptop competitive with the 5 years old MacBook Air M1?"
That the Framework 12 is not lagging extremely far behind the M4 (a subjective comparison) might lead one to believe that it would be competitive with a five-year-old M1 Air. Taking a quick look at Cinebench R23 results from 2020 [0], the MacBook Air M1 comes in at 1,520 single-core and 7,804 multi-core, which compares favorably to 2025 results in which the Framework 12's i5-1334U scores 1,474 single-core and 4,644 multi-core: single-core is roughly a wash, but the M1's multi-core score is about 68% higher.
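Putting those scores side by side:

                       single-core   multi-core
    MBA M1 (2020)      1,520         7,804
    i5-1334U (2025)    1,474         4,644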
The answer is that it isn't competitive performance-wise. Given that the M1 has some native Linux support through Asahi, the Framework's advantages over the five-year-old MBA M1 seem to be user-accessible hardware changes, a touchscreen, and a longer hinge throw.
Except the M1 Air has no fan and will be dead silent doing that.
The Framework won’t be.
Once you get used to an inaudible laptop you really don’t want to go back. There’s nothing wrong with a fan you literally can’t hear without putting your head up against the laptop.
I would do anything to get rid of the hairdryers in my life pretending to be laptops.
The performance is great, and now there's a fully stable userspace graphics driver stack. Peripherals basically work. The battery life under load (i.e. development) is serviceable, not terrible, but in my limited testing (mostly of the "I turn on my laptop after it's been sitting a while" variety), battery drain isn't even close to macOS, especially when the machine is turned off. This is with a 13" M2 Air.
It's a really good Linux laptop if you can find an M2 somewhere, IMO.
Everyone I've talked to about this has consistently agreed that the convoluted multi-step interface required for scrolling and zooming is far more distracting than a simple pinch-to-zoom... so that intention would be misguided, to say the least.
It also seems like every in-car map interface except CarPlay supports pinch-to-zoom these days, including the OEM maps from manufacturers like VW and Tesla, Android Automotive, and Android Auto. VW won't even let you see your backup camera while you're driving because they think it might be too distracting to have another way to see what's happening behind you, but they think pinch-to-zoom is just fine.
I don't know the details, but I'm fairly sure it's not possible to open the backup camera while driving, at least in any car I've driven in my life.
I remember someone tried flashing the car settings to raise the km/h "limit" at which the backup camera turns off above the default, but the system just threw an error. I can't really say why that's the case; maybe because the limit is mandated (in the EU?).
That quoted IOPS number is only achievable with an 8-disk stripe, which I believe requires the full instance, even if you don't need 488GB of RAM or a $3,600/mo bill.
The per-disk performance is still nothing to write home about, and 8 actually fast disks would blow this instance type out of the water.
Historically, the term you're looking for might have been "patronage". Wealthy individuals supported artists, scientists, or explorers not purely for financial return, but because they believed in the person or the cause, valued the association, enjoyed the influence, or some mix of those.
ChatGPT is the number one free iPhone app on the US App Store, and I'm pretty sure it has been the number one app for a long time. I googled to see if I could find an App Store ranking chart over time... this one[0] shows that it has been in the top 2 on the US iPhone App Store every month for the past year, and it has been number one for 10 of the past 12 months. I also checked, and ChatGPT is still the number one app on the Google Play Store too.
Unless both the App Store and Google Play Store rankings are somehow determined primarily by HN users, it seems like AI isn't only a thing on HN.
Close to 100% of HN users in AI threads have used ChatGPT. What do you think the percentage is in the general population: more than that, or less?
Another thing you’re running into is the context window. Ollama sets a low context window by default, 4096 tokens IIRC. The reasoning process can easily take more than that, at which point the model forgets most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window (num_ctx) to something reasonable, such as 32k.
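For example, here's a minimal sketch of overriding num_ctx per-request through Ollama's REST API, assuming a local server on the default port (the model tag is a placeholder for whatever you've pulled):

    # Minimal sketch: raise Ollama's context window for one request.
    # Assumes a local Ollama server on the default port; "qwen3" is a
    # placeholder for whatever model tag you've actually pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3",
            "prompt": "Summarize the plan so far.",
            "stream": False,
            "options": {"num_ctx": 32768},  # context window, in tokens
        },
    )
    print(resp.json()["response"])

You can also bake the setting into a model with a Modelfile (PARAMETER num_ctx 32768) or set it interactively with /set parameter num_ctx 32768 inside ollama run.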
Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.