
Just a typo note: the flow diagram in the article says "Gemini 2.5 Pro Lite", but there is no such thing.


You are right; it's Gemini 2.5 Flash Lite.


> The article shows a few charts where a Framework laptop is faster than M4 Air both in single and multicore CPU benchmarks.

Every single chart in the article showed the M4 MacBook Air beating the Framework 12 by a large margin.

I don't know what charts you were looking at.


I think the parent comment is referring to its parent's question "Is it unreasonable to think Framework should be able to make a laptop competitive with the 5 years old MacBook Air M1?"

That the Framework 12 is not extremely lagging behind the M4 (subjective comparison) might lead one to believe that it would be competitive with a five-year-old M1 Air. Taking a quick look at "Cinebench R23" from 2020 [0], the MacBook Air M1 comes in at 1,520 and 7,804 (single- and multi-core, respectively), which compares favorably to 2025's "Cinebench R23", in which the Framework 12's i5-1334U scores 1,474 and 4,644.

The answer is that it isn't competitive performance-wise. Given that the M1 seems to have some native Linux support through Asahi, the Framework's advantages over the five-year-old MBA M1 seem to be user-accessible hardware changes, the touchscreen, and longer hinge throw.

0. https://arstechnica.com/gadgets/2020/11/hands-on-with-the-ap...


Except the M1 Air has no fan and will be dead silent doing that.

The Framework won’t.

Once you get used to an inaudible laptop you really don’t want to go back. There’s nothing wrong with a fan you literally can’t hear without putting your head up against the laptop.

I would do anything to get rid of the hairdryers in my life pretending to be laptops.


Does Asahi actually maintain the MacBook's performance and battery advantage when running Linux, though?


The performance is great, and now there's a fully stable userspace graphics driver stack. Peripherals basically work. The battery life under load (i.e. development) is serviceable, not terrible, but in my limited "I turn on my laptop after some amount of time" testing it's not even close to macOS, especially when the laptop is turned off. This is with a 13" M2 Air.

It's a really good Linux laptop if you can find an M2 somewhere, IMO.


Does CarPlay Ultra add support for pinch-to-zoom?


I'm pretty sure CarPlay not having pinch-to-zoom, along with the overall simplified UI, was intentional to avoid being overly distracting to the driver.


Everyone I've talked to about this has consistently agreed that the convoluted multi-step interface required for scrolling and zooming is far more distracting than a simple pinch-to-zoom... so that intention would be misguided, to say the least.

It also seems like every in-car map interface except CarPlay supports pinch-to-zoom these days, including the OEM maps from manufacturers like VW and Tesla, Android Automotive, and Android Auto. VW won't even let you see your backup camera while you're driving because they think it might be too distracting to have another way to see what's happening behind you, but they think pinch-to-zoom is just fine.


I don't know the details, but I am sure it's not possible for any car to open the backup camera while driving (at least in every single car I've driven in my life). I remember someone tried to flash the car settings and set the km/h "limit" at which the backup camera closes higher than the default, but the system just threw an error. I can't really say for sure why that's the case; maybe it's mandated (in the EU?).


That quoted IOPS number is only achievable with an 8-disk stripe, which requires renting the full instance even if you don't need 488GB of RAM or a $3600/mo bill, I believe.

The per-disk performance is still nothing to write home about, and 8 actually fast disks would blow this instance type out of the water.


> Privacy and Security by Default [...] you can also run custom models on your own hardware via Ollama.

That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]

Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?

[0]: https://github.com/zed-industries/zed/issues/15968

[1]: https://zed.dev/blog/edit-prediction


We definitely plan to add support for this! It's on the roadmap, we just haven't landed it yet. :)


That’s good to hear!


People have been perfectly capable of making that mistake themselves since long before "vibe coding" existed.


I've never actually tried them, but if you google "RPLIDAR", there seem to be some budget-friendly options out there.


Historically, the term you're looking for might have been "patronage". Wealthy individuals supported artists, scientists, or explorers not purely for financial return, but because they believed in the person, the cause, valued the association, enjoyed the influence, or whatever else.


This sounds right to me


ChatGPT is the number one free iPhone app on the US App Store, and I'm pretty sure it has been the number one app for a long time. I googled to see if I could find an App Store ranking chart over time... this one[0] shows that it has been in the top 2 on the US iPhone App Store every month for the past year, and it has been number one for 10 of the past 12 months. I also checked, and ChatGPT is still the number one app on the Google Play Store too.

Unless both the App Store and Google Play Store rankings are somehow determined primarily by HN users, then it seems like AI isn't only a thing on HN.

[0]: https://app.sensortower.com/overview/6448311069?tab=category...


Close to 100% of HN users in AI threads have used ChatGPT. What do you think the percentage is in the general population, is it more than that, or less than that?


Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.
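
If you're hitting Ollama over its HTTP API rather than the CLI, num_ctx can also be raised per request. A minimal sketch, assuming the default localhost:11434 endpoint, with the model tag as a placeholder for whatever you've actually pulled:

  # override the context window for this request only
  curl http://localhost:11434/api/generate -d '{
    "model": "qwen3:14b",
    "prompt": "Why is the sky blue?",
    "options": { "num_ctx": 32768 }
  }'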

Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.


I can't believe Ollama hasn't fixed the context window limits yet.

I wrote a step-by-step guide a while ago on how to set up Ollama with a larger context length: https://prompt.16x.engineer/guide/ollama

TLDR

  ollama run deepseek-r1:14b
  # inside the interactive session:
  /set parameter num_ctx 8192
  /save deepseek-r1:14b-8k
  # then, in a separate terminal, start the server if it isn't already running:
  ollama serve
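
After that, "ollama run deepseek-r1:14b-8k" (or pointing whatever client you use at that saved tag) loads the model with the larger context window baked in.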

