thehamkercat's comments

fwiw, the maintainer of Claude Code has also said that his December contributions to claude-code were 100% written by claude-code,

which introduced so many bugs that people unsubscribed.


Claude Code has more than 5,000 open issues.

You can put all the data in the URL itself :)
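A rough sketch of the idea (the state shape and the example.com URL are made-up placeholders): serialize the state to JSON, base64url-encode it, and put it in the URL fragment so nothing ever hits a server.

    import base64
    import json

    # Made-up app state; any JSON-serializable dict works.
    state = {"title": "groceries", "items": ["milk", "eggs"]}

    # Encode: JSON -> base64url (strip '=' padding to keep the URL tidy).
    blob = base64.urlsafe_b64encode(json.dumps(state).encode()).decode().rstrip("=")
    url = f"https://example.com/app#{blob}"

    # Decode on page load: restore padding, then parse the JSON back out.
    payload = url.split("#", 1)[1]
    restored = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    assert restored == state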

Just implemented it!


You could maybe create a 3D-printed case.

Next they'll acquire and kill Cerebras. I hate every part of Nvidia.


If I didn't work in tech, I would work for tech.

I'm a Linux user, but I hate that the majority of macOS apps/tools are paid, and I want to change that.

If money were not an issue and I had all the time in the world, I'd give it all to open-source software.

Starting with rewriting all the paid/subscription apps created for macOS.

For example, I'd create exact replicas of Alfred, Raycast, Bartender, BetterDisplayTool, etc., but completely FOSS.


I think you should mention that LM Studio isn't open source.

I mean, what's the point of using local models if you can't trust the app itself?


You can always use something like Little Snitch to keep it from phoning home.


> I mean, what's the point of using local models if you can't trust the app itself?

And you think Ollama doesn't do telemetry etc. just because it's open source?


You're welcome to go through the source: https://github.com/ollama/ollama/


That's why I suggested using llama.cpp in my other comment.


Depends what people use them for; not every user of local models is doing so for privacy. Some just don't like paying for online models.


Most LLM sites now offer free plans, and they are usually better than what you can run locally, so I think people are running local models for privacy 99% of the time.


LM Studio is not open source, though; Ollama is.

But people should use llama.cpp instead.


I suspect Ollama is at least partly moving away from open source as they look to raise capital; when they released their replacement desktop app, they did so as closed source. You're absolutely right that people should be using llama.cpp: not only is it truly open source, it's also significantly faster, has better model support and many more features, is better maintained, and its development community is far more active.


The only issue I have found with llama.cpp is getting it to work with my AMD GPU. Ollama works almost out of the box, both in Docker and directly on my Linux box.


> The only issue I have found with llama.cpp is getting it to work with my AMD GPU.

I had no problems with ROCm 6.x but couldn't get it to run with ROCm 7.x. I switched to Vulkan and the performance seems OK for my use cases.


The desktop app is open source now.


> but people should use llama.cpp instead

MLX is a lot more performant than Ollama and llama.cpp on Apple Silicon, comparing both peak memory usage and tokens/s output.

edit: LM Studio benefits from MLX optimizations when running MLX-compatible models.
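For anyone curious, running an MLX-format model directly only takes a couple of lines with mlx-lm. This is just a sketch: it assumes `pip install mlx-lm` on Apple Silicon, and the model repo name is only an example.

    # Sketch: run an MLX-format model with mlx-lm on Apple Silicon.
    from mlx_lm import load, generate

    # Example 4-bit community conversion; any MLX-format repo works here.
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

    text = generate(model, tokenizer,
                    prompt="Explain KV caching in one sentence.",
                    max_tokens=128, verbose=True)
    print(text)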


> LM Studio is not open source, though; Ollama is.

And why should that affect usage? It's not like Ollama users fork the repo before installing it.


It was worth mentioning.


Note that there's also "LlamaBarn" (macOS app): https://github.com/ggml-org/LlamaBarn


Ollama did not open source their GUI.



Thanks, I stand corrected.


ik_llama is almost always faster when tuned. When untuned, however, I've found them to be very similar in performance, with varied results as to which performs better.

But vLLM and SGLang tend to be faster than both of those.


Besides, optimizations specific to running locally land in llama.cpp first.


suspicious comments



This is amazing, you beat me to it. I've had this on my TODO list since last year.

I wanted to call it "Remind-me-when"

for example: "remind me when the movie Weapons is less than 7 days from release"

or "remind me when the site something.com goes down"

