cqql's comments | Hacker News

I read this and think: if they see DeepSeek as a threat to national security because it sends personal data to another country, then isn't any good US-hosted LLM a threat to every other country?


I'm glad to hear that the view that we should move away from heavy client-side frontends is gaining popularity. On a slow connection and a mid-range device they were almost unusable, and often laggy on my smartphone.

Good to know that moving to a server-rendered frontend cuts lines of code and development time.

These big clients were a fun and exciting experiment, but they can go now; we're done.


What kind of resources do I need to run these models? Even if I run them on a CPU, how do I know how much RAM a given model needs? I've tried reading about it but can't find a conclusive answer, other than downloading models and trying them out.
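A rough back-of-the-envelope estimate (a rule of thumb, not an authoritative figure): the weights take about parameter count × bytes per weight, plus some overhead for the KV cache, activations, and the runtime itself. The overhead factor below is an assumption; actual usage depends on context length and the inference engine.

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 1.3) -> float:
    """Approximate RAM needed to run an LLM locally.

    Assumptions: weights dominate memory use, and roughly 30% extra
    covers the KV cache, activations, and runtime (a guess, not a spec).
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# An 8B model, 4-bit quantized: 8e9 * 0.5 bytes * 1.3 ≈ 5.2 GB
print(round(estimate_ram_gb(8, 4), 1))
```

By the same arithmetic, an 8B model at full 16-bit precision would need around 20 GB, which is why quantized builds are what people typically run on consumer hardware.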


On a Mac with 16 GB of RAM you can run the 8B models.


