Hacker News | emersonmacro's comments

Pulse MCP has a weekly email digest


What models do you get the best results with locally?


I like mistral:7b-instruct, yi:34b, and wizard-vicuna-uncensored:30b. The so-called "uncensored" models tend to work better for general-purpose use, but mistral and yi aren't available in uncensored variants.

I have an M2 Pro with 32 GB of memory, so I need to use 3-bit quantization to run Mixtral: dolphin-mixtral:8x7b-v2.5-q3_K_S. In general I prefer not to go below 4-bit quantization.
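A rough back-of-envelope for why 32 GB forces 3-bit here (parameter count is an assumption from public model cards, and this ignores KV cache and runtime overhead):

```python
# Rough memory estimate for a quantized model:
#   bytes ≈ parameter_count * bits_per_weight / 8
def approx_gib(params_billions: float, bits: float) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    return params_billions * 1e9 * bits / 8 / 2**30

# Mixtral 8x7B has ~46.7B total parameters (assumption; experts share some layers)
mixtral_b = 46.7

q3 = approx_gib(mixtral_b, 3)  # roughly 16 GiB
q4 = approx_gib(mixtral_b, 4)  # roughly 22 GiB
print(f"q3: {q3:.1f} GiB, q4: {q4:.1f} GiB")
```

At 4 bits the weights alone approach 22 GiB, which plus the OS, the KV cache, and other apps gets tight on a 32 GB machine, while 3 bits leaves more headroom.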

