Hacker News

So where did you load up Qwen, and how did you supply the PDF or photo files? I don't know how to use these models, but I want to learn.


LM Studio[0] is the best "i'm new here and what is this!?" tool for dipping your toes in the water.

If the model supports "vision" or "sound", that tool makes it relatively painless to take your input file + text and feed it to the model.

[0]: https://lmstudio.ai/
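Beyond the GUI, LM Studio can also serve a loaded model through an OpenAI-compatible local API (by default at http://localhost:1234/v1). As a minimal sketch of how an image plus text gets fed to a vision model that way, here's a hypothetical helper (the function name is mine, not LM Studio's) that builds a chat message in the standard OpenAI vision format, with the image inlined as a base64 data URL:

```python
import base64

def build_vision_message(prompt: str, image_bytes: bytes,
                         mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat message pairing text with an inline image.

    The image is embedded as a base64 data URL, which OpenAI-compatible
    servers (including LM Studio's local one) accept for vision models.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# To actually send it, point any OpenAI-compatible client at LM Studio's
# local server (default base URL http://localhost:1234/v1) once a
# vision-capable model is loaded there.
```

The same message shape works from any OpenAI-compatible client library, so you can swap the GUI for scripting without changing how the model is run.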


Jumping on this for visibility - LM Studio really is the best option out there. Ollama is another runtime I've used, but I've found it makes too many assumptions about what a computer is capable of, and it's almost impossible to override those settings. It often overloads weaker computers and underestimates stronger ones.

LM Studio isn't as "set it and forget it" as Ollama is, and it does have a bit of a learning curve. But if you're doing any kind of AI development and you don't want to mess around with writing llama-cpp scripts all the time, it really can't be beat (for now).


Thank you! I will give it a try and see if I can get that 4090 working a bit.


You can use their models at chat.qwenlm.ai; it's their official website.


I wouldn't recommend using anything that can transmit data back to the CCP. The model itself is fine since it's open source (and you can run it firewalled if you're really paranoid), but directly using Alibaba's AI chat website should be discouraged.


AnythingLLM is also good for that GUI experience!


I should add that sometimes LM Studio just feels better for the use case: same model, same purpose, yet seemingly different output, usually when RAG is involved. But AnythingLLM is definitely a very intuitive visual experience.



