Hacker News
samspenc on Aug 16, 2023 | on: Show HN: LlamaGPT – Self-hosted, offline, private ...
Ah fascinating, just curious, what's the technical blocker? I thought most of the Llama models were optimized to run on GPUs?
mayankchhabra on Aug 16, 2023
It's fairly straightforward to add GPU support when running on the host, but LlamaGPT runs inside a Docker container, and that's where it gets a bit challenging.
stavros on Aug 17, 2023
It shouldn't be: Nvidia provides a CUDA Docker plugin that lets you expose your GPU to the container, and it works quite well.
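For readers following along, GPU passthrough like this is typically wired up in Docker Compose. A minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host; the service and image names here are placeholders, not LlamaGPT's actual ones:

```yaml
# Placeholder Compose service showing the standard Docker Compose syntax
# for reserving a host NVIDIA GPU for a container.
services:
  llm:
    image: my-llm-image:latest   # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # or `count: all` for every GPU
              capabilities: [gpu]
```

Inside the container, `nvidia-smi` should then list the host GPU, which is a quick way to verify the passthrough worked.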
dicriseg on Aug 17, 2023
See above if you're interested in that. It does work quite well, even with nested virtualization (WSL2).
stavros on Aug 17, 2023
I am, thanks!