
Impressive for a small model.

Two questions / thoughts:

1. I stumbled around for a while looking for the license on your website before finding the Apache 2.0 mark on the Hugging Face model page. That's big! Advertising that on your website and the GitHub repo would be nice. Though what's the business model?

2. Given the Llama 3 backbone, what's the lift to make this runnable in other languages and inference frameworks? (Specifically asking about MLX, but also llama.cpp, Ollama, etc.)
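For question 2, the usual first step for any Llama-architecture checkpoint is llama.cpp's HF-to-GGUF converter. The sketch below is a hypothetical outline under that assumption, not the project's documented workflow; the local checkpoint path and output filenames are placeholders. Note that even if the backbone converts cleanly, a TTS model's output tokens still need to be decoded into audio by a separate codec model, which text-generation frontends like Ollama won't do out of the box.

```shell
# Hypothetical sketch: converting a Llama-3.2-architecture checkpoint to GGUF
# using llama.cpp's converter. "./orpheus-checkpoint" is a placeholder path.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to a 16-bit GGUF file.
python llama.cpp/convert_hf_to_gguf.py ./orpheus-checkpoint \
    --outfile orpheus-f16.gguf --outtype f16

# Optionally quantize for a smaller memory footprint (requires a built llama.cpp).
./llama.cpp/llama-quantize orpheus-f16.gguf orpheus-q4.gguf Q4_K_M
```

Even with a working GGUF, the runtime would emit the model's custom audio-codec tokens rather than speech, so a wrapper that runs the codec decoder would still be needed for end-to-end TTS.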



I wonder how it can be Apache if it's based on Llama?


That's a good question - I initially assumed it was pretrained from scratch using the Llama architecture, but https://github.com/canopyai/Orpheus-TTS/blob/main/pretrain/c... implies the use of Llama 3.2 3B as a base.


Looks like only the code is Apache, not the weights:

> the code in this repo is Apache 2 now added, the model weights are the same as the Llama license as they are a derivative work.

https://github.com/canopyai/Orpheus-TTS/issues/33#issuecomme...



