
Yeah, in that sense I think one of the next logical steps will be offering on-demand lightweight finetuning of LLM forks (maybe as LoRAs?) as an API with an integrated UX, driven by user chat feedback, while abstracting away all the hyperparameter and deployment details a DIY setup involves. With a lucrative price tag, of course.
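
To make the idea concrete, here's a minimal sketch of what such a "personalize from feedback" endpoint might do behind the scenes, using Hugging Face transformers + peft. The base model name, the shape of the liked_turns data, and every hyperparameter value are illustrative assumptions; those knobs are exactly what the hypothetical service would hide from the user.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  base = "meta-llama/Llama-3.1-8B-Instruct"  # assumed base model
  tok = AutoTokenizer.from_pretrained(base)
  model = AutoModelForCausalLM.from_pretrained(base)

  # The LoRA config is the part a DIY user would otherwise have to tune themselves.
  lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
  model = get_peft_model(model, lora)  # only a tiny fraction of weights become trainable

  # liked_turns: (prompt, preferred_reply) pairs harvested from thumbs-up chat feedback
  liked_turns = [("How do I undo my last git commit?",
                  "Use `git reset --soft HEAD~1` to undo the commit but keep the changes staged.")]

  def encode(prompt, reply):
      text = prompt + "\n" + reply + tok.eos_token
      batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
      batch["labels"] = batch["input_ids"].clone()  # standard causal-LM supervision
      return batch

  optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
  model.train()
  for epoch in range(3):
      for prompt, reply in liked_turns:
          loss = model(**encode(prompt, reply)).loss
          loss.backward()
          optim.step()
          optim.zero_grad()

  model.save_pretrained("adapters/user-1234")  # small per-user adapter, swapped in at inference time

The nice property that makes the business case plausible is that the per-user artifact is just the adapter weights (megabytes, not the full model), so serving many personalized forks mostly means hot-swapping LoRAs on top of one shared base model.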


