
I get a lot of meaning out of weights and source (without the training data), not sure about you. Calling it meaningless seems like an exaggeration.




Can you change the weights to improve?

You can fine-tune without the original training data, which for a large LLM typically means using LoRA: keeping the original weights frozen and adding small, separate fine-tuning weights.
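A minimal sketch of the LoRA idea in NumPy (hypothetical shapes, not a real model): the frozen base weight W is never modified; only the small low-rank factors A and B are trained, and the effective weight is W plus a scaled B @ A.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 16, 8, 2, 4.0  # illustrative sizes

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, init to 0

def lora_forward(x, W, A, B, alpha, rank):
    # Effective weight is W + (alpha / rank) * B @ A; W itself never changes.
    return x @ (W + (alpha / rank) * (B @ A)).T

x = rng.normal(size=(4, d_in))
base = x @ W.T
adapted = lora_forward(x, W, A, B, alpha, rank)
# With B initialized to zero, the adapter starts out as an exact no-op,
# so fine-tuning begins from the pretrained model's behavior.
print(np.allclose(base, adapted))
```

This is why LoRA works without the original training data: gradient updates flow only into A and B, which are tiny compared to W and can be shipped as a separate adapter file.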

They're just a bunch of numbers. Of course you can change them.



