antiframe | 5 days ago | on: Kimi K2 Thinking, a SOTA open-source trillion-para...
I get a lot of meaning out of weights and source (without the training data), not sure about you. Calling it meaning*less* seems like exaggeration.
mensetmanusman | 5 days ago
Can you change the weights to improve?
HarHarVeryFunny | 5 days ago
You can fine-tune without the original training data, which for a large LLM typically means using LoRA: keeping the original weights unchanged and training a separate set of low-rank fine-tuning weights.
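For reference, here's a minimal sketch of that idea in PyTorch: the base layer stays frozen and only a pair of small low-rank matrices is trained (the class and parameter names are illustrative, not from any particular library).

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wraps a frozen linear layer with trainable low-rank adapters.

        Effective weight is W + (alpha / r) * B @ A, where W stays
        frozen and only A (r x in_features) and B (out_features x r)
        are trained.
        """

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # original weights stay unchanged
            # Standard LoRA init: A is small Gaussian, B is zero, so the
            # wrapped model starts out identical to the base model.
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen path plus the low-rank update.
            return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

    # Usage: wrap an existing layer; only the adapter params are trainable.
    layer = LoRALinear(nn.Linear(1024, 1024), r=8)

In practice the adapter weights can be saved and shipped on their own, which is why LoRA works without touching (or even having) the original training data.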
danielmarkbruce | 1 day ago
It's a bunch of numbers. Of course you can change them.