Hacker News
_heimdall | 1 day ago | on: Feed the bots
I don't think an LLM even *can* detect garbage during a training run. While training, the system is only tasked with predicting the next token in the training set; it isn't trying to reason about the validity of the training set itself.