
Terminology sucks. There is an ML technique called "hallucination" that can really improve results. It works, for example, on AlphaFold, and lets you reverse AlphaFold's function: instead of finding the fold that matches a given protein or protein complex, you find a protein or protein complex that has a specific shape, or fits onto a specific shape.

It's called hallucination because it works by imagining you already have the solution and then learning what the input needs to be to produce that solution. Instead of learning the network's weights, you treat the input (or the output) as the learnable parameters and optimize an input that fits a given output, or vice versa. You fix what the network sees as the "real world" to match what you already knew, just like a hallucinating human does.
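
A minimal sketch of the idea, assuming some pretrained differentiable model (the `model`, `target_output`, and `input_shape` names are illustrative, not any real AlphaFold API): freeze the network, make the input a tensor with gradients, and run an ordinary optimizer on the input until the network's output matches the target.

    import torch

    def hallucinate(model, target_output, input_shape, steps=500, lr=0.05):
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)      # freeze the network; only the input is learned

        x = torch.randn(input_shape, requires_grad=True)   # the input plays the role of the weights
        opt = torch.optim.Adam([x], lr=lr)

        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), target_output)
            loss.backward()              # gradients flow into the input, not the model
            opt.step()
        return x.detach()

For discrete inputs like protein sequences, the real systems optimize a continuous relaxation (e.g. per-position logits) rather than the raw tokens, but the fixed-network, learned-input structure is the same.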

You can imagine how hard it is to find papers on this technique nowadays.


