
I'm not sure I understood your question.

This model should have a much lower computational cost than an equally sized pure transformer, since only one out of every eight layers is a traditional transformer layer with masked self-attention (the part whose cost grows quadratically with sequence length). Additionally, half of the Mamba layers are MoEs, which grows the parameter count without a proportional increase in per-token compute, since only a few experts are active for any given token.
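
To make the pattern concrete, here's a rough sketch of how such a layer stack could be laid out. The 1-in-8 attention ratio and the "half the Mamba layers are MoE" split come from the description above; the exact positions of the attention and MoE layers within each block are my own assumptions for illustration, not the model's actual layout.

    # Sketch of the hybrid layer schedule described above. The 1-in-8
    # attention ratio and the roughly-half MoE split come from the
    # comment; the positions within each block of 8 are assumed.
    def build_layer_schedule(n_layers=32, period=8):
        schedule = []
        for i in range(n_layers):
            pos = i % period
            if pos == 0:
                # one masked self-attention layer per block of 8
                # (quadratic in sequence length)
                schedule.append("attention")
            elif pos % 2 == 1:
                # roughly half the Mamba layers get a sparse MoE
                # feed-forward (4 of the 7 in each block here)
                schedule.append("mamba+moe")
            else:
                # plain Mamba (SSM) layer, linear in sequence length
                schedule.append("mamba")
        return schedule

    for i, kind in enumerate(build_layer_schedule()):
        print(f"layer {i:2d}: {kind}")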


