
The important argument in this paper is:

> We argue that systematic problem solving is vital and call for rigorous assurance of such capability in AI models. Specifically, we provide an argument that structureless wandering will cause exponential performance deterioration as the problem complexity grows, while it might be an acceptable way of reasoning for easy problems with small solution spaces.

I.e. “thinking harder” still just samples randomly from the solution space.

You can allocate more compute to the “thinking step”, but they’re arguing that for problems with a very large solution space, more compute is basically never going to find a solution, because you’re still just sampling randomly.

…and that it only works for simple problems because if you just randomly pick some crap from a tiny distribution you’re pretty likely to find a solution pretty quickly.
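
Back-of-envelope version of that claim (my toy model, not the paper’s setup): if the model is effectively drawing i.i.d. uniform guesses from a solution space of size N with a single valid answer, the success probability after k samples is 1 - (1 - 1/N)^k. Hold the “thinking budget” k fixed and let N grow exponentially with problem size, and the hit rate collapses:

    # Chance of hitting the single valid solution with k i.i.d. uniform
    # samples from a space of size N: 1 - (1 - 1/N)**k.
    # Assumptions (mine): one valid solution, uniform sampling, fixed budget.

    def p_success(space_size: int, samples: int) -> float:
        return 1.0 - (1.0 - 1.0 / space_size) ** samples

    budget = 10_000  # fixed "thinking" budget per problem, hypothetical
    for n in range(1, 8):
        space = 10 ** n  # solution space growing exponentially with size n
        print(f"n={n}  |space|={space:>10,}  P(hit)={p_success(space, budget):.4f}")

At that fixed 10k-sample budget the hit rate is ~0.63 when the space is 10^4 and ~0.001 at 10^7; keeping it flat needs k to scale with N, i.e. exponentially in problem size. That’s the “structureless wandering” failure mode in miniature.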

I dunno. The key here is that this is entirely on the model-inference side. I feel like agents can help constrain the solution space for complex problems with procedural tool calling.
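
To make that concrete (my sketch, nothing from the paper): any tool call that gives structured feedback and rules out a chunk of the space moves you from blind sampling toward search. Toy contrast, with a hypothetical comparison oracle standing in for a tool:

    # Toy contrast: a "tool" that says whether a guess is too high or too
    # low turns an O(N)-expected random search into O(log N) bisection.
    # Everything here is hypothetical illustration.

    import random

    def oracle(guess: int, target: int) -> int:
        """Stand-in for a tool call with structured feedback: -1/0/+1."""
        return (guess > target) - (guess < target)

    def random_search(n: int, target: int) -> int:
        calls = 0
        while True:
            calls += 1
            if random.randrange(n) == target:
                return calls

    def bisect_search(n: int, target: int) -> int:
        lo, hi, calls = 0, n - 1, 0
        while lo <= hi:
            calls += 1
            mid = (lo + hi) // 2
            cmp = oracle(mid, target)
            if cmp == 0:
                return calls
            lo, hi = (mid + 1, hi) if cmp < 0 else (lo, mid - 1)

    n = 1_000_000
    target = random.randrange(n)
    print("random :", random_search(n, target), "calls (expected ~n)")
    print("bisect :", bisect_search(n, target), "calls (~log2 n = 20)")

Obviously real agents don’t get a clean comparison oracle, but the point stands: per-call pruning is what gets you off the exponential curve, and that lives outside pure inference.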

So… dunno. I feel kind of “eh, whatever” about the result.


