I'm not a 3D artist, but why are we still, for lack of a better word, "stuck" with having / wanting to use simple meshes? I appreciate the simplicity, but isn't this an unnecessary limitation on mesh generation? It feels like an approach that imitates constraints from an era of limited hardware and limited artist resources. Shouldn't AI models help us break these boundaries?
My understanding is that it's quite hard to make convex objects with radiance fields, right? For example, the furniture in the OP would be quite problematic.
We can create radiance fields with photogrammetry, but IMO we need much better algorithms for transforming these into high-quality triangle meshes that are usable in lower-triangle-budget media like games.
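To give a flavor of what that transformation involves: after extracting a dense isosurface from the field, you still have to simplify it aggressively. Here's a toy sketch of one classic simplification family, vertex-clustering decimation (Rossignac–Borrel style) — pure Python, made-up data, and nothing like production quality (real pipelines use quadric error metrics and preserve UVs/normals):

```python
# Toy vertex-clustering decimation: snap vertices to a coarse grid,
# merge each cell's vertices into one representative, and drop the
# triangles that collapse (i.e. whose corners land in the same cell).

def decimate(vertices, triangles, cell=0.5):
    cluster_of = {}   # vertex index -> grid cell key
    rep = {}          # grid cell key -> representative position
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        cluster_of[i] = key
        rep.setdefault(key, (x, y, z))
    # Reindex clusters and keep only non-degenerate triangles.
    keys = list(rep)
    index = {k: j for j, k in enumerate(keys)}
    new_tris = []
    for a, b, c in triangles:
        ka, kb, kc = cluster_of[a], cluster_of[b], cluster_of[c]
        if len({ka, kb, kc}) == 3:
            new_tris.append((index[ka], index[kb], index[kc]))
    return [rep[k] for k in keys], new_tris
```

Run this on a dense triangle fan and most of it collapses away, which is exactly the failure mode too: naive clustering destroys sharp features, which is why "much better algorithms" is the right ask.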
I wonder whether "lower triangle budget media" is still a valid problem, though. Modern game engines coupled with modern hardware can already render an insane number of triangles. It feels like the problem is rather engines not handling LOD correctly (see Cities: Skylines 2), although stuff like UE5's Nanite seems to have taken the right path here.
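For anyone unfamiliar with what "handling LOD correctly" means in practice: Nanite-style systems pick detail per cluster by projecting a geometric error into screen pixels and taking the coarsest level that stays under a pixel tolerance. A rough sketch of that selection logic (all the numbers — errors, FOV, resolution, tolerance — are made up for illustration, and this is far simpler than what Nanite actually does):

```python
import math

def projected_error_px(geometric_error, distance,
                       fov_y=math.radians(60), screen_h=1080):
    # World-space error projected into screen pixels at this distance.
    return geometric_error * screen_h / (2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(lod_errors, distance, tolerance_px=1.0):
    # lod_errors[0] is the finest LOD (smallest error), last is coarsest.
    # Prefer the coarsest level whose error is invisible (< ~1 pixel).
    for level in range(len(lod_errors) - 1, -1, -1):
        if projected_error_px(lod_errors[level], distance) <= tolerance_px:
            return level
    return 0  # nothing coarse enough looks right: fall back to full detail
```

The point is that the decision is continuous and automatic; engines that hand-author a few distance-banded LODs (or stream the wrong ones) are where the perf problems come from.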
I suppose, though, there is a case for AI models doing what Nanite does entirely algorithmically, for example, and research like this paper may come in handy there.
I was referring to being stuck with having to create simple / low-poly meshes, as opposed to using complex poly meshes such as photogrammetry would provide. The paper specifically addresses clean low-poly meshes, as opposed to what they call complex isosurfaces created by photogrammetry and other methods.
Lots of polys is bad for performance. For a flat object like a tabletop, you want that to be low poly. Parallax mapping can also help give a 3D look without increasing the poly count.
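To put rough numbers on that (toy arithmetic, not from the thread): a flat tabletop only needs 2 triangles, while a naive scan-style grid of the same flat surface burns thousands for zero visual gain.

```python
# Triangle count for a flat surface tessellated as an n x n vertex grid,
# the kind of dense output photogrammetry tends to produce even on planes.

def grid_triangle_count(n):
    # Each of the (n-1)^2 grid cells splits into 2 triangles.
    return 2 * (n - 1) ** 2

flat_top = 2                            # an artist's tabletop: one quad
scanned_top = grid_triangle_count(100)  # ~20k triangles for the same plane
```

That ~10,000x gap on a single prop is why low-poly authoring (plus normal/parallax maps for surface detail) is still the default, even on hardware that can technically eat the triangles.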