What kind of madman tries to deploy PyTorch? If it's a product, convert the finished model to ONNX. Otherwise it's probably for a dev, and you can tell them to have fun, because installing all the GPU libraries sucks no matter what. In theory Mambaforge should make most of these scenarios less painful (a mostly portable, faster take on Anaconda), but you'll still need the right CUDA runtime, ROCm, etc.
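For what it's worth, the ONNX export path is basically one call. Rough sketch below, assuming a torchvision ResNet as a stand-in for whatever trained model you actually have; the model, input shape, and "model.onnx" filename are all placeholders:

```python
import torch
import torchvision

# Any trained nn.Module works here; resnet18 with no weights is just a stand-in.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Export traces the model with a dummy input shaped like what you expect at inference time.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # let batch size vary at runtime
)
```

Once that's done, the deployment box only needs onnxruntime (CPU or GPU build) rather than the whole PyTorch + CUDA stack, which is the entire point.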
What makes this stuff so hard? I could probably find a Python tutorial to render a spinning cube in like five minutes, but as soon as I want to do ML I spend an hour just getting the libraries to work.
Is there ever going to be standardization, the way a WebGL shader just works everywhere?