
AMD has also often said that they can't compete with Nvidia at the high end, and as the other commenter said: market segments exist. Not everyone needs a 5090. If anything, people are starved for options in the budget/mid-range market, which is where Intel could pick up a solid chunk of market share.


Regardless of what they say, they CAN compete in training and inference; there is literally no alternative to the W7900 at the moment. That's roughly 4080-class performance with 48 GB of VRAM for half of what comparable CUDA devices cost.


How good is it, though, compared to a 5090 with 32 GB? The 5090 has double the memory bandwidth, which is very important for inference.

In many cases where 32 GB won't be enough, 48 GB wouldn't be enough either.

Oh and the 5090 is cheaper.
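
To put a number on why bandwidth matters: single-stream decode is memory-bound, since every generated token has to stream the full weights through the GPU once. A rough back-of-the-envelope sketch; the bandwidth figures are approximate published specs, and the 24 GB model size is a hypothetical quantized model, not a specific one:

    def max_decode_tps(bandwidth_gb_s, model_gb):
        # Each generated token streams all weights once, so this is an
        # upper bound; real throughput is lower.
        return bandwidth_gb_s / model_gb

    model_gb = 24  # hypothetical: e.g. a ~48B model quantized to 4 bits

    for name, bw in [("RTX 5090, ~1790 GB/s", 1790),
                     ("W7900,    ~864 GB/s", 864)]:
        print(f"{name}: <= {max_decode_tps(bw, model_gb):.0f} tok/s")

By this crude estimate the 5090 tops out around twice the tokens/s of the W7900 on any model that fits in both cards' memory.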


The AMD card has more FP16 and FP64 FLOPS (but roughly half the FP32 FLOPS). It also runs at half the TDP (300 W vs. 600 W).


FP16 and higher precision don't really matter for local LLM inference; no one runs reasonably large models at FP16. Models are usually quantized to 8 or 4 bits, where the 5090 again demolishes the W7900 by having a multiple of its max TOPS.
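
To make the 8/4-bit point concrete, here's a minimal sketch of how quantized weight size scales with bit width (it ignores quantization metadata such as scales and zero-points, which add a few percent in practice):

    def weight_gb(params_billion, bits):
        # bytes = params * bits / 8, ignoring quantization metadata
        return params_billion * 1e9 * bits / 8 / 1e9

    for params in (20, 70):
        for bits in (16, 8, 4):
            print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB")

At 4 bits a 70B model is roughly 35 GB of weights, which fits in 48 GB but not in 32 GB; at FP16 it would be about 140 GB and fit in neither.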


With 48 GB of VRAM you could run a 20B model at FP16. It won't be a better GPU for everything, but it definitely beats a 5090 for some use cases. It's also a generation old, and the newer RX 9070 seems like it should be pretty competitive with a 5090 from a FLOPS perspective, so a workstation model with 32 GB of VRAM and a less cut-down core would be interesting.
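
A quick sanity check on that fit: 20B parameters at 2 bytes each is 40 GB of weights, leaving headroom for KV cache and activations. A sketch; the layer/head counts below are a hypothetical GQA layout, not any specific 20B model:

    GB = 1024**3
    vram = 48 * GB
    weights = 20e9 * 2                         # 20B params * 2 bytes (FP16)
    layers, kv_heads, head_dim = 48, 8, 128    # hypothetical GQA config
    kv_per_token = 2 * layers * kv_heads * head_dim * 2  # K + V, FP16

    print(f"weights:  {weights / GB:.1f} GiB")
    print(f"headroom: {(vram - weights) / GB:.1f} GiB")
    print(f"fits ~{(vram - weights) / kv_per_token:,.0f} tokens of KV cache")

Under these assumptions the weights take about 37 GiB, leaving roughly 10 GiB, which is tens of thousands of tokens of FP16 KV cache.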


The FP16 bit is very wrong re: LLMs. The 5090 has ~3.5x the FP16 throughput for LLMs: 400+ vs. ~120 TFLOPS.


I’m interested in buying a GPU that costs less than a used car.


Jerry-rig some MI50 32 GiB cards together and then hate yourself for choosing AMD.



