
> AMD GPUs using ROCm

Oh great. The AMD RX 580 was released in April 2018. AMD had already dropped ROCm support for it by 2021. They only supported the card for 3 years. 3 years. It's so lame it's bordering on fraudulent, even if not legally fraud. Keep this in mind when reading this news. The support won't last long, especially if you don't buy at launch. Then you'll be stuck in the dependency hell that is trying to use old drivers/stack.



> AMD RX 580 was released in April 2018

It was actually Apr 18, 2017 -- https://en.wikipedia.org/wiki/Radeon_500_series


And the 580 was just a rebranded 480, which was released in June 2016.


Speaking from having run tens of thousands of both 580s and 480s, they weren't just rebranded. Maybe on paper they seemed similar, but they didn't run the same.


I’m curious, where would you run so many GPUs? Quality control?


7 datacenters in the US


This just makes the case for supporting it even further, because then they'd be supporting multiple years' worth of hardware with the effort for just one uarch.


Nothing like having a lottery of 2-6 years of support for your hardware to make your customers confident they are getting value out of the products they are buying.

The manufacturer can smugly proclaim they offered six years of support for a product that was on the shelf four years into the driver's lifecycle.


Tbh I'm not sure what AMD's plan is for ROCm support on consumer devices, but I don't really think AMD is being fraudulent or anything.

Both ROCm and Vulkan are supported in MLC LLM, as mentioned in our blog post. We are aware that ROCm is not sufficient to cover consumer hardware, and in that case Vulkan is a nice backup!
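For reference, a minimal sketch of what the Python side can look like. This is only a sketch: the exact package, model name, and arguments are assumptions that may differ between mlc_chat versions, so please check the docs.

    # Sketch: assumes the mlc_chat Python package is installed and the
    # quantized weights / prebuilt model lib were fetched per the MLC LLM docs.
    from mlc_chat import ChatModule

    # "vulkan" exercises the cross-vendor path; "rocm" would target AMD's stack.
    cm = ChatModule(model="Llama-2-7b-chat-hf-q4f16_1", device="vulkan")
    print(cm.generate(prompt="What is the capital of France?"))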


If you click the "Radeon" tab here[1], dated 27 Jul, AMD claim ROCm support on a wide range of consumer cards, with HIP SDK support on RX 6800 and up, under Windows. The Linux situation seems less clear.

1: https://rocm.docs.amd.com/en/latest/release/windows_support....


Given AMD's track record, the 6900 will be dropped next year or in early 2025.


How does the performance with Vulkan compare to the ROCm performance on the same hardware?


We haven't done any comparison between them yet, but generally we believe Vulkan, as a more generic cross-vendor API, should be slower than ROCm. Same for CUDA vs Vulkan.


There is also Vulkan support, which should be more universal (also covered in the post); for example, the post shows running an LLM on a Steam Deck APU.


You're not going to find rx580's with enough vram for AI. Typically 4-8gb.


I am currently using my RX 580 8GB for running large language models on my home computer, using llama.cpp's OpenCL (CLBlast) offloading of layers. I can fit up to 13-billion-parameter Llama (1 or 2) models if they're quantized at 4 bits. It's not super fast, but at least my AI IRC bots aren't eating into my CPU time anymore.
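For anyone curious, the setup looks roughly like this. A sketch only, using the llama-cpp-python bindings rather than the CLI; the model path, layer count, and prompt are placeholder examples:

    # Sketch: assumes llama-cpp-python was built with CLBlast enabled, e.g.
    #   CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
    # and a local 4-bit quantized 13B model file (the path is hypothetical).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-13b.Q4_0.gguf",  # hypothetical path
        n_gpu_layers=32,  # offload as many layers as fit in the 8 GB of VRAM
        n_ctx=2048,       # context window
    )

    out = llm("Q: Can a 13B model run on an RX 580? A:", max_tokens=64)
    print(out["choices"][0]["text"])

Lowering n_gpu_layers is the knob to turn if the card runs out of memory.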

But my attempts to get direct ROCm support were thwarted by AMD.


Great for home use, zero commercial value. Can't expect AMD to invest time/money into ROCm for that.


You can say the same thing about a 24 GB consumer card. Going from being able to run 13B llamas to 33B doesn't really help you in a commercial sense. This holds true, generally, for other LLM foundation models as well. To do commercial work you're going to need more VRAM than consumer cards have. You need at least two cards if you're going to run the 70B, and even then the 70B (and similar) aren't useful commercially, except in the sense of gathering money from investors who don't know better.
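Rough back-of-the-envelope math on why, counting weights only and ignoring KV cache and runtime overhead (a sketch, not exact numbers):

    # 4-bit quantized weights are ~0.5 bytes per parameter.
    def weight_gb(params_billion, bits=4):
        return params_billion * 1e9 * bits / 8 / 1e9

    for n in (13, 33, 70):
        print(f"{n}B @ 4-bit ~= {weight_gb(n):.1f} GB of VRAM (weights only)")
    # 13B ~= 6.5 GB (fits an 8 GB card), 33B ~= 16.5 GB (fits a 24 GB card),
    # 70B ~= 35 GB (more than any single consumer card, hence at least two).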


Is 70B not commercially useful because of the model storage requirements, or total inference performance, or the additional memory per inferencing session, or what?

Is the output better such that it's desirable, or is this just a case of "too much performance hit for a marginal gain"?


No one is arguing any of that. You're the one that brought up the 580 specifically.

By the way, still waiting for you to take me up on your 'bet'.


I was wrong. Sorry. Food trucks do accept cash most places.

Now it's your turn, Mr. "You're not going to find rx580's with enough vram for AI. Typically 4-8gb." This is completely false. Rather than acknowledging that, you then tried to move the goalposts (much like I did in that past thread, saying, "Oh, but maybe it's just my region where they don't"). It looks like we both behave a bit silly when trying to save face when we're wrong.


> This is completely false.

It isn't completely false. You're doing super limited stuff as a hobbyist that barely works.


The parent article is entirely about running and benchmarking 4-bit quantized Llama 2 7B/13B. This is the "super limited stuff as a hobbyist that barely works," and I've run them at entirely usable speeds on the AMD RX 580. You're either wrong, or you didn't actually read the article and have been arguing correctly (from your ignorant perspective) about something random.


"entirely usable" is not the same as "roi efficient"

> from your ignorant perspective

No need for the ad hominem.


Ignorance is not an insult. It just became obvious that you were talking about a different concept (commercial use with big models) than the article itself and everyone else were talking about (7B/13B models). So I generously assumed you just hadn't read it (ignorance). I guess now that you've ignored that and doubled down I can assume you were/are just arguing in bad faith.


Home use is how you get employees that push your products at work. The lack of focus on home use is AMD's biggest ML weakness.


The lack of a place where you can rent top-of-the-line AMD hardware by the hour is the biggest weakness. Nobody is going to buy and run an MI210/MI250 at home.


Having a community of people using your product has zero commercial value?

Do you even know how brand recognition works?

The number of people swearing off AMD because of bad drivers ten years ago easily cost them a billion dollars. More than the cost of developing a good driver.


> Having a community of people using your product has zero commercial value?

That is not what I'm saying. I'm saying that if I buy up a bunch of RX 580 cards, nobody is going to rent them from me.

Now, if I offered a bunch of MI250s at an hourly rate, you can absolutely bet people would rent them all.


I mean, in AI specifically, you need your stuff to be usable by a small lab of professors/grad students, otherwise it will never get adoption.

Usually at least some of the compute resources are "prosumer" workstations using commercial cards.


Agreed, AMD needs to get their high end cards into more schools. Short of that, they need a place where people can rent them by the hour (and give discounts to schools).


Do you have instructions for this? I've got a Sapphire 580 and am keen to use it for more than drawing the Windows UI.


ROCm is not just for AI.


I've been waiting for someone to tell me what I can profitably do with the 120,000+ 470/480/580's that I have sitting around doing nothing. It sounds like you have an idea?


Crack passwords. Also, there was this craze a few years ago where people were using their GPUs to crunch numbers and earn money; I forget what it was called ... something with mines.


Right... any legal ideas you have?


Cracking passwords is legal if you obtained the hashes legally as part of your pentest contract. So is shitcoin mining.

But you seem dead set that there are no uses for ROCm so I'll leave you there.


If the best you can do is 'password cracking' as your answer, you're obviously not very well versed in things. Plus, you don't need ROCm to crack passwords.


Good luck trying to make enough money to pay for power, let alone capex.


I mean, I'm using ROCm for VFX rendering. But regardless, I'm not sure that cards as old as your 470s can really be competitive enough on power usage to be very profitable.


Correct, not profitable.


But just because you have old GPUs doesn't mean there is a problem with ROCm. You'd have the same economics problem with old Nvidia GPUs.


ROCm doesn't support old GPUs.

That said, people are finding hacks...

https://old.reddit.com/r/Amd/comments/15t0lsm/i_turned_a_95_...


You could sell them or give them away to hobbyists, but that could eat into the lucrative “shoehorning old crypto mining into unrelated conversations” business


I was talking about hobbyists. Who said anything about businesses?

CUDA also works on consumer NVidia cards, not just business ones.


I did not know that you meant “hobby profitable” (more brain cells, bigger Ideas) not “business profitable” (money) when you asked people how to use your old mining hardware profitably.


You're not going to find customers for high-end cards when their entry-level experience is this poor. I ran Stable Diffusion on my CPU instead, even though it took ten minutes per picture.


Rusticl to the rescue! Supposedly it's already faster than ROCm.



