AMD MI50

Posted by aspirio@reddit | LocalLLaMA | 10 comments

Hey all,

This question has probably popped up hundreds of times over the last months or even years, but since AI and everything around it evolves really fast, I'd like an up-to-date view on something.

Is it still worth buying an MI50 today to run a local LLM? I've read that official ROCm support is long gone, that Vulkan is not that efficient (I'm fairly new to the local LLM game, so no judgement please), that some community patches allow ROCm 7.x.x to work but running Qwen 3.5 with llama.cpp crashes, and so on.

I don't need to run a big model, but I'd like to spend the money wisely. Forget the crazy $1000 GPU setups; I can only afford a few hundred dollars, and even there I'd be cautious about what I buy.

I was initially going to buy a P40, as it seems like it should be enough for what I'm about to do, but on the other hand the MI50 has about 3x the bandwidth of the P40, 8 GB more VRAM, and costs less than twice as much...
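For what it's worth, the bandwidth gap can be turned into a rough decode-speed estimate: single-stream token generation is usually memory-bandwidth bound, so tokens/s is capped at roughly bandwidth divided by model size. A minimal sketch, assuming spec-sheet numbers (MI50 32 GB at ~1024 GB/s HBM2, P40 24 GB at ~347 GB/s GDDR5) and a hypothetical 8 GB quantized model:

```python
# Rule of thumb: decode speed is memory-bandwidth bound, so
# tokens/s ceiling ≈ memory bandwidth / bytes read per token (≈ model size).
# Bandwidth/VRAM figures below are assumptions taken from public spec sheets.

CARDS = {
    "MI50 (32 GB)": {"bw_gbs": 1024},  # HBM2
    "P40 (24 GB)":  {"bw_gbs": 347},   # GDDR5
}

def est_tokens_per_sec(bw_gbs: float, model_gb: float) -> float:
    """Upper-bound estimate: bandwidth divided by model size in GB."""
    return bw_gbs / model_gb

model_gb = 8.0  # hypothetical size of a quantized model that fits either card
for name, spec in CARDS.items():
    tps = est_tokens_per_sec(spec["bw_gbs"], model_gb)
    print(f"{name}: ~{tps:.0f} tok/s theoretical ceiling for an 8 GB model")
```

Real throughput lands well below these ceilings, but the ratio between the two cards (roughly 3:1) should hold, which is why the bandwidth difference matters more than the raw compute numbers for this use case.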

Any suggestions?