Up-to-date guides to build llama.cpp on Windows with AMD GPUs?
Posted by Chimpampin@reddit | LocalLLaMA | View on Reddit | 11 comments
The more detailed it is, the better.
DreamingInManhattan@reddit
I don't have an answer for you, but maybe a suggestion.
I've been writing and building software for over 25 years, and when I first got into LLMs my thought was to run under Windows because all my machines ran it.
But after reading horror stories about how difficult it is to build llama.cpp on Windows, I dual boot into Ubuntu 24.04 to do LLM work instead. Now that I've done it a few (dozen) times, it takes less than an hour to set up a fresh Ubuntu install with CUDA & llama.cpp (sorry, I don't have an AMD card).
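For reference, a rough sketch of what that fresh-install setup can look like on Ubuntu 24.04. This is an assumption about the commenter's workflow, not their exact steps; package names are the Ubuntu-archive ones, and NVIDIA's own repo may carry a newer CUDA toolkit:

```shell
# Build tools and the Ubuntu-packaged CUDA toolkit
sudo apt update
sudo apt install -y build-essential cmake git nvidia-cuda-toolkit

# Fetch and build llama.cpp with CUDA support
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j"$(nproc)"
```

The resulting binaries (e.g. `llama-cli`, `llama-server`) land under `build/bin/`.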
Have you considered dual booting or using WSL2 for this instead?
Chimpampin@reddit (OP)
I have considered dual booting. It just sounds a bit daunting to use Linux without any prior knowledge. I don't know what WSL2 is.
DerpageOnline@reddit
Windows Subsystem for Linux. You get Linux without dual booting. Instead of diving straight into dual booting, which can "break" your system if you hit the wrong levers, look into WSL2, Docker, or virtual machines running inside Windows until you become more confident on that front.
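On recent Windows 10/11, getting WSL2 with Ubuntu is a single command from an elevated PowerShell or terminal (the distro name below is one of several available; a reboot may be required):

```shell
# Install WSL2 with Ubuntu 24.04 as the distribution
wsl --install -d Ubuntu-24.04

# Afterwards, list installed distros and their WSL versions
wsl --list --verbose
```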
DreamingInManhattan@reddit
WSL2 is a way to run Ubuntu directly in Windows.
Yeah, Linux can be very daunting, but the Internet is there to help, and the cutting edge of AI is all Linux. You could also use an LLM to figure out problems you run into.
Suitable-Name@reddit
I don't think ROCm works with WSL2. Maybe that has changed in the meantime, but about a year ago it didn't.
ForsookComparison@reddit
Install Ubuntu LTS and rip the band-aid off.
rorowhat@reddit
Do the Vulkan build instructions. It's pretty easy.
Marksta@reddit
There are build instructions in the repo, did you try those already? For AMD on Windows you want Vulkan. From what I've heard, ROCm on Windows is way too hard to get working.
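A minimal sketch of the Vulkan build along the lines of the repo's instructions, assuming you already have a C++ toolchain (e.g. Visual Studio's build tools), CMake, and the Vulkan SDK installed so that the `VULKAN_SDK` environment variable is set:

```shell
# Fetch llama.cpp and build it with the Vulkan backend,
# which works on AMD GPUs without ROCm
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```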
Scott_Tx@reddit
Why not just use the prebuilt release they put up that uses Vulkan?
segmond@reddit
Use a search engine and figure it out. To install AMD drivers, go to the AMD site; they have separate instructions for Windows, Linux, and Mac. Then to install llama.cpp, follow the instructions. If you can read, comprehend, and follow instructions, you can do it. Supposedly it's easier to get AMD drivers up and running on Windows than on Linux.