LM Studio unlocked for "unsupported" hardware — Testers wanted!
Posted by TheSpicyBoi123@reddit | LocalLLaMA | 30 comments
Hello everyone!
Quick update — a simple in situ patch was found (see GitHub), and the newest versions of the backends are now released for "unsupported" hardware.
Since the last post, major refinements have been made: performance, compatibility, and build stability have all improved. The AVX1-only and AVX1 + Vulkan backends are now confirmed working on Ivy Bridge Xeons with older Tesla GPUs.
Here’s the current testing status:
- ✅ AVX1 CPU builds: working
- ✅ AVX1 Vulkan builds: working
- ❓ AVX1 CUDA builds: untested (no compatible hardware yet)
- ❓ Non-AVX experimental builds: untested (no compatible hardware yet)
I’d love for more people to try the patch instructions on their own architectures and share results — especially if you have newer NVIDIA GPUs or non-AVX CPUs (like first-gen Intel Core).
👉 https://github.com/theIvanR/lmstudio-unlocked-backend
My test setup is dual Ivy Bridge Xeons with Tesla K40 GPUs
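Not sure whether your CPU has AVX? Here's a quick, hedged check in Python. The Linux branch just reads /proc/cpuinfo; the Windows branch uses the Win32 IsProcessorFeaturePresent call (IDs 39/40 are the documented AVX/AVX2 feature constants, but note they are only reported on reasonably recent Windows 10+):
```
import platform

def cpu_flags_linux():
    # Parse the "flags" line from /proc/cpuinfo into a set of feature names.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

system = platform.system()
if system == "Linux":
    flags = cpu_flags_linux()
    print("AVX: ", "avx" in flags)
    print("AVX2:", "avx2" in flags)
elif system == "Windows":
    import ctypes
    k32 = ctypes.windll.kernel32
    # PF_AVX_INSTRUCTIONS_AVAILABLE = 39, PF_AVX2_INSTRUCTIONS_AVAILABLE = 40
    # (assumption: these constants require a fairly recent Windows build)
    print("AVX: ", bool(k32.IsProcessorFeaturePresent(39)))
    print("AVX2:", bool(k32.IsProcessorFeaturePresent(40)))
```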


Brief install instructions (a scripted sketch follows the list):
- Navigate to the backends folder, e.g. C:\Users\Admin\.lmstudio\extensions\backends
- (Recommended for a clean install) delete everything except the "vendor" folder
- Drop in the contents of the compressed backend of your choice
- Select it under LM Studio runtimes and enjoy.
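If you'd rather script those steps, here is a minimal sketch, assuming the default Windows backend path from above and a hypothetical archive name; adjust both to your setup:
```
import shutil, zipfile
from pathlib import Path

backends = Path.home() / ".lmstudio" / "extensions" / "backends"
archive = Path("avx1-vulkan-backend.zip")  # hypothetical archive name

# 1. Clean install: delete everything except the "vendor" folder.
for entry in backends.iterdir():
    if entry.name != "vendor":
        shutil.rmtree(entry) if entry.is_dir() else entry.unlink()

# 2. Drop in the contents of the chosen backend archive.
with zipfile.ZipFile(archive) as z:
    z.extractall(backends)

# 3. Then select the new runtime inside LM Studio.
```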
IndicationUnfair7961@reddit
I have an Intel i7-3770K, which has AVX but not AVX2. Does this work, and does it use AVX?
TheSpicyBoi123@reddit (OP)
Ivy Bridge is confirmed working; try the newest release and/or build your own against the latest llama.cpp.
Tobim6@reddit
Where is the download for a first-gen Intel Xeon X3450?
TheSpicyBoi123@reddit (OP)
Hello! Check the GitHub page: set the instruction set to match your CPU and follow the instructions provided. If you have a Vulkan-capable GPU, I would be quite curious to see whether it works with the provided GPU files.
Tobim6@reddit
Your GitHub says that non-AVX is not done yet.
TheSpicyBoi123@reddit (OP)
Try building it from source on your system with llama.cpp and copying the files over; it's a very easy procedure (a rough sketch is below). I had one experimental non-AVX build, but it might have been deleted. If you get stuck I can make you one too. The real question is whether LM Studio will launch at all on a non-AVX-capable system. Is there a reason you want to use LM Studio in particular? I personally recommend the new llama.cpp web UI anyway.
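A minimal sketch of that build, driven from Python for illustration. The GGML_* switches are real llama.cpp CMake options that disable the AVX code paths, but check the repo's build docs for your exact version; run this from the llama.cpp source directory:
```
import subprocess

# Configure llama.cpp with the AVX code paths disabled (for pre-AVX CPUs),
# then build in Release mode.
subprocess.run([
    "cmake", "-B", "build",
    "-DGGML_AVX=OFF", "-DGGML_AVX2=OFF",
    "-DGGML_FMA=OFF", "-DGGML_F16C=OFF",
], check=True)
subprocess.run(["cmake", "--build", "build", "--config", "Release"], check=True)
```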
Overall_Suit1062@reddit
It launches (verified on a dual-socket LGA 1366 Xeon X5675 system), but generation naturally doesn't work; by default it complains about missing AVX.
Tobim6@reddit
Yes, I have an RX 580 8GB, and my CPU does not have AVX at all.
sjwjs@reddit
I could not get it to work on my Xeon E5-2697 v2 and GTX 1080
I really wish it would; I have 80 GB of quad-channel RAM to play with.
TheSpicyBoi123@reddit (OP)
What specifically did not work? What steps did you take? Which version?
sjwjs@reddit
When attempting to load models I get the following error:
```
🥲 Failed to load the model
Failed to load model
error loading model: error loading model architecture: unknown model architecture: 'qwen35'
```
I have the AVX1 backend loaded
TheSpicyBoi123@reddit (OP)
Which backend did you use? This is most likely the bundled llama.cpp itself being too old for that architecture; you'll need to rebuild the newest backend yourself with the provided instructions. (I believe it is version 2.something now.)
sjwjs@reddit
I am able to run "smaller" models like gpt-oss-20 with pretty good performance (I tweaked the context size until it filled my 8 GB GPU's VRAM, and it works perfectly).
But I'm having issues running newer models like Qwen 3.5 27b or Qwen-coder-next.
I'll try to build llama on the weekend if I get a chance.
TheSpicyBoi123@reddit (OP)
Well yes, because these models use new features. Build your own llama.cpp backend, drop the files into your backend folder, and change the manifest. It's 3 steps, all in the GitHub repo. I'll eventually update these builds too.
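As a hedged sketch of the "drop the files and change the manifest" part: the backend folder name, the build output path, and the "version" field below are assumptions for illustration; follow the GitHub instructions for the actual schema.
```
import json, shutil
from pathlib import Path

backend = Path.home() / ".lmstudio" / "extensions" / "backends" / "my-backend"  # hypothetical
build_out = Path("llama.cpp/build/bin/Release")

# Step 1: copy the freshly built llama.cpp libraries over the shipped ones.
for lib in build_out.glob("*.dll"):
    shutil.copy2(lib, backend)

# Step 2: tweak the manifest so LM Studio picks up the rebuilt runtime
# ("version" is a guessed field name - check the actual manifest.json).
manifest_path = backend / "manifest.json"
manifest = json.loads(manifest_path.read_text())
manifest["version"] = "local-build"
manifest_path.write_text(json.dumps(manifest, indent=2))
```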
sjwjs@reddit
I tried all 3 compiled versions. I'll see if I have time to try to build my own flavor this weekend (need to install the VS toolset)
ikkiyikki@reddit
Aw shucks, switched to Linux a while back.
TheSpicyBoi123@reddit (OP)
Hello and thank you for your feedback. It would be interesting to see whether the patch instructions work for you on the Linux backends as well.
IDK_Arkan@reddit
No, it doesn't... I just can't get it to work.
TheSpicyBoi123@reddit (OP)
If you are going to report a bug, at least do it properly.
Resident_Handle7346@reddit
I'm excited to give this a try. I was recommended LM Studio after MSTY Studio failed to load Kimi; I'll also try DeepSeek R1. Since Msty worked, there seems to be no reason LM Studio won't. This is on dual Intel Xeon Sandy Bridge E5-2667s with 512 GB of RAM. I don't expect fast, but I do expect to be able to load some decent-sized models and quantized versions of really big ones.
fuutott@reddit
This is super cool. ROCm for older AMD cards? (MI50)
TheSpicyBoi123@reddit (OP)
Hello, thank you for the ideas. Sadly I don't have any AMD cards, especially older ones, to test this on. Does the Vulkan backend work on these GPUs as-is? Otherwise, you can try the simple patch in the JSON file, as instructed, on the downloaded backend. Most likely, however, that won't work and you will have to build it yourself from source. Could be worth a shot; please keep me in the loop.
fuutott@reddit
After 12 hours:
I got both ROCm/TheRock built for the gfx906 arch, and llama.cpp built against it with no errors.
But then it fails to detect the GPU.
TheSpicyBoi123@reddit (OP)
Interesting; where exactly does it fail to detect the GPU? Can you run llama.cpp as-is? Can you walk me through exactly what you did, and what works and what doesn't, please?
fuutott@reddit
Forgot to mention: this works with the Vulkan runtime, but I run the same card on ROCm in Linux and it's just faster there, due to llama.cpp's ROCm optimisations for this card, especially for prompt processing.
At the moment I think it could be a driver issue, as hipInfo isn't seeing it either. I'll keep you posted if I get any further.
TheSpicyBoi123@reddit (OP)
What CPU and GPU combination do you specifically have and which runtime? Please keep me posted.
fuutott@reddit
Intel i7 12xxx + AMD MI50, running with MI60 Windows drivers. This card was a pain to get working under Windows in the first place; four BIOSes were tested before Vulkan would see the 32 GB of VRAM. I'll try the other three BIOSes and a different set of drivers.
I'm likely making it more difficult than I should, as I'm targeting ROCm 7.10 (gfx906 is technically deprecated but still compilable) rather than 5.7.1 (the last version supported for this architecture under Windows).
This is for the performance benefits.
Aroochacha@reddit
Define “newer NVIDIA GPUs”, please.
TheSpicyBoi123@reddit (OP)
Newer constitutes Pascal and above.
bitdotben@reddit
Very cool!