Unlocked LM Studio Backends (v1.59.0): AVX1 & More Supported – Testers Wanted
Posted by TheSpicyBoi123@reddit | LocalLLaMA | View on Reddit | 13 comments
Hello everyone!
The latest patched backend versions (1.59.0) are now out, and they bring full support for “unsupported” hardware via a simple patch (see GitHub). Since the last update 3 months ago, these builds have received major refinements in performance, compatibility, and stability, thanks to optimized compiler flags and upstream work by the llama.cpp team.
Here’s the current testing status:
✅ AVX1 CPU builds: working (tested on Ivy Bridge Xeons)
✅ AVX1 Vulkan builds: working (tested on Ivy Bridge Xeons + Tesla K40 GPUs)
❓ AVX1 CUDA builds: untested (no compatible hardware yet)
❓ Non-AVX experimental builds: untested (no compatible hardware yet)
I’m looking for testers to try the newest versions on different hardware, especially non-AVX2 CPUs and newer NVIDIA GPUs, and to share performance results. Testers are also wanted for speed comparisons of the new vs. old CPU backends.
👉 GitHub link: lmstudio-unlocked-backend


Brief install instructions:
- Navigate to the backends folder, e.g. C:\Users\Admin\.lmstudio\extensions\backends
- (Recommended for a clean install) delete everything except the "vendor" folder
- Drop in the contents of the compressed backend of your choice
- Select it in LM Studio's runtimes and enjoy.
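On Linux/macOS the same steps look roughly like this — a sketch only; the backends path matches the usual LM Studio layout, and the archive name is a placeholder for whichever backend you downloaded:

```shell
# Usual LM Studio backends location (adjust to your install;
# on Windows it lives under C:\Users\<you>\.lmstudio).
BACKENDS="${BACKENDS:-$HOME/.lmstudio/extensions/backends}"
mkdir -p "$BACKENDS/vendor"
cd "$BACKENDS"
# Clean install: remove everything except the "vendor" folder
find . -mindepth 1 -maxdepth 1 ! -name vendor -exec rm -rf {} +
# Unpack the backend of your choice (archive name is a placeholder):
# tar -xzf ~/Downloads/backend-of-your-choice.tar.gz -C "$BACKENDS"
```

After that, pick the new runtime in LM Studio's runtime selector as usual.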
fiery_prometheus@reddit
Tangential question, do you support avx512?
TheSpicyBoi123@reddit (OP)
Hello, and yes! You can build a backend with AVX512 using the generator script. I would be quite curious about your AVX512 performance across the optimizer levels from none to O3. If you have difficulty for some reason, I can make you a custom build later for you to try.
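For anyone curious what those levels look like in practice, here is a rough sketch of the configure commands such a generator could emit for an AVX-512 build. The `GGML_*` switches are llama.cpp's real CMake options, but everything else (build dir names, the loop itself) is illustrative, not the actual script from the repo:

```shell
# Print one configure command per MSVC optimizer level (a sketch, not the
# real generator). MSVC has no /O3; its speed level is /O2.
for OPT in /Od /O1 /O2; do
  tag=$(printf '%s' "$OPT" | tr -d '/')   # e.g. "/O2" -> "O2" for the dir name
  echo cmake -B "build-avx512-$tag" \
    -DGGML_NATIVE=OFF -DGGML_AVX=ON -DGGML_AVX2=ON -DGGML_AVX512=ON \
    "-DCMAKE_C_FLAGS=$OPT" "-DCMAKE_CXX_FLAGS=$OPT"
done
```

Note that an O3 level only exists when building with GCC/Clang (`-O3`); on MSVC the comparison tops out at /O2.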
fiery_prometheus@reddit
Here are some benchmark results for AVX512 with different optimizer flags for MSVC. Also, the backend-manifest.json should have a unique name, otherwise LM Studio doesn't pick up additional backends; this isn't written in the guide.
I ran benchmarks with both seed36b and gemma-3 12b.
I've attached the build script, it builds on windows and makes 3 versions with different optimizer flags.
https://pastebin.com/Hjs3q43Z
TheSpicyBoi123@reddit (OP)
I fully agree, the backend manifest needs some cleaner labelling. If you're feeling motivated, I can gladly make you a contributor on the GitHub page too.
fiery_prometheus@reddit
Would be neat, always liked the project :-) https://github.com/Nidvogr
TheSpicyBoi123@reddit (OP)
Awesome, just added you. I would love your help with cleaning up the build scripts (and have a look at the new ones I made).
TheSpicyBoi123@reddit (OP)
The script is also a lot cleaner than mine, great job lmao
kryptkpr@reddit
You've clearly put a lot of work into this so I am curious, what's the appeal of LM Studio that makes you bend over backwards to keep it vs just running upstream llama-server or koboldcpp?
fuutott@reddit
Not op but likely one or both of: 1) because he can 2) because someone said it can't be done
Aggressive-Bother470@reddit
/me looks at the Bloomfield he has on the floor that was going to the skip tomorrow...
Icy_Resolution8390@reddit
GOOD NEWS
Skystunt@reddit
Can you add one with the newly supported qwen3 next plsss ?
egomarker@reddit
Nice work