How to unlock Gemma 4 MLX support in LM Studio right now (it's already there, just blocked)
Posted by Artistic_Unit_5570@reddit | LocalLLaMA | View on Reddit | 10 comments
WARNING: It may come with risks, but in my experience it works perfectly on my M4 Pro.
If you're getting this error when loading Gemma 4 with MLX in LM Studio:
Failed to load the model
ValueError: Gemma 4 support is not ready yet, stay tuned!
Turns out the support is already fully bundled: mlx-vlm 0.4.3, mlx-lm 0.31.2, the gemma4 model module, everything. LM Studio's mlx-engine even has the code to handle gemma4, but a manual block in generate.py raises a ValueError before it even tries to load.
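Before touching the real file, you can dry-run what the comment-out edit does on a throwaway copy of the guard. This is just a sketch: the two lines below mirror the guard as the post describes it, and the real generate.py may differ.

```shell
# Dry-run of the comment-out edit on a throwaway file that mimics the
# guard described in the post (the real generate.py may differ).
tmp=$(mktemp)
printf '%s\n' \
  '    if model_type == "gemma4":' \
  '        raise ValueError("Gemma 4 support is not ready yet, stay tuned!")' \
  > "$tmp"
# macOS (BSD) sed wants -i ''; fall back to GNU sed's plain -i elsewhere
sed -i '' 's/if model_type/#if model_type/' "$tmp" 2>/dev/null \
  || sed -i 's/if model_type/#if model_type/' "$tmp"
grep -n '#if model_type' "$tmp"
```

Once the grep shows the `if` line commented out, the same sed substitution is what you run against the bundled generate.py.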
1. Backup
Update LM Studio (and everything related) to the latest version before starting.
cp -r ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21 ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21_backup
2. Comment out the block
sed -i '' 's/ if model_type == "gemma4":/ #if model_type == "gemma4":/' ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21/lib/python3.11/site-packages/mlx_engine/generate.py
sed -i '' 's/ raise ValueError("Gemma 4 support is not ready yet, stay tuned!")/ #raise ValueError("Gemma 4 support is not ready yet, stay tuned!")/' ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21/lib/python3.11/site-packages/mlx_engine/generate.py
3. Clear the Python cache
rm ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21/lib/python3.11/site-packages/mlx_engine/__pycache__/generate.cpython-311.pyc
4. Quit LM Studio (Cmd+Q) and relaunch
That's it. Gemma 4 loads and runs on MLX.
Tested on macOS 26.4.1, Apple Silicon, LM Studio MLX v1.5.0.
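If the old error keeps coming back after the edit, stale bytecode elsewhere in the backend may be the culprit, and a broader cache sweep can help. The demo below runs against a throwaway tree so it is safe to try; in practice you would point `ENGINE` at your real backend directory instead.

```shell
# Demo of a broader cache sweep on a throwaway tree; in practice, point
# ENGINE at your real backend directory, e.g.
# ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@21
ENGINE=$(mktemp -d)
mkdir -p "$ENGINE/lib/python3.11/site-packages/mlx_engine/__pycache__"
touch "$ENGINE/lib/python3.11/site-packages/mlx_engine/__pycache__/generate.cpython-311.pyc"
# Remove every __pycache__ directory under the backend, not just one .pyc
find "$ENGINE" -type d -name '__pycache__' -prune -exec rm -rf {} +
find "$ENGINE" -name '*.pyc'    # prints nothing once the cache is gone
```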
One_Club_9555@reddit
Worked like a charm! Finally running Gemma 4 MLX on LM Studio on the Mac.
I’m lazy so didn’t backup the file or anything. Just opened the Python file with a text editor and commented those lines as per your instructions.
Thanks!!
Artistic_Unit_5570@reddit (OP)
Happy that it helped you out!
WillingnessDapper55@reddit
I followed your steps and commented out the exception, but a new issue has arisen.
bacon_king312@reddit
Tried this and it only changed the error to this:
```
🥲 Failed to load the model
Failed to load model.
Error when loading model: ValueError: Gemma 4 support is not ready yet, stay tuned!
```
Artistic_Unit_5570@reddit (OP)
Prompt your AI (Claude or ChatGPT, for example) with something like:
"I found this trick on Reddit to enable Gemma 4 support in LM Studio by commenting out a check in the source code. [paste the full post here] The problem is the file path is hardcoded and probably won't match my setup. Can you help me make this work on my MacBook Pro?
Here's my setup:
Please:"
I found it depends on the Mac and the install location, so you should copy and paste the post and ask Claude for help: "can you make sure this works on my own MacBook Pro? I found it on Reddit."
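The path problem can also be handled without an AI: the backend folder name just encodes the macOS target and a build number, so you can discover it with a glob instead of hardcoding it. A sketch, run here against a throwaway stand-in tree; in practice `VENDOR` would be `~/.lmstudio/extensions/backends/vendor/_amphibian`.

```shell
# Discover the backend directory instead of hardcoding it. VENDOR here is
# a throwaway stand-in for ~/.lmstudio/extensions/backends/vendor/_amphibian
VENDOR=$(mktemp -d)
mkdir -p "$VENDOR/app-mlx-generate-mac26-arm64@20"   # example install
ENGINE=$(find "$VENDOR" -maxdepth 1 -type d -name 'app-mlx-generate-*' | head -n 1)
echo "$ENGINE"    # this is the directory the sed commands should target
```

Whatever directory this prints is the one to substitute into the backup, sed, and cache-clearing commands from the post.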
Artistic_Unit_5570@reddit (OP)
Did you copy and paste all the commands into the terminal, in order? Restart the Mac and then retry.
qchuret@reddit
Nice share u/Artistic_Unit_5570!
How come your fix worked when you say you're using macOS 26? I iterated with Claude, which actually helped me: depending on your macOS version, you need to change different files.
So here are the adapted commands for macOS 26:
## macOS 26+
### 1. Backup
```bash
cp -r ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac26-arm64@20 ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac26-arm64@20_backup
```
### 2. Comment out the block
```bash
sed -i '' 's/ if model_type == "gemma4":/ #if model_type == "gemma4":/' ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac26-arm64@20/lib/python3.11/site-packages/mlx_engine/generate.py
sed -i '' 's/ raise ValueError("Gemma 4 support is not ready yet, stay tuned!")/ #raise ValueError("Gemma 4 support is not ready yet, stay tuned!")/' ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac26-arm64@20/lib/python3.11/site-packages/mlx_engine/generate.py
```
### 3. Clear the Python cache
```bash
rm -f ~/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac26-arm64@20/lib/python3.11/site-packages/mlx_engine/__pycache__/generate.cpython-311.pyc
```
### 4. Quit LM Studio (Cmd+Q) and relaunch
But thanks for the original fix, works perfectly now :)
rezwits@reddit
Man, you're the man on that one. I just couldn't bring myself to use the GGUF versions. I can't bring myself to that crap...
Artistic_Unit_5570@reddit (OP)
First time posting something actually technical that fixes a big problem. Happy I managed to help you out!
eone20@reddit
66666