Gemma 3n Fine-tuning now in Unsloth - 1.5x faster with 50% less VRAM + Fixes
Posted by danielhanchen@reddit | LocalLLaMA | 37 comments
Hey LocalLlama! We made fine-tuning Gemma 3N 1.5x faster with Unsloth in a free Colab, using under 16GB of VRAM! We also found and fixed several issues for Gemma 3N:
Ollama & GGUF fixes - Gemma 3N GGUFs could not load properly in Ollama because the per_layer_token_embd tensor had loading issues. Use our quants in Ollama to get the fixes. All dynamic quants are in our Gemma 3N collection.
NaNs and infinities on float16 GPUs - we found the Conv2D weights (the vision part) have very large magnitudes, which overflow float16, so we upcast them to float32 to remove the infinities.
[Figure: green crosses mark the large-magnitude Conv2D weights]
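The fix is essentially promoting those layers to float32. A minimal sketch of the idea (an illustration, not Unsloth's actual patch):

import torch.nn as nn

def upcast_conv2d_to_fp32(model: nn.Module) -> nn.Module:
    # Promote every Conv2d's weights and biases to float32, since their
    # large magnitudes can overflow float16 and produce inf/NaN activations.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.float()
    return model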
Free Colab to fine-tune Gemma 3N (4B) with audio + text + vision inference: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3N_(4B)-Conversational.ipynb
Update Unsloth via pip install --upgrade unsloth unsloth_zoo
from unsloth import FastModel
import torch

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3n-E4B-it",
    max_seq_length = 1024,   # context length used for fine-tuning
    load_in_4bit = True,     # 4-bit quantization to fit in under 16GB VRAM
    full_finetuning = False, # LoRA/QLoRA instead of a full fine-tune
)
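The Colab then attaches LoRA adapters before training; a minimal sketch of that step (the argument names and values here are assumptions based on Unsloth's usual LoRA flow - the notebook has the exact recipe):

model = FastModel.get_peft_model(
    model,
    r = 16,            # LoRA rank (illustrative value)
    lora_alpha = 16,
    lora_dropout = 0,
    random_state = 3407,
)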
Detailed technical analysis and guide on how to use Gemma 3N effectively: https://docs.unsloth.ai/basics/gemma-3n
im_datta0@reddit
You guys keep cooking. High time we make an Unsloth Cooking emoji
yoracale@reddit
Thank you, appreciate it! We do need to move a little faster on multi-GPU support and our UI, so hopefully they both come within the next 2 months or so! 🦥
Final_Wheel_7486@reddit
Sorry, I'm not quite inside the loop. What did you mean by "UI"? Are you planning to create a fine-tuning UI?
mycall@reddit
Could that include using both GPU and NPU when that's available?
yoracale@reddit
I think so? We're still working on it and tryna make it as feature complete as possible
im_datta0@reddit
Will be a canon event the day it lands
plztNeo@reddit
🦥 👨🍳
NoSpite3343@reddit
u/danielhanchen u are cooking, when will audio fine tuning be available?
yoracale@reddit
It already works for Gemma 3n. It'll just use much more VRAM.
Or are you talking about TTS finetuning? It's already supported: https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning
NoSpite3343@reddit
I want audio fine-tuning on Gemma 3n, do u support that bro?
onlinetries@reddit
Anyone building anything cool with this? Unsloth, thank you!
yoracale@reddit
Thanks for reading! FYI Google is also holding a competition if you're interested: https://x.com/UnslothAI/status/1940414492791468240
onlinetries@reddit
Awesome, will give it a try. Not sure how impactful it'll be, but something interesting/fun I hope, at least.
Ryas_mum@reddit
I am using the Unsloth Gemma 3n E4B Q8 GGUF on my M3 Max 96GB machine. For some reason the tokens per second are limited to 7-8 at most. One thing I noticed is that these models seem to use a lot of CPU; GPU utilisation is limited to only 35%. I am on the llama.cpp 5780 brew version and using the run params from the article.
Is this because I selected the Q8 quant? Or am I missing some required parameters?
Thanks for the quants as well as detailed articles, very much appreciate it.
danielhanchen@reddit (OP)
Oh interesting - I think it's the per-token embeddings which are slowing everything down.
But I'm unsure.
__JockY__@reddit
Brilliant!
Ahem… wen eta vllm…
yoracale@reddit
FP8 and AWQ quants are on our radar, but we aren't sure how big the audience is yet, so we haven't committed to them! 🙏
CheatCodesOfLife@reddit
AWQ would be great for the bigger models (70b+). Anyone with 2 or 4 Nvidia GPUs would benefit, and they're quite annoying / slow to create ourselves.
I'd personally love FP8 for <=70B models but I'm guessing the audience would be smaller. 4 x 3090s can run FP8 70B, 2 x 3090s can run FP8 32B.
I'm guessing you guys would have more to offer with AWQ in terms of calibration, whereas FP8 is pretty lossless. And RedHat has been creating FP8 quants for the popular models lately.
That's my 2c anyway.
mxmumtuna@reddit
Would love that
__JockY__@reddit
Some nice juicy AWQ…
danielhanchen@reddit (OP)
On our to do list!
ansibleloop@reddit
This is excellent
Warning though: this is text only, so don't try to use it with images
yoracale@reddit
You can use it with images and audio but it'll use a lot more VRAM!
Karim_acing_it@reddit
Hi, thanks for your tireless contributions to this community. I saw your explanation of Matformer in your docs and know that Gemma 3n uses this architecture, but (sorry for the two noob questions) I reckon the submodel size S isn't something we can change in LM Studio, right? What does it default to?
Can the value of S be changed independently of the quant, or does one have anything to do with the other? Say, is it better to use a small quant at full S "resolution" or a large quant but a tiny S? Thanks for any insights!
danielhanchen@reddit (OP)
I'm not sure if you can; you might have to ask in their community. Let me get back to you on the 2nd question.
handsoapdispenser@reddit
Would one of these fit in an RTX 4060?
danielhanchen@reddit (OP)
For training? Probably not, as the 2B one uses 10GB of VRAM. For inference, definitely yes.
mmathew23@reddit
You can run the Colab notebook for free and keep an eye on the GPU RAM used. If that used amount is less than your VRAM capacity, it should run.
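A minimal way to check that from inside the notebook with standard PyTorch calls (nothing Unsloth-specific, just an illustration):

import torch

props = torch.cuda.get_device_properties(0)
used_gb = torch.cuda.max_memory_reserved() / 1024**3
total_gb = props.total_memory / 1024**3
print(f"Peak GPU memory: {used_gb:.1f} / {total_gb:.1f} GB")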
Basileolus@reddit
Unsloth! I'm always proud 🦚 of you guys. Thanks
danielhanchen@reddit (OP)
Thank you appreciate the support :)
eggs-benedryl@reddit
Does this explain why they were so slow last night on my system? Interesting..
danielhanchen@reddit (OP)
Depends on your GPU mainly, but probably yes. Actually, they aren't even supposed to work.
SlaveZelda@reddit
How do I use Unsloth quants in Ollama instead of the Ollama-published ones?
danielhanchen@reddit (OP)
Yep that's correct! :) All the instructions are usually in our docs.
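In general, Ollama can pull GGUFs straight from Hugging Face, so something along these lines should work (the exact repo name and quant tag below are assumptions - check the docs for the current ones):

ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:Q4_K_XL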
rjtannous@reddit
Unsloth, always ahead of the pack. 🔥
danielhanchen@reddit (OP)
Thank you we appreciate it!
mmathew23@reddit
Love when you analyze these arch details!