gemma 3n e4b won't finetune on kaggle, inference stays exactly the same

Posted by TimesLast_@reddit | LocalLLaMA

i'm having a weird issue. i'm trying to finetune gemma 3n e4b on kaggle using a slightly edited unsloth notebook, but the model's outputs don't change at all at inference time. it's like i'm still talking to the base model.
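for context, the flow in the notebook is roughly this (a sketch from memory, not my exact cells; the model name and hyperparameters are placeholders, and the actual gemma 3n notebook may use unsloth's FastModel wrapper rather than FastLanguageModel):

```python
# rough sketch of the notebook flow, from memory; model name, r/alpha and
# max_seq_length are placeholders, not necessarily what the stock notebook uses
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3n-E4B-it",  # placeholder checkpoint name
    max_seq_length=2048,
    load_in_4bit=True,  # the qlora path; i also tried plain lora
)

# wrap the base model with lora adapters; this returns a peft model
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# ... trl's SFTTrainer runs here and the loss goes down ...

# inference: generating from the SAME wrapped `model` object the trainer used
FastLanguageModel.for_inference(model)
inputs = tokenizer("a prompt from my dataset", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```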

what i've tried:

- qlora and regular lora
- increasing lora r and alpha
- switching up the dataset
- changing all the default settings
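one failure mode i want to rule out: if the adapter gets saved and the model reloaded before inference, reloading only the base checkpoint silently gives back base behavior, because save_pretrained on a peft model writes only the adapter weights. here's a sketch of what i understand the correct reload to be (paths and names are placeholders, assuming the standard peft api):

```python
# hypothetical reload path; "lora_out" is a placeholder adapter directory.
# model.save_pretrained("lora_out") on a peft model saves ONLY the adapter,
# so loading just the base checkpoint afterwards reproduces base outputs.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3n-E4B-it",  # placeholder base checkpoint
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3n-E4B-it")

# generating from `base` here would ignore the finetune entirely;
# the adapter has to be attached on top of the base model first
model = PeftModel.from_pretrained(base, "lora_out")
```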

the training loss drops substantially over the run, but the outputs at inference don't change at all
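as a sanity check on whether the lora weights are even active at generation time, i'm planning to compare outputs with the adapter toggled off (this assumes the model object really is a peft model, which exposes a disable_adapter() context manager):

```python
# quick sanity check: if these two outputs are identical, the adapter
# isn't being applied at inference (or its weights are effectively zero)
import torch

inputs = tokenizer("a held-out prompt from my dataset",
                   return_tensors="pt").to(model.device)

with torch.no_grad():
    with_adapter = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    with model.disable_adapter():  # peft context manager, turns lora off
        without_adapter = model.generate(**inputs, max_new_tokens=64,
                                         do_sample=False)

print(tokenizer.decode(with_adapter[0]))
print(tokenizer.decode(without_adapter[0]))
print("identical:", torch.equal(with_adapter, without_adapter))
```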