Pıtırcık

Posted by Connect-Bid9700@reddit | LocalLLaMA | View on Reddit | 2 comments

We fine-tuned the Gemma 0.3B base model with LoRA and measured an average improvement of 50% (±5% standard deviation) across our evaluation benchmarks. This shows that parameter-efficient fine-tuning can substantially increase model capability while keeping computational overhead low. You can try our model on Hugging Face: https://huggingface.co/pthinc/Cicikus_v4_0.3B_Pitircik
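
For anyone who wants to try a similar setup, here is a minimal sketch of LoRA fine-tuning with Hugging Face `transformers` and `peft`. The base model ID, dataset, LoRA rank, and all other hyperparameters below are illustrative assumptions, not the actual configuration behind Pıtırcık.

```python
# Minimal LoRA fine-tuning sketch (transformers + peft).
# All model IDs, data, and hyperparameters here are assumptions for illustration.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-3-270m"  # assumed base model; substitute your own
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into the attention projections.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params

# Any small text dataset works for demonstration purposes.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM labels (shifted input_ids)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-adapter")  # saves only the adapter weights
```

Because only the low-rank adapter matrices are trained, the saved checkpoint is a few megabytes and can later be loaded alongside, or merged into, the frozen base model.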