I can't find any LLM that is better than gemma-2-9b-it-SimPO...

Posted by pumukidelfuturo@reddit | LocalLLaMA | 39 comments

... that you can drive, in a reasonable manner, with 8 GB of VRAM.
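For what it's worth, here's a minimal sketch of how you could fit it in that budget with llama-cpp-python and a ~4-bit GGUF quant. The file name and settings below are illustrative assumptions, not official, and community quants (e.g. the bartowski ones) are what you'd actually download:

```python
# Minimal sketch: running a Q4_K_M quant of gemma-2-9b-it-SimPO on an 8 GB GPU.
# Assumes llama-cpp-python built with CUDA support and a downloaded GGUF file;
# the exact path/filename here is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9b-it-SimPO-Q4_K_M.gguf",  # ~5.8 GB file, leaves headroom in 8 GB VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # keep context modest so the KV cache stays inside the VRAM budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short story opening."}],
)
print(out["choices"][0]["message"]["content"])
```

With a larger context you'd start spilling past 8 GB, which is why the quant level and `n_ctx` matter more than raw parameter count here.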

I've tried a lot of the new toys and I always end up back at the same one.

I hope somebody tries to replicate the style (stop the GPT-isms plox, enough is enough) and makes something better in the 8-to-10-billion-parameter ballpark that you can run locally on the most humble (actually affordable) GPUs.

Or maybe we need Gemma 3.