Llama models: still valuable for finetuning or surpassed by everything new?

Posted by Silver-Champion-4846@reddit | LocalLLaMA | 83 comments

Hello there people. I've noticed that people have been pretty much ignoring Llama 3, 3.1, 3.2, and 3.3 lately, and they rarely mention their experience fine-tuning those models. At the same time, we haven't been getting many new entries in the 70B space. So is, for example, Llama 3.3 70B still the best thing available to experiment with and fine-tune, or is it Qwen3 all the way?