What's the missing piece in the LLaMA ecosystem right now?

Posted by Street-Lie-2584@reddit | LocalLLaMA | 32 comments

The LLaMA model ecosystem is exploding with new variants and fine-tunes.

But what's the biggest gap or most underdeveloped area still holding it back?

For me, it's data prep and annotation tooling. The models keep getting more powerful, but cleaning and structuring quality training data for fine-tuning is still a major manual bottleneck.
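To make the point concrete, here's a minimal sketch of the kind of cleanup that's still mostly done by hand: exact-match dedup plus a length filter over a JSONL instruction dataset. The file names (`raw.jsonl`, `clean.jsonl`), the `instruction`/`output` keys, and the `min_len` threshold are all assumptions for illustration; adapt them to your own data.

```python
import hashlib
import json
from pathlib import Path

def clean_dataset(in_path: str, out_path: str, min_len: int = 16) -> None:
    """Deduplicate and filter a JSONL instruction dataset.

    Assumes each line looks like {"instruction": ..., "output": ...};
    adjust the keys to match your own schema.
    """
    seen: set[str] = set()
    kept = dropped = 0
    with Path(in_path).open() as src, Path(out_path).open("w") as dst:
        for line in src:
            try:
                row = json.loads(line)
            except json.JSONDecodeError:
                dropped += 1  # skip malformed lines
                continue
            instruction = row.get("instruction", "").strip()
            output = row.get("output", "").strip()
            # Drop empty or near-empty examples.
            if len(instruction) + len(output) < min_len:
                dropped += 1
                continue
            # Exact-match dedup via a hash of the normalized pair.
            key = hashlib.sha256(f"{instruction}\x00{output}".encode()).hexdigest()
            if key in seen:
                dropped += 1
                continue
            seen.add(key)
            dst.write(json.dumps({"instruction": instruction, "output": output}) + "\n")
            kept += 1
    print(f"kept {kept}, dropped {dropped}")

clean_dataset("raw.jsonl", "clean.jsonl")
```

And that's just the easy part: near-duplicate detection (e.g. MinHash), quality scoring, and actual annotation still have no standard, polished tooling in this ecosystem.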

What do you think is the biggest missing piece?

- Better/easier fine-tuning tools?
- More accessible hardware solutions?
- Something else entirely?