Has anyone fine-tuned facebookresearch/omnilingual-asr? Looking for guidance or a codebase
Posted by Outside_Solid5371@reddit | LocalLLaMA | View on Reddit | 0 comments
Hi everyone,
Has anyone here fine-tuned facebookresearch/omnilingual-asr for a new language or custom dataset?
I’m trying to set up a full fine-tuning pipeline (data prep → training → evaluation), but the official repo doesn’t provide much detail on adapting the model. If you’ve done it before, could you share:
- Your training workflow
- Any scripts/codebase you used
- Tips on dataset formatting
- Hardware requirements
- Any issues you ran into during fine-tuning
Even a GitHub link or minimal training script would help a lot.
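For context on the data-prep side: I'm currently assuming a JSONL manifest format (one utterance per line with an audio path, transcript, and language code), since that's a common convention for ASR toolkits. The field names below are my guesses and are not taken from the omnilingual-asr repo, so corrections are welcome:

```python
import json

# Hypothetical manifest writer -- field names ("audio_filepath", "text",
# "lang") are assumptions based on common ASR conventions, NOT confirmed
# against the omnilingual-asr codebase.
def write_manifest(pairs, out_path):
    """pairs: iterable of (audio_path, transcript, lang_code) tuples."""
    with open(out_path, "w", encoding="utf-8") as f:
        for audio, text, lang in pairs:
            entry = {
                "audio_filepath": str(audio),  # assumed: path to a mono 16 kHz WAV
                "text": text,                  # ground-truth transcript
                "lang": lang,                  # ISO 639-3 code for the target language
            }
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")

pairs = [
    ("clips/0001.wav", "hello world", "eng"),
    ("clips/0002.wav", "bonjour le monde", "fra"),
]
write_manifest(pairs, "train_manifest.jsonl")
```

If the repo expects a different layout (e.g. TSV, or fairseq2-style datasets), I'd be glad to hear what actually works.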
Thanks in advance!