Common and Obscure Models and Ways to Find Them [ Human Written ]
Posted by iMakeSense@reddit | LocalLLaMA | 16 comments
I've been on a binge finding uses for local AI on my machine beyond general LLM usage, and I'm not sure which other sub this kind of discovery belongs in. Here's a collection of my findings.
I'd appreciate other contributions that are off the beaten path or collections.
Somewhat "common" apps / models
Invaluable voice-to-voice translation app. It was quite easy to find a voice online and map one voice to another. I used it to clean up some crappy lecture recordings. It's what you use if you want to make a recording sound like Obama.
Great for converting any sort of text into audio using a variety of locally running models, e.g. transcripts or ebooks. Comes with good tools to parse certain upload types. I used it to make an audiobook out of an EPUB.
I know lots of people use this, but there's also a desktop version in beta. I hate running containers or servers or what have you, so this eases a lot of the headache.
There are also settings that allow you to use TTS models and STT models so you can have a vocal conversational experience.
A good hosting program for a bunch of AI apps. Good if you want to just click, try something out, and then dip. Irritating, though, as lots of apps crash; look for something with a high number of check-ins. Also a good interface for running Open WebUI.
Easy speech-to-text for vocal transcription.
Apps / Models I've seen less mentioned
Seems like a model pipeline manager; I just can't understand the ecosystem well enough to use it with local models. I'm not sure if I have to do a lot of installation myself or how its plugin architecture works. Whenever I look at external plugins, they seem to mostly be in Chinese with English translations and have fewer stars than normal, so I'm never sure if I'm doing the right thing. Spent an hour on it.
This one is good but a PITA. You have to look at your system monitor to see that it's actually using the GPU, and you have to install the latest beta from the site. The settings are also convoluted, and it fails silently a lot.
Meetily - Realtime closed captioning is oddly hard to find a model for.
You'd think this would be the first thing people would use STT for, but it's surprisingly hard to find something realtime. Handy is more for text input than closed captioning.
Neat package for voice upscaling, but I feel like something better ought to exist.
Long Form Speech Transcription
Parakeet 0.6b / VibeVoice / CohereTranscribe
I don't know why people keep touting Whisper. These are more accurate, hallucinate less, run faster, or provide more features (speaker tagging and voice activity detection). Feels like GIMP vs. Krita. Whisper hallucinates because it's trained on YouTube data.
It's odd that more Hugging Face leaderboards aren't posted here. I also feel as though most ASR front-ends are geared towards smaller tasks.
Obscure Examples
Takes music and generates a MIDI file.
Porn classification.
Speakr - Seems to require a lot of config as well
Might need a separate compose setup to spin it up with the corresponding models and take it down. Essentially for OCD note-taking.
Things I've been looking for
Gallery to slideshow
I've found this feature a lot in Google Photos and Samsung Gallery. Something like an AMV generator, the kind the old 2000s YouTube channels would make.
AI video editing
Something where I can put in clips and it gives me processing options. Things like action tagging, topic transitions, silence and vocal activity, etc.
Voice Cloning -> singing :
Applio seems great for that, but I'm still figuring out how to "train" a voice in the format it requires. It'd be nice to have a tool that uses 30-second one-shots like other tools do, but I don't know if that would reduce quality.
Speech editing
I've had lots of recorded audio where I'd like to get a transcript and re-type a part of my speech to make it seem natural without having to re-record.
Good image / video / text search front-end
I just want to tag and organize things ideally through embeddings where possible. Just something I can double click, configure, and point at a folder.
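To illustrate the embedding-search idea, here's a toy Python sketch that ranks `.txt` files in a folder against a query. The bag-of-words "embedding" is a deliberate placeholder for a real model (e.g. a sentence-embedding model for text or CLIP for images); only the index-and-rank shape is the point.

```python
import math
from collections import Counter
from pathlib import Path

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real setup would call an
    # actual embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(folder: str, query: str, top_k: int = 3):
    """Index every .txt file under `folder` and rank by similarity."""
    q = embed(query)
    scored = [(cosine(q, embed(p.read_text())), p)
              for p in Path(folder).rglob("*.txt")]
    return sorted(scored, reverse=True)[:top_k]
```

A real front-end would cache the embeddings rather than recomputing them on every query, but the "point it at a folder" workflow is the same.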
Spoken Audio Cleanup
Also oddly hard to find? There are stem separation tools, but it feels like this needs its own unique pipeline. Not sure which models are best for this.
Batch transcription front-end with cleanup pipeline
Something that can go audio cleanup -> voice activity detection -> ASR -> transcription -> output format, ideally, but anything with batch transcription would be great. Odd that this doesn't exist.
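To sketch the voice-activation stage of such a pipeline: here's a crude, self-contained energy-based gate in Python that turns raw samples into speech spans. A real pipeline would swap in a proper VAD model (e.g. Silero VAD) and hand each span to an ASR model; the threshold and frame size below are arbitrary assumptions.

```python
import math

def rms(frame):
    # Root-mean-square energy of one frame of float samples.
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def voice_segments(samples, rate=16000, frame_ms=30, threshold=0.02):
    """Crude energy-based VAD: return (start_sec, end_sec) spans whose
    frames exceed an RMS threshold. Placeholder for a real VAD model."""
    n = int(rate * frame_ms / 1000)
    spans, start = [], None
    for i in range(0, len(samples) - n + 1, n):
        active = rms(samples[i:i + n]) >= threshold
        t = i / rate
        if active and start is None:
            start = t                    # speech onset
        elif not active and start is not None:
            spans.append((start, t))     # speech offset
            start = None
    if start is not None:
        spans.append((start, len(samples) / rate))
    return spans
```

Batch transcription would then just be: for each file, clean up, run this, feed each span to the ASR model, and concatenate into the output format.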
Generally the "Ollama" for other means
General AI packages and pipelines for things like audio production, conversation analysis, etc.
Discovery Methods
Searching through AI related repository stats
- local-ai, speech-to-text, semantic-search, speech-enhancement
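As a small discovery helper, here's a hypothetical Python sketch that builds GitHub search URLs for those topics, filtered by stars so actively maintained projects surface first. The `topic:` and `stars:>` qualifiers are standard GitHub search syntax; the star threshold is just a guess.

```python
from urllib.parse import urlencode

TOPICS = ["local-ai", "speech-to-text", "semantic-search", "speech-enhancement"]

def topic_search_url(topic: str, min_stars: int = 100) -> str:
    """Build a GitHub repository-search URL for one topic, sorted by stars."""
    params = {
        "q": f"topic:{topic} stars:>{min_stars}",
        "type": "repositories",
        "s": "stars",   # sort field
        "o": "desc",    # sort order
    }
    return "https://github.com/search?" + urlencode(params)

for t in TOPICS:
    print(topic_search_url(t))
```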
**AlternativeTo** (https://alternativeto.net/) - Used to find open-source alternatives to popular software
If you have any suggestions for discovery methods, obscure models, or other comprehensive model-packaging tools, I'd appreciate you sharing them! Ideally things with:
- decent communities
- more recent / capable models
- alternatives to popular paid tools.
MedicineTop5805@reddit
For local dictation/transcription, I usually care less about raw benchmark speed and more about whether it stays local and handles messy audio well. whisper.cpp is still a pretty nice baseline for that, IMO.
JazzlikeLeave5530@reddit
Strange reporting on Ultimate Vocal Remover. I don't remember how I set it up anymore but it works perfectly to split stems in music every time without fail. Never had issues with it.
Genebra_Checklist@reddit
Audio cleanup enhancer is a need.
MrCatberry@reddit
I've been looking for ages for something like this for longer clips, like 5 min.
AudioSR is the closest I could find, but it has insane VRAM needs for longer clips.
General_Service_8209@reddit
ClearerVoiceStudio can do this with comparatively modest VRAM, and has generally worked really well for me.
https://github.com/modelscope/ClearerVoice-Studio/
The setup process can be a bit of a hassle though, depending on your system.
brahh85@reddit
Not true for non-English. And for English, people already have working workflows using Whisper, and when they see the alternative models making a mistake, they prefer the old (and predictable) one over the new. In my case, for English audio, Parakeet wasn't able to catch a lot of the words that Whisper did, probably because of the nature of the audio I gave it (not clean), so it was more useless than Whisper (if Parakeet was 70%, Whisper was 95%). And for Spanish audio, Parakeet started to output English in the middle of the transcription, so it was trash for the use case.
That's how you make GIMP users hate you for free.
Exactly my use case: transcribing podcasts from YouTube.
iMakeSense@reddit (OP)
> thats how you make gimp users hate you for free
I'm sure GIMP is cool. I've used its macros before, but its UI is hard to parse. Krita just looks like Photoshop. Easier sell.
> exactly my use case, transcribe podcasts from Youtube.
https://www.reddit.com/r/LocalLLaMA/comments/1rlqfd7/we_collected_135_phrases_whisper_hallucinates/
Yeah, your model is eerily fitted to that use case. I'm surprised you're not frustrated with the silence hallucinations mentioned here, though.
brahh85@reddit
VAD fixed many of my problems, also using -mc 0 in other cases
SM8085@reddit
I just have a script that sends batches of 20 frames at a time to the bot, asking whether whatever you prompt for is in the frames or not. The wrapping program records the timestamps to feed into ffmpeg and make a clip.
For audio, we have the smaller Gemma4's and then Qwen3-Omni finally got llama.cpp support.
Could try editing out silence using those models.
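The timestamp-grouping step described above might look something like this sketch. It assumes the vision model has already answered yes/no per frame (that call happens elsewhere, in batches); the helper just collapses runs of hits into ranges and builds a plausible ffmpeg cut command.

```python
def hits_to_clips(frame_hits, fps=1.0):
    """Collapse per-frame yes/no answers (e.g. from prompting a vision
    model over batches of frames) into (start_sec, end_sec) ranges."""
    clips, start = [], None
    for i, hit in enumerate(frame_hits):
        if hit and start is None:
            start = i / fps                  # run of hits begins
        elif not hit and start is not None:
            clips.append((start, i / fps))   # run of hits ends
            start = None
    if start is not None:
        clips.append((start, len(frame_hits) / fps))
    return clips

def ffmpeg_cmd(src, start, end, out):
    # Hypothetical helper: cut one clip without re-encoding.
    return ["ffmpeg", "-i", src, "-ss", str(start), "-to", str(end),
            "-c", "copy", out]
```

One command per range, run via `subprocess`, and you have the clips; silence editing would work the same way with per-segment audio judgments instead of frame hits.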
iMakeSense@reddit (OP)
That's pretty cool. Have you compared that to other specialized models? I'm not sure what else exists in the space, but I imagine the latency has to be pretty high, and it could take a while for long videos.
rakarsky@reddit
ComfyUI is the de facto standard tool for local image generation. Basically llama.cpp equivalent in the image space.
brahh85@reddit
Well, this is not the standard, but it's the closest relative to llama.cpp: https://github.com/leejet/stable-diffusion.cpp
Borkato@reddit
It works great!
LeRobber@reddit
With a lot of ComfyUI setups, you download a model from somewhere (often civitai [content warning], but sometimes another place).
DrawThings and DiffusionBee are a lot easier to use than ComfyUI.
kingo86@reddit
Just to add, for meeting transcription with speaker diarization, I've been using a tool called OwnScribe. It lacks a little polish, but it works well enough if you want everything on device.
https://github.com/paberr/ownscribe
I forget why, but Meetily tingled my spidey-senses. Curious to hear if other people have had experience with meeting transcription tools on device.
iMakeSense@reddit (OP)
Oh, did you think they were sending data or so?