IndexTTS2, the most realistic and expressive text-to-speech model so far, has had its demos leak ahead of the official launch! And... wow!
Posted by pilkyton@reddit | LocalLLaMA | View on Reddit | 137 comments
IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech
https://arxiv.org/abs/2506.21619
Features:
- Fully local with open weights.
- Zero-shot voice cloning. You just provide one audio file and it clones the voice with extreme accuracy.
- Zero-shot emotion cloning by providing a second audio file that contains the emotional state to emulate. This covers things like whispering, screaming, fear, desire, anger, etc.
- Optional: Text control of emotions, without needing a 2nd audio file.
- Full control over how long the output will be, which makes it perfect for dubbing movies.
Here are a few real-world use cases:
- Take an anime, clone the original character's voice, clone the emotion of the original performance, have it read the English script, and tell it how long the performance should last. You now have the exact same voice and emotions reading the English translation, with a good performance at the perfect length for dubbing.
- Take one voice sample, and make it say anything, with full text-based control of what emotions the speaker should perform.
- Take two voice samples, one being the speaker voice and the other being the emotional performance, and then make it say anything with full text-based control (see the hypothetical code sketch below).
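The IndexTTS2 code isn't out yet, so nobody outside the team knows the real API. Just to make those input combinations concrete, here's a purely hypothetical Python sketch; the package name, class name, and every argument are my guesses, not the actual interface:

```python
# HYPOTHETICAL sketch only. The IndexTTS2 code isn't released yet, so the
# package, class, and argument names below are guesses that just mirror the
# inputs described above (text + speaker reference + optional emotion control).
from indextts2 import IndexTTS2  # assumed package/class name

tts = IndexTTS2(checkpoint="indextts2_checkpoint.pth")  # assumed argument

# 1) Voice clone only: one reference audio file.
tts.infer(text="Hello there.",
          speaker_ref="character_voice.wav",
          output_path="out_plain.wav")

# 2) Voice clone + emotion cloned from a second audio file.
tts.infer(text="Get away from me!",
          speaker_ref="character_voice.wav",
          emotion_ref="screaming_scene.wav",
          output_path="out_emotional.wav")

# 3) Voice clone + text-described emotion + target duration.
#    (Duration control is what makes dub timing possible.)
tts.infer(text="I missed you so much.",
          speaker_ref="character_voice.wav",
          emotion_text="tearful, relieved",
          duration_seconds=3.2,
          output_path="out_dub.wav")
```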
So how did it leak?
- They have been preparing a website at https://index-tts2.github.io/ which is not public yet, but their repo for the site is already public. Via that repo you can explore the presentation they've been preparing, along with demo files.
- Here's an example demo file with dubbing from Chinese to English, showing how damn good this TTS model is at conveying emotions. The voice performance it gives is good enough that I could happily watch an entire movie or TV show dubbed with this AI model: (Click the download icon on the page) https://github.com/index-tts/index-tts2.github.io/blob/main/ex6/Empresses_in_the_Palace_1.mp4
- The entire leaked repository is here: https://github.com/index-tts/index-tts2.github.io/
- To download all demos and watch the HTML presentation locally, you can "git clone https://github.com/index-tts/index-tts2.github.io.git".
I can't wait to play around with this. Absolutely crazy how realistic these emotions are!
blackashi@reddit
How long until the Chinese govt stops letting these guys publish breakthroughs?
pilkyton@reddit (OP)
Hopefully never. China is the reason we get cool things while the west acts hysterical.
JackStrawWitchita@reddit
I'm curious to know what the hardware requirements are. Chatterbox runs great on lower spec computers. If this IndexTTS2 runs on the same hardware it'd be awesome.
pilkyton@reddit (OP)
Text to speech usually doesn't require much VRAM. So I think it will be easy to run. :)
mpasila@reddit
It seems to have been trained on Chinese and English data, so AI dubbing would only work between those two languages; anime wouldn't really be a use case for this model.
pilkyton@reddit (OP)
That just means that the languages it can output are English and Chinese.
So you can dub a Japanese Anime into English or Chinese.
Or you can dub a Hungarian Movie into English or Chinese.
But you can't dub an English movie into Japanese, for example.
zyxwvu54321@reddit
The real question is whether this TTS can handle Japanese speech as a reference without affecting the English output. Will the English sound natural, or will it have a noticeable Japanese accent like we see in Chatterbox when using Japanese reference audio?
pilkyton@reddit (OP)
It clones the timbre, tone and rhythm of the reference voice, so it will have a slight accent.
If you want to avoid this, use a native English voice as the reference voice instead.
You can still use the original non-English audio as the Emotion Reference, to control the emotion of the fully native English speaker voice.
SkyFeistyLlama8@reddit
It totally makes sense for Bilibili. Take an English-language movie and dub it into Chinese for the local market, do the reverse to get Chinese shows for a global audience.
Bad dubs will be a thing of the past!
mpasila@reddit
Did they show any examples of that (non-Chinese/English audio dubbed into English/Chinese)? The examples they had looked a lot like voice-to-voice AI dubbing (Chinese audio to English audio), similar to ElevenLabs.
pilkyton@reddit (OP)
It's a text-to-speech model. You provide the exact text of what it should say.
The languages you can write your text in are: English, Chinese.
The voice audio clip you provide for the voice cloning can be any language.
The emotional audio clip to clone emotions can be any language.
IrisColt@reddit
Chatterbox can literally clone a voice in any language... but the pieced together cloned voice will be in English.
pilkyton@reddit (OP)
Yeah, cloning the rhythm and tone of a speaker doesn't require any specific language. You can provide a voice in any language to IndexTTS.
IrisColt@reddit
Thanks for the info!
Trick-Independent469@reddit
Bro, for voice cloning, the person whose voice is being cloned doesn't need to speak the same language as the output. They could be speaking Telugu, for that matter.
OC2608@reddit
...Again for the 100th time... I guess I'll continue sleeping in the local TTS dream. But it sounds amazing.
mpasila@reddit
If they provide the tools for fine-tuning, then someone could train it to generate other languages. But currently it can only output either English or Chinese. With fine-tuning support you could expect more languages to follow, as happened with F5, Orpheus, and XTTSv2.
BusRevolutionary9893@reddit
Um, there are plenty of STT models that can translate Japanese to English.
mpasila@reddit
I haven't found a good STT for transcribing Japanese yet, though. Most of them skip or mistranscribe things so frequently that they're not really usable.
oxygen_addiction@reddit
Donghua world.
mrfakename0@reddit
I don't think it was leaked so much as a mistake in how they put up the GitHub Pages site. I see this a lot: they named the repo index-tts2.github.io, but to get that subdomain they would need to create a new GitHub org (called index-tts2). So I think this is more of a mistake than a leak.
pilkyton@reddit (OP)
Yeah the repository name was definitely a mistake.
They contacted me, though, since they haven't gone public and were surprised that I found these things. I posted the update at the bottom of the original post above.
freehuntx@reddit
Not the first tts rugpull
pilkyton@reddit (OP)
I still haven't forgiven Kyutai:
https://www.reddit.com/r/LocalLLaMA/comments/1ly6cg6/kyutai_texttospeech_is_considering_opening_up/
Or Sesame CSM releasing a nerfed model publicly, which loses coherence after just a few seconds.
But so far, IndexTTS1 and IndexTTS1.5 were totally open Apache 2 licensed models. No restrictions at all. I think IndexTTS2 will be the same.
JuicedFuck@reddit
Believe me, the chances of a rug pull go up exponentially with the SOTA-ness of the model. Anyone can get hit with either the thought of "I could sell this", or someone else saying "I will pay you (millions) to keep this exclusive to our company API".
pilkyton@reddit (OP)
I spoke to them today. They're considering anime and movie dubbing as a use case and may therefore introduce a commercial license for commercial usage, but even if they do, they will still release it for free for non-commercial use. Nothing is decided yet, except that it will definitely be free for non-commercial use.
Silver-Champion-4846@reddit
Looks like you're one of the tts-ophiles, just like me. I want something that works like Gemini TTS, where I can narrate my novels in peace. Gemini screws up sometimes and I can't get it to unscrew up.
Trysem@reddit
Same opinion
Silver-Champion-4846@reddit
It's better than anything I could find for free (completely free, not just a jacked-up demo), but it still screws up.
MerePotato@reddit
You guys need a better name lol
zxyzyxz@reddit
ttsluts then
Silver-Champion-4846@reddit
the other user changed their username. I didn't. Numbers are the proof. Or are you talking about tts-ophile? Yeah that's a terrible name, an unintended insult to the Way of Speechism. Lol
MerePotato@reddit
The latter yeah lmao
Silver-Champion-4846@reddit
Indeed. So yeah... this thing. The first version wasn't good, I hear; let's hope this one's better. I mean, more Apache stuff doesn't hurt, right?
GAMEYE_OP@reddit
I have gotten pretty good results with CSM using the transformers version, but I did have to create voice samples/context
Dragonacious@reddit
Any idea when the github repo will be available??
pilkyton@reddit (OP)
They are still busy fine-tuning, so not this month. But very likely next month.
NoobMLDude@reddit
Wow, this is amazing!
mrfakename0@reddit
Note that while the codebase is licensed under Apache 2.0, the models themselves are licensed under a separate, restrictive, non-commercial license: https://github.com/index-tts/index-tts/blob/main/INDEX_MODEL_LICENSE
This is currently the license for IndexTTS 1 and 1.5; hopefully IndexTTS 2 will be a truly/fully open-source release!
IrisColt@reddit
head asplodes
CommunityTough1@reddit
Ow! My head asplode
pilkyton@reddit (OP)
You are absolutely insane for referencing that with zero hints to anyone about what you mean, and I am more insane for understanding your reference. High five.
https://www.youtube.com/watch?v=R22zSrpeSA4
Evolution31415@reddit
Wow, the Empresses_in_the_Palace_1.mp4 video is really impressive. Now single-voice audiobook actors can provide only guidance and create as many voices as they want.
necile@reddit
Huh? I saw the entire video and I would never want to watch a dub with it; it just isn't that good.
pilkyton@reddit (OP)
Yeah, it absolutely blew my mind. For the first time, this is approaching actual human acting, instead of the "stilted corporate promo video where some terrible actor is reading a script and trying to pretend to be human" feeling that other AI text-to-speech more or less gives.
It's the first time I've actually felt like AI voices could be enjoyable for a full movie dubbing. I noticed that it even cloned the Chinese accent when it dubbed them. Very interesting. I can't wait to try it locally with good reference voices, trying different emotional reference audio clips, and re-running the generation as much as needed to get very believable acting. This is shockingly cool stuff.
There can be a market for people who provide voices and emotions as clips to be used as guidance for this type of AI.
SkyFeistyLlama8@reddit
I've watched a lot of dubbed Chinese and Japanese shows and the dubbed voices are always very different to the original actors, although the voice actors try to maintain the same emotional tone and cadence.
This demo almost nailed the emotional tone and cadence perfectly while still retaining the original actors' voices, for the most part. It's revolutionary and scary as hell. Dead actors will be brought back to life with this technology.
I might try making my own Hitchhiker's Guide to the Galaxy audiobooks using Douglas Adams' voice. Or I might not.
zxyzyxz@reddit
It might also be because of ADR (Automated Dialogue Replacement) dubbing, where the dub is recorded separately from the on-site location of the actors when saying a line. But perhaps we could actually fix that with TTS too.
SkyFeistyLlama8@reddit
That's also kind of funny because a lot of old Chinese and Italian shows use ADR for the original actors' audio, so the original actors are dubbing themselves. Sometimes lip movements aren't in sync with the audio.
This will become the new AI ADR. It still won't match a good human performance, not yet anyway, but it's good enough for smaller shows. This is the worst it will ever be so there's plenty of upside.
Or downside, if you're a voice actor.
zxyzyxz@reddit
Lots of countries use ADR natively still to this day, for some reason. Maybe we will even get a sort of reverse ADR, where we analyze the scene and construct a 3D model to predict the acoustics of the scene then use that information to inform our TTS.
remghoost7@reddit
That demo is freaking insane.
Man, I'd love to run a ton of anime through this model and generate English dubs for it.
Recently got addicted to that new horse girl gacha game (don't ask) and I was wanting to watch the anime.
I don't really feel like watching a subbed anime at the moment, but if this model works as well as it claims, I could just watch it dubbed...
What a wild world we live in.
JealousAmoeba@reddit
It's very good; the only issue I can hear is inconsistency in voice tone between lines. I assume the model can only do a small amount of speech at a time, and there's some voice instability across generations?
IrisColt@reddit
Aged like fine wine.
Freaky_Episode@reddit
!remindme 5 days
alew3@reddit
Is it just Chinese and English, or are there other languages supported?
Accurate-Ad2562@reddit
Need great French support.
rbgo404@reddit
Sounds amazing!
Will add it to this Open Source TTS Gallery (Hugging Face Space): https://huggingface.co/spaces/Inferless/Open-Source-TTS-Gallary
pilkyton@reddit (OP)
Nice. There's also this battle ranking page, which someone made with the older IndexTTS1.5 (not 2.0):
https://huggingface.co/spaces/kemuriririn/Voice-Clone-Arena
PurposeFresh6398@reddit
Hi, we're the builders of this Arena. Could we discuss IndexTTS some more? Would you mind contacting me, since I can't chat with you directly?
mitchins-au@reddit
I’ll believe it when I see it. Still sore from Sesame.
Ensirius@reddit
Yeah WTF happened there?
mister2d@reddit
!remindme 5 days
bloke_pusher@reddit
I need it on my computer right fucking now! Aaaah!
harlekinrains@reddit
What are you folks talking about here?
In the reel itself you hear autotune artifacts.
The emotional delivery doesn't map to what's going on on screen.
The pacing is stilted, with one emotional transition rushed because the half-sentence was too short for the emotion prompt.
The delivery is forced (well, how couldn't it be, with all those issues already mentioned).
The room audio is effed. I mean, OK, they didn't have that on separate tracks, and good karaoke software costs an arm and a leg...
The cloned voices feel like different characters.
Better pick "shouting in despair" as the emotional delivery we want to highlight with our release.
Find 10 redditors who find that amazingly impressive?
How on earth...
AndroYD84@reddit
First came DALL-E Mini. "Haha, look artists! Laugh at it!"
Then came DALL-E 2. "Pfft, not as good as humans! It looks so fake!"
Then came DALL-E 3 and Stable Diffusion. "O-ok! B-but AI still can't draw hands!"
Then came community-made tools and models, ComfyUI, LoRAs, etc. "That was made by an AI?!? B-but it still can't write text correctly sometimes!"
Then came the Ghiblipocalypse and perfectly clear text, and so on.
I've seen a lot of promising projects die because no one supported or believed in them, and it's really sad. Armchair critics look at the surface of a rock and say "it's only dirt", but an enthusiast looks at the rock and says "Oh, it's only dirt now, but I KNOW there's a diamond hiding in there". This is the state of the art now, and it will potentially be free for everyone to develop on and improve; what will it be in the next 5 years?
FpRhGf@reddit
AI audio has always gotten way less development and community support than AI images over those years, though. It bugs me that we've had AI upscalers for images/videos since the 2010s, yet no AI exists to enhance general audio quality. Otherwise the autotune-like problems of TTS or Veo3 wouldn't be an issue.
I wish we had gotten a ComfyUI ecosystem and a community that didn't stop innovating. There were several competing SVCs within the span of half a year, until RVC2 came along and people just... stopped. It's been 2 years since. There have been a number of decent open-source song makers, but outside of the initial release hype it's crickets. Nobody's trying to train music LoRAs with them.
There's so much potential to be had with the AI audio ecosystem.
PurpleNepPS2@reddit
I would think once video generation is at a good level, audio gen will have its turn. Can't really have proper videos without sound, after all.
SimultaneousPing@reddit
crazy seeing the reactions of all those live
GreatBigJerk@reddit
I'm glad I'm not the only one. I listened to the samples and thought they were... fine. Not the best, but decent I suppose. Maybe you have to be a Chinese speaker or something to hear quality samples, but the English dialog didn't match the ground truth very well and felt extremely stilted.
SkyFeistyLlama8@reddit
Not perfect but miles ahead of a bad human dub and light-years ahead of a typical lifeless corpo-drone TTS engine. If you can clean up the text to include proper pitch directions and phoneme spacing, the output would be much better.
pilkyton@reddit (OP)
I guess in the desert of shit that is all "AI text to speech", we're happy when an AI actually shows emotional range, yes.
harlekinrains@reddit
Fair.
sleepy_roger@reddit
!remindme 5 days
Lucky-Necessary-8382@reddit
!remindme 3 days
RemindMeBot@reddit
I will be messaging you in 5 days on 2025-07-18 17:52:02 UTC to remind you of this link
Robert__Sinclair@reddit
and where is the model?
vk3r@reddit
!remindme 5 days
Unfair-Enthusiasm-30@reddit
Is there even a fine-tuning code for the 1.5 version to train new languages?
Virtamancer@reddit
Is there a book-length TTS app yet?
I would kill to be able to convert ebooks to audiobooks using modern voices, free and locally, with an intuitive simple GUI that actually installs reliably. Like LM Studio but for audiobook-length TTS.
Dragonacious@reddit
Can we install this locally?
Emport1@reddit
Didn't even read the title
neOwx@reddit
You got my hopes up. So I watched the demo, and even though it's good, I'll never watch an entire movie with this dub quality.
pilkyton@reddit (OP)
I watched a bunch of HUMAN-dubbed Asian movies as a kid, such as this:
https://www.youtube.com/watch?v=GRyxn2w6GAk
This AI dub is on par with that human dub. So I'd happily watch that.
But I am actually sure that IndexTTS2 can do a lot better dubbing than what their demos show. Because their page (see the link in my post) also contains a lot of other pure text-to-speech examples that sound very natural. I think their dubbing examples suffer a bit because they are using a Chinese voice for the tone + emotions. I think it will sound 5x better if you give it an English voice + English emotional reference.
SkyFeistyLlama8@reddit
Human dubs can range from excellent to pour-molten-lead-into-my-ears-please. I like how they're using the original Chinese actors' voices to generate English audio, as if those actors are doing the dub themselves. You could use a native English speaker's voice to generate better sounding audio but it won't be as realistic.
National_Cod9546@reddit
Pretty soon, we won't be able to believe anything we see or hear on TV. Already pretty close, but this gets it closer.
u_3WaD@reddit
I call them "YAET" - Yet Another English-only TTS. No matter how good they are, unfortunately, they're unusable for real use cases in most of the world.
rm-rf-rm@reddit
Yeah, we have to wait until it's actually in our hands and we can try it out. It's easy to make demos look good.
robertotomas@reddit
Wait what is the input? Text or video? That seems impossible
Mahtlahtli@reddit
Please let us know how well the text control of emotions goes!
Valuable_Can6223@reddit
I'm impressed, can't wait to check it out.
Turkino@reddit
Hoping this actually releases as I'd love to try this out
IrisColt@reddit
How about comparing it with Resemble's Chatterbox?
pilkyton@reddit (OP)
Chatterbox is great but can't do emotional control. So you'll have way better acting / emotions with IndexTTS2.
IrisColt@reddit
Thanks!!!
Crinkez@reddit
I hope it can adhere to instructions to use tonal declination. I tested Gemini TTS for an audiobook (for self use) and it was maddening how difficult it was to get tonal declination. There's a constant tonal uplift towards the end of most sentences as if the speaker is asking a question. Horribly inappropriate for audiobook usage.
reart_ai@reddit
Is it multilingual?
pilkyton@reddit (OP)
https://www.reddit.com/r/LocalLLaMA/comments/1lyy39n/comment/n2y3wth/
mintybadgerme@reddit
Wow if that comes out it's gonna be a game-changer. Literally.
kellencs@reddit
it's not leaked, https://index-tts.github.io/index-tts2.github.io/
pilkyton@reddit (OP)
What the hell, I've never seen a github.io link inside another github.io link like that before.
Nice discovery. I'll edit the post to link to the demo page.
bsenftner@reddit
I've tried building the GitHub repo. The command-line app built, but the Gradio UI failed with a CUDA/PyTorch mismatch. Tried to fix it, unsuccessfully.
pilkyton@reddit (OP)
The IndexTTS1.5 code repo is here:
https://github.com/index-tts/index-tts
The IndexTTS2 code repo is not released yet.
nikitastaf1996@reddit
Wait a second. Voice cloning + emotion cloning. Almost holy grail for dubbing.
BusRevolutionary9893@reddit
Can we ban leaks of future announcements along with announcements of future announcements?
SquashFront1303@reddit
How many languages does it support?
pilkyton@reddit (OP)
Its text-to-speech is trained to generate English and Chinese. Pretty much all TTS models these days are English + 1 more language, usually Chinese, since Chinese labs are currently the best at open AI.
a_beautiful_rhind@reddit
My holy grail is when it can infer the emotions from the provided text on a clone. Not writing tags like (happy) but a decent approximation from just context.
Guess we won't know how it is outside of dubs until the weights drop.
Environmental-Metal9@reddit
I think this is the territory of multimodal LLMs, since it requires some level of "understanding" of the text. I'm mostly musing to myself here, but so far we have LLMs with extra heads that produce tokens which become mel spectrograms in the model's processing pipeline, and you have the grapheme-to-phoneme-to-mel-spectrogram pipelines. There's plenty of other tech out there, but of the models I've seen talked about this year so far, those two families are the prevalent ones. I can't wait to see what IndexTTS2 is doing with their model!
pilkyton@reddit (OP)
I suspect that it will do a good job giving natural readings without any emotional prompts at all, since it was trained to do emotions. The control over emotions will most likely give the best results though.
Well, you could also train a text model that can take your script and automatically insert relevant emotion tags.
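Until someone trains a dedicated tagger, you could probably get most of the way there by just prompting an off-the-shelf instruct model instead of training one. Here's a rough sketch of the idea (the tag set, model choice, and prompt are all made up for illustration):

```python
# Rough sketch: prompt an off-the-shelf instruct LLM to pick an emotion tag
# per script line, then feed that tag to the TTS as its text-based emotion
# control. The tag set, model, and prompt are illustrative, not from IndexTTS2.
from transformers import pipeline

EMOTIONS = ["neutral", "happy", "sad", "angry", "afraid", "whispering"]

tagger = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

def tag_line(line: str) -> str:
    prompt = (
        "Pick the single best emotion for an actor reading this line.\n"
        f"Options: {', '.join(EMOTIONS)}\n"
        f"Line: {line}\n"
        "Emotion:"
    )
    out = tagger(prompt, max_new_tokens=5)[0]["generated_text"]
    completion = out[len(prompt):].strip().lower()
    for emotion in EMOTIONS:
        if emotion in completion:
            return emotion
    return "neutral"  # fall back if the model's answer isn't in the tag set

print(tag_line("Get out. Get out now!"))  # likely "angry"
```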
a_beautiful_rhind@reddit
True, for static content that would work great. I hope the weights really come out and it doesn't take a whole lot of resources.
pilkyton@reddit (OP)
So far they've released IndexTTS1 and IndexTTS1.5 with a fully open, commercial-allowed, modifications-allowed, you-can-do-anything license (Apache 2). I think this will be the same.
kataryna91@reddit
That could be revolutionary.
I love Chatterbox, but it does not support emotional directives and that somewhat limits its practical applications.
IrisColt@reddit
Thanks for the insight!
Black-Mack@reddit
Cinema
pilkyton@reddit (OP)
Can't wait to see what cinematic scripts you guys use it for. "Oh no... step... step-ChatGPT... why... why am I stuck in this washing machine... and where is my skirt... oh noes UwU..."
Black-Mack@reddit
No man, that's pathetic. If I do use it for RP, I'll use it for language learning.
My feelings are for a real wife.
Emport1@reddit
Will this actually be open weights or will they do a Sesame and open weights for just their smallest model of the series?
harunandro@reddit
It's really hard to understand this attitude, dude. Why don't you engineer up yourself, create something better than what Sesame published, and donate it to us all?
Emport1@reddit
mb it wasn't meant to be that serious, should've probably just shortened it to "hopefully they don't do a sesame lol" lol
pilkyton@reddit (OP)
IndexTTS1 and IndexTTS1.5 were Apache 2 fully open, fully unrestricted. I don't see why this wouldn't be.
sage-longhorn@reddit
Looks interesting, I'll have to check it out
So one-shot then, not zero-shot
pilkyton@reddit (OP)
Definition:
Zero-shot voice cloning AI refers to artificial intelligence that can replicate a person's voice using little or no training data - sometimes just a few seconds of audio - without requiring the AI to have seen that specific voice during its training phase.
Tricky_Reflection_75@reddit
But they didn't leak anything, though... it's all a part of their paper, and they said they're going to open-source it eventually.
pilkyton@reddit (OP)
Yeah, I didn't see that they had published the URL in the paper. And their new page is hosted at a very strange URL that violates the expectations of github.io hosting, by putting the new page inside the old page despite the repositories being separate. So it looked like the page wasn't ready to be public yet.
Anyway, the fact that they've open-sourced all previous versions of IndexTTS with the totally unrestricted Apache 2 license is super exciting, because it means they'll most likely do the same with IndexTTS2. This is gonna be super fun to play around with!
dankhorse25@reddit
!remindme 5 days
the_other_brand@reddit
Auto-regressive?
Is this similar to how image generation AIs use iterative steps to get the result closer and closer to an expected result?
pilkyton@reddit (OP)
Nah, autoregressive means it uses all previous tokens to generate the next token, which lets it maintain coherent speech. This enables fine-grained prosody control and more natural timing and rhythm, because each decision can be influenced by what's been said so far. They also added emotional control and duration control on top. It's awesome.
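If you want the shape of it in code, the heart of any autoregressive token-based TTS is a loop like this (a generic illustration with a made-up model signature, not IndexTTS2's actual decoder):

```python
# Generic autoregressive decoding loop (illustration only, not IndexTTS2 code).
# Each new acoustic token is sampled conditioned on ALL previously generated
# tokens, which is what keeps prosody and timing coherent across the utterance.
import torch

def decode(model, text_ids, max_tokens=1000, eos_id=0):
    tokens = []  # acoustic tokens generated so far
    for _ in range(max_tokens):
        # The (hypothetical) model sees the text plus everything generated so far.
        logits = model(text_ids, torch.tensor(tokens, dtype=torch.long))
        probs = torch.softmax(logits[-1], dim=-1)  # distribution over next token
        next_tok = torch.multinomial(probs, 1).item()
        if next_tok == eos_id:  # model decided the utterance is finished
            break
        tokens.append(next_tok)
    return tokens  # a neural codec/vocoder turns these into a waveform
```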
Ryas_mum@reddit
!remindme 10 days
Traditional_Tap1708@reddit
Interested
Beautiful-Essay1945@reddit
!remindme in 2 days
evilbarron2@reddit
Is this free as in beer and open source or is this just an ad in disguise?
pilkyton@reddit (OP)
Free as in Apache 2:
https://github.com/index-tts/index-tts/blob/main/LICENSE
Specific_Dimension51@reddit
Amazing! I think the work of film dubbers (well, setting aside all the strikes, the pressure, and the corporate lobbying) is really going to die out soon. It's kind of crazy. We've reached a point, in my opinion, where there's absolutely zero friction in enjoying a dubbed performance. We're getting a perfect transcription of the original actor's performance.
pilkyton@reddit (OP)
That's what blew my mind. I can actually enjoy this kind of acting/performance by an AI. It doesn't sound robotic. It also doesn't sound like the best actors in the world, at least not in this demo, but it sounds good enough that I can totally watch this and wouldn't even know that it was AI generated.
And when I see AI, I often think "this is the worst it's ever going to be". It will always get better. So yes, the work of dubbing/narration is definitely going to be taken over by AI soon.
djtubig-malicex@reddit
Oh my
pilkyton@reddit (OP)
That's my feeling too:
https://www.youtube.com/watch?v=yicbvWwQ_MA
Can't wait to make funny audio with emotional depth!