Netflix just dropped their first public model on Hugging Face: VOID: Video Object and Interaction Deletion
Posted by Nunki08@reddit | LocalLLaMA | View on Reddit | 198 comments
Hugging Face netflix/void-model: https://huggingface.co/netflix/void-model
Project page - GitHub: https://github.com/Netflix/void-model
eugene20@reddit
"VOID removes objects from videos along with all interactions they induce on the scene — not just secondary effects like shadows and reflections, but physical interactions like objects falling when a person is removed. "
That is really impressive.
False-Difference4010@reddit
Pretty sure it will be used for censoring their shows in some countries
ArcadiaBunny@reddit
Second this
xienze@reddit
I bet it’s used in conjunction with a model that adds/replaces objects for the purposes of advertising (it’s always about advertising). For instance, take away the can of Pepsi sitting on the table and put a Coke in a character’s hand.
SmartCustard9944@reddit
Personalized TV show variants with personalized ads🤦♂️
MrAHMED42069@reddit
Uuf
TuxRuffian@reddit
This is my guess. Why does everyone on every Netflix Show have the same snack and beverage preference as me?....oh right.
DreddKrilov@reddit
lol for sure it will be used for personalization, but no advertiser is paying to reinforce what you already prefer/do on your own. Ads are used for behavior modification, so it won't work like that.
ger868@reddit
"if people are going to fall for this" - that hesitant "if" might be the most hopeful sentiment I've seen in a while.
Vivarevo@reddit
personalized advertising too
and in some countries, removing characters, changing skin colors, items, gender etc.
Reasonable_Ad719@reddit
Plenty of CG animated films have localized elements, as it is easy to do and pleases the audience. Often it is a simple translation, like for "bakery". Now, I'm not sure it will serve the same purpose if a bar in NYC has a translated "bar" sign over it in a live-action movie 🤔
KadahCoba@reddit
Famous examples of element removals and swaps in media are the removal of cigarettes and smoking in many anime series that target younger audiences, and that whole guns-to-radios thing from E.T.
Reasonable_Ad719@reddit
It makes perfect sense - there are various legal restrictions on movies in different countries too. Although I have yet to see those removals; previously, it was cheaper to just cut the footage out.
KadahCoba@reddit
Or a chroma key recolor, like the unfortunate red (blood) to white in some shows recently. Several places have restrictions on blood, and the results of the workarounds are often pretty silly.
ghulamalchik@reddit
If you remove the main character what happens?
Seakawn@reddit
ever gone lucid in a dream? all the action stops, and if there're people around, they just kinda.. idle. it's creepy.
for some reason that's my first thought.
megacewl@reddit
never realized it but the few lucid dreams i’ve had, it was only me during the lucid parts
anime_forever03@reddit
Yeppp ive had a couple ones but by the time I realize its lucid i just wake up before i could do anything 🥲
megacewl@reddit
I had my first one in years several months ago, and it was the first one ever where I both caught on quick enough and managed to start trying to do things. I was even able to ‘recall’ the strategy, during the dream, that I had read about for forcing something to happen. Blew my mind that it worked.
It was SO COOL. I recommend trying to have some more of them.
pissoutmybutt@reddit
you can kinda train yourself to have lucid dreams. i read that if you can build a habit of pinching the back of one hand when you wake up, eventually it will carry into your dreams. when you pinch your hand and it isnt met with the tiny amount of pain like normal, it can trigger your brain to recognize you are not awake.
lucid dreaming is WEIRD, and not what I expected. I could never talk, or fight, or do anything involving more than the most basic motor functions. usually i could "fly" in the sense that i could go straight up, but my only "safe" way down was to interrupt my fall with little short bursts of "flying" to slow me down. sorry if thats confusing to read, but its hard to describe since its not relatable to anything in reality, so im trying my best.
senobrd@reddit
I wouldn’t recommend pinching yourself. You certainly can feel pain in a dream. Better signals are trying to turn on or off the lights (usually doesn’t work) or focusing on a clock or written text (usually looks garbled or nonsensical).
Old_Cantaloupe_6558@reddit
Just check the time twice. You know you're in a dream if the time is not consistent.
tkenben@reddit
I go lucid frequently, but if I try to stay there, I wake up, so instead I try to sit back and "watch the movie". That's more interesting anyway, because it's more dynamic and has unexpected things happen.
Frosty-Cup-8916@reddit
Lotr but only gollum
ticktockbent@reddit
Imagine the awkward silence as everyone sits around with no one to talk to
n00b001@reddit
Big bang theory without laughing track
positivitittie@reddit
I always hated seeing the laughing track.
Borkato@reddit
I’ve never seen a laughing track while watching the big bang theory. Do they visit a film studio or something?
positivitittie@reddit
Yes. That’s the one they want removed with the vision model remover.
n00b001@reddit
Urine luck:
https://youtu.be/jKS3MGriZcs
milanove@reddit
I can’t wait to see all the meme videos people will make with this technology. I wanna see Seinfeld without Jerry.
BriansRevenge@reddit
Garfield Minus Garfield: The Movie
seanthenry@reddit
That's the first place I went with it also.
isademigod@reddit
r/garfieldwithoutgarfield
LinkSea8324@reddit
Just call it Stalin
thawizard@reddit
First thing I thought as well.
hyperdynesystems@reddit
Someone needs to use this to fix Villeneuve's Dune by removing Zendaya.
EfficientWinter8592@reddit
How tf did they do that?
ghulamalchik@reddit
With examples of what would happen when something is censored vs not censored. Probably took a ton of time and effort since it's not just text, you have to show it example videos.
So they basically record the same scene twice.
At least this is how I would approach it.
PANIC_EXCEPTION@reddit
I just wonder how you perfectly recreate a scene with just the removal difference. I guess if you just have enough data, they don't need to be perfect? Or use photorealistic CGI instead?
Nice_Database_9684@reddit
You don't have to. You just generate the training data yourself. Film a room. Remove something. Record it again. Boom, training data.
PANIC_EXCEPTION@reddit
If it's just still images, that's easy. But you have to perfectly recreate the motion for a scene pair, and if that involves anything short of a robot, it's impossible. People can't just perfectly recreate movements. Try holding your hand under a desk lamp with no support. See how the shadow is shaky no matter how hard you try to keep it still? Now scale that up to whole body motion and irregular gait. Even facial expressions.
If the training data can handle irregularity without issue, then that's fine, but if the difference signal must be precise, then that's the question.
dvztimes@reddit
Actually I bet its better for training if it isnt a perfect recreation. That builds in flexibility.
alphaclass16@reddit
all they need is any content they own that used any removal in post. they'd have access to the plates with and without the element in question
code-garden@reddit
Yes, they use CG, you can see the paper here https://arxiv.org/pdf/2604.02296 . They generated many physically simulated CG scenes with and without a particular object and a mask for that object in the initial scene. These are used to fine-tune a video model that already can do object removal but not the physics.
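A minimal sketch of what one such simulated training pair might look like as a data record: the same physically simulated scene rendered with and without the target object, plus a mask marking the object in the first frame. The field names, shapes, and toy values here are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

import numpy as np


# Hypothetical record for one simulated training pair: a clip containing the
# object, the re-simulated clip without it, and a mask for the object in the
# initial frame. Names and layout are assumptions for illustration only.
@dataclass
class RemovalPair:
    with_object: np.ndarray      # (frames, H, W, 3) source clip
    without_object: np.ndarray   # (frames, H, W, 3) re-simulated target clip
    initial_mask: np.ndarray     # (H, W) boolean mask of the object in frame 0


def toy_pair(frames=8, h=64, w=64):
    """Build a synthetic pair with a made-up 16x16 'object' region."""
    rng = np.random.default_rng(0)
    src = rng.random((frames, h, w, 3), dtype=np.float32)
    tgt = src.copy()
    tgt[:, 16:32, 16:32, :] = 0.5   # object region (and its effects) re-rendered
    mask = np.zeros((h, w), dtype=bool)
    mask[16:32, 16:32] = True
    return RemovalPair(src, tgt, mask)


pair = toy_pair()
print(pair.with_object.shape, int(pair.initial_mask.sum()))
```

The fine-tuning objective would then be to predict `without_object` from `with_object` plus `initial_mask`, which is what lets the model learn the physical consequences of the removal rather than just inpainting pixels.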
SmartCustard9944@reddit
Transformers don’t just learn sequences. Some architectures learn how to fill gaps.
Mountain-Pain1294@reddit
This is horrifying. I mean with how people will misuse it
Snoo_64233@reddit
That is a whole lot of temporal understanding and cohesion the model has to deal with.
SupernovaTheGrey@reddit
IDK why this makes me think of that Stalin photo
mylAnthony@reddit
Almost sounds like april fools model 🤔
eugene20@reddit
I did have to check the date on the project page before commenting.
Mayion@reddit
"What if we remove mosaic?"
Neither-Phone-7264@reddit
why did they translate it like that
tophology@reddit
It's a meme. It's not a real fansub
tavirabon@reddit
No, this is literally a screenshot during a time where the fansub community was overly concerned with respecting the original Japanese meaning in the translations. You would have to pause the anime to read all these notes to understand what was going on because every word that didn't translate 1:1 to an English concept had its own note. They were eventually phased out because it made anime less accessible.
This became a meme because it was useless, even during such a time.
FpRhGf@reddit
This keikaku thing sounds excessive and useless, but I do wish I could see more translator's notes, because I love reading extra context. Maybe it's because I haven't watched enough anime, but I've only come across a TN in English subs one time.
tavirabon@reddit
It's not as common these days (probably because official subs/dubs are more common) and ones that do will limit it to a sentence or so, but it really was a problem in the 00's. There would be paragraphs covering the entire screen multiple times per episode and most subs were hard subs back then.
They aren't too uncommon though, at least if it's pirated and you flip through the various subtitle tracks since they are rarely the default.
ArcadiaNisus@reddit
The nostalgia reading your comments hit me like a truck. I remember watching almost everything from Ani-Kraze / Shinsen-Subs.
HydraVea@reddit
I thought it was a real fan translation that became a meme.
Frosty-Cup-8916@reddit
I don't remember the group but I'm 90% sure it's a real fansub.
This one isn't a memesub though, and memesubs do exist, where someone either tells a completely different story with the subtitles or does an "abridged style" of subtitles that is still ridiculous.
Frosty-Cup-8916@reddit
Fan subs can be super weird
philmarcracken@reddit
so basically this...
Ylsid@reddit
Then you remove everything that interacts with it too
cantgetthistowork@reddit
For science?
Worried-Ad-7351@reddit
Well we now get censored content more ig. nice engine
ForestyForest@reddit
There's this sci-fi book called Star Carrier by BV Larsen where the concept of an "unperson" is introduced. It is a power that a few people have to erase a person from everyone's memories and also from all media.
s101c@reddit
So, censorship model to remove cigarettes from older movies?
Wiktor1975@reddit
Hopefully.
ElementNumber6@reddit
Or sponsors that don't re-pay up.
SluttyRaggedyAnn@reddit
Yup Netflix isn't generating models for the goodness of the community. They'll be using it to dynamically insert ads based on the viewer's ad interest.
kris206@reddit
that’s so dystopian! product placements that change based on advertisers and who is watching.
ElementNumber6@reddit
Where do you think we are, exactly?
TopChard1274@reddit
in a local LLM Utopia?
fuck_cis_shit@reddit
yes. and in the long run, thermodynamics demands all utopias be local
IrisColt@reddit
You nailed it!
yaboyyoungairvent@reddit
This is already what's happening when you visit websites. If you live in USA you will get different banner ads compared to someone living in Brazil. If you visit AI subs you're going to be more likely fed ai products in the reddit ads.
s101c@reddit
This is happening with Pixar movies as well. There are multiple examples where they altered a specific scene multiple times for different markets
Poromenos@reddit
Netflix is generating models for themselves to use. They're releasing the models for the good of the community. They didn't have to release.
harpysichordist@reddit
Censorship model to remove races (light-skinned) from all movies--unless portrayed as the bad guys or stupid, of course. They've been doing it manually, so they want to automate it.
WoodCreakSeagull@reddit
Yes, of course, can't forget that white people are the real victims while Trump is sending gestapo to round up brown people.
WhateverOrElse@reddit
"If you can convince the lowest white man he's better than the best colored man, he won't notice you're picking his pocket. Hell, give him somebody to look down on, and he'll empty his pockets for you."
You have found the lowest, whiny little white man. In this thread. So far.
harpysichordist@reddit
How ridiculous.
Reddit never misses an opportunity to disparage white men. I am neither white nor a man, yet you're right about so little. But you are the racist.
OkDoor726@reddit
So I just came back to Reddit after 3 years being away, it's posters like you that made me leave
Yaawwwwnnn
WoodCreakSeagull@reddit
Couldn't identify anything wrong with what I said, just chiming in with "omg woke" after I replied to someone imagining anti-white racism
You were not missed
OkDoor726@reddit
Living up to the neck beard redditor I see
This place is just a cesspool of soytards
Axxhelairon@reddit
no one cares
harpysichordist@reddit
* "Gestapo" is one way to indicate how badly you're trying to distort reality. The U.S. has laws related to immigration and border crossing. Enforcement of these laws is the duty of the executive branch. If you have a problem with the laws, you should speak with the lawmakers. And your race-baiting is another indication of how badly you're trying to distort reality. Trump has offered illegal aliens thousands of dollars to leave the U.S., rather than be deported, and tell them to re-enter the country legally. He's done this multiple times. He didn't have to do this. He could have limited actions strictly to deportation. But there are people in the U.S. illegally who continue to remain in the U.S. illegally. They will have to face the consequences of their actions. But there are people, like criminals, who are upset when laws are enforced.
* Whites have been the victims of systemic racism in the U.S., yes. Ignoring it or trying to hide it doesn't make it less true. And downplaying it as only applying to the entertainment industry is another distortion you're trying to make. Is Netflix racist? Yes; explicitly so. They make race-based decisions heavily throughout their operations.
WoodCreakSeagull@reddit
The same executive branch has made it repeatedly clear that the unqualified savages they employ in ICE face basically no scrutiny or accountability for their actions, as demonstrated when they lied and smeared American citizens as domestic terrorists when ICE were shown on video murdering them without cause. Not to mention all of the rapes and abuses that go on in the camps where they hold migrants. Not to mention that there is little recourse if ICE decides to just lie and grab legal citizens who just "look foreign." This is something the American government, including SCOTUS, has only decided to make much easier for them to do based on racial profiling.
Everything I just mentioned is 10000x more harmful and dangerous to non-whites than the "systemic racism" you laughably imagine.
White people enjoy by far the most systemic advantages of anyone else in the U.S, enjoying these advantages after centuries of subjugating and killing every other race of people in and out of the country, until the modern day. The country is governed right now by a white nationalist administration that is hell bent on filling concentration camps with people who speak Spanish. All of the "systemic" shit you're complaining about is literally trying to smooth over the brutality.
ZombieTesticle@reddit
They do that with casting because the historically important characters they want to re-cast tend to be speaking parts which you couldn't handle with this.
What this is more likely for is replacing product placement on a regional basis as already mentioned, removing no longer culturally acceptable actions like smoking and probably removal of darker skinned people from the background to make shows more palatable in Asia, China especially.
The people already furiously typing would be well served to compare movie posters in the west and in China some time.
MaycombBlume@reddit
If only AI could remove the giant rock you're living under.
ticktockbent@reddit
I was thinking how amusing it would be to rewatch old movies with central plot points simply removed. Godzilla, but you remove the big lizard and everyone just stops looking panicked and goes back to their business and stuff
Long_Pomegranate2469@reddit
You can use it on that Blue girl getting railed by a thousand dudes and she'll just go and do a boring retail job.
Cultured_Alien@reddit
Except that the girl will be removed and dudes having gay party.
Borkato@reddit
Now that’s my kind of party!
MisterDalliard@reddit
Wasn't this part of an Arthur C Clarke novel?
Effective_Olive6153@reddit
People will start filming videos with generic "product" packaging. Once the show is published and distributed, they will be able to replace the generic "product" with targeted advertisement at the point of distribution - like YouTube, a theater, or a streaming service.
The real power is for streaming - you may have a million people watching the same show, and all of them see different targeted product placement depending on their data profile
daedalus1982@reddit
or evidence
Perfect_Twist713@reddit
Or memoryholing people and events. At least it's out in the open instead of behind closed doors.
johnfkngzoidberg@reddit
Probably great at removing ex-boyfriends from insta posts. So useless for normal people.
TechNerd10191@reddit
When it comes to LLMs, Netflix is more open-source than Anthropic.
thrownawaymane@reddit
Netflix has been posting cool open source shit for a long time. Here’s the first one I ever heard of, 10 years ago:
https://github.com/netflix/chaosmonkey
That’s my kind of party.
Zeeplankton@reddit
that's hilarious
iMakeSense@reddit
Didn't they stop using it internally at a point? I always thought it was a good idea.
thrownawaymane@reddit
That’s what I heard, no idea why. It’s not suuuuper inactive commit wise though
buttplugs4life4me@reddit
I remember when it took the company I worked for, which was a competitor to Netflix, a couple of years and massive pushback to actually commit to chaos engineering.
When we finally got a QA guy (yes, one!), his first priority was to implement chaos engineering.
So his first act was to get buy-in from the higher-ups for it. There was a lot of publicity around it.
And then only his favourite team could do it while the rest of us looked on.
Shit engineering culture honestly; all the people supposed to push for that were wet noodles that bent over for the higher-ups faster than a hooker.
HopePupal@reddit
remember when they invented AWS autoscaling before Amazon did? Netflix software people are not to be underestimated
TuxRuffian@reddit
Unfortunately it hasn't been updated in over 2 years, but they also created Metaflow (an open-source framework for ML, AI, & DS). I noticed that the GitHub repo says it's now maintained by Outerbounds, even though it's still under Netflix's GitHub account. I wonder if Netflix owns Outerbounds? 🤔
Terrible-Detail-1364@reddit
yeah, there's a system I know of that still uses Eureka and Zuul
cantgetthistowork@reddit
First time I'm hearing about this
Seakawn@reddit
this sounds like what god does to my life in general, the difference is I don't build the resiliency.
Heavy-Focus-1964@reddit
hahahaha. what a great idea
Competitive-Ill@reddit
Fucking love chaos monkey! They introduced me to chaos engineering ❤️
pigeon57434@reddit
literally everyone is more open source than anthropic
ReachingForVega@reddit
Even OpenAI? Lol
trombolastic@reddit
well yeah, codex is open source https://github.com/openai/codex
Claude Code on the other hand just accidentally open sourced itself for a minute
ReachingForVega@reddit
The convo is about models not harnesses.
trombolastic@reddit
Anthropic has zero open models lmao
bernaferrari@reddit
OpenAI has GPT OSS, Anthropic has not
Educational_Note6910@reddit
Can't be accurate. They have open-sourced Claude Code twice in the last two years.
skadoodlee@reddit
Anthropic is pretty open source recently
TechNerd10191@reddit
That's why I said "AI models"
reddit-369@reddit
Some people say Anthropic doesn’t do open source.
Turns out their Claude accidentally did—
just… open-sourced itself.
Ylsid@reddit
Netflix is from the old guard of tech bros, made from people who believed that open source isn't dangerous
daniel-sousa-me@reddit
Not surprising since the whole company was built around the idea that model creators should be gatekeepers of its capabilities
Howdareme9@reddit
Not really hard tbf
HugoCortell@reddit
Too bad that it's for removing objects and keeping the background, rather than the other way around. I'd really love an AI to help with tedious greenscreen work.
I know there are a few (as in 2) models out there already, but the quality isn't great, and the set-up process is far from user friendly.
PromptAfraid4598@reddit
Now we can edit video surveillance footage just like in the movies, where no one kidnapped the girl waiting for the bus.
YRUTROLLINGURSELF@reddit
What a coincidence, wow, I just dropped my new public model and it happens to be called VOID too: Viewer Offboarding via Invoice Deletion
YRUTROLLINGURSELF@reddit
wait I got a better one Value Optimization through Immediate Departure
Shockbum@reddit
Netflix/Custom-model
redditer129@reddit
Studios need to take a stand first
hugganao@reddit
it's the model made by the company that was owned by ben affleck from what I remember lol
and netflix acquired them.
MerePotato@reddit
I shiver at the thought of what this might be used for, but on the other hand, this is going to be very helpful for cleaning robotics datasets
RegisteredJustToSay@reddit
The video is a bit misleading. You have to supply a four-value mask for every frame of the video: the object, object overlap, what was affected by it, and the background. Results are cool, but I think they're making it sound easier and less work-intensive to use than it is.
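To make the four-value-per-frame requirement concrete, here is a toy sketch of what assembling such masks for a short clip might look like. The class ids, their ordering, and the rectangle-based regions are all assumptions for illustration; real masks would come from a segmentation tool and this may not match the model's actual input format.

```python
import numpy as np

# Assumed class ids for the four mask values described above.
BACKGROUND, OBJECT, OVERLAP, AFFECTED = 0, 1, 2, 3


def make_frame_mask(height, width):
    """Build a toy four-class mask for a single frame using rectangles."""
    mask = np.full((height, width), BACKGROUND, dtype=np.uint8)
    mask[180:260, 80:240] = AFFECTED   # region the object influences, e.g. a shadow
    mask[100:200, 100:200] = OBJECT    # the object to be deleted
    mask[180:200, 100:200] = OVERLAP   # where the object occludes another element
    return mask


# One mask per frame for a 24-frame clip; in practice each frame's mask
# would track the moving object rather than repeat the same rectangles.
masks = np.stack([make_frame_mask(480, 640) for _ in range(24)])
print(masks.shape)  # (24, 480, 640)
```

Producing this per frame by hand is exactly the labor the comment is pointing at, which is why an automatic masking front-end (see the SAM suggestion elsewhere in the thread) would matter so much.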
PrysmX@reddit
Someone else can take the next step and create an auto mask. Maybe they open sourced it so someone would do that for them haha.
RegisteredJustToSay@reddit
Not a bad theory. Definitely a missing part of the workflow at this moment!
NoAim_Movement@reddit
Whats the vram req
unkz0r@reddit
Neat
Grouchy-Line-4045@reddit
Wonder how long it would take to remove Jar Jar Binks from the 142 minute Attack of the Clones.
Sioluishere@reddit
:(
Lanceward@reddit
Luv me 64gb m4 max mac studio
toptier4093@reddit
Thing goes hard! Just wish I had opted for 128gb but can't complain.
Hood-Boy@reddit
AMD Strix Halo FTW. Who needs Speed anyway
EveningIncrease7579@reddit
Waiting for quantizations and KJ nodes to support it at low VRAM
pivotraze@reddit
I just want Netflix or others to use some kind of AI to lip sync when changing languages. If I am watching a natively English movie in German, I want it to fix the lip sync to match. Bonus points if they can make the subtitles actually match.
Sliouges@reddit
Netflix leading the way into efficient and thorough censorship. Imagine what could be done if they spent this money on ADDING objects from videos along with all interactions they induce on the scene.
Kurcide@reddit
bruh… it’s a green screen model for film making. You can’t be serious
Sliouges@reddit
It's the opposite.
Kurcide@reddit
Right, the model is meant to remove things the same way you would in film production when you have a green screen and/or actors or participants that need to be cut out
Like when someone in a green body suit is playing the role of a CGI character that hasn’t been edited in yet
Sliouges@reddit
If you remove the objects from videos along with all the interactions they induce on the scene, what's the point of having a dude in a green suit at all? Say I have a dude in a green suit in Harry Potter, moving chairs in the pub to simulate the chairs being moved by a magical force. If I use VOID to remove the green guy AND the chairs are never moved... what's the point of that?
Kurcide@reddit
That’s exactly what you would want if someone is in a green suit or if a camera car was following the scene subject and you don’t want artifacts in the film that need to be cleaned up in post… They are only there for the actor to interact with as a representation of what will be in the final film or to get an additional camera angle.
It’s ok you don’t know anything about film making but it’s asinine to think Netflix would publicly release this with the intent of “censorship” as an open source model when it has a very clear and useful purpose in film production.
Sliouges@reddit
my phone number area code is 310...
Django_McFly@reddit
they were the person calling for a ban on all cars as soon as the first traffic accident ever took place.
mailslot@reddit
Can you imagine the ad placement opportunities? In Star Wars, every alien at the bar could be drinking Red Bull.
Sliouges@reddit
Netflix marketing team furiously taking notes...
International-Try467@reddit
That's cool but where's Steel Ball Run Netflix?
BakaPotatoLord@reddit
What's the deal with this picture? I see it everywhere on Netflix insta comment section
International-Try467@reddit
They released Steel Ball Run, praised as one of the GOATs of manga ever written, but only one episode. With no fucking release date on the next episode or if it'll be in batches or a new episode is going to come out every single year
Frosty-Cup-8916@reddit
Not even available in Japan or a sail boat?
BakaPotatoLord@reddit
That is quite strange
International-Try467@reddit
You can even say it's... Bizarre.
Neun36@reddit
Interesting, it’s base is CogVideoX
Budget-Toe-5743@reddit
What could Netflix possibly need a model like this for? And what did they train it with? hahaha
TuxRuffian@reddit
It looks like they may have used CogKit to build it on top of CogVideoX (Zhipu AI's video generation model). This is how open-source software is supposed to work!
Coompa@reddit
can I use this to remove tattoos from my favorite pornstars??
Background-Ad-5398@reddit
if it gets small enough can be great for ai videos to remove the weird people that show up in otherwise good output
TurnUpThe4D3D3D3@reddit
Very cool, but V2V models are insanely computationally expensive. Maybe it’s cheaper than a VFX artist though, who knows. Very cool tech regardless.
BrianScottGregory@reddit
Those GPU Requirements. 40GB VRAM. I won't be using this any time soon with my paltry 6GB.
gurkburk76@reddit
Too bad this wasn't dropped on April Fools', would have been fun.
tiredgeek@reddit
As someone with kids, I could see this as a pipeline to create a "clean" version of content. Or maybe I'm the only one who has ever meticulously edited out a gratuitous scene.
ArguablyMe@reddit
You are not. We edit for ourselves too, not just for children who may be watching.
relmny@reddit
can it be run fully local? or does it require Gemini?
THEKILLFUS@reddit
Taduuumm
neuralnomad@reddit
OK PornHub, your turn…
Bolt_995@reddit
Wow, Netflix jumping into the fray.
Soft_Match5737@reddit
The interaction-aware part is what makes this actually interesting rather than just another inpainting model. Most video object removal just fills the pixels where the object was — VOID is modeling the causal chain of what that object was doing to the rest of the scene. Remove a ball bouncing off a table and the table stops vibrating. That is a fundamentally different problem than texture synthesis. It means the model has some internal representation of physical causality in the scene, not just visual appearance. Curious how it handles ambiguous cases where an object has both visible and implied interactions — like removing a person who was blocking light from reaching another surface.
the_bollo@reddit
Using this to remove pesky watermarks that jump around on videos would be interesting.
CaptainAnonymous92@reddit
So what happens if you use this to remove a main character from a live action show/movie? Do the other characters that interact with said removed character still have dialogue with them or do actions they do with them even with the character not being there anymore? Lol.
jinnyjuice@reddit
What engine (vLLM but for video) would you need to run this for Nvidia?
Live-Crab3086@reddit
just the thing for winston smiths to remove unpersons from youtube videos at the ministry of truth
seamonn@reddit
GGUF?
Nbdyhere@reddit
Holy shit 😂
This means this skit could actually come true!
https://youtu.be/68Z2ngl719Y?si=NsluLXNRZyvYiMii
ElectricalTraining54@reddit
Woah that’s actually super cool
VolandBerlioz@reddit
"Correction is in play"
AcidicAttorney@reddit
👏 Elite reference. Enjoying S3?
VolandBerlioz@reddit
Not exactly to be frank. I find it much more Hollywoody
Enthu-Cutlet-1337@reddit
Nice, but video inpainting still eats VRAM fast; 24GB barely covers 1080p with sane batch sizes.
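A rough back-of-envelope shows why video-to-video models chew through VRAM: token count grows with frames times spatial patches, and activations scale with it per layer. Every number below (patch size, hidden dim, fp16) is an assumed illustration, not VOID's actual footprint.

```python
# Back-of-envelope activation memory for a video transformer layer.
# All parameters are illustrative assumptions, not measured values.
def attention_activation_gb(frames, height, width, patch=16, dim=3072, bytes_per=2):
    """Approximate one layer's activation memory in GB (fp16 by default).

    Ignores attention score matrices, KV caches, and optimizer state,
    so the real footprint is considerably larger.
    """
    tokens = frames * (height // patch) * (width // patch)  # spatio-temporal tokens
    return tokens * dim * bytes_per / 1e9


# 48 frames of 1080p at fp16 with the assumed dims:
print(f"{attention_activation_gb(48, 1080, 1920):.2f} GB per layer")
```

Multiply by dozens of layers (plus weights and attention buffers) and it is easy to see how even 24 GB gets tight, which is consistent with the 40 GB requirement mentioned elsewhere in the thread.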
Candid_Koala_3602@reddit
They’ve been using similar tech to do English dubbing and mouth matching if anyone has noticed weird shit lately
jadhavsaurabh@reddit
That's so amazing man
disgruntledempanada@reddit
Requires a GPU with 40GB of vram yet puts out results that look like they were rendered on a system with 4GB vram.
FusionBetween@reddit
So this can wipe a thing from the timeline
nazgut@reddit
so you need to make the mask yourself? Why not use SAM 3?
TanguayX@reddit
Wow! This is really amazing. And how cool of them to share it.
RetiredApostle@reddit
Ben Affleck filling the VOID.
marlinspike@reddit
Very impressive. This will make film making even easier and more cost-effective, even for amateurs. Nice!
dupekela@reddit
Attaboy.