Intel shows Texture Set Neural Compression, claims up to 18x smaller texture sets
Posted by KolkataK@reddit | hardware | View on Reddit | 51 comments
AnechoidalChamber@reddit
Remember folks, this only applies to textures; there's a lot more going on in VRAM than textures nowadays. This won't save 8GB GPUs.
EdliA@reddit
That makes no sense. Just because it's not a solution to everything doesn't mean it doesn't matter. Textures are still the largest consumer of VRAM, taking up 60-70%.
AnechoidalChamber@reddit
Where did I say it doesn't matter?
I said it wouldn't save 8GB GPUs, not that it doesn't matter.
It most certainly will, but it won't save 8GB GPUs.
EdliA@reddit
Sure I guess but if this works the way I hope it does it can only help. It might not save the 8GB but it might save the 12 or 16GB in the future.
TBH "save" is still a weird word to use even for those. There is no final solution to anything; there is no limit to how much a virtual world can expand. Even if this tech were to halve VRAM usage, games would just throw twice as many textures at it for more objects and higher resolutions. No matter the tricks and hardware, at ultra settings games will always try to go for the absolute maximum usage.
hepcecob@reddit
Where is this info from? From a basic Google search, most VRAM is used specifically for textures, and the higher the resolution the more you need. Assuming this technology can work on older GPUs, this will literally save 8GB GPUs.
porcinechoirmaster@reddit
Well, you have your render buffers, your RT acceleration structures, vertex data... it adds up pretty quickly.
A good rule of thumb is that you can spend up to 60-75% of your VRAM on textures. The rest of it needs to be kept in reserve for everything else the GPU is doing, and this goes up the more fancy things (RT, DLSS, etc.) you're trying to do.
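To put that rule of thumb into numbers, here's a quick back-of-envelope sketch (the 65% midpoint and the function name are just illustrative, not from any vendor docs):

```python
def texture_budget_mb(total_vram_gb, texture_fraction=0.65):
    """Rough VRAM split: textures get ~60-75% of the total; the rest is
    reserved for render targets, RT acceleration structures, vertex data,
    upscaler history buffers, and so on."""
    total_mb = total_vram_gb * 1024
    textures = total_mb * texture_fraction
    everything_else = total_mb - textures
    return round(textures), round(everything_else)

# An 8GB card at the 65% midpoint:
print(texture_budget_mb(8))  # -> (5325, 2867)
```

So even on an 8GB card, only around 5GB is realistically available for textures before everything else starts getting squeezed.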
capybooya@reddit
Yep, I'm happy to see a migration to neural textures as long as they can keep them faithful to the artistic intent. But even if that reduces texture VRAM footprint by 90%, I can not envision that we will need less VRAM. Both because for the next 5+ years you'll need the VRAM for current games and games in development, and because there is a major shift going on toward more ML/AI graphics integration where you depend on RT/DLSS and probably larger models loaded into VRAM.
AnechoidalChamber@reddit
For that to be true, said GPUs need enough ML power to decompress said textures without impacting performance.
For example, will it work on 3070s without crippling performance?
accountforfurrystuf@reddit
It at least keeps people's old hardware from going obsolete on games like GTA 6, hopefully
dorting@reddit
It's something for the future, not GTA VI, which is already almost ready for release
batter159@reddit
PC release is far enough in the future for this to be included
AnechoidalChamber@reddit
I wouldn't bet on that...
LastChancellor@reddit
If it at least saves 2GB it'd be good for most people (who have 8GB VRAM cards) tbh
IIRC the notorious VRAM-hogging games like Indiana Jones/MH Wilds are eating like 10GB
SignalButterscotch73@reddit
So all 3 manufacturers now have new texture compression in the works. From my understanding, all 3 require a new file format... will it be a shared format, or will games ship 3 copies of the same textures in different formats for the 3 different compression techniques?
gb_14@reddit
Pretty sure the industry will lean towards NVIDIA’s implementation and the other two options will be forgotten.
nittanyofthings@reddit
Hopefully the major game engines will implement their own and just look at the vendor stuff as tech demos. NTC is implementable in shader model 6.8.
angry_RL_player@reddit
Nah, AMD owns the console market, so this time around developers will work with AMD hardware in mind.
Funny to see the Pandora's box the green team opened; this will not go how they planned.
Calm-Zombie2678@reddit
Only if nvidia's is easily used on consoles or is at least easily converted to amds equivalent
SignalButterscotch73@reddit
Hopefully they standardise based on which one is the best rather than any kind of brand loyalty.
The most common current texture compression technique is based on S3's technology, and S3 is basically dead and gone from graphics and was never a leading player.
Glad-Audience9131@reddit
pretty sure it will be only one file, with converters as needed
SignalButterscotch73@reddit
Not gonna lie, I'm already imagining COD ending up over a TB with 4 copies of every texture (Nvidia, AMD, Intel and BC7)
StickiStickman@reddit
The neural version of the texture is much smaller though. So even if you have a bunch of them, it'd still be smaller than just the normal BCn.
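Rough numbers to illustrate, assuming BC7's fixed 1 byte per texel and taking the "up to 18x" ratio from the headline at face value:

```python
def bcn_size_mib(width, height, bytes_per_texel=1.0):
    """BC7 stores 16 bytes per 4x4 block, i.e. 1 byte per texel
    (BC1 would be 0.5). A full mip chain adds roughly a third on top."""
    base = width * height * bytes_per_texel
    with_mips = base * 4 / 3
    return with_mips / (1024 ** 2)

bc7 = bcn_size_mib(4096, 4096)   # ~21.3 MiB for one 4K texture with mips
neural = bc7 / 18                # headline "up to 18x" compression claim
# Even three vendor-specific neural copies (~3.6 MiB total) would be far
# smaller than the single BC7 copy.
print(f"{bc7:.1f} MiB BC7 vs {neural:.1f} MiB neural")
```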
deathentry@reddit
Nothing stopping them from letting you download only the versions supported by your GPU
mulletarian@reddit
Time is money friend.
letsgoiowa@reddit
That'd be nice, but they don't usually bother with stuff like that at the moment, even when it's performance-critical. See how many games use older DLSS or FSR versions, for example, especially before DLL upgrades were easy and possible for MP games.
Glad-Audience9131@reddit
dont give them ideas lol
jsheard@reddit
They're all different formats, but they're not necessarily vendor-locked. Nvidia's NTC is built on the generic DirectX/Vulkan cooperative vector extensions, so it can run on anything (whether it runs well on hardware with weaker ML cores is another question, but it will run).
ShogoXT@reddit
Just like with the original S3 Metal texture compression technique, it will ultimately depend on what has universal usage and what gets adopted into DirectX.
N2-Ainz@reddit
So Intel and NVIDIA both have a solution to texture compression
Where is AMD?
Crazy that Intel is literally more advanced than AMD now
QuietSoup337@reddit
AMD has its own, called "neural texture block compression".
jsheard@reddit
AMD's version is much less interesting because it can't be sampled directly; it's designed to decode into regular BC textures first. So it saves space on disk but doesn't save any VRAM.
Inprobamur@reddit
So the same thing that Nvidia is promising for older gen cards?
jsheard@reddit
Kind of, but AMD's format is more tailored for that use-case because it decodes directly into BC textures. Nvidia's isn't designed to do that, so the texture has to be fully decompressed and then recompressed to BC at runtime in the "old GPU" mode.
StickiStickman@reddit
The question is whether that's even that big of a performance penalty though.
jsheard@reddit
The quality of BC varies widely depending on how much effort you put into compressing it, and it takes a ton of compute to max it out, so I assume NTC's fallback encoder will sacrifice quality in the name of speed. That seems like the main downside of NV's approach when running on older GPUs, which can't handle the infer-on-sample mode.
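A toy model of the two decode paths being described (all helpers and ratios here are made up for illustration, not NVIDIA's actual API; sizes assume a 1024x1024 RGBA8 texture):

```python
TEXELS = 1024 * 1024
RGBA8_BYTES = TEXELS * 4           # fully decompressed size
NEURAL_BYTES = RGBA8_BYTES // 48   # assumed neural blob size (illustrative)

def resident_vram_bytes(supports_cooperative_vectors: bool) -> int:
    """What stays resident in VRAM after loading one texture."""
    if supports_cooperative_vectors:
        # Infer-on-sample path: the small neural blob stays in VRAM and a
        # tiny decoder network runs per texture sample.
        return NEURAL_BYTES
    # Old-GPU fallback: decode fully, then fast-encode to BC7 (16 bytes
    # per 4x4 block = 1 byte per texel) so the fixed-function samplers can
    # read it. The VRAM savings are lost; only disk/download size wins.
    return TEXELS * 1

print(resident_vram_bytes(True), resident_vram_bytes(False))
```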
RedIndianRobin@reddit
I guess they're still in the "Raster and VRAM are enough" train.
Such-Control-6659@reddit
They could if they wanted to; they're focusing on the AI market, as that's where the money is right now. But if they betray all their loyal customers in the PC market, good luck selling new CPUs/GPUs in a few years. People remember, and will just choose Nvidia/Intel next cycle.
StickiStickman@reddit
lol
Inside-Ad2984@reddit
Or they're still in the "GPU is a hardware piece" train. By the way, they're still the best in that area.
SHAYAN4T@reddit
AMD announced this neural compression even before Nvidia. Last month, they also announced another feature that affects VRAM usage.
Henrarzz@reddit
When did AMD and Nvidia announce neural texture compression?
kuddlesworth9419@reddit
https://gpuopen.com/download/202407_MAM-MANER_NeuralTextureBlockCompression.pdf
Henrarzz@reddit
And Nvidia talked about it during SIGGRAPH 2023
https://research.nvidia.com/labs/rtr/neural_texture_compression/
Beanstiller@reddit
AMD has NTBC. do your research
Affectionate-Memory4@reddit
IIRC their work with Sony included some compression work as well. Given they've also been talking about neural rendering for the future PS6, I think we'll see something on UDNA as well.
kimi_rules@reddit
Ironically, Intel had MFG before AMD. AMD only just caught up to XeSS upscaling with FSR 4, after many years without AI upscaling during which AMD users had to resort to using XeSS for better image quality in games rather than FSR 3.
Intel GPUs are ageing better than AMD fine wine rn.
binosin@reddit
Seems both Intel and NVIDIA are working hard to make this tech viable; lots of progress the past few years. More competition is good, but I have to wonder what Intel's plans for wider support are. At least with NTC you get cooperative vectors for faster execution; this falls back to a shader path, which is great for compatibility but might leave performance on the table on competing hardware. It does seem like a free-for-all with no standardization, other than the usual engines maybe integrating it. We do need a better solution that won't end up with vendor-specific (or vendor-advantaged, I guess) compressed files.
Out of curiosity I was looking at what AMD was doing with Neural Texture Block Compression: it uses a NN to encode a bundle of BCn textures. Some space savings in the tens of percent, but really only intended to help file sizes, with decompression at "modest overhead" back to BCn before using the texture. Could be a good first step, but mostly eclipsed by Intel and NVIDIA here.
Igor369@reddit
Bragging about whose is smaller...
Sopel97@reddit
Looks like a similar approach to NVIDIA's NTC, but the details are extremely important here. I'd love a more detailed writeup so that we can compare these technologies. It's not like there's anything secret here.
got-trunks@reddit
Yeah, Intel has been talking about this for a while, but the scene takes notice where it likes.