Intel XeSS 2.1 released, brings support for other vendors' GPUs
Posted by reps_up@reddit | hardware | View on Reddit | 69 comments
darkbbr@reddit
XeSS-FG (Intel Frame Generation) and XeLL (Low Latency) for RTX 3000 and RX 6000
Really nice. But is there a technical reason that XeLL doesn't work standalone on non-Arc GPUs? Strange that it needs XeSS-FG enabled on those for it to work.
69enjoyerfrfr@reddit
So this won't work on RTX 2000 GPUs?
darkbbr@reddit
According to the site, it's just a recommendation, not a requirement. But they probably have a good reason for that (DP4a or tensor performance on RTX 2000 not sufficient, maybe?)
Take a look at this thread
https://github.com/optiscaler/OptiScaler/issues/667
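For context, DP4a is a GPU instruction that computes a four-element dot product of packed 8-bit integers and accumulates the result into a 32-bit value in a single step; it's the fallback path XeSS uses on GPUs without Intel's XMX matrix units. A minimal sketch of the arithmetic (a hypothetical `dp4a` helper for illustration, not Intel's actual code path):

```python
# Sketch of what a GPU's DP4A instruction computes: a dot product of four
# packed signed 8-bit integers, accumulated into a 32-bit integer.
# Hypothetical helper for illustration, not Intel's implementation.
def dp4a(a, b, acc):
    assert len(a) == len(b) == 4
    assert all(-128 <= v <= 127 for v in a + b)
    return acc + sum(x * y for x, y in zip(a, b))

# One instruction performs 4 multiply-adds, which is why int8 inference
# on DP4a-capable GPUs is much faster than plain fp32 math, yet still
# slower than dedicated matrix hardware like XMX or tensor cores.
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 100))  # 170
```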
steve09089@reddit
They’ll never bring FG to 30 series because then they can’t sell you new GPUs
iron_coffin@reddit
30 series is barely strong enough for dlss4 upscaling
theholylancer@reddit
The whole point of all this tech is to stretch the GPU a bit farther.
And people have been using Lossless Scaling for a while now with the 30 series. It looks like ass and feels like ass at times, but if that's the HW you've got, anything helps. The performance impact there isn't great TBH, but it's enough to, say, make a 45-50 fps game look better (not so much feel better).
PossibleHoney8615@reddit
You're either using Lossless Scaling at 30 fps or you have the wrong settings, because LSFG over 60 fps with correct settings is great.
iron_coffin@reddit
Yeah, but Nvidia's branding is high quality / "it just works." I don't see them bothering with something that will give fewer frames than native on a 2060. There's already Lossless Scaling or OptiScaler as a mixed-bag solution.
SummonerYizus@reddit
DLSS 4 is overrated. I wouldn't use over 2x. They should work on making sure that you lose less raw FPS when frame generation is on. Frame generation makes gaming smoother at the cost of latency.
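The smoothness-vs-latency trade-off can be put into rough numbers. A 2x interpolating frame generator has to wait for the next real frame before it can show the in-between frame, so input latency grows by roughly one base frame time. A back-of-envelope sketch with illustrative assumptions (ignoring the generation cost itself):

```python
# Back-of-envelope: 2x frame interpolation doubles displayed FPS but adds
# roughly one base frame time of latency, since the interpolated frame can
# only be built once the *next* real frame exists. Illustrative only.
def fg_tradeoff(base_fps, factor=2):
    base_frame_ms = 1000.0 / base_fps
    displayed_fps = base_fps * factor
    added_latency_ms = base_frame_ms  # held back waiting for the next real frame
    return displayed_fps, added_latency_ms

fps, lat = fg_tradeoff(60)
print(fps, round(lat, 1))  # 120 16.7
```

This is why frame generation works best on top of an already-decent base framerate: at 30 fps base, the added latency is ~33 ms, which is very noticeable.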
dparks1234@reddit
That’s not true. Even on a Turing card like the 2080 Ti the DLSS 4 Transformer model only has an 8% performance hit vs 5% on Ada.
The misinformation comes from the Ray reconstruction benchmarks where the transformer model has a drastic 35% hit on older hardware. For regular DLSS upscaling the transformer performance impact is minimal.
iron_coffin@reddit
It depends on the game, framerate and resolution. I was getting near native TAA framerates in Forza Horizon 5 at 4k. It's worse at higher framerates also, which makes sense.
60% more overhead and a 35% hit on another feature doesn't really help your case that fg on pre-ada would work well.
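The "60% more overhead" figure follows directly from the frame-time hits quoted above (8% on Turing vs 5% on Ada); quick arithmetic:

```python
# Relative extra overhead of the DLSS 4 transformer model on Turing vs Ada,
# using the 8% / 5% frame-time hits quoted in this thread.
turing_hit = 0.08
ada_hit = 0.05
extra = turing_hit / ada_hit - 1  # 0.08 / 0.05 = 1.6, i.e. 60% more
print(f"{extra:.0%}")  # 60%
```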
amwes549@reddit
You mean MFG? Because Ampere has optical flow hardware that is presumably meant for frame generation.
dparks1234@reddit
The new frame-gen model doesn't even use the optical flow accelerator. They came up with a faster traditional algorithm that enabled 4x frame generation on the 5000 series.
Warskull@reddit
That also relies on the improved tensor cores in Blackwell. They would probably have to create a different framegen that behaves more like AMD's or Lossless Scaling's framegen to get it working on the 30 series.
amwes549@reddit
Didn't know that, thanks for informing me!
Jaznavav@reddit
Ada's OFA is significantly faster.
F9-0021@reddit
And 2x FG no longer uses it, so there's no technical reason it can't run on Ampere.
iron_coffin@reddit
Except fp4 support
Strazdas1@reddit
Nvidia cannot retroactively change the hardware in the 3000 series.
darkbbr@reddit
DLSS 4 frame generation doesn't need the hardware optical flow anymore
https://www.eurogamer.net/digitalfoundry-2025-bryan-catanzaro-interview-dlss-4-and-machine-learning
Vb_33@reddit
Yes but it does need strong tensor cores (Blackwell) or optical flow on Ada. Ampere has neither
TechExpert2910@reddit
Ampere has nearly 2x the tensor core perf of Turing. I'm sure that high-end Ampere has more tensor core grunt than a laptop RTX 4050 or 5050, which would support frame-gen.
bubblesort33@reddit
They lied the whole time. It was never really needed like people thought.
uzzi38@reddit
It doesn't use optical flow on Ada either. The new FG is tensor core only, and runs lighter than the old FG model too. Almost half the frametime cost of the old model on a 4090.
I don't see a good reason why it shouldn't be possible to get working on older RTX GPUs aside from market segmentation.
Helpdesk_Guy@reddit
So … Does that imply that ARC Graphics may be safe from being axed, at least for now?
Scion95@reddit
I mean, Intel have had graphics since 2010.
Arguably, debatably 1998, when it was built into the Northbridge.
And they did keep improving and updating their graphics architectures over time, for compatibility with operating system features and new API standards if nothing else. Video codecs and the like. New versions of DirectX, OpenGL, OpenCL, Vulkan. Accelerating things like Quick Sync.
Like, even before the first Xe graphics, before Tiger Lake, they had GPUs that went up to like 72 EUs.
As long as they sell CPUs for laptops, it doesn't make sense for them to kill off their iGPUs. Not entirely.
And given that raytracing and some of the denoising and AI upscaling are considered a requirement for the DirectX 12 and Vulkan standards, I think that even if their graphics software and hardware go back to being the bare minimum, the bare minimum would now include those things.
I can't guarantee that they're going to keep releasing discrete graphics, but I think they're more likely to go under or stop selling CPUs entirely before they stop making integrated graphics. And with the AI and GPGPU markets, and the way their graphics tiles work, it does actually make some sense to at least keep trying to scale up their graphics solution to sell as discrete cards.
If nothing else, the Xe3 and Xe4 architectures (or, parts of them in 4's case, the media and codec parts if not the shaders themselves) have been confirmed to be part of the iGPUs for upcoming CPUs, so I think those architectures are pretty safe.
Helpdesk_Guy@reddit
Well, to put it mildly, Intel has made a LOT of rather … nonsensical moves, especially lately.
That's why I'm asking, and why a lot of people are still unsure whether Intel won't still end up knifing their graphics cards.
SherbertExisting3509@reddit
They shouldn't cut it at all.
Please, Intel, like Linus Tech Tips said, let the hardware guys cook. You have a very popular, in-demand product here.
As long as you release SKUs with die areas that make sense, it will be very profitable, considering the unexpectedly high demand for Arc Battlemage.
Helpdesk_Guy@reddit
I can understand Tan's take too – A product has to be profitable, in order to pay salaries.
Alive_Worth_2032@reddit
ARC isn't going anywhere; what you should ask is whether the discrete cards are going somewhere.
Software improvements and launches don't really tell us much about that, because the software side will live on to serve the iGPU part of ARC no matter what.
Helpdesk_Guy@reddit
I don't know … The constant silence and non-news regarding it still put it on shaky ground.
Yes, this release here is more non-news about ARC itself: it neither prevents an axing nor signals a future for dGPUs.
6950@reddit
LNL/PTL/NVL/ARL/MTL have Arc Graphics. Do you think they would stop selling these products?
Helpdesk_Guy@reddit
I expressly wrote dGPUs, which stands for dedicated Graphics Processing Unit and is understood as add-in graphics cards. No one cares about their iGPUs; those have nothing to do with dGPUs!
Scion95@reddit
The first thing you wrote was ARC, which refers to both the iGPUs and the dGPUs, and the iGPUs and dGPUs use the same hardware architectures and a lot of the same software, the main difference is scale. They have a lot to do with each other.
Helpdesk_Guy@reddit
Please stop the nonsense. Basically everyone associates Intel's ARC Graphics with *dedicated* discrete GPUs!
You may go on harping about principles, names, and whatever, yet basically everyone knows that the term Intel ARC is 99% referring to discrete GPUs alone (as in add-in cards). Period.
No, it does not. Intel ARC exclusively refers to *dedicated* graphics, either as discrete graphics in laptops or as dedicated cards in desktops.
Intel's old iGPUs are still called Intel Graphics (Technology) or just Intel Iris Graphics, the age-old architecture from the 2000s. They're still sold as low-power iGPUs in different segments, even IF these are more and more being phased out and replaced with GPU tiles based on the Xe graphics architecture.
The GPU tiles newly integrated with LNL/PTL/NVL/ARL/MTL are based solely upon the Intel Xe GPU architecture. As dedicated graphics cards, these are sold under the Intel ARC Graphics moniker (mid-range, performance, high-end), whereas GPU tiles as discrete laptop graphics (entry-level) run under the Iris Xe Graphics naming scheme.
Nevertheless, just because Intel recently broadened their ARC moniker (a term formerly referring exclusively to dedicated GPUs!) to also cover iGPUs (with Xe Graphics units in laptops) a while ago, that doesn't change the fact that to most people, ARC is first and foremost dGPUs, as in dedicated graphics cards.
Heck, up until recently, despite laptop graphics as well as dedicated graphics cards being based on the very same Xe Graphics architecture, the term ARC Graphics was still used exclusively for DEDICATED graphics cards, while Intel marketed their laptop stuff as just Intel Xe Graphics and didn't use anything ARC at all.
Also, even their own homepage refers to ARC as dedicated discrete GPUs, and most understand discrete graphics as add-in cards, the typical graphics cards the likes of AMD or Nvidia sell.
There's no other point on their homepage referring to anything ARC Graphics. It's the first hit on Google.
And consistently enough, Intel itself still refers to any dedicated graphics based on their Xe architecture as just Intel Xe Graphics-based.
So with all due respect, please spare us this dumb hair-splitting already. Intel ARC means dedicated discrete graphics.
That's why I asked my question in the first place (whether the software release may assure a continuation of their dedicated graphics cards) to begin with … So ARC means discrete/dedicated and has nothing to do with iGPUs.
… and it's not my fault when people are too daft to understand the difference between iGPUs, discrete Xe Graphics tiles in laptops, and graphics cards. They have nothing to do with each other.
Vushivushi@reddit
I actually wonder how viable a strategy it'd be to completely cancel GPU development and instead license a chiplet/tile from Nvidia to be manufactured at Intel Foundry.
Low stakes for Nvidia, high stakes for Intel. Win-win since Intel gets a "leadership" product and external customer and Nvidia finally leads the entire PC GPU market, not just dGPUs. Intel also gets to shift resources back to CPUs.
I mean back in the day Intel didn't have integrated graphics, Nvidia made chipsets for them...
Scion95@reddit
I mean, I'm not sure NVIDIA would go for it, at least, not for a price higher than Intel was willing to pay.
If nothing else, Intel's iGPUs require a level of Linux support that's higher than the support NVIDIA has ever given any of their dGPUs.
Granted, for their SoCs with CPUs and GPUs, NVIDIA actually has done some open-source driver work, but that brings up the fact that NVIDIA is designing their own SoCs with their own ARM cores, meaning they're actually in competition with Intel, on both the client and the server side.
Maybe before the Windows on ARM push, or before some of the attempts at ARM servers and AI clusters, it might have made sense, but at this stage, partnering with NVIDIA makes the least amount of sense.
Vushivushi@reddit
Nvidia continues to sell GPUs for x86 systems in the datacenter despite having Grace.
They care more about expansion than cornering the market, though they do it if it doesn't decrease their TAM, which is pretty much when they have a near monopoly.
I don't believe Nvidia thinks about Intel seriously at all. They are less of a competitor than AMD or Broadcom at this point. If Intel exits the GPU market, they are no longer a significant competitor as CPU spend is shrinking in comparison to compute spend.
Nvidia is starting in ARM PCs from near zero. A partnership with Intel immediately gives them majority market share and accelerates their ambitions with GPU-accelerated software as the majority of the market now has access to CUDA. This is an expansion opportunity.
And because it's a licensing deal where Intel is building the chip, it's basically pure profit and accretive to Nvidia's margins.
It is also way more beneficial for Nvidia to foster an alternative foundry than it is risky for them to foster what is just one of many chip designers they are crushing in the market.
Nvidia has to pay TSMC tens of billions a year to manufacture their chips. The opportunity to save billions and diversify their supply chain is very substantial to Nvidia.
If there's a difficult decision to be made, it's on Intel's side.
It's basically going all-or-nothing on their process technology. It's the real IDM 2.0, but if it works they bring back billions worth of wafer starts back in house.
Dealing with ISVs and the future of software support, that's manageable.
This move could eliminate foundry losses and return Intel to net profit, even if the product margins shrink from paying Nvidia.
Not even accounting for savings from not paying GPU engineers or the market opportunity from having more resources dedicated to CPUs.
Remarkable_Fly_4276@reddit
Not really, XeSS DP4a has been available for other GPUs since the technology launched.
steve09089@reddit
Not for Xe Frame Generation
empty_branch437@reddit
It was going to happen anyway, because that's what happened with XeSS.
steve09089@reddit
I mean, it wasn’t a guarantee, since XeSS came out with DP4A, while XeFG did not, so I was under the assumption that XeFG couldn’t be done due to hardware limitations
imaginary_num6er@reddit
I was surprised when Intel just laid off 5 "GPU Software Development Engineers" at their Oregon plant:
Source: https://katu.com/news/local/more-than-500-intel-corporation-employees-laid-off-in-oregon-hillsboro-aloha-jobs-money-economy-business-local-portland
https://katu.com/resources/pdf/32e15aae-0951-486b-859b-b2b671e6d6c3-WARN9293OregonJobListing070725.pdf
You would imagine that GPU software development would be the cream of the crop at any AI hardware company, but not at Intel. A lot of AI jobs were cut too.
PastaPandaSimon@reddit
They've got a large driver team, and 5 jobs is a drop in the bucket considering the layoffs across the board. They'll need the expense of having a GPU software team regardless.
Exist50@reddit
Their driver team has had many prior rounds of layoffs. It's a skeleton crew at this point.
PastaPandaSimon@reddit
Across their software teams, Intel still has at least 15,000 freaking people. That's more than Nvidia. I'm not sure how many work on GPU drivers these days, but they had lots of great talent there, and it was certainly the opposite of a skeleton crew.
Exist50@reddit
Where are you getting that number from? I have a hard time believing it's anywhere close to that big after the latest layoffs.
Well that is indeed the topic being discussed.
PastaPandaSimon@reddit
I was affiliated with Intel. They had about 25 thousand software engineering roles in 2020.
The discussion is about it being a skeleton crew because their driver team recently fired 5 people 🙃
Exist50@reddit
Well the Intel of today is a much smaller company. Tan is cutting headcount almost in half vs Intel's peak.
As I said, that's far from the total. They've had like 8 rounds of layoffs in the last 2 years, nevermind attrition.
PastaPandaSimon@reddit
Yes, and they could (and likely should) lay off thousands, likely without much impact on product quality. The level of bloat and process overhead was astronomical. Move the best talent to the most important projects instead of having them waste time on unproductive loose ends. I'm sorry for the people losing jobs, but they should have been led peacefully away from Intel in the first place.
Exist50@reddit
So first you claimed the team was plenty well staffed and very talented. Now you say the people laid off weren't doing anything. Pick a lane.
Intel has decided that does not include graphics. Those teams have been disproportionately impacted by layoffs. As I said, just a skeleton crew now.
You claim half of Intel, including foundry, is just software devs? Complete bullshit, and shows you're just transparently making shit up.
SherbertExisting3509@reddit
Intel still needs to develop their graphics IP and software stack since they want to compete in handhelds.
I think the most reasonable explanation is that up to 20% of the team got laid off with Lip Bu's hatchet job.
It wouldn't be surprising if layoffs in that team were light, though, considering they executed well with Xe2, and the Xe3 graphics IP was already finished 8 months ago.
The hardware team is likely in the middle of developing Xe4, which looks like it will compete with RDNA5/UDNA and Rubin.
UDNA seems like a MASSIVE uarch rework considering how long it's taking.
The driver team is still releasing weekly updates, so that's a good sign, although I don't know what's in their future roadmap
PastaPandaSimon@reddit
The issue is that Intel didn't pick a lane. You had extremely experienced senior people with 10-20 years of experience hired to idle around.
I will stop engaging with you here as in the absence of any real-sounding arguments beyond "Intel bad", you resort to name-calling. It was clear to me that you had nothing productive to say on the topic once you said that laying off a single digit number of driver devs is a serious hit to Intel that we should read into as something that will somehow make their GPU division come crashing.
For posterity, Jensen said that about half of Nvidia are software dev roles. At Nvidia's current peak head-count, that'd be around 11-12 thousand.
Intel had around 25 thousand software devs in 2020. The most aggressive post-layoff estimate would still put them at a notably higher number than Nvidia.
Intel's total head-count is still nearly 4 times higher than Nvidia's.
reps_up@reddit (OP)
I don't think you know / realize just how many GPU software developers Intel has if you think 5 leaving is a big deal...
Exist50@reddit
Their teams are quite small by this point. 5 is actually a significant number, and that's only from one location.
hardware2win@reddit
This is a joke?
Nuck_Chorris_Stache@reddit
Not interested in frame generation. I'd much rather have Asynchronous Reprojection.
BySaka@reddit
I tried it with my RTX 3070. XeSS FG 2.1 worked without any upscaling.
I enabled it with DLSS, but even though it was enabled in the settings menu, it still gave me the same FPS as if FG wasn't enabled. This problem is similar to one I encountered before with the Nukem FG mod and other FG software. It seems like a minor bug and will be fixed soon, I guess.
Also, it feels smoother than FSR.
Pleyer757538@reddit
XeSS on an amd radeon rx 9060 xt
standartmountain---@reddit
Xess on xfx xboxSX
AndreVallestero@reddit
Just a reminder that Intel still hasn't open-sourced XeSS like they promised more than 5 years ago now.
bubblesort33@reddit
So I don't think I fully understand. Up to this point, XeSS was using the slower, lower-quality DP4a path on the RTX 3000 series and RX 6000 series for upscaling. Can it now leverage better hardware like tensor cores on competitors' cards? Or is this only adding frame generation support for competitors?
dparks1234@reddit
It’s just Framegen. Up until now Framegen was Intel exclusive
jasmansky@reddit
Something FSR4 doesn't do.
HatchetHand@reddit
Glad to hear it.👍
XeSS looks pretty and I have enjoyed using it.
Noble00_@reddit
Whaat? This is interesting. Hopefully this garners the same level of deep dives and analysis of upscaling and FG as DLSS and FSR get, along with the distinction between XMX and DP4a.
DuhPai@reddit
Yeah, as an RX 6000 owner, it will be interesting to see how it compares to FSR 3.1 in both upscaling and FG.
kuddlesworth9419@reddit
https://github.com/intel/xess/releases/tag/v2.1.0
steve09089@reddit
Pretty impressive stuff, can’t wait for the DLSS3 to XeSS FG mod to come out