Is it time to completely ditch frame rate as a primary metric, and replace it with latency in calculating gaming performance?
Posted by Automatic_Beyond2194@reddit | hardware | View on Reddit | 158 comments
We have hit a crossroads. Frame gen blew the problem of relying on FPS wide open. Then multi frame gen blew it to smithereens. And it isn't hard to imagine that in the near future, possibly with the RTX 6000 series, we will get "AI frame gen" that automatically fills in frames to match your monitor's refresh rate.
So, even today frame rate has become pretty useless, not only for measuring performance but also for telling how a game will feel to play.
I posit that latency should essentially replace frame rate completely as the new "universal" metric. It already does essentially everything that frame rate accomplishes. If you play CS:GO at 700 fps, that can be converted to a latency figure. If you play Skyrim at 60 fps, that can be converted to a latency figure. So latency can deal with all of the "pre frame gen" situations just as well as framerate could.
But what latency does better is give you a better snapshot of the actual performance of the GPU, as well as a better understanding of how the game will feel to play. Right now it might feel a little wonky because frame gen is still new. The only thing "latency" doesn't account for is the "smoothness" aspect that fps brings. As I said previously, it seems inevitable that this "smoothness" will be able to be maxed out on any monitor relatively soon… likely next gen. The limiting factor will soon not be smoothness, as everyone will easily be able to fill their monitor's refresh rate with AI-generated frames… whether you have a high end or low end GPU. The difference will be latency. And this makes things like Nvidia Reflex, as well as AMD's and Intel's similar technologies, very important, as latency is now the limiting factor in gaming.
Of course the "quality" of frames and upscaling will still be unaccounted for, and there is no real way to account for that quantitatively. But I do think simply switching from FPS to latency as the universal performance metric makes sense now, and next generation it will be unavoidable. Wondering if outlets like Hardware Unboxed, Gamers Nexus, and Digital Foundry will make the switch.
Regular_Tomorrow6192@reddit
AI frames aren't real frames. It's always degrading the image vs the real thing. FPS without frame generation will always be the most important metric.
Qesa@reddit
Imagine the conniption reddit would have had if nvidia suggested this when they came out with reflex.
Just compare FPS without frame gen. It's not that complex.
IMO frame gen is best viewed as a sort of temporal anti-aliasing. By which I don't mean TAA, but rather helping with the judder you get from the sample-and-hold nature of monitors. Particularly OLEDs with their extremely fast pixel response times.
zghr@reddit
You just kicked the problem down the road - soon you won't be able to turn off frame generation.
fray_bentos11@reddit
No. You could have low latency and a low framerate.
Automatic_Beyond2194@reddit (OP)
How?
Strazdas1@reddit
If game logic is decoupled from frame rate (which theoretically should be 100% of cases, but in practice is actually rare) you could have a low framerate while the game logic actually adapts to changes faster.
NeroClaudius199907@reddit
Not all games process latency the same way.
267aa37673a9fa659490@reddit
You can estimate quality by calculating the SSIM score of the AI-generated frame with reference to its rendered counterpart.
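(For reference, a minimal sketch of how that comparison could be scripted, assuming you can capture a generated frame and its engine-rendered counterpart as 8-bit image files; scikit-image's SSIM is used here and the file names are placeholders.)

```python
# Rough sketch of the SSIM comparison idea, assuming both frames were captured
# for the same point in time and saved as 8-bit images. File names are placeholders.
from skimage.io import imread
from skimage.metrics import structural_similarity

def frame_ssim(rendered_path: str, generated_path: str) -> float:
    rendered = imread(rendered_path)
    generated = imread(generated_path)
    # Compare across the color channels; 1.0 means the images are identical.
    return structural_similarity(rendered, generated, channel_axis=-1)

print(frame_ssim("frame_0042_rendered.png", "frame_0042_generated.png"))
```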
Strazdas1@reddit
Kinda. But SSIM score can miss a lot of things that will look visually unappealing.
TheCatOfWar@reddit
Sort of? But a lot of modern games use effects and rendering methods that are temporally noisy, relying on TAA or upscaling to smooth them out or denoise them. It's hard to use a 'raw' rasterised image as an objective truth in these games when it could itself be suffering from visual artefacts that upscaling methods would 'fix'.
f3n2x@reddit
The solution to this is obvious and has been done for many years: use a (downscaled) very high resolution frame as the ground truth. This is not a new concept.
VenditatioDelendaEst@reddit
If the game is designed intending TAA, you would also need to mock the clock so that animated effects would be drawn in slow-mo for the ground truth.
iDontSeedMyTorrents@reddit
Yeah, this would punish a better-than-native image the same as it would an actual regression in image quality.
Automatic_Beyond2194@reddit (OP)
But they shouldn’t be the same. That would mean if you simply displayed the same exact frame over and over instead of actually changing it, you would get a better score.
What you would need to do is render the game with the engine slowed down so you can render frames faster than you can in real life. Then compare those “real” frames to the AI generated frames. While possible in theory, it would require devs to add this capability to their engines, just for benchmarking, which seems very unlikely.
Strazdas1@reddit
No because the vast majority of people do not feel latency when playing, even in games with 140ms+ latency like RDR2.
nday76@reddit
1% low is more important for me.
I'd rather have bad latency than stuttery good latency.
ChickenMcNublet@reddit
Screen tearing for me. I could play Cyberpunk at over 70-80fps if I turned on frame gen, but it tears like a mfer, so I turn it off and play it capped to 40 (in 120hz on my VRR tv) where the card is barely managing to put that out before frame-gen.
Yearlaren@reddit
It depends on the game and on how much latency is "bad latency"
Zarmazarma@reddit
Most of the posters here probably don't realize that a lot of the games they play run at something like 40-50ms latency, even at 60fps...
Spyzilla@reddit
Cyberpunk and Indiana Jones both feel awful for me, I’m really curious what their latency is
They’re both so bad it’s hard to want to play them, especially Indiana Jones
Yearlaren@reddit
Modern gaming sounds pretty depressing. I'm happy playing old games like TF2 and modern indie games like Balatro, and emulating old console games.
Spyzilla@reddit
It’s not really, plenty of great modern games out there.
Zarmazarma@reddit
Including Balatro, humorously.
Yearlaren@reddit
Hopefully future games won't be as laggy as the Indiana Jones game
Cute-Pomegranate-966@reddit
Without reflex?
Cyberpunk used to have 90-120 ms latency for me at 50 fps without reflex lol.
Beawrtt@reddit
Yup, there's some really bad latency in some games but nobody cares because it was never benchmarked
Automatic_Beyond2194@reddit (OP)
Well yes. Just as we have average framerate, 1% low framerate, 0.1% low framerate, we would now do average latency, 1% low latency, 0.1% low latency. It is virtually identical to how FPS operated before frame gen. The only difference is the “smoothness” aspect of fps is removed… because there is no need to compare it anymore if everyone is generating AI frames to make it smooth across the board.
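(As a rough sketch of what those summary numbers could look like from a per-frame latency log; the "worst 1% / 0.1%" definition used here, averaging the slowest samples, is just one common convention, not a standard.)

```python
# Minimal sketch of summarizing per-frame latency samples (in ms) the same way
# reviewers summarize FPS. The "worst 1% / 0.1%" definition here (average of the
# worst N% of samples) is one common convention, not an established standard.
import numpy as np

def summarize_latency(samples_ms):
    samples = np.sort(np.asarray(samples_ms, dtype=float))
    def worst(pct):
        n = max(1, int(len(samples) * pct / 100))
        return samples[-n:].mean()  # mean of the slowest n samples
    return {
        "avg_ms": samples.mean(),
        "worst_1pct_ms": worst(1.0),
        "worst_0.1pct_ms": worst(0.1),
    }

# Example with made-up numbers: mostly ~25 ms with occasional 60 ms spikes.
demo = [25.0] * 990 + [60.0] * 10
print(summarize_latency(demo))
```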
Darrelc@reddit
Smoothness is subjective anyway. Apparently 45 FPS frame-genned to 120 is just as smooth as 120 native, according to some of the comments I've seen here.
Automatic_Beyond2194@reddit (OP)
It is. I'm using smoothness as a metric to describe framerate. How many frames you have determines how smooth it is. 120 frames with multi frame gen should be just as smooth as 120 native frames. The difference between the two is latency and graphical fidelity… not smoothness.
Zednot123@reddit
1% lows are also a flawed metric. What is more important to be measured is frame consistency, not lows.
Without context you don't know if that 1% number just represents an especially demanding camera pan or scene change during the benchmark run, or if it comes from frame time spikes happening every couple of seconds with a couple of high-latency frames.
The first case is often where the improved 0.1% and 1% lows come from when adding CPU performance, rather than from eliminating stutter. The bad stutters and frame inconsistencies are, the majority of the time, game/GPU related, unless you are running a very unbalanced system where the system/CPU just chokes on the load.
Darrelc@reddit
https://i.redd.it/0mscbbg4qk9e1.png
RAM bandwidth and latency matter, especially if you're CPU bound. They make almost no difference in a GPU-limited scenario though.
ydieb@reddit
You can do avg, 1% high and 0.1% high for latency as well.
PM_me_opossum_pics@reddit
Yeah, running a 5600G gave me terrible stutters in a lot of games because the cache was struggling with what the system could generate on average. 150 fps highs, or even a 150 fps average, make it worse when you get drops to 30 or even 50. After switching to a 5700X those problems were gone.
stainOnHumanity@reddit
It’s only the primary metric for noobs. Gamers have always cared about latency.
Zarmazarma@reddit
The fact that you seem to be confusing frame time and latency means you're probably not in a good position to posit anything...
Latency is the time it takes for you to see a reaction on screen from a given input. This is affected by much more than just the frame rate. Even if your game is running at 60 fps, it is almost certainly not hitting 16.6ms input latency (which would be very good even for a game running at, say, 200fps).
As an example, here is a HUB video testing latency with DLSS (I'm picking this one because I just watched it for another post) - you'll see that Metro Exodus has about 40ms of latency at 130 fps, and 24.1ms of input latency at 214 fps. Yes, you get the "higher FPS = better latency" correlation here, but it's not tied only to the FPS, and isn't something you can convert back. Then, if you skip forward a bit in the video and look at the COD Warzone numbers, you'll see it hits 201-225 fps, and the latency is around 30ms regardless.
If you're at 60fps, your frame time is 16.6ms. This is a totally different metric from latency, and many reviewers already measure this and show frame time graphs (Gamer's Nexus is a big proponent of this).
I'm in a bit of a rush, so I can't exhaustively explain everything wrong with this right now, but here's an abridged list:
If you look only at latency, you're basically giving no consideration to FPS. A game might run at 200fps with a 30ms latency, while another will run at 100fps with a 20ms latency. Which one is actually a better experience? Many people might not even feel the difference between 30ms and 20ms (most people here seem to have no idea what either feels like, seeing as they expect 16ms latency to be normal).
Probably against your intentions, it would give a massive boost to Nvidia GPUs. Tons of games support Reflex, which vastly reduces input latency, and Reflex 2 will drop it even further. Very few games support Anti Lag 2, and AMD has not announced a competing technology for Reflex 2 yet.
You would be measuring something other than GPU performance. Input latency varies more by the game's engine than it does by frame rate, as shown with the difference between Rainbow Six Siege and COD Warzone.
campeon963@reddit
There's also a case to be made that, given the continuous development of high refresh-rate OLED panels over the last couple of years (a technology known to suffer a lot from sample-and-hold blur at low FPS because of its extremely fast response times), it makes sense that we're now seeing the development of new multi-frame generation technologies to improve the perceived visual fluidity of a game, even if the latency itself doesn't change that much from the baseline FPS!
tukatu0@reddit
Psssh. I got news for you. Mark Rejhon is developing a shader to blend in pixels, removing judder from OLEDs.
This link isn't about that one https://blurbusters.com/crt-simulation-in-a-gpu-shader-looks-better-than-bfi/ but you can probably find info about it somewhere by browsing.
And again. This is better for now. Unless you are one of those pwm sensitive people.
campeon963@reddit
I tried to browse the web for what you mentioned, but unfortunately I cannot find the specific shader that you mentioned, only the announcement about refresh cycle shaders from BlurBusters and the respective GitHub repository shared in the article.
Reading your comment more closely though, the pixel blending shader that you mention is only meant to alleviate the OLED judder effect that you can see with low frame-rate content like movies. But because of the way it works (by blending previous frames with something like an alpha mask, which I suspect is what MadVR Smooth Motion uses based on my own observations while watching movies with it), that technology reduces judder but doesn't do anything for motion blur, unlike something like Frame Generation, BFI or a CRT beam simulator implemented in a shader. I verified my response with this post by Mark Rejhon in regards to pixel blending techniques like MadVR Smooth Motion.
The benefit of interpolation-based frame generation (as found on DLSS4 MFG) and especially reprojection techniques (like the ones found on VR headsets or NVIDIA's most recent Reflex 2 technology) is that, by creating actual discrete frames out of a baseline framerate (even if some people call them "fake frames"), you can in turn max out the refresh rate of these displays and heavily reduce the sample-and-hold motion blur inherent to display technologies like OLED, while also taking advantage of the near instantaneous response times of these technologies!
tukatu0@reddit
My apologies. I must have misremembered and misinterpreted something along the way. I think the chief was saying someone could develop it, rather than that he was making it himself. Just for judder removal, yes.
And yeah, I definitely like the idea of frame gen, since it allows near native levels of quality by putting in more info. Plus no need to decrease brightness or give up HDR if you want. But unfortunately it is still stuck at about 100 games 2 years after its launch, unlike AFMF. Though that also only doubles the fps at most.
Considering I don't really expect companies to go update their old games, it really is a shame. Something like Battlefield 3 would be able to run at 8k 800fps on a 5080 with 4x image generation. It's a damn shame really. Even a more modern game like Red Dead Redemption could run at 8k 60fps native on a 5070ti at medium settings, then be propelled to 240fps. But alas.
Darrelc@reddit
Feels pedantic when he's clearly talking about latency as "GPU receives frame data" to "GPU sends drawn frame to monitor". Interpolation will always inherently add latency to the completed frame output time.
Latency is a measure of delay, frame time is a measurement of time. Frame time has a latency value.
VenditatioDelendaEst@reddit
But there is no reason to care about that exclusive of total system latency.
Darrelc@reddit
What has changed in terms of latency measurements between the pre and post FG era?
Pre FG era:
Post FG Era:
VenditatioDelendaEst@reddit
Game engine latency, the time from input events being readable from the OS API to the time that frame data that accounts for those events appears on the GPU. Software tweaks for reducing that are collected under the "Nvidia Reflex" brand.
The most recently announced, asynchronous whatever warp, is adapted from methods used for years by VR renderers, and works by forwarding input events around the rendering process, and warping the oldest undisplayed frame right before it is sent to the monitor.
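(To illustrate the warp step only, here is a toy 2D sketch: shift the last rendered frame by the camera yaw accumulated since it started rendering and crudely fill the exposed edge. This is not Reflex 2's or any VR runtime's actual algorithm, which warps in 3D using depth and far smarter hole filling.)

```python
# Toy illustration only (not any vendor's actual implementation): planar warp of
# the most recent frame by the newest yaw delta, with crude edge fill.
import numpy as np

def late_warp(frame: np.ndarray, yaw_delta_deg: float, horizontal_fov_deg: float) -> np.ndarray:
    h, w, _ = frame.shape
    # Approximate horizontal pixel shift for a small yaw change.
    shift_px = int(round(yaw_delta_deg / horizontal_fov_deg * w))
    warped = np.roll(frame, -shift_px, axis=1)
    if shift_px > 0:
        # Camera turned right: fill the newly exposed right edge by clamping
        # to the last valid column.
        warped[:, -shift_px:] = warped[:, [-shift_px - 1]]
    elif shift_px < 0:
        # Camera turned left: fill the newly exposed left edge.
        warped[:, :-shift_px] = warped[:, [-shift_px]]
    return warped

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
print(late_warp(frame, yaw_delta_deg=1.5, horizontal_fov_deg=90.0).shape)
```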
TSP-FriendlyFire@reddit
But we cannot measure this directly, you'd basically be computing it indirectly from a combination of frame rate, system latency and either some way of getting which frames were generated (if FG becomes dynamic) or visual analysis.
It also becomes a roundabout way to measure base frame rate without frame gen, but with the overhead of frame gen thrown in without its benefits being measured. The GPU would receive frame data and output N frames, but your latency metric would only capture the first frame. It's counterproductive.
Darrelc@reddit
The more I think about this the more perfect it actually is. It shows you what actual objective performance degradation you're getting by using these technologies, while disregarding the subjective image quality / smoothness argument.
Darrelc@reddit
Like FPS has become.
We are literally asking for base frame rate so we know it's caveated as raw performance, not so that "our number is bigger/better than the other side's number". Keep your FPS measurement and we'll all start comparing our 1400 FPS gtx1460s vs our 1600 FPS gtx1470s.
bladex1234@reddit
This is why we need Intel’s frame extrapolation technology to come out so we can put this whole latency mess to bed.
Darrelc@reddit
Are you on about this?
For reference, 6.95ms to complete a frame is the performance target I look for in a GPU.
Zarmazarma@reddit
You say this:
But then immediately follow up with this:
Interpolation adds to latency... actual latency, like the responsiveness of the game. It doesn't add to frame time.
tukatu0@reddit
It's because they are only thinking of this because of Digital Foundry, who for some odd reason (relationship with engineers, idk) keep trying to find a way to portray these half tricks as superior to previous methods, instead of just accepting it is inferior. End of story.
reg0ner@reddit
Latency and framerates are two completely different things bro. I see where your vision's at, but we're like a few GPU generations from that.
DYMAXIONman@reddit
Latency is lower when GPU usage stays below max, though.
Immediate_Character-@reddit
Wat
DYMAXIONman@reddit
Reflex is useful because it lets you receive low latency when you have a GPU bottleneck. However, you can induce the same benefits by capping your FPS before you reach your GPU bottleneck. Uncapped with reflex will always have lower latency, but generally for a smooth experience you should be capping your FPS anyway.
UltimateSlayer3001@reddit
That’s a lot of typing to say that “I want the future to be a blurry AI slosh fest of created fps, what do you guys think?”.
How about, we test things that perform NATIVELY, under their NATIVE performance metrics without introducing upscalers and AI as we’ve literally always done?
HyruleanKnight37@reddit
While I agree, showcasing latency and relating that to a better gaming experience will be more difficult than using framerate. It's an uphill battle.
Cable_Hoarder@reddit
There is also not just one kind of latency and they require different scales and understanding. Network latency, total system latency, input latency, and more.
Input latency is very forgiving in all but the most twitchy, frame-perfect games: competitive FPS, RTS/MOBA and fighting games mainly. For decades (especially on consoles) 60+ ms has been the norm, and even with frame gen, modern PC games are lower than that.
So long as it is a stable latency, our brains easily compensate for it and pre-time inputs. That is why you can get frame-perfect inputs in old games with 100ms input latency. Our eyes, muscles and such all have more latency (only hearing and pain are sub-50ms sensitive for us).
Anything more RPG or single player is perfect for 4x frame gen, and the perceived motion clarity offered by high frame rate is almost always worth it. 60-80fps to 144fps for the sake of 10ms input latency in Cyberpunk is an easy trade for the motion clarity. It's an outright no brainer for something like flight simulator.
10ms added input latency in marvel rivals or Valorant is more questionable (if not outright bad), especially if you can lower graphical fidelity to achieve 120fps in real frames.
I'd rather play with potato graphics and 144+ FPS in those titles.
MonoShadow@reddit
Then there's Reflex 2 with Async SpaceWarp. Camera movement latency will be massively reduced, but action latency won't change much.
IMO latency is a good measure. But it's not as straightforward.
Also, with Reflex there is a possibility an Nvidia card will exhibit similar latency at a lower frame rate, while the overall experience is better on a competitor card. I won't go deeper into the FG rabbit hole in this post.
So basically good luck to the reviewers in figuring this out.
IshTheFace@reddit
I have full confidence in Nexus to go above and beyond with their testing, as they always do.
Cute-Pomegranate-966@reddit
Their benchmarks are literally the least interesting thing they do and are almost never the ones i watch lol.
IshTheFace@reddit
Care to elaborate?
Cute-Pomegranate-966@reddit
The exposés and teardowns are the most interesting things they do.
His attitude and demeanor make the benchmarks boring by comparison.
bubblesort33@reddit
I got some big doubts about this tech. If your cursor and gun in code aren't even aiming where the camera has tricked you into thinking you're aiming, is it really latency reduction at all? It just hides latency. It's the kind that has the biggest impact on the "feel" of the game, but little or really no impact on how lower latency actually gives you an edge in game.
Bluedot55@reddit
That is an important question- if they can get the movement perfectly synced up with where it would actually be in that half frame, then it's useful. If it guesses wrong, it may be a problem.
The big thing with the frame warp, imo, is how it will work with frame gen. If you are at a 30 fps base, you have 33 ms per frame. Frame gen, at minimum, would add an extra frame of delay in there, for a minimum of 66 ms. We've all played at 30 fps, the latency is rough. But if you scale it up via frame gen, you still have that rough latency, even if movement looks smooth.
That's where frame warp may be interesting. Can it essentially nullify the 66 ms minimum latency from the frame rate down to essentially zero by bending the image to have you looking where you'd expect to be looking at the last moment? I think it could. There's an argument to be made about how it won't mask latency from inputs at all, but I don't think that's too important. For slower paced games, it'll show your cursor reaching the target mid real frame, and allow you to stop moving the camera and click during the fake frame. The input doesn't care about waiting for when it's displayed, it's registered instantly. So the feedback to you may not get the delay mitigation, but it doesn't really matter.
The big issue would be with fast moving objects on top of fast camera movements. Since with frame warp you'll have extremely precise mouse movement, while tracking targets that still have the delay between where you see them and where they actually are that you would have at 30 hz.
ZubZubZubZubZubZub@reddit
Even competitive games can be less responsive than people think. Dota 2 and LoL have 30 Hz tick rates, and StarCraft 2's is 22.
Eli_Beeblebrox@reddit
Mobas are different from the titles mentioned. TTK is measured in seconds in mobas, even if it can get down to one second towards the end of a game. TTK in shooters is measured in milliseconds and can get as low as 100, and that's not even factoring in one-shots. The responsiveness requirement is not the same. RTSs have even less of a requirement.
aminorityofone@reddit
Got any new games? Those are all over a decade old now. What's the tick rate of something released in this decade?
79215185-1feb-44c6@reddit
This post is amazing in expressing why I struggle to understand the whole input latency / high refresh rate argument. I guess I'm just not as sensitive to it as others are.
Kozhany@reddit
I'd argue that input latency can be far less forgiving for some people who are more sensitive to it, to the point of rather not playing a game at all than trying to deal with it.
It's very subjective, and akin to FoV - many people can play all day with a 60-degree FoV while sitting 2 feet from the screen, but others get very nauseous very quickly.
Cable_Hoarder@reddit
It's worse with a mouse camera in a fast game for sure.
Personally I can handle input latency up to 50 or 60 ms no issue even with a mouse in a fast FPS and flick shots, but I cut my teeth on FPS like UT99 and Q3A in the dark early days of LCD screens with 30ms+ response times.
For me it's inconsistent input delay that ruins me, same with FPS for that matter; I can feel even 10 to 15 ms fluctuations.
CrzyJek@reddit
It will have to be done though
Automatic_Beyond2194@reddit (OP)
Sure compared to when FPS made sense it isn’t as good. But that is the past. The question isn’t whether things were easier when we could use FPS to accurately assess both smoothness AND latency near perfectly, simultaneously. The question is whether it is better to use latency in the here and now, when FPS has become virtually meaningless on its own without qualifiers.
mmkzero0@reddit
I think the only real good "primary" metric of overall performance is a combination of significant metrics:
- average framerate
- 1% and 0.1% lows
- frame time pacing
- input latency
- power draw
I believe this set of individual metrics is a good baseline for a primary metric set, in that it accounts for average performance as well as worst cases, consistency, overall "feel" when playing, and efficiency.
burnish-flatland@reddit
You are right that the fps limit will be hit very soon, but it's not only that. Latency, should it become a primary metric, will also quickly be "faked" with AI. Furthermore, even the "real" frames will be AI-enhanced to make them more realistic. The whole problem of rendering an incredibly realistic-looking video feed in real time, on-device, with AI is very close to being fully solved; not in the next couple of gens, but within 10 years very likely. And at that point there will not be much else to do for the graphics part of the "GPU".
AntLive9218@reddit
The problem isn't with FPS, it's with what is being measured.
A monitor could have been doing panel self-refresh already (VRR LFC or DP PSR), and a fancy TV tends to do interpolation by default (bypassed by "gaming mode"), but correctly, none of those have been counted in benchmarks so far.
Not sure how FPS measurement is done nowadays, but back when it was working properly, it measured how often the program signaled being ready for a frame to be presented. If that was 70 times a second, then "70 FPS" would be shown, and anything else would be incorrect. Additional info like "70 FPS (100 FFPS)" is fine, but showing "100 FPS" is simply incorrect, as it's not measuring what's expected.
Obviously I'm ignoring potential technical difficulties, as usual in this scene there isn't even source code to look at to simply see what needs to be changed. But it's not a huge surprise of an issue, just a bug in measurement obfuscated with marketing and money in general.
Note though that even "real" FPS alone was often not enough. Nvidia GPUs were caught rendering lower quality frames in some cases, obviously inflating measured FPS, and people kept on buying them anyway. This is one of the reasons why synthetic tests are still common to measure raw performance, but then of course that doesn't necessarily translate to similar game performance, especially with replacement shaders in the driver, and company partnerships biasing which architecture is favored by the code.
CrzyJek@reddit
Lol I had completely forgotten about the low quality frame rendering thing
ArdaOneUi@reddit
Can you elaborate on it? Never heard of that.
CrzyJek@reddit
This is bringing me back, but 10ish years ago people were claiming that Nvidia frames... essentially the frame images themselves, were of lesser quality when compared to Radeon when using the same game settings. I think at the time it was more subjective on image representation and color reproduction or something. I experienced it personally as I had cards from both manufacturers and games using Radeon cards did look better.
However, if we want to go further back...back to the early days of Nvidia, I believe it was said they were not rendering full scenes during certain benchmarks in order to improve FPS numbers.
Lifetime ago. Talk about nostalgia.
steik@reddit
This sounds like some urban legend tbh. At most this would've applied to specific games, not across the board, and it would've been extremely easy to prove. Do you have any evidence or source for this? Do you remember what game(s) you experienced this in? Forgive me for being hesitant to believe this at face value.
CrzyJek@reddit
Like I said the more recent thing was basically subjective. It was my personal opinion that Radeon imaging looked better to me (across the board). I just found it intriguing that I wasn't alone in thinking that.
However the older stuff from the early FX and GeForce days... that was true. Another commenter under my post went into more detail. It was so long ago I didn't remember some of it or how it was done.
lowlymarine@reddit
I remember waaaaaay back when during the GeForce FX/Radeon 9x00 era, nVidia's performance in the new Shader Model 2 was leagues behind ATI's. In order to look more competitive in benchmarks, they were caught having the driver intercept SM2 calls and replace them with simpler (and consequently worse-looking) SM1.1 effects that were easier for their GPUs to handle. Both companies also routinely degraded anisotropic filtering quality to improve performance in the early days of the tech, though nVidia continued to do so for many more years than ATI/AMD. In fact, I think the GeForce driver still defaults to a slightly reduced texture filtering quality (though it's doubtful you'd ever be able to see any difference, and like most of those legacy settings it probably doesn't apply to modern APIs).
CrzyJek@reddit
Man, them were the days
steik@reddit
Been a graphics programmer for 13 years at an AAA studio and I've never heard about this or experienced this myself (and I always rock dual GPUs, AMD and Nvidia, so I can test both on the same computer).
NGL this sounds like bs. Do you have any sources to back this up? This would've been extremely easy to prove and I would imagine this would've been big news.
reddit_equals_censor@reddit
no, it wouldn't be good enough.
explanation:
we can use reprojection frame generation to UNDO the render latency.
so i can render 10 fps, but reproject all 10 frames ONCE and discard the source frames.
that would mean a 1 ms latency or less (however long the reprojection takes), but it would of course feel HORRIBLE, because you got instant frames, but only 10 per second.
nvidia's reflex 2 does just that and thus would completely break the idea to use latency as the only way to measure performance.
important to know, that reprojection frame gen creates REAL frames with full player input, it is NOT just visual smoothing and it has latency reduction and not latency increases as part of it.
also if we'd actually use absolutely clear graphs with interpolation fake frame gen, then there wouldn't be a big problem.
so 60 fps turns into 50 fps + 50 visual smoothing insertions.
no "100 "fps"" graphs.
even people, who are quite outspoken about interpolation fake frames still use misleading graphs when showing it.
so of course in the future THIS should be part of how things are shown and latency and possibly source frame amount + visual comparisons shown at least a bit.
we HOPEFULLY in the future get advanced depth-aware reprojection frame generation that includes major moving objects, with ai fill-in.
so you would have, let's say, 100 source fps all reprojected on average 10x to get a LOCKED and TRUE 1000 hz/fps experience, because those are REAL frames and not nvidia interpolation marketing bs.
and it would be an amazing responsive experience.
and interpolation fake frame gen should be DEAD DEAD by then as it has always been nonsense.
so latency is part of the solution, but not the whole solution.
advester@reddit
There is more to input latency than reprojection. Reprojection can't calculate that the zombie's head exploded two "frames" ago.
reddit_equals_censor@reddit
technically the parts of the exploded head could have positional data, that gets reprojected, BUT that would be a future reprojection version.
but yeah an instant action without previous visual data could not get reprojected either way.
now theoretically for quite deterministic or fully deterministic animations like most gun shot visuals, it is feasible down the road to have "ai" insert the self gun shot or enemy gun shot animation into the reprojected frame. but again, future ideas.
now thankfully for your gaming competitive performance what seems to matter is your aiming position and movement position most of all and your shooting having a source fps latency visually (the shot already happened in the game logic, that is not a problem) isn't a real issue.
of course enemy shooting having a source fps latency is a bad thing.
so you would have several different latency numbers to look at in the future with reprojection ideally, BUT it would none the less be a massive upgrade.
and the core of my comment was, that REAL FPS =/= latency sometimes.
you test character movement latency by hitting the mouse very hard, automated, to move left; well, that can get reprojected. you hit the shoot button; well, that won't (or at least for a long time reprojection won't be able to do anything about that in the reprojected frame), so 2 different numbers.
so we certainly need more than just latency data to show performance and more than just 1 way to test latency as well.
qwert2812@reddit
absolutely not. You can't just let them legitimize fake frames as real frames.
haloimplant@reddit
Definitely need to do something to set a standard or we're going to be looking at demos and marketing with 1000fps of AI interpolated frames and 1s of latency
Next_Estate8736@reddit
you just compare them like you always do
cathoderituals@reddit
What you should be doing is addressing frame time variance, since large spikes or drops are what mainly contribute to perceived latency. You want consistency above all else. Turn V-Sync on and limit max FPS to around 4-7 fps below your monitor's refresh rate in the GPU settings, or if you can't get near that frame rate, just below the max FPS you can attain. Turn in-game V-Sync off unless you can't, in which case disable it for that game only in GPU settings.
If a game has an adjustable in-game FPS limiter or a hard lock to a set value like 60fps, disable the fps limiter in GPU settings for that game only and use the in-game limiter.
szczszqweqwe@reddit
I honestly prefer a frame pacing graph, but it's a difficult thing to analyze for more than 2, maaaybe 4 GPUs/CPUs.
That said, I agree: FPS is getting less and less relevant. Sure, in some situations in some games FG is great, but it's not a universal improvement.
AstroNaut765@reddit
Imho, comparing the average distance from the mean in a chart would be a good replacement for frame pacing graphs.
szczszqweqwe@reddit
Generally I agree, I can see a problem with visualising spikes.
There is also the question of whether the average or the mode should be displayed.
MuffinRacing@reddit
Frame rate is fine, it just requires reading past the marketing nonsense from the manufacturers and looking at independent reviews.
Zerokx@reddit
Why not have 3 metrics at the same time? FPS, latency, and an image quality indicator. Image quality could be measured, for example, by calculating a bunch of frames, taking out the one in the middle and letting the AI generate it, and then measuring the accumulated difference across all the pixels between the originally calculated frame and the AI-generated one. The only problem I see with this is that maybe some day AI might improve the look of games, like a realism filter over a simplified programmer-art game. Then it would not really mean anything anymore.
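(A minimal sketch of that hold-out comparison, assuming you can export both the engine-rendered middle frame and the generated one; plain mean absolute pixel difference is used here, which is the simplest possible error measure rather than a perceptual one.)

```python
# Sketch of the "hold out the middle frame" idea: accumulate the per-pixel
# difference between the rendered frame and the generated replacement.
import numpy as np

def mean_abs_pixel_error(rendered: np.ndarray, generated: np.ndarray) -> float:
    # Promote to a signed type so the subtraction can't wrap around.
    diff = np.abs(rendered.astype(np.int16) - generated.astype(np.int16))
    return float(diff.mean())

# Made-up example: a "rendered" frame and a slightly perturbed "generated" one.
rendered = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
noise = np.random.randint(-5, 6, rendered.shape, dtype=np.int16)
generated = np.clip(rendered.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(mean_abs_pixel_error(rendered, generated))  # roughly 2-3 for this noise level
```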
zig131@reddit
Frame Generation frames look great: https://youtu.be/2bteALBH2ew?si=eHHbwez2QYT3RN05
The issue is it results in a higher number being shown, without the experience improving as would be expected (and in fact it regresses latency slightly).
Even if people (sensibly) just turn it off, NVIDIA and AMD will continue to use FPS numbers with frame gen on in their marketing.
If we switch to demanding a different metric - like frametime - then everyone is on the same page again.
StickiStickman@reddit
Almost no one would notice the latency hit since Reflex + FG still has lower latency than most games without.
Huh? The framerate goes up significantly and the image looks much smoother. That's exactly what you'd expect.
zig131@reddit
But you can enable Reflex 2 (in a limited selection of e-sports games) without Frame Generation 🤔. As you could Reflex/Anti-lag before it.
Reflex 2 is a cool, useful technology in its own right. It does however do nothing to make frame generation useful, or justified.
Gaming is an interactive medium. 24 FPS is considered "unplayable", not because it LOOKS bad - we watch films and TV like that with no complaints - but because it FEELS bad.
60 FPS with frame gen will LOOK like 60 FPS, but it will FEEL like 30 FPS, because the latency is the same as the rendered source, with a regression of one frame.
There is an expectation that high frame rates feel better, but frame generation won't meet that.
It's only useful if your game is almost entirely in-engine cutscenes.
StickiStickman@reddit
The point is that the vast majority of games already run at worse input latency than games with FrameGen do. No one cares.
iDontSeedMyTorrents@reddit
Upscaling can already make some games look better than native. These games would already be unfairly penalized by such a comparison metric for deviating from native.
cloud_t@reddit
Frame pace: exists.
That said, I believe a combination of metrics is necessary to arbitrate a truly positive user experience.
torvi97@reddit
nah 'cuz frame gen results in blurry, ugly graphics
sure y'all are using it left and right but I'd still rather have a 60fps sharp looking game than a 144fps blurry mess
the_dude_that_faps@reddit
Isn't the new Reflex doing timewarp like VR but for desktop games and using AI infill to fill the missing parts? We're already at the point where games can present themselves at the refresh rate of the monitor whatever that is.
A very different thing will be clicking the mouse and seeing a bullet come out of your gun. That can't be AI predicted or inferred. However, I still think that as long as FG or MFG is a toggle, we can continue to benchmark games without them enabled to compare GPUs as that framerate will directly correlate to latency and also to stuttering when adding the 1% lows into the equation.
This isn't to say I think we shouldn't measure input latency, though. We absolutely should. I'm sure even now, without FG and MFG, we would likely find differences between GPUs thanks to driver overhead, Reflex, and Reflex-like techs like AMD's AL2 and Intel's (which I don't remember the name of).
Far_Success_1896@reddit
Not all frames are equal now. Latency is only part of the equation, and not actually all that important in most games where you are cranking up graphical settings... within limits.
More important than latency is going to be visual quality. Similar thing as DLSS. You're going to have to go all digital foundry and examine frames in these benchmarks to see how many visual artifacts you get and what exactly you are sacrificing by turning it on. Each game is going to be different so it's going to take a bit for reviewers to adjust but they will.
Plank_With_A_Nail_In@reddit
Image quality and latency at an agreed on framerate would be better. But what would that be?
sump_daddy@reddit
Why was this question deleted and reposted?
Beawrtt@reddit
Just latency isn't enough. FPS is still a quantitative measure; there are just more configurations that need to be tested, and latency will need to be part of it. Benchmarking will have to expand; there's no other solution.
The weird side effect of this is some games will get hard exposed for their base latency. There are some very unresponsive games out there but nobody knows about it/cares because it wasn't in benchmarks. I remember RDR2 on PS4 was one of the highest latency games I've ever played
chargedcapacitor@reddit
It's all about consistent frame time. Latency is important, but as an example, 20ms of latency is barely noticeable. 20ms frame times is 50fps, which is terrible for certain (most) games.
Like everyone else on this sub who's never seen true 240hz+ in action, you need to try it out to understand. Framerate and frame time are very important, and the trade-off for a small amount of latency is more than worth it in most games.
For games like csgo and overwatch, where players want the lowest latency possible, the argument is moot since these games are optimized enough to be CPU bound in most cases.
self_edukated@reddit
Not being snarky here, genuine question:
Can you explain what you mean when you say that nobody here has experienced true 240hz+ in action?
chargedcapacitor@reddit
I was directing that statement at OP. My main point was that latency is not going to be "the" metric to judge the performance of future hardware on.
People like OP who have never seen 240hz+ in person (which happens to be the majority of users on this sub) will often not understand this.
As a final point for OP, pushing 4K/6K (and HDR!) widescreens to 240hz+ is still an extremely difficult task, and will still be difficult for the 6090. As good as all of these new AI features are, there will still be artifacts if pushed too hard, and an increase in all performance metrics will still be needed on future hardware.
self_edukated@reddit
Ah okay so I just misinterpreted what you meant. More to be read as “for those that have never experienced…” I was thinking for a moment that there is some technicality where true 240hz doesn’t exist or something and I’d been duped!
Thanks for clarifying!
No-Relationship8261@reddit
Well, I still see a lot of artifacts and can't bear anything worse than DLSS Quality (which is great tbh, free frames).
So I would say not yet. But each to their own I suppose. I am barely able to tell the difference between 60 and 120 hz and completely fail at blind tests for 120 vs 240.
So maybe my eyes are more keen on details and less keen on refresh rate.
basil_elton@reddit
What you are talking about has existed since NVIDIA allowed you to run FrameView.
It is called RenderPresentLatency - which is what the NVIDIA overlay shows.
It is a measure of how quickly the render queue is emptied.
You can see it in action - cap your in-game frame rate to say 60 FPS, and then see what happens to this metric when in the menu or viewing the intro movies. It will be less than 1 ms on any moderately powerful GPU.
When you are in-game, it will show values anywhere from a few ms to 16.67 ms as long as your GPU isn't getting choked by the game and the graphical settings you apply.
PiousPontificator@reddit
I don't think so because we now have to consider the quality of the frames being rendered.
Jeep-Eep@reddit
Hell no, the quality ain't there yet, it's your frames with fucking filler.
ThatOnePerson@reddit
With Reflex 2 and async warping, probably not. You can even do 15 fps and it'll "feel" fine with async warping: https://youtu.be/f8piCZz0p-Y?t=181
zig131@reddit
That's a moot point because, in the few games where Reflex 2 is supported, you could use it without frame gen for a better experience.
Frame Generation will always regress latency - it's intrinsic to how it works.
Currently Reflex 2 is only supported in E-sports games where enabling frame gen would be a TERRIBLE idea.
advester@reddit
Async warping (reprojection) IS a type of frame generation.
zig131@reddit
The way it is done in VR it is - yes. The frame is shown once, and then shown again warped.
From what I understand, with Reflex 2 every rendered frame is warped to some extent (assuming mouse moves), and shown only once in a warped state. The FPS number is not inflated at all.
Both of these are far superior to DLSS Frame Generation, as they actually have a positive effect on perceived latency, and can generally be left on and forgotten about.
Whereas DLSS Frame Generation is only good for inflating numbers to make things look better than they are, and I guess for a hypothetical game that was mostly comprised of live rendered cutscenes.
ThatOnePerson@reddit
Yeah I'm just saying it'll make a bad benchmark because of the inconsistencies. It separates input latency from game performance, so you can't use latency as a benchmark for performance
SignalButterscotch73@reddit
It makes sense from a technical standpoint to move away from frames per second, since it's no longer representative of actual performance, but to change we would also need to re-educate all the non-technical people in the world from the easy "biggest number best" model of average fps to a more complex "smaller number best" model of average latency that even many gamers struggle with.
Inertia will be fighting against this. Not to mention all the potential flaws or exploits in latency measurements since latency can mean many different things.
tilted0ne@reddit
It literally depends on the game. Frame time graphs, averages, lows, and highs are all good; there's no need to mention latency because more FPS = less latency, unless you're using FG, and nobody is doing comparisons where they use it on one card and don't use it on another. There's nothing wrong with using frame gen in certain games; people need to realise it's a smoothing tech for the most part.
From-UoM@reddit
I think frame consistency should be taken into account.
A smooth 60 fps feels much better than an erratic 100 fps frame rate.
1% and 0.1% lows don't quite capture it.
I think Gamers Nexus are the only ones who do a frame time graph for some games.
advester@reddit
In theory, frame gen could generate a variable amount of frames to smooth out those stutters. Eventually, the gpu will always output a constant max frame rate and it will be the input latency that stutters.
From-UoM@reddit
Blackwell has flip metering in the display engine to help keep a consistent frame rate at high fps with FG.
Automatic_Beyond2194@reddit (OP)
Ya, I guess you could add in "standard deviation of frame times" or something. I'm a bit rusty on my math, but I am sure you could make an equation where you calculate the standard deviation over a given interval and then average it.
Problem is you don't just want to use any old standard deviation, because it is acceptable for latency to change in less or more demanding scenes. As you point out, the problem is acute spikes in latency… which 0.1% lows and 1% lows IMO do a decent job of capturing without getting too complicated.
SceneNo1367@reddit
Problem is with Reflex 2 you'll also have fake latency with butchered frames.
GARGEAN@reddit
Kek. So we already are moving away from fake frames to fake latency. Gorgeous!
zig131@reddit
The metric you are looking for is frame time i.e.how long it takes for a frame to be rendered measured in milliseconds. To achieve 60FPS of rendered frames, each frame needs to take 16.67 milliseconds or less to be rendered.
Total input latency isn't really practical as a metric for reviewers, as it is influenced by monitor, mouse/keyboard, maybe even USB controller.
Frame Time is just CPU+GPU+Game+Game Settings+Resolution - same as frame rate.
You can also determine 1% high, and 0.1% high. It's just an inversion of frame rate in that lower is better.
Zarmazarma@reddit
The figure typically measured by tools like FrameView is system latency, which doesn't include peripheral or display latency, but would also not be a good metric to determine GPU performance either.
NeroClaudius199907@reddit
Daniel Owen-type reviewers are going to be more beneficial in the future.
GenZia@reddit
I actually mostly agree.
However, it's easier to say '60 FPS' than '16.66ms'. The average layman barely understands frame times, after all, or at least that's the impression I get.
In fact, some random fellow on Reddit was cross with me merely because I maintained that the law of diminishing returns applies to refresh rate and there isn't a 2x difference in perceptible smoothness between 240Hz and 500Hz.
The jump from 240Hz (4.16ms) to 500Hz (2ms) is like going from 60Hz (16.66ms) to 69Hz (14.49ms) in terms of actual frame time latency.
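(The arithmetic, spelled out: frame time is just 1000 divided by the refresh rate, so the absolute time saved by each of those jumps is about the same.)

```python
# Quick check of the frame time arithmetic above: the absolute frame time saved
# going 240 Hz -> 500 Hz is about the same as going 60 Hz -> 69 Hz.
def frame_time_ms(hz: float) -> float:
    return 1000.0 / hz

print(frame_time_ms(240) - frame_time_ms(500))  # ~2.17 ms saved
print(frame_time_ms(60) - frame_time_ms(69))    # ~2.17 ms saved
```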
The simple fact of the matter is that people will buy whatever they want to buy, regardless of how much logic or data you present to them.
And as the saying goes; arguing with a fool only proves there are two.
Darrelc@reddit
In fact, some random fellow on Reddit was cross with me
https://old.reddit.com/r/hardware/comments/1i24y64/what_is_the_future_of_graphics_benchmarks/
This sub is a fickle beast lol
Zerokx@reddit
Saying people are going to buy whatever anyway is a non-argument. FPS spelled out is frames per second, which isn't fair to compare to just a time unit measured in milliseconds. It would need a latency suffix or a different abbreviation. But yes, these differences are barely noticeable however it is called.
Dackel42@reddit
Blur Busters found that there needs to be at least a quadrupling of the refresh rate (a 4x reduction in frame times) for a big noticeable effect. The goal in the end is still 1000Hz, so a frame time latency of 1ms. So there are more benefits to lower frame time latency than just the latency itself.
gusthenewkid@reddit
Frame times and frame time deviation would be the best way now I think.
Signal_Ad126@reddit
Great discussion to be had. It's like the bullshots from the gaming magazines in the 90's all over again: you got the game home and it ran at 15fps... The marketers have realized that the normies have figured out 4K and 60Hz isn't all that and need a way to show graphs with bigger numbers; they are just ahead of that curve. By the time the mainstream figures all this latency stuff out, have no fear, the RTX 60xx will have the answer with Nvidia Reflex 3.0!
Automatic_Beyond2194@reddit (OP)
Reflex 3.0 now with ai inputs using precognition technology.
jaaval@reddit
Actually why even do the game logic at all, just have an ai guess what the screen should show at any time.
zig131@reddit
Reflex 2 is not precognitive.
It takes normal rendered frames, and shifts/warps them based on mouse movement that has happened since the frame started rendering.
The only AI involvement is a quick and dirty filling in of the gaps created at the edge of the frame by the shift. The goal there is just to make it not distracting.
szczszqweqwe@reddit
Now let people play on PCs that are in some warehouse and let them access them with shitty slow laptops, I bet it's gonna be a blast.
crystalpeaks25@reddit
I propose a Frame Latency Score, or FLS.
FLS = FPS / Latency x k
k is a scaling factor here.
With this new metric it will be clearly visible that frame generation with higher latency gets penalized in the scoring.
Let's see how it works.
Scenario 1: raw raster, FPS=240, Latency=10ms, FLS becomes 24000
Scenario 2: frame gen, FPS=240, Latency=30ms, FLS becomes 8000
Scenario 3: raw raster, FPS=120, Latency=10ms, FLS becomes 12000
With this, even 120fps raw scores better than frame gen at 240fps.
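(Transcribing that formula directly, with k = 1000 so the numbers line up; FLS itself and the choice of k are just this comment's proposal, not an established benchmark.)

```python
# Direct transcription of the FLS idea proposed above, with k = 1000 so the
# outputs match the scenarios. The metric and scaling factor are the commenter's
# proposal, not a standard.
def fls(fps: float, latency_ms: float, k: float = 1000.0) -> float:
    return fps / latency_ms * k

print(fls(240, 10))  # raw raster, 240 fps @ 10 ms -> 24000
print(fls(240, 30))  # frame gen,  240 fps @ 30 ms ->  8000
print(fls(120, 10))  # raw raster, 120 fps @ 10 ms -> 12000
```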
jaaval@reddit
I think you need some kind of tuning factors for the individual scores. Maybe weight functions that take into account the subjective desirability of the range.
I have thought about this quite a bit some time earlier. If you want to actually score GPUs for their gaming experience the difference between 300fps and 500fps is entirely meaningless while difference between 30fps and 50fps is huge. Similarly latency differences much below the latency of the monitor itself are fairly meaningless but latency of 100ms would be devastating even if fps is 10000. And 10fps is really bad even if you get latency to 0.0001ms.
The problem with this kind of “performance index” is that it doesn’t take into account future games that are more demanding. Measures of raw computing and rendering pipeline speed tell us about future performance too.
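(A sketch of what such weight functions could look like; the curve shapes and constants below are made up purely to illustrate the saturation idea, not a calibrated model.)

```python
# Illustration of the "weight functions" idea above. FPS gains saturate and
# latency pain grows past a soft threshold; all constants are arbitrary.
import math

def experience_score(fps: float, latency_ms: float) -> float:
    fps_utility = fps / (fps + 60.0)  # 60 fps -> 0.5, saturates toward 1.0
    latency_utility = 1.0 / (1.0 + math.exp((latency_ms - 50.0) / 15.0))  # soft knee around 50 ms
    return fps_utility * latency_utility

# 300 vs 500 fps barely matters, while 100 ms latency tanks the score even at 10000 fps.
for fps, lat in [(30, 40), (50, 40), (300, 20), (500, 20), (10_000, 100)]:
    print(fps, lat, round(experience_score(fps, lat), 3))
```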
WASynless@reddit
>Frame gen blew the problem of relying on FPS wide open
Or we just don't count these frames. Pretty lazy but still
i_love_massive_dogs@reddit
Let's say a game has 10ms latency with Nvidia Reflex X.0 and is using 4x frame gen bringing it to 200 FPS. Without frame gen, it runs at 50FPS and 5ms latency with Reflex.
On AMD or Intel, let's say they don't have frame gen or equivalent latency reducing technologies, so the game runs at 80FPS with 20ms latency.
From a consumer perspective, it would seem obvious that the Nvidia card is a much better choice (without considering price) since it has lower input lag and a smoother output. However, a simple frame rate chart counting only real frames would display Intel/AMD as clearly the superior choice, misleading the customer.
Massive_Parsley_5000@reddit
I mean, so does any graphical setting though.
Should we not benchmark at ultra because it introduces latency by raising frametime?
I'm not saying you're incorrect on the face of it; you're actually correct, really, because it's currently the best way to set a baseline. But at the end of the day the reasoning is a bit spurious.
WASynless@reddit
frame rate being an imperfect dataset to measure performance
-> It is good enough, if you try to generate real frames instead of fake ones. It is a case of the good money being driven out by the fake one ...
Automatic_Beyond2194@reddit (OP)
Why would you use a formula to create a new “unit of measurement”(real frames) that has no real impact, when you could use latency which 1:1 corresponds to how good a game feels?
The options are…
1.) Calculate the "real frame rate" by taking the in-game frame rate, then figuring out how many AI frames were generated and subtracting them. This gives you a number that only shows you one thing: how much LATENCY the game has.
2.) just directly use the latency. No calculations. No needing to explain the context of how framerate isn’t really framerate, and it only matters now because it affects latency. You just copy and paste the latency number, and it is self explanatory, just like framerate used to be before frame generation broke it as a useful measuring tool.
WASynless@reddit
3.) Keep benchmarking with frame generation off, until it is baked into the game engines and is actually good. Then add a disclaimer after having tested the average mouse-click-to-game-engine latency number or something.
p4block@reddit
Furthermore, in the long run, with games going fully path traced, there will be few "settings" to play with in the first place. Textures will fit your VRAM tier (16/24/32GB) and games will look exactly the same on all GPUs, cheap or expensive. More GPU just gets you a less blurry image / higher res / more rays / fewer artifacts.
upvotesthenrages@reddit
I think you're spot on in relation to the problem, but I'm not sure how latency is a proper solution.
50ms latency in Alan Wake 2 is completely fine. But 50ms in a racing game or in CS2 is extremely noticeable.
The image quality is extremely hard to measure, especially now that we have MFG.
Going from 60 to 120 FPS means the time the frame is on screen is still pretty high and the errors in the FG are more noticeable.
But going from 60 to 240 FPS means that the errors on screen are there for a far shorter period.
It's easy to pick apart a single frame, but it's a completely different matter when we're talking about the experience of 240 FPS.
PhoBoChai@reddit
No. New AAA games still struggle to reach 60 fps on decent hardware these days.
DZCreeper@reddit
The good reviewers are already using frame time metrics, they just convert to FPS for better viewership comprehension. Intel PresentMon being the new hotness for benchmarking.
Framed-Photo@reddit
For testing with frame gen technologies, latency kinda already has been the primary metric, at least in the testing I've liked.
In motion, even things like Lossless Scaling have been mostly coherent enough at full speed to be unnoticeable to most users, but the latency is worse than DLSS FG. When we've seen outlets like HUB do testing for frame gen, they've fortunately treated latency as the primary metric in that testing.
For testing anything that's not frame gen though, I don't think latency should be the primary metric. Too many variables at play.
CatalyticDragon@reddit
You can always reach arbitrary levels of latency by reducing image quality or resolution. The latter we often do dynamically and is the primary tool consoles use to maintain a smooth frame rate.
In those cases where frame rate is locked we evaluate image quality by what features are enabled and what the base resolutions are.
Frame generation doesn't really change things here. It's a tool (good or bad, you decide) to lock output to a desired frame rate target.
It doesn't mean the output is the same quality as native though and we have the same problem. Generated, interpolated, and warped frames do not look the same as ground truth frames generated in engine.
The 6090 and 6060 might reach the same frame rates but their inputs will have to be very different, the amount of generated frames diverging from ground truth will be very different, and base quality settings will be different. So ultimately image quality will be nowhere near the same.
yo1peresete@reddit
I think we simply need to introduce everything:
Everything I mentioned never makes it into GPU manufacturers' testing; it will be as it already is - only independent reviewers will provide that data, unfortunately.
Just look at NVIDIA's presentation of DLSS 4: they said they improved the frame pacing of frame gen, BUT they didn't show a frame time graph! Bruh, what a missed opportunity.
Sh1v0n@reddit
What about using LcFPS (Latency-calculated Frames Per Second) as a convenience for showing the latency metrics in a more "palatable" form?
Xplt21@reddit
I just want image quality to be more focused on and marketed/shown more.