Eventually, sometime in the future, fake frames will be perceptually lossless relative to real frames.
Once that happens, it doesn't really matter whether a frame is fake or real.
Having 4K 240fps+ UE5-quality graphics is unfortunately not going to be possible through existing traditional workflows.
Netflix/Blu-Ray/Digital Cinema only delivers roughly 1 to 2 full frames per second -- the magic of video compression interpolates the rest (see the video compression papers and specs).
Figure 1: Example frame sequence of a typical video compression stream.
I = intra-coded frame (a complete, self-contained compressed frame)
P = unidirectionally predicted (interpolated) frame.
B = bidirectionally predicted (interpolated) frame.
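To make the I/P/B mix concrete, here is a minimal sketch that counts full versus predicted frames in a Group of Pictures (GOP). The 12-frame pattern below is illustrative, not taken from any specific codec spec:

```python
# Sketch: count how many frames in a typical Group of Pictures (GOP)
# are fully coded (I) versus predicted/interpolated (P and B).
# The GOP pattern is a common illustrative example, not a codec mandate.

def frame_mix(gop_pattern: str) -> dict:
    """Return counts of I, P, and B frames in a GOP pattern string."""
    return {kind: gop_pattern.count(kind) for kind in "IPB"}

# A common 12-frame GOP: at 24 fps this is two I frames per second.
gop = "IBBPBBPBBPBB"
mix = frame_mix(gop)
print(mix)  # {'I': 1, 'P': 3, 'B': 8}

predicted = mix["P"] + mix["B"]
print(f"{predicted}/{len(gop)} frames are interpolated")  # 11/12
```

In other words, 11 of every 12 frames you watch in this pattern are "fake" in exactly the sense people use the word for GPU-generated frames.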
Ever since MPEG-1 was invented, video compression has used interpolation mathematics. MPEG-2/H.262, H.263, MPEG-4, H.264, H.265, H.266, you name it. Originally, you could see pulsing artifacts in early video compression, but recent codecs (even at light compression ratios) are perceptually lossless -- you can't tell the quality of I frames, B frames, and P frames apart anymore!
Likewise, the GPU is now doing something of this sort in three dimensions.
DLSS 3.0 has lots of problems, but it is a harbinger of the FSR/XeSS/DLSS future of frame rate amplification. And a cool reprojection demo ("Frame Rate Independent Mouse Look") was just released that can produce 10:1 frame rate amplification ratios.
Strobeless motion blur reduction requires ultra-high frame rates, so any of these algorithms is fair game for avoiding eye-searing PWM-based strobe backlights. The Holy Grail is brute framerate-based motion blur reduction.
People still have the choice to go with uncompressed video, or with native rendering. But people also need the choice of various kinds of frame rate amplification via algorithms like supersampling / interpolation / extrapolation / reprojection / etc.
Just as video compression has access to the ground truth (the original video frames) to make its interpolation better quality than a black-box man-in-the-middle interpolation chip, a GPU reprojector has the opportunity to see the full input rate (e.g. a 1000Hz mouse) for a low-lag frame rate amplification system such as a reprojection algorithm. DLSS 3.0 does not use this yet, but I would bet that DLSS 4.0 will factor in more data to become less black-box, lower lag, and lower artifact, until it perceptually disappears.
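The core idea of mouse-driven reprojection can be sketched in a few lines. This is a deliberately crude approximation -- it treats a pure yaw turn as a horizontal pixel shift of the last rendered frame, whereas real reprojection warps by depth and full camera pose. All names and the pixels-per-degree figure are illustrative assumptions:

```python
import numpy as np

# Crude sketch of mouse-driven reprojection: one fully rendered frame
# is re-used to synthesize many displayed frames from fresh high-rate
# mouse input. Assumption: pure yaw approximated as a horizontal shift.

def reproject(frame: np.ndarray, yaw_delta_deg: float,
              pixels_per_degree: float = 10.0) -> np.ndarray:
    """Shift the last rendered frame to match the latest camera yaw."""
    shift = int(round(yaw_delta_deg * pixels_per_degree))
    # Columns wrap around here; a real reprojector must in-fill this
    # revealed edge, which is where visible artifacts come from.
    return np.roll(frame, -shift, axis=1)

rendered = np.arange(12).reshape(3, 4)  # stand-in for a rendered frame

# Ten displayed frames per rendered frame = 10:1 amplification ratio,
# each one updated with the newest mouse-look yaw reading:
displayed = [reproject(rendered, yaw) for yaw in np.linspace(0, 0.9, 10)]
```

The point is that the expensive step (rendering) runs once, while the cheap step (warping to the latest input) runs ten times, which is why reprojection keeps mouse-look feeling lagless even at high amplification ratios.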
The same will happen as XeSS / FSR / DLSS leapfrog each other over the years, using various reprojection / extrapolation / supersampling / interpolation / etc. algorithms.
It will take years before three dimensions (GPU) is as good as the algorithms for two dimensions (glitch-free source-end video compression, instead of glitchy man-in-the-middle interpolation), but the architecture is slowly headed in that direction.
This stuff is increasingly far beyond grandpa's Sony Motionflow "Soap Opera Effect" interpolation setting on the TV...
This is why "fake frame" terminology will slowly become outdated over the years and decades, as GPU rendering goes multi-tiered in the 4K-8K 1000fps 1000Hz future and algorithms become more lagless and artifactless over the long term.
Many people love motion blur. But not all use cases can allow it.
Not everyone wants or needs the extra frame rate, but others do.
For example, simulating real life (a Holodeck) requires it. VR badly needs it, because real life doesn't strobe, so we need flickerless methods of motion blur reduction. Accurately simulating analog real-life motion on a perfect "Holodeck" display requires brute framerates at brute refresh rates, to avoid extra Hz-throttled/frametime-throttled sample-and-hold motion blur above and beyond real life. Today, all VR headsets are forced to flicker to lower the persistence of each frame (which not everyone can stand), because we don't yet have the technology for sufficiently brute ultra-high framerate=Hz motion blur reduction. Instead of flashing one frame for 1ms to reduce blur, you show 1000 different 1ms frames, to get the same blur without strobing. You can see a demonstration of motion blur that interferes with VR and Holodecks as an animation at: www.testufo.com/eyetracking
So some displays really need brute framerate-based display motion blur reduction -- all algorithms are fair game if you want photorealistic graphics at photorealistic flicker-free brute framerates (low-persistence sample-and-hold with zero strobe-based motion blur reduction).
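The strobe-versus-brute-framerate equivalence above is just arithmetic. A minimal sketch, using the simplified MPRT-style model where perceived blur is eye-tracking speed times the time each frame stays visible (assuming GtG=0 and perfect eye tracking; the numbers are illustrative):

```python
# Back-of-envelope persistence math (simplified MPRT-style model):
# perceived motion blur ~= eye-tracking speed x time each frame is visible.
# Assumes GtG=0 and perfect eye tracking; figures are illustrative.

def blur_px(speed_px_per_sec: float, visible_time_sec: float) -> float:
    """Blur trail length in pixels for one frame's visibility window."""
    return speed_px_per_sec * visible_time_sec

speed = 1000.0  # pixels/second of eye-tracked panning motion

# Strobed: 100 fps, but each frame is flashed for only 1 ms.
print(blur_px(speed, 0.001))   # 1.0 px of blur -- but the display flickers

# Brute framerate: 1000 fps sample-and-hold, each frame shown for 1 ms.
print(blur_px(speed, 1 / 1000))  # 1.0 px of blur -- zero flicker
```

Same 1ms of persistence, same blur -- the only difference is whether the other 9ms is filled with darkness (strobing) or with nine more unique frames (brute framerate).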
Yes, like the Wright Brothers, DLSS 3.0 has high-lag issues at some settings, and some artifacts. Nonetheless, it needs to be acknowledged that the move to 4:1 frame rate amplification ratios was predicted by my article more than 3 years ago, and adding reprojection is the missing piece of the puzzle that can reduce artifacts while increasing frame rate amplification ratios to 10:1 -- potentially making 4K 1000fps UE5-quality possible on existing GPUs such as the RTX 4090, once the displays are available and reprojection is added to frame rate amplification algorithms (whether XeSS / FSR / DLSS / etc).
The retina refresh rate for sample-and-hold isn't until the quintuple digits, though diminishing returns mean geometric steps (60Hz -> 240Hz -> 1000Hz -> 4000Hz, with GtG=0 and framerate=Hz) are needed to retain human-visible benefits. If you've seen the behavior of refresh rates, you know you need to double both Hz and frame rate to halve display motion blur.
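The "double the Hz, halve the blur" rule falls straight out of the same model. A quick sketch, assuming GtG=0 and framerate=Hz so each frame stays visible for exactly 1/Hz seconds (the motion speed is an arbitrary illustrative number):

```python
# Sketch of the "double the Hz, halve the blur" rule for flicker-free
# sample-and-hold displays. Assumes GtG=0 and framerate=Hz, so each
# frame stays visible for exactly 1/Hz seconds. Speed is illustrative.

speed = 960.0  # pixels/second of eye-tracked on-screen motion

blur = {hz: speed / hz for hz in (60, 240, 1000, 4000)}
for hz, px in blur.items():
    print(f"{hz:>5} Hz -> {px:.2f} px of motion blur")
# Each 4x jump in refresh rate cuts the blur trail to a quarter,
# which is why the useful steps are geometric, not linear.
```

This is also why the visible payoff shrinks at each step: going 60 -> 240 removes 12 pixels of blur in this example, while 1000 -> 4000 removes less than one.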
No matter which color pill you take in The Matrix, the original frames and the interpolated frames have become virtually indistinguishable in all modern video codecs -- if you ever use streaming, you are permanently watching interpolation. Fake Frames, indeed!
P.S. I am cited in over 25 research papers, so I am not trolling on this topic.
I am well aware you're not trying to troll me. Nothing you said is incorrect, but frame rate is critical for you not to notice the glitches caused by interpolation, and short of a specific use case I think most everyone who buys this card will leave it off or prefer DLSS 2. VR is its own animal, no doubt. Video compression for Netflix and others is well known and easy to do, and you can still see the artifacts it causes, usually in excessively dark scenes and sudden camera movement; live sports often shows you just how much compression is being used. I am sure all companies will continue to work on this tech; we'll see if the public thinks it's a great idea or not, but fake frame it is for now.