
TAA vs DLSS: Is NVIDIA DLSS Better?

TAA and DLSS are graphics technologies used in nearly every modern PC game. While they're related, they're certainly not in the same league: TAA is a temporal anti-aliasing technique that smooths out jagged edges, whereas DLSS is an upscaling algorithm that boosts performance while also polishing those rough edges. In fact, DLSS is superior to TAA in nearly every recent game. Let's start from the beginning.

The introduction of DLSS kickstarted the development of a variety of upscaling technologies. At the time of writing, we've got DLSS 2, DLSS 3, FSR 2, and XeSS. All four are built atop the same fundamental algorithm: temporal anti-aliasing, or TAA. In this post, we explain the differences between TAA and NVIDIA's DLSS to understand what makes DLSS better than TAA.

TAA has been used by developers for over a decade now. It was introduced to combat the shortcomings of FXAA and SMAA while avoiding the performance hit incurred by MSAA and SSAA. TAA works by comparing neighboring frames (temporally) and blending them to create a cleaner image in motion.

TAA can be defined as the temporal accumulation of samples or pixels. It combines the current and previous frames to increase the effective number of samples per frame (and per pixel), approximating a super-sampled image. Since it's an approximation that blends pixels across time, it can lead to a loss of detail.

Motion vectors are used to track on-screen objects from one frame to the next. A motion vector is a two-dimensional offset that tells the renderer how far, left or right and up or down, an object has moved compared to the previous frame.
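As a rough illustration, here's how a renderer might use a motion vector to find where a pixel was in the previous frame. This is a minimal CPU-side sketch with illustrative names; real engines do this per pixel in a shader.

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Reproject a pixel's current screen position into the previous frame.
// The motion vector stores how far the surface moved since the last frame,
// so subtracting it points back at where that surface was before.
Vec2 reproject(Vec2 currentPos, Vec2 motionVector) {
    return { currentPos.x - motionVector.x,
             currentPos.y - motionVector.y };
}

int main() {
    Vec2 pixel  = { 640.0f, 360.0f };  // pixel location in frame N
    Vec2 motion = { 3.5f, -1.25f };    // object moved right and up since frame N-1
    Vec2 prev   = reproject(pixel, motion);
    std::printf("Frame N-1 position: (%.2f, %.2f)\n", prev.x, prev.y);
    return 0;
}
```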

A jitter offset is simply a sub-pixel 2D shift within the pixel grid that changes every frame.
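In practice, engines usually draw these sub-pixel offsets from a low-discrepancy sequence (the Halton (2, 3) sequence is a common choice) so that successive frames sample slightly different positions inside each pixel. A minimal sketch of generating such offsets:

```cpp
#include <cstdio>

// Radical-inverse function: the basis of the Halton low-discrepancy sequence.
float halton(int index, int base) {
    float result = 0.0f;
    float f = 1.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main() {
    // Generate 8 jitter offsets in the range [-0.5, 0.5] sub-pixels,
    // using bases 2 and 3 for the x and y axes respectively.
    for (int frame = 1; frame <= 8; ++frame) {
        float jitterX = halton(frame, 2) - 0.5f;
        float jitterY = halton(frame, 3) - 0.5f;
        std::printf("frame %d: jitter = (%+.3f, %+.3f)\n", frame, jitterX, jitterY);
    }
    return 0;
}
```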

TAA renders each pixel once per frame, then locates it in the previous frame, blending the two and effectively doubling the samples per pixel. The positions of pixels across frames are calculated using motion vectors, and the jittered samples are blended into a higher-quality image. Here's a quick summary of how TAA works:

First, the position of every pixel in frame N is located in the previous frame (N-1) using the motion vectors provided by the game engine. The color of that pixel in frame N-1 is itself the running average of that pixel across its preceding frames. For frame N, a sample is taken from the jittered location (the blue dot) and merged with the reprojected pixel from frame N-1. The result is then carried forward and blended into frame N+1, and so on.
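Put together, the per-pixel work boils down to reprojecting the history with the motion vector and blending it with the new jittered sample. Below is a simplified single-pixel sketch of that accumulation; the 10% blend weight is a typical but arbitrary choice.

```cpp
#include <cstdio>

struct Color { float r, g, b; };

// Exponential accumulation: each frame keeps most of the reprojected history
// and mixes in a small fraction of the new jittered sample. Over many frames
// this converges toward a super-sampled average of the pixel.
Color taaBlend(Color history, Color sample, float blendWeight) {
    return { history.r + (sample.r - history.r) * blendWeight,
             history.g + (sample.g - history.g) * blendWeight,
             history.b + (sample.b - history.b) * blendWeight };
}

int main() {
    Color history = { 1.0f, 0.0f, 0.0f };      // accumulated result so far
    Color samples[] = { {0.0f, 0.0f, 0.0f},    // new jittered samples, frame by frame
                        {0.2f, 0.2f, 0.2f},
                        {0.1f, 0.1f, 0.1f} };
    for (const Color& s : samples) {
        history = taaBlend(history, s, 0.1f);  // 10% new sample, 90% history
        std::printf("history = (%.3f, %.3f, %.3f)\n", history.r, history.g, history.b);
    }
    return 0;
}
```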

Reprojecting frames isn't as simple as you'd expect. Because objects within a scene are constantly moving, you may end up fetching the wrong sample. The Z-buffer determines which objects are visible, so even a slight change in the viewport can leave you with an occluded sample that doesn't match the pixel in the current frame. This results in ghosting.

In the above image, a thin foreground object is sampled in frames 0 and 2 but completely missing in frames 1 and 3. This is exactly where the validation step comes into play: it looks for invalid samples and rejects them in favor of newer, more useful ones.
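One common way engines implement this validation is neighborhood clamping: the reprojected history color is clamped to the min/max of the current frame's 3x3 neighborhood around the pixel, so stale history that no longer resembles anything nearby is pulled back toward the new frame. Here's a rough single-channel sketch, for illustration only; real shaders work on full color, often in a luma-chroma space.

```cpp
#include <algorithm>
#include <cstdio>

// Clamp the reprojected history value to the range of the current frame's
// 3x3 neighborhood. If the history falls outside that range (e.g. because
// the sample is now occluded or disoccluded), it gets snapped back toward
// values that actually exist in the current frame, suppressing ghosting.
float validateHistory(float history, const float neighborhood[9]) {
    float lo = neighborhood[0];
    float hi = neighborhood[0];
    for (int i = 1; i < 9; ++i) {
        lo = std::min(lo, neighborhood[i]);
        hi = std::max(hi, neighborhood[i]);
    }
    return std::clamp(history, lo, hi);
}

int main() {
    // Current-frame 3x3 neighborhood around the pixel (luminance values).
    float neighborhood[9] = { 0.30f, 0.32f, 0.31f,
                              0.29f, 0.30f, 0.33f,
                              0.31f, 0.30f, 0.32f };
    float staleHistory = 0.95f;  // e.g. a bright thin object that has since moved away
    std::printf("validated history = %.2f\n", validateHistory(staleHistory, neighborhood));
    return 0;
}
```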

DLSS 2.0 combines temporal upscaling with NVIDIA's proprietary neural-network upscaler. The network is trained by comparing high-quality 16K reference images against base-resolution renders, learning how to upscale the latter into the former. DLSS 2 works on the same basic principle as TAA, leveraging temporal feedback to increase the number of samples per frame. However, unlike TAA, it doesn't render a sample for every output pixel in every frame.

Much like checkerboard rendering, DLSS samples different pixels in different frames, using the temporal data from previous frames to fill in the gaps. The trained neural network then blends the current pixels with the accumulated history.
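To see why sampling different pixels each frame still covers the full output over time, consider a 2x upscale: the input has one sample for every four output pixels, so a repeating four-step jitter pattern can visit each output position once every four frames. The toy schedule below is made up for clarity and is not NVIDIA's actual sampling pattern.

```cpp
#include <cstdio>

int main() {
    // 2x upscale: each low-res sample covers a 2x2 block of output pixels.
    // A repeating four-step jitter pattern visits a different position in
    // that block every frame, so after four frames every output pixel has
    // received at least one real sample; history fills the gaps in between.
    const int offsets[4][2] = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };

    for (int frame = 0; frame < 4; ++frame) {
        std::printf("frame %d samples output pixels:", frame);
        // A 2x2 tile of low-res samples mapped onto a 4x4 output region.
        for (int ty = 0; ty < 2; ++ty) {
            for (int tx = 0; tx < 2; ++tx) {
                int outX = tx * 2 + offsets[frame][0];
                int outY = ty * 2 + offsets[frame][1];
                std::printf(" (%d,%d)", outX, outY);
            }
        }
        std::printf("\n");
    }
    return 0;
}
```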

NVIDIA has integrated motion vectors into DLSS 2.0. Motion vectors are also used in temporal anti-aliasing, where the previous frame is reprojected onto the next one. Objects that move from frame to frame can be anti-aliased by comparing the two frames and approximating the resulting data.

In addition to the low-resolution image samples, the network consumes the jitter offsets, motion vectors, depth, and exposure data from previous frames to correctly predict the upscaled image. This previous-frame data accumulates in the upscaler, improving image quality.
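Conceptually, the data handed to the upscaler each frame looks something like the following. This is a hypothetical struct for illustration only; it is not NVIDIA's actual DLSS/NGX integration API.

```cpp
#include <vector>

// Hypothetical bundle of per-frame inputs an ML upscaler consumes.
// Names and layout are illustrative, not NVIDIA's actual interface.
struct UpscalerInputs {
    std::vector<float> lowResColor;    // jittered low-resolution frame (RGB)
    std::vector<float> motionVectors;  // per-pixel 2D motion, in pixels
    std::vector<float> depth;          // per-pixel depth buffer
    float jitterOffsetX;               // sub-pixel jitter applied this frame
    float jitterOffsetY;
    float exposure;                    // scene exposure for consistent tone handling
    int   renderWidth, renderHeight;   // input (render) resolution
    int   outputWidth, outputHeight;   // target (display) resolution
};
```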

The inclusion of TAA-style frame data accumulation gives the convolutional network ample data to create an image that's as good as or better than the native frame. You can find comparisons of TAA and DLSS 2 on the next page.
