AMD today announced its FidelityFX Super Resolution (FSR) technology, touted as the company’s open-source, spatial alternative to NVIDIA’s AI-based (temporal) DLSS upscaling technology. Contrary to popular expectation, it doesn’t appear to be based on a convolutional neural network (at least not one as complex as the one used by DLSS), and it only uses data from the present frame (spatial). In contrast, DLSS 2.0 uses motion vectors and jitter offsets from preceding frames (temporal) in conjunction with a highly trained convolutional network.
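To make the spatial-versus-temporal distinction concrete, here is a minimal, illustrative sketch of the two data flows. It is not AMD’s or NVIDIA’s actual code: the function names, the nearest-neighbour filter, and the simple blend are stand-ins for far more sophisticated passes (FSR uses an edge-adaptive spatial filter, DLSS a trained network).

```python
import numpy as np

def spatial_upscale(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Spatial-only upscaling: every output pixel is reconstructed purely from
    the current low-resolution frame (nearest-neighbour here for brevity)."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_upscale(frame: np.ndarray, history: np.ndarray,
                     motion_vectors: np.ndarray, scale: int = 2,
                     alpha: float = 0.1) -> np.ndarray:
    """Temporal upscaling in the DLSS 2.0 mould: the previous high-resolution
    result is reprojected with per-pixel motion vectors and blended with the
    upscaled current frame, accumulating detail across frames instead of
    relying on a single one. Only the data flow is shown here."""
    upscaled = spatial_upscale(frame.astype(np.float32), scale)
    h, w = upscaled.shape[:2]
    ys, xs = np.indices((h, w))
    # Reproject: sample the history buffer at the position each pixel came from.
    prev_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    prev_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]
    return alpha * upscaled + (1.0 - alpha) * reprojected
```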
This means that although AMD’s FSR will be easier to implement across different titles and platforms, image quality may suffer from a lack of detail and generally sub-par reconstruction compared to DLSS. Furthermore, the fact that the latter is already integrated into popular game engines such as Unity and Unreal means that adoption isn’t really an issue for it anymore. At the end of the day, we’ll likely see DLSS patched into most big-budget AAA games and those built on Unreal/Unity, but missing from indie titles developed with limited resources. This is where FSR can shine, as its integration is supposedly quite simple.
Out of the two provided images, only one (below) can be used for a viable comparison between native and FSR, as the landscape is largely similar on both sides, which isn’t the case with the above shot.
As you can see in the below shot, there is a fair bit of reduction in the LOD of the ground textures. These kinds of textures are generally quite easy to upscale, and most TAA-based upsamplers do a pretty good job with them. Unfortunately, AMD’s FSR does not: a lot of detail is lost, much like with other clamping-based upsamplers.
On closer inspection, there’s a fair bit of pixelation evident on the native side, while the FSR-rendered side is quite soft, another indication of the clamping used in spatial upscaling technologies.
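The “clamping” referred to here is, broadly, the practice of limiting each reconstructed pixel to the colour range of its local low-resolution neighbourhood so that out-of-range values (often artifacts) are suppressed. The sketch below is a rough, hypothetical illustration of why this also discards fine detail and produces a soft image; it is not a reconstruction of FSR’s actual filter.

```python
import numpy as np

def neighbourhood_clamp(upscaled: np.ndarray, low_res: np.ndarray,
                        scale: int = 2) -> np.ndarray:
    """Illustrative neighbourhood clamp: each upscaled pixel is limited to the
    min/max colour of the 3x3 low-resolution neighbourhood it maps to.
    Any detail outside that local range is thrown away, which is why
    clamping-heavy upsamplers tend to look soft."""
    h, w = upscaled.shape[:2]
    out = upscaled.copy()
    for y in range(h):
        for x in range(w):
            ly, lx = y // scale, x // scale
            y0, y1 = max(ly - 1, 0), min(ly + 2, low_res.shape[0])
            x0, x1 = max(lx - 1, 0), min(lx + 2, low_res.shape[1])
            window = low_res[y0:y1, x0:x1]
            out[y, x] = np.clip(upscaled[y, x],
                                window.min(axis=(0, 1)),
                                window.max(axis=(0, 1)))
    return out
```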
In the below image, the vegetation (both the pink bloom and the greenery at the bottom) sees a drastic loss in detail, as seen with other clamping-based upscaling techniques. The rest of the image also looks quite blurred and faded.
Overall, this seems to be a lackluster upscaling method (at least in the above shots) despite being the “quality” preset and conferring a moderate performance boost of 40%. In comparison, the quality preset of DLSS 2.0 is very efficient while also offering a larger performance gain. Furthermore, we only have static shots of FSR, no footage, which means we still don’t know how well it performs in motion. That is where ghosting and other artifacts occur, and where temporal upscaling techniques have the advantage. Here’s a quick look at DLSS 2.0:
Here’s an example of some commonly used upscaling technologies that rely on spatial clamping, and the resulting loss in quality and detail:
Related:
- Control Gets NVIDIA DLSS 2.0; Offers Better Visual Quality than Native
- NVIDIA DLSS 2.0 vs PS4 Checkerboard Rendering vs Traditional Upscaling: Technical Comparison