New details about DLSS 5 technology have further fueled discussion in the gaming community, because it turns out that the system does not directly affect the rendering process at all, but instead functions as an advanced AI filter applied over already rendered frames.
According to information shared by Daniel Owen after speaking with NVIDIA engineer Jacob Freeman, DLSS 5 takes only the rendered frame and motion vector data as input. In other words, the technology does not change the geometry of the scene or the way the engine renders objects.
How DLSS 5 actually works
Freeman states that the materials and details “build on an already rendered frame,” meaning that the AI model analyzes the existing image and generates a new version based on recognized patterns. The geometry remains the same, but the visual representation is modified through the generative model.
In practice, this means that DLSS 5 does not speed up rendering in the classic sense, nor does it improve performance through deeper integration with the graphics engine; instead, it reinterprets the finished frame, using information about lighting, textures and motion to generate an “enhanced” version of the image.
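The pipeline described above can be sketched conceptually. The snippet below is a minimal illustration, not NVIDIA's actual model or API: it stands in a trivial contrast boost for the generative model, but it preserves the key property the article describes, namely that the filter sees only the finished frame and per-pixel motion vectors and never touches scene geometry or the render path. All function and parameter names here are hypothetical.

```python
import numpy as np

def ai_enhance_frame(frame, motion_vectors, intensity=1.0):
    """Hypothetical post-process 'enhancement' operating only on the
    already-rendered frame and motion vectors (shapes HxWx3 and HxWx2)."""
    # Stand-in for the generative model: a simple contrast boost
    # around the frame's mean color.
    mean = frame.mean(axis=(0, 1), keepdims=True)
    enhanced = mean + (frame - mean) * (1.0 + 0.2 * intensity)
    # Motion vectors would normally guide temporal stability; here their
    # magnitude simply damps the effect on fast-moving pixels.
    speed = np.linalg.norm(motion_vectors, axis=-1, keepdims=True)
    damping = 1.0 / (1.0 + speed)
    out = frame + (enhanced - frame) * damping
    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(4, 4, 3)   # an already-rendered RGB frame
mv = np.zeros((4, 4, 2))          # per-pixel motion vectors
result = ai_enhance_frame(frame, mv)
```

Note that the engine's output is the sole input: nothing here reaches back into the renderer, which is exactly why such a filter cannot speed up the rendering itself.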
That is why some analysts compare DLSS 5 to the advanced filters found on social networks, only with a far more complex AI model behind it.
Control is there, but limited
NVIDIA claims that developers have control over the intensity of the effect, color correction, contrast and other parameters, as well as the ability to exclude certain parts of the scene from processing. However, there is no way to directly “feed” or adjust the AI model at the artistic-input level, which leaves the impression that the system’s behavior is largely fixed.
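The shape of that limited control can be illustrated with a small sketch. Everything below is hypothetical, assumed only from the parameters the article lists (intensity, color correction, contrast, and per-region exclusion); it is not NVIDIA's real interface. The point it demonstrates is that developers tune dials and mask regions out, while the model itself stays a black box.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FilterSettings:
    """Hypothetical developer-facing dials; the model itself is not exposed."""
    intensity: float = 1.0
    color_correction: float = 0.0
    contrast: float = 1.0

def apply_with_exclusion(frame, enhanced, exclusion_mask):
    """Blend the AI-enhanced frame with the original, keeping the engine's
    untouched pixels wherever the developer excluded a region.
    exclusion_mask is a boolean HxW array; True means 'do not process'."""
    mask = exclusion_mask[..., None].astype(frame.dtype)
    return enhanced * (1.0 - mask) + frame * mask

frame = np.zeros((4, 4, 3))
enhanced = np.ones((4, 4, 3))       # pretend output of the AI filter
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                   # exclude one region from processing
out = apply_with_exclusion(frame, enhanced, mask)
```

Nothing in this interface lets the developer steer what the model generates inside the processed regions, which matches the article's observation that the system's behavior is largely fixed.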
This further reinforces concerns among part of the gaming community that DLSS 5 may alter a game’s original art style, since the player ends up seeing the AI model’s interpretation rather than the engine’s original output.
It is important to note that this is an early version of the technology, which the NVIDIA team itself confirmed, so it is expected that the final implementation will bring additional improvements and more precise control, reports GamesRadar.