Jensen Huang, founder and CEO of NVIDIA, used his opening keynote at the GTC 2026 conference on March 16 to make an announcement that immediately divided opinion in the gaming community. Twenty-five years after NVIDIA invented the programmable shader, the company is once again revolutionizing computer graphics. DLSS 5 was described by Huang as a “GPT moment for graphics,” positioning the technology as the most important real-time leap since ray tracing debuted in 2018.
Unlike previous versions of DLSS, which focused on resolution upscaling and frame generation, DLSS 5 introduces a neural rendering model that infuses pixels with photorealistic lighting and materials in real time. The system takes each frame's color buffer and motion vectors as input and uses an AI model to enhance lighting and materials while remaining tied to the scene's original 3D content. The result is consistent from frame to frame, a fundamental requirement for games and a key difference from offline AI video systems, which generate content without predictable consistency.
In practical terms, this means that DLSS 5 applies machine learning to the game's lighting model, not just to upscaling or frame interpolation. NVIDIA calls it the next stage of rendering after upscaling and ray tracing. The system is specially trained to understand the complex semantics of a scene, such as hair, skin, fabrics, and ambient lighting conditions, so it can generate effects such as subsurface scattering on skin, sheen on fabric, and light interactions with hair.
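NVIDIA has not published how the pass works internally, but the description above (color buffer plus motion vectors in, enhanced pixels out, anchored to the original frame and stable over time) can be illustrated with a minimal sketch. Everything here is hypothetical: `enhance` is a trivial stand-in for the proprietary neural model, and the blend weights are invented for illustration.

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame's pixels along per-pixel motion vectors so it
    lines up with the current frame (a standard temporal-reuse step)."""
    h, w, _ = prev_frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(ys - motion_vectors[..., 1].astype(int), 0, h - 1)
    src_x = np.clip(xs - motion_vectors[..., 0].astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

def enhance(frame):
    """Placeholder for the neural model: a simple gamma lift. It only marks
    where learned lighting/material inference would run in a real system."""
    return np.clip(frame ** 0.9, 0.0, 1.0)

def dlss5_like_pass(frame, motion_vectors, prev_output, blend=0.7, history=0.2):
    """One frame of the described flow: enhance the current color buffer,
    keep it anchored to the original pixels via `blend`, and mix in the
    reprojected previous output for frame-to-frame stability."""
    enhanced = enhance(frame)
    # Stay tied to the scene's original content rather than replacing it.
    anchored = blend * enhanced + (1.0 - blend) * frame
    if prev_output is None:  # first frame: no history to reuse
        return anchored
    warped = reproject(prev_output, motion_vectors)
    return (1.0 - history) * anchored + history * warped
```

The history term is what distinguishes this from offline video generators: each output depends on the previous one through the motion vectors, which is what keeps results predictable frame to frame.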
As for game developers, NVIDIA claims that safeguards are built into the architecture itself. The system provides developers with detailed controls such as intensity and color correction. Artists can adjust blending, contrast, saturation, and gamma, as well as determine where and how enhancements are applied. Developers can also mask specific objects or areas to exclude them from processing.
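No public API exists yet, so the controls NVIDIA describes (blend, contrast, saturation, gamma, and per-region masks) can only be sketched. The function below is purely illustrative; its name and parameters are assumptions, not NVIDIA's interface.

```python
import numpy as np

def apply_artist_controls(original, enhanced, mask=None,
                          blend=1.0, contrast=1.0, saturation=1.0, gamma=1.0):
    """Post-process an AI-enhanced frame with artist controls of the kind
    described: gamma, contrast, saturation, a blend back toward the
    original frame, and an optional boolean mask that excludes pixels
    from enhancement entirely. All names here are hypothetical."""
    out = np.clip(enhanced, 0.0, 1.0) ** gamma
    # Contrast pivots around mid-grey.
    out = np.clip(0.5 + contrast * (out - 0.5), 0.0, 1.0)
    # Saturation interpolates between per-pixel luminance and full color.
    luma = out.mean(axis=-1, keepdims=True)
    out = np.clip(luma + saturation * (out - luma), 0.0, 1.0)
    # Blend the adjusted enhancement with the untouched frame.
    out = blend * out + (1.0 - blend) * original
    if mask is not None:
        # Masked-out pixels keep the original rendering.
        out = np.where(mask[..., None], out, original)
    return out
```

With `blend=0.0` or an all-False mask the original frame passes through unchanged, which is the behavior the masking and intensity controls are meant to guarantee.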
Despite these explanations, the community's reaction was fierce and swift. Demonstrations of DLSS 5 showed dramatic changes to game characters, which many commentators interpreted as the imposition of an AI aesthetic that replaces the original visual identity the developers had envisioned. PC Gamer directly asked whether DLSS 5 in effect replaces the artistic expression of games with an AI "beauty filter", and Will Smith, co-founder of the Tested platform, noted a paradox on Bluesky: NVIDIA introduced ray tracing as a tool for precise control over light, and now uses that same output as input that the AI reworks according to its own standards of photorealism.
The minimum hardware requirements and performance overhead of the model have not yet been published, and the company says the model is still being optimized. Early NVIDIA demos ran on systems with two RTX 5090 GPUs, one of which was dedicated solely to DLSS 5 processing. The launch version is supposed to run on a single card, but we'll see how that holds up in practice.
DLSS 5 is coming this fall with support for the RTX 50 series of graphics cards, and the list of supported games is already impressive: Resident Evil: Requiem, Starfield, Assassin's Creed Shadows, Hogwarts Legacy, The Elder Scrolls IV: Oblivion Remastered, and a dozen other titles in collaboration with Bethesda, Capcom, Ubisoft, and Warner Bros. Games. Whether the technology proves to be a tool that strengthens a developer's vision or a filter that replaces it will only become clear under real conditions outside controlled demo environments, reports VideoCardz.
