Nvidia's AI neural rendering reduces memory usage by up to 85%

At its GDC presentation, Nvidia showed that its new generation of AI-based rendering goes well beyond DLSS 5, delivering concrete benefits such as drastically lower memory consumption.

Instead of using AI only as a final post-processing “filter”, Nvidia introduces neural networks directly into the rendering pipeline, where they take on tasks such as texture and material processing.

Neural rendering reduces VRAM and speeds up games

The most concrete example is Neural Texture Compression (NTC): in a demonstration, VRAM usage dropped from around 6.5 GB to just 970 MB, a saving of roughly 85%, while image quality remained almost identical.
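The reported saving can be sanity-checked with simple arithmetic. A minimal sketch, using only the 6.5 GB and 970 MB figures quoted above (the exact demo methodology is not given in the article):

```python
# Sanity-check the VRAM saving reported for the NTC demonstration.
before_gb = 6.5   # conventional texture memory, as reported
after_gb = 0.97   # ~970 MB with Neural Texture Compression

saving = 1 - after_gb / before_gb
print(f"VRAM saving: {saving:.1%}")  # prints "VRAM saving: 85.1%"
```

The result lines up with the "approximately 85%" figure in the article.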


In addition, Nvidia introduced the concept of Neural Materials, which compresses complex materials into a significantly smaller data set that small neural networks then reconstruct at render time. In tests, the number of material channels was reduced from 19 to 8, with rendering speedups of up to 7.7x at 1080p.
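The channel reduction translates directly into less per-texel data to store and fetch. A small illustrative calculation from the figures above (note that the 7.7x speedup is a measured result from Nvidia's tests, not something derivable from the channel counts alone):

```python
# Illustrative arithmetic for the Neural Materials channel reduction.
channels_before = 19  # conventional material representation
channels_after = 8    # compressed latent representation

ratio = channels_after / channels_before
print(f"Channels kept: {ratio:.0%}")          # prints "Channels kept: 42%"
print(f"Channel reduction: {1 - ratio:.0%}")  # prints "Channel reduction: 58%"
```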

Unlike the DLSS approach, which operates on the final rendered image, these technologies optimize the rendering process itself, reducing memory requirements, speeding up processing and enabling more detailed textures without raising hardware requirements.

This approach could have a major impact on the gaming industry: smaller installation files, faster updates and better utilization of graphics cards. Although reactions to DLSS 5 have been divided over its potential impact on the visual style of games, Nvidia is showing here that AI can be a tool for optimization and performance, not just for changing how graphics look, reports VideoCardz.
