Nvidia’s AI-generated photo-based graphics will blow your tiny mind
If you thought Nvidia’s latest DLSS 3 frame-rate speedifyin’ technology was pretty trick, you ain’t, as they say, seen nothin’. May we introduce you to NeRFs, or Neural Radiance Fields, a terrifyingly clever AI-accelerated method for generating full 3D scenes from a handful of photos.
The idea is conceptually simple. Take a few 2D images of something. Run them through some AI algorithms. Render the thing or things — or an entire complex scene — in full 3D. And in real time.
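Under the hood, that render step is classic volume rendering: a trained network maps any 3D point to a colour and a density, and each pixel comes from sampling along a camera ray and compositing those samples. Here's a minimal sketch in plain NumPy — `toy_field` is a stand-in for the trained network, and every name and number here is illustrative, not anything from Nvidia's actual code.

```python
import numpy as np

def toy_field(points):
    """Stand-in for a trained radiance field: maps 3D points to (rgb, density).

    Here a sphere of radius 1 at the origin is dense and emits red."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 10.0, 0.0)           # opaque inside the sphere
    rgb = np.broadcast_to([1.0, 0.0, 0.0], points.shape).copy()
    return rgb, density

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """NeRF-style volume rendering: sample along the ray, then
    alpha-composite colours weighted by accumulated transmittance."""
    t = np.linspace(near, far, n_samples)
    delta = t[1] - t[0]
    points = origin + t[:, None] * direction            # (n_samples, 3)
    rgb, sigma = toy_field(points)
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                             # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)         # final pixel colour

# A ray through the sphere renders red; one that misses stays black.
hit = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.array([0.0, 3.0, -2.0]), np.array([0.0, 0.0, 1.0]))
```

Swap `toy_field` for a trained neural network and fire one ray per pixel and you have the skeleton of a NeRF renderer. The clever part, of course, is training that network from nothing but the input photos.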
It’s similar to photogrammetry, the photo-scanning approach to creating 3D scenes, but with added AI goodness. That’s the sales pitch, at least. NeRFs aren’t unique to Nvidia. But the company is doing some particularly interesting things with NeRFs that are relevant to PC gaming as opposed to movie making, which is where a lot of the noise around NeRFs has been so far.
You can check out this in-depth video from Corridor Crew from earlier this year showing just how incredible NeRF technology is for creating photorealistic video from just a few photos. But a word of warning: once you’ve seen it, it’ll have you doubting whether anything you see in video or film is real or re-rendered with NeRFs. It is spooky.
Indeed, there are even NeRF-based apps available for Android and Apple smartphones, such as Luma AI, so you can try the technology out for yourself.
Getting back to the PC, apart from the whole process being freakishly clever, it offers a number of immediate and obvious advantages. Instead of needing detailed 3D models and lots of bandwidth-heavy high-res textures, all it takes to render a complex scene is a few photos or images.
Nvidia is taking that to new extremes with so-called “compact” NeRFs that require in the region of 100 times less data to render a scene. The benefits in terms of bandwidth and storage are obvious enough.
However, previous implementations of NeRF technology required significant time, typically hours or more per scene, to train the AI model. Nvidia’s “instant” NeRFs, by contrast, can do the whole process in seconds.
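The speedup in Nvidia's instant NeRFs (published in the research literature as "Instant NGP") comes largely from a multiresolution hash encoding: instead of pushing raw coordinates through one big, slow network, each 3D point looks up small trainable feature vectors from hash tables at several grid resolutions, leaving only a tiny network to finish the job. Here's a toy sketch of that lookup, with made-up sizes and random tables standing in for learned ones — nothing like Nvidia's actual CUDA implementation.

```python
import numpy as np

# Large primes from the Instant NGP spatial hash (the first is deliberately 1).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(grid_coords, table_size):
    """Spatial hash: XOR the integer grid coordinates scaled by
    large primes, then wrap the result into the table."""
    h = np.zeros(grid_coords.shape[0], dtype=np.uint64)
    for i in range(3):
        h ^= grid_coords[:, i].astype(np.uint64) * PRIMES[i]
    return h % np.uint64(table_size)

def encode(points, n_levels=4, table_size=2**14, n_features=2, base_res=16):
    """Concatenate hashed features from coarse-to-fine grid levels.

    The tables here are random stand-ins; in training they are learned."""
    rng = np.random.default_rng(0)
    tables = rng.normal(size=(n_levels, table_size, n_features))
    feats = []
    for level in range(n_levels):
        res = base_res * 2**level                  # resolution doubles per level
        idx = hash_coords(np.floor(points * res).astype(np.int64), table_size)
        feats.append(tables[level][idx])           # nearest-corner lookup, no interp
    return np.concatenate(feats, axis=-1)          # (N, n_levels * n_features)

# Each 3D point becomes a short feature vector, cheap to look up and to train.
features = encode(np.random.rand(8, 3))
print(features.shape)  # (8, 8)
```

The design trade is that most of the "knowledge" lives in those cheap lookup tables rather than in network weights, which is a big part of why training collapses from hours to seconds.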
All of this is obviously pointing in a pretty tantalising direction, namely using NeRFs for real-time game rendering. Imagine the potential bandwidth and performance savings of replacing all that model, texture and lighting data for a game scene with a few images. And then add the kicker that the results look more realistic than conventional game engines and art.
Listen, we’re not saying this stuff is just around the corner. But NeRF technology is evolving incredibly fast. The concept only emerged around two years ago and already it’s generating incredible results. Whether it will revolutionise PC gaming, well, time will tell. While we wait, you can take a deep dive into Nvidia’s research paper on the subject here.