Those who create 3D content know how difficult it is to recreate all kinds of objects in virtual worlds. Now NVIDIA wants to ease that task with its 3D MoMa technology, which uses artificial intelligence to enable so-called inverse rendering.
These systems extract 3D objects from 2D images. The technique uses a series of photos taken from different perspectives, working like a simplified “3D scan,” and the results, at least according to NVIDIA, are impressive.
Play it again, Sam
This technology was presented at a recent computer vision conference in New Orleans, where NVIDIA showed off this inverse rendering process, which harnesses the power of the GPU “to quickly produce 3D objects that creators can import, edit, and extend without limitation of existing tools.”
The NVIDIA demo used that framework to reconstruct musical instruments such as a trumpet, a saxophone, and a clarinet. After photos are taken from different perspectives, the process combines them all to build a triangle mesh that recreates the design as an initial 3D model.
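NVIDIA has not published the code behind this demo, but the core idea of inverse rendering can be sketched conceptually: start with a guess for the object's properties, render it, compare the result against the real photos, and adjust the guess to shrink the difference. The toy below illustrates that loop in one dimension, recovering a single made-up "albedo" value from synthetic observations; the renderer, numbers, and variable names are all illustrative, not NVIDIA's pipeline.

```python
import numpy as np

# Toy setup: each "view" observes the object under a different lighting intensity.
lightings = np.array([0.5, 1.0, 1.5, 2.0])
true_albedo = 0.7                        # the unknown material property to recover
observations = true_albedo * lightings   # stand-ins for the photos

def render(albedo, lighting):
    """Extremely simplified forward renderer: pixel value = albedo * lighting."""
    return albedo * lighting

# Inverse rendering: start from a guess and descend the photometric error.
albedo = 0.1
lr = 0.05
for _ in range(500):
    residual = render(albedo, lightings) - observations   # per-view error
    grad = 2 * np.mean(residual * lightings)              # gradient of mean squared error
    albedo -= lr * grad

print(round(albedo, 3))  # converges to ~0.7, the true value
```

Real systems like 3D MoMa do the same thing at vastly larger scale, optimizing mesh vertices, materials, and lighting through a differentiable renderer instead of a single scalar.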
That model is compatible with existing modeling tools, and the reconstruction includes both the 3D mesh and its associated materials, textures, and lighting.
In a video, the NVIDIA team shows how, after reconstructing these objects, it imported them into NVIDIA Omniverse, its simulation platform, to edit them.
The behavior of these objects is also correct: the development team verified it with the Cornell box, a well-known graphics test that evaluates rendering quality.
Thanks to it, the team confirmed that, for example, the objects reflected light correctly according to the material with which they were modeled. The proposal is one more milestone in NVIDIA's ongoing effort to demonstrate practical use cases for its graphics cards and its AI algorithms.