Researchers present a system that converts 2D images into 3D scenes without the help of artificial intelligence

The ability to take a few photos and quickly turn them into navigable 3D scenes could soon become a practical reality.

This is thanks to the development of a new technology that can reconstruct photorealistic 3D worlds in just minutes, without the help of artificial intelligence.

A novel 3D rendering technology

Two years ago, researchers at the University of California, Berkeley, developed NeRF, a system that could convert flat images into 3D using neural networks (systems of computing nodes that act like neurons in the human brain to recognize patterns in data), delivering a photorealistic experience far superior to other technologies available at the time.

Perfecting that technology, researchers from the same university have presented Plenoxels, considered the evolution of NeRF. It surpasses its predecessor in every respect, from speed to image quality, expanding its potential for consumer, industrial and scientific applications.

“NeRF is great, but it takes a whole day to recover a 3D scene,” said Angjoo Kanazawa, a professor of electrical engineering and computer science who is part of its development team. “Plenoxels, however, make training fast and convenient by getting rid of neural networks,” she added.

With NeRF, the only input required to optimize the 3D representation is a set of images with known camera poses. Using classic volume rendering techniques, NeRF can then render photorealistic views of complex scenes.
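To make the idea concrete, here is a minimal sketch of classic volume rendering along a single camera ray. This is an illustrative example, not the authors' code, and the sample densities, colors and spacings below are placeholder inputs.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite color samples along one camera ray.

    densities: (N,) volume density at each sample point
    colors:    (N, 3) RGB color at each sample point
    deltas:    (N,) distance between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # The pixel color is the transmittance-weighted sum of sample colors
    return (weights[:, None] * colors).sum(axis=0)

# Example with three samples along a ray: empty space, then dense matter
sigma = np.array([0.0, 5.0, 5.0])
rgb = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(render_ray(sigma, rgb, np.full(3, 0.1)))
```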

Perfecting an old project, dispensing with AI

After a series of experiments, the researchers wondered if the training and rendering processes could be done without neural networks. They found that this was possible with Plenoxels.

In the computer graphics hierarchy, Plenoxels sit at the pinnacle of dimensionality: pixels are 2D image elements; voxels are 3D volume elements; and Plenoxels (plenoptic voxels) are volume elements whose color changes depending on the angle from which they are viewed.

A Plenoxel grid is made up of little blocks, much like the ones used to build a Minecraft world, except that Plenoxels offer another level of dimensionality: view-dependent color. As seen in the researchers' demonstration, when you zoom out and look at all these blocks at once, you see a high-resolution 3D world. Up close, however, you can only make out small blocks that change color.
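To illustrate view-dependent color, here is a small sketch of how a single plenoxel's color could vary with viewing direction using the first two spherical-harmonic bands, the kind of representation the Plenoxels work relies on. The coefficient values below are hypothetical, chosen only to show the effect.

```python
import numpy as np

# First two real spherical-harmonic bands (one common convention; signs
# vary between sources). Plenoxels store such coefficients per voxel so
# that color can change with viewing direction.
def sh_basis(d):
    x, y, z = d                       # unit viewing direction
    return np.array([0.2820948,       # l=0: constant (view-independent) term
                     0.4886025 * y,   # l=1: terms that vary with direction
                     0.4886025 * z,
                     0.4886025 * x])

def plenoxel_color(sh_coeffs, view_dir):
    """sh_coeffs: (3, 4) per-channel coefficients; view_dir: unit 3-vector."""
    return sh_coeffs @ sh_basis(view_dir)

# Hypothetical coefficients: a voxel that looks redder when seen from +x
coeffs = np.array([[0.8, 0.0, 0.0, 0.5],   # R
                   [0.3, 0.0, 0.0, 0.0],   # G
                   [0.3, 0.0, 0.0, 0.0]])  # B
print(plenoxel_color(coeffs, np.array([1.0, 0.0, 0.0])))   # reddish
print(plenoxel_color(coeffs, np.array([-1.0, 0.0, 0.0])))  # far less red (clamp to [0, 1] in practice)
```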

Rather than representing a given point in space with a single block or voxel, a process called trilinear interpolation takes a weighted average of the neighboring blocks. This smooths the radiance field, improving the resolution of the resulting 3D rendering without the time cost of neural networks.
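Here is a minimal sketch of trilinear interpolation on a dense grid of per-voxel values. It assumes points are given in grid coordinates and uses a toy density grid; it is illustrative, not the project's implementation.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a value stored on a 3D grid at point p.

    grid: (X, Y, Z) array of per-voxel values (e.g. density)
    p:    continuous (x, y, z) position in grid coordinates
    """
    i = np.floor(p).astype(int)   # indices of the lower corner voxel
    fx, fy, fz = p - i            # fractional offsets in [0, 1)
    x, y, z = i
    # Blend the 8 surrounding voxels, each weighted by its proximity
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c += w * grid[x + dx, y + dy, z + dz]
    return c

# Example: a point halfway between an empty and a full voxel along x
g = np.zeros((2, 2, 2))
g[1, :, :] = 1.0
print(trilinear(g, np.array([0.5, 0.0, 0.0])))  # -> 0.5
```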

“By making some adjustments, we were able to remove the neural network and really speed up the training procedure,” said Matthew Tancik, a Ph.D. student in the Kanazawa lab and co-author of both the original NeRF paper and the new study on Plenoxels. “I did not expect these methods to be so fast. Instead of taking a full day, it can now take just a few minutes to create these highly photorealistic renderings, making them more practical for a variety of applications,” he noted.

The researchers highlight the technology's wide range of potential uses: immersive real estate tours, the capture of personal memories, and the generation of more complex virtual and augmented reality experiences. They also envision professional applications, from technologies embedded in robots and automobiles to the inspection of ecosystems, such as calculating the density of trees in a given territory.

Kanazawa noted that while this study showed that the Plenoxels-based technology does not rely on neural networks to turn photos into an explorable 3D world, AI might still be needed for specific tasks that do require learning. “I think the next interesting thing will be to build learning into this process, so you can do similar things with a lot less images, a lot less observations,” Kanazawa said. “We use our previous experience of the world to perceive new images. This is where real machine learning comes into play. And now that we’ve made the 3D rendering process more practical, we can start thinking about it,” she said of the project's future.
