Improved 3D perception: Four-legged robots conquer complex terrain

A team of researchers from the University of California at San Diego has developed an innovative model that improves the 3D perception of quadrupedal robots, allowing them to navigate more difficult and diverse terrain with ease.

The Model: Translation from 2D to 3D

The main mechanism of the model is based on a depth camera, mounted on the robot’s head, which captures two-dimensional (2D) images and translates them into three-dimensional (3D) space. In simple terms, this camera acts as the “eyes” of the robot, allowing it to “see” and understand its environment in greater detail and with greater accuracy.
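
As a rough illustration, this 2D-to-3D step can be pictured as back-projecting each depth pixel through a pinhole camera model. The sketch below is a minimal, generic version of that idea, not the team’s actual code; the intrinsics (fx, fy, cx, cy) are illustrative values, not the robot’s real calibration:

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth image (meters) into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall one meter away, seen through a 4x4 depth map
points = depth_to_points(np.ones((4, 4)), fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```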

Information Extraction and Movement

The data obtained from the 2D images is used to extract 3D information, including the movements of the robot’s legs. This information is compared with that of previous frames to compute the 3D transformation between the past and the present, essentially giving the robot a “short-term memory” of its environment.
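
One way to picture this frame-to-frame comparison is as composing camera poses: if each frame carries an estimated pose, the relative transform maps points seen in the past into the current frame. This is a hedged sketch of that geometry under the assumption of known camera-to-world poses, not the paper’s actual estimation method:

```python
import numpy as np

def relative_transform(pose_prev: np.ndarray, pose_curr: np.ndarray) -> np.ndarray:
    """4x4 transform taking points from the previous camera frame
    into the current one, given camera-to-world poses for both frames."""
    return np.linalg.inv(pose_curr) @ pose_prev

def transform_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

# If the camera moved 0.1 m forward along z between frames,
# points it saw before now appear 0.1 m closer.
prev_pose = np.eye(4)
curr_pose = np.eye(4); curr_pose[2, 3] = 0.1
T = relative_transform(prev_pose, curr_pose)
print(transform_points(T, np.array([[0.0, 0.0, 1.0]])))  # [[0. 0. 0.9]]
```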

The Importance of Short-Term Memory

This “short-term memory” is essential for controlling the robot’s movements. It allows the robot to remember what it has “seen” and the actions it has taken in the past, and to use that information to plan and execute its future moves.
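
A toy version of such a memory is a rolling buffer of the last few observations, which the controller consumes as a stacked window. This framing is an assumption for illustration; the window length K and the per-frame contents are placeholders, not values from the paper:

```python
from collections import deque
import numpy as np

K = 5                     # window length; a placeholder, not a value from the paper
memory = deque(maxlen=K)  # the oldest frame is discarded automatically

def remember(frame_features: np.ndarray) -> np.ndarray:
    """Store the newest frame and return the stacked K-frame window."""
    if not memory:                    # first call: fill the window
        memory.extend([frame_features] * K)
    else:
        memory.append(frame_features)
    return np.stack(list(memory))     # shape: (K, feature_dim)

window = remember(np.zeros(128))  # the controller sees the whole recent past
print(window.shape)  # (5, 128)
```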

Versatility in Environments

The increased 3D perception, combined with proprioception (the robot’s sense of the movement and position of its own legs), makes the robot more versatile and capable of navigating terrain that was previously too challenging, such as stairs and rocky paths.
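
Conceptually, this fusion amounts to combining scene features with the robot’s own joint state before they reach the control policy. The feature sizes below (128 visual features, 12 joints for a quadruped) are assumptions for illustration; the paper’s actual architecture may differ:

```python
import numpy as np

def policy_input(visual_features: np.ndarray,
                 joint_positions: np.ndarray,
                 joint_velocities: np.ndarray) -> np.ndarray:
    """Concatenate 3D scene features with the robot's own joint state."""
    proprioception = np.concatenate([joint_positions, joint_velocities])
    return np.concatenate([visual_features, proprioception])

obs = policy_input(np.zeros(128),   # e.g. encoded point-cloud features
                   np.zeros(12),    # 12 joint angles for a quadruped
                   np.zeros(12))    # matching joint velocities
print(obs.shape)  # (152,)
```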

Limitations and the Future

The model still has its limitations: it does not yet guide the robot toward a specific goal or destination. However, the research team plans to incorporate more planning techniques and complete the navigation pipeline in future work.

This project stands out for its innovation and its potential to open new paths in how robots interact with their environment. In the future, this advance could have significant implications for robotics, from space exploration to rescue operations in disaster areas.

More details are available at rchalyang.github.io/NVM
