Advances in estimation of body posture in Virtual Reality

Meta AI Research, a leading artificial intelligence research entity, has unveiled advances in AI-powered body posture estimation for virtual reality (VR) systems. In a new study, researchers tackle the challenge of accurately tracking body parts beyond the head and hands in today's VR systems. They have developed reinforcement learning models that can plausibly estimate full-body posture using only the tracking data from a Quest 2 headset and its controllers.

Estimating body posture in VR has long been a considerable challenge. Current systems track only the position of the head and hands; the positions of the elbows, torso, and legs must be inferred with algorithms such as inverse kinematics (IK). However, IK is often inaccurate for the elbows and almost never works correctly for the legs, because a single configuration of the head and hands admits many possible body poses. Consequently, many VR apps show only the hands or focus on the upper body.
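The ambiguity that trips up IK can be seen even in the simplest case: a planar two-link "arm". The following illustrative sketch (not Meta's method; link lengths and the target point are arbitrary) computes the analytic IK solutions for one hand position and finds two distinct elbow poses, "elbow-up" and "elbow-down". With a full body and only three tracked points, the space of valid poses is far larger still.

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.3):
    """Analytic IK for a planar two-link arm with link lengths l1, l2.

    Returns every (shoulder, elbow) angle pair that places the
    end effector at the target (x, y) -- typically two solutions.
    """
    d2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c) > 1:
        return []  # target is out of reach
    elbow = math.acos(c)
    solutions = []
    for e in (elbow, -elbow):  # mirrored 'elbow-up' / 'elbow-down' poses
        shoulder = math.atan2(y, x) - math.atan2(
            l2 * math.sin(e), l1 + l2 * math.cos(e)
        )
        solutions.append((shoulder, e))
    return solutions

# One hand position, two valid elbow configurations:
for shoulder, elbow in two_link_ik(0.4, 0.2):
    print(f"shoulder={shoulder:+.3f} rad, elbow={elbow:+.3f} rad")
```

Both returned poses reach exactly the same target, which is why a tracker that sees only the hand cannot tell them apart.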

Researchers at Meta AI Research have addressed this limitation by developing QuestSim, a reinforcement learning model that estimates a plausible full-body posture using only the tracking data from a Quest 2 headset and its controllers. The model shows a close correspondence between the movements of the virtual avatar and the real movements of the user. According to the researchers, QuestSim's accuracy and stability outperform tracking devices based on inertial measurement units (IMUs), such as Sony's Mocopi, which contain only accelerometers and gyroscopes.
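At a high level, a policy like QuestSim's maps a short history of headset and controller poses to joint commands for a physically simulated avatar. The sketch below only illustrates that interface; the dimensions, the history length, and the stand-in linear "policy" are assumptions for demonstration, not the paper's actual network or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes, for illustration only (not the paper's exact design):
# each tracked device reports position (3) + orientation quaternion (4).
N_DEVICES = 3   # headset + two controllers
POSE_DIM = 7
HISTORY = 5     # short window of recent poses fed to the policy
N_JOINTS = 33   # joints of the simulated humanoid (illustrative count)

obs_dim = N_DEVICES * POSE_DIM * HISTORY

# Stand-in for a trained policy network: a fixed random linear map.
W = rng.normal(0.0, 0.01, size=(N_JOINTS, obs_dim))

def policy(obs_history: np.ndarray) -> np.ndarray:
    """Map a window of headset/controller poses to one torque
    command per joint, which a physics engine would then apply
    to drive the avatar."""
    return W @ obs_history.reshape(-1)

obs = rng.normal(size=(HISTORY, N_DEVICES, POSE_DIM))
torques = policy(obs)
print(torques.shape)  # prints (33,)
```

Because the avatar is driven through physics rather than posed directly, the resulting motion stays plausible even though most of the body is never observed.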

However, QuestSim struggles in specific cases where the user interacts with the real world, such as sitting down on a chair or sofa. The transition between sitting and standing is crucial for a realistic full-body representation in social VR. To address this issue, the researchers have introduced a follow-up model called QuestEnvSim.

QuestEnvSim uses the same reinforcement learning approach as QuestSim but incorporates furniture and other objects into the simulated environment. Taking the user's interaction with these real-world objects into account improves the accuracy of the body posture estimate. The featured video shows the impressive results of QuestEnvSim.

It is important to keep some considerations and limitations of these models in mind. First, the paper does not report the real-time performance of the described system. Machine learning research prototypes typically run on powerful PC GPUs at relatively low frame rates, so it may be several years before such models can run in real time on consumer VR headsets like the Quest 2.

Furthermore, the furniture and objects used in the experiments were manually scanned and placed in the virtual environment. Future VR headsets, such as the Quest 3, may include depth sensors that allow furniture to be scanned automatically, but current devices cannot capture this crucial data.

Importantly, these models are designed to generate a plausible full-body posture rather than to exactly match the position of the hands. System latency is also relatively high compared to real-time VR experiences. Therefore, even if real-time performance is achieved, these approaches may not work well for viewing one's own body in VR.

However, if these systems are optimized in the future, seeing the full-body movement of other people's avatars would be a significant improvement over current Meta avatars, which often lack a realistic representation of the legs. Meta's CTO Andrew Bosworth has hinted that this is the direction the company is heading: even if users don't see their own legs accurately, Meta is focusing on providing natural-looking legs for other people's avatars, improving the overall social VR experience.

Meta AI Research’s advancement in estimating body posture in VR by using reinforcement learning models and considering objects in the environment has shown promising results. While there are challenges and limitations to overcome, such as real-time performance and the lack of depth sensors in current devices, these advances represent an important step toward more realistic body representation in VR. Viewing avatars with full, plausible body movements could significantly enhance immersion and social interaction in virtual environments. Meta AI Research continues to work in this area, and the future of VR promises an increasingly realistic and immersive experience.

More information at arxiv.org.

Brian Adam
Professional blogger, vlogger, traveler and explorer of new horizons.