Currently, virtual reality systems rely on headsets and handheld controllers. Thanks to these devices, they can reliably track the position of the head and hands.
However, the position of the legs, torso, and even the elbows, which fall outside the direct tracking range of VR devices, can, according to Meta, be estimated using specialized algorithms.
The Problem with Limbs in the Metaverse
The algorithms currently used for this task apply a predictive technique called inverse kinematics (IK). Based on what has been explored so far with this technique, the predicted positions are only sometimes accurate for the elbows, and the technique rarely produces a correct posture for the legs.
This well-known difficulty stems from the fact that there are too many potential body poses for each possible combination of head and hand positions. Some manufacturers have tried to solve the problem by adding extra sensors, which increase costs and, given their status as niche products, lack broad compatibility with the available software, mainly games.
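The ambiguity described above can be illustrated with a minimal sketch of analytic inverse kinematics for a planar two-link limb (think shoulder, elbow, wrist). Even in this toy case, most reachable targets admit two valid joint configurations ("elbow up" and "elbow down"), and a full body tracked only at the head and hands has far more unconstrained joints. The function name and link-length parameters here are illustrative, not from Meta's system.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Analytic IK for a planar 2-link limb.

    Returns (shoulder_angle, elbow_angle) in radians that place the
    end effector at (x, y), given segment lengths l1 and l2.
    """
    d2 = x * x + y * y
    # Law of cosines for the elbow; clamp to tolerate unreachable targets.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    elbow = math.acos(c2)
    if not elbow_up:
        elbow = -elbow  # the second, equally valid solution
    k1 = l1 + l2 * math.cos(elbow)
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow
```

Both `elbow_up=True` and `elbow_up=False` reach the same target, which is exactly the kind of ambiguity that, multiplied across the legs and torso, makes full-body pose estimation from head and hand data alone so hard.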
Because of these limitations, many metaverse experiences, including Meta's, leave legs off their avatars. On this point, Meta's chief technology officer, Andrew Bosworth, hinted at plans to improve the situation during a question-and-answer session on Instagram:
“Yeah, we’ve made a lot of fun of avatars without legs, and I think that’s very fair and I think that’s pretty funny.
Having legs on your own avatar that don’t match your real legs is very disconcerting to people. But of course we can give other people legs, you see that, and it doesn’t bother you at all.
So we’re working on legs that look natural to someone who’s a bystander, because they don’t know how your legs are actually positioned, but probably when you look at your own legs, you’ll still see nothing. That is our current strategy.”
Meta's proposed solution for adding legs and improving avatar posture in virtual reality
Researchers from Meta presented, in a paper, QuestSim, a new neural-network-driven system capable of estimating a plausible full-body pose using only the tracking data provided by the Quest 2 headset and its controllers, without relying on additional devices of any kind.
The results of its implementation, which can be reviewed in a demonstration video, largely match the original poses, outperforming alternative solutions that use an accelerometer and a gyroscope to estimate posture.
In line with what Bosworth anticipated, the system does not aim to reproduce exactly the body posture of the person wearing the headset. Its application would therefore be suitable for giving legs to the other participants in a virtual reality session, but not to the wearer directly. The goal is simply to add a degree of realism as an additional stimulus.
The paper detailing this project indicates that the system's latency is 160 ms, roughly 11 frames at 72 Hz. Further details about its performance and capabilities were not specified.
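The relationship between the reported latency and the frame count can be checked with simple arithmetic, assuming the 72 Hz figure refers to the Quest 2's display refresh rate:

```python
# Sanity check: how many 72 Hz frames fit into 160 ms of latency?
REFRESH_HZ = 72      # Quest 2 default refresh rate
LATENCY_S = 0.160    # 160 ms, as reported for the system

frames_of_delay = LATENCY_S * REFRESH_HZ
print(f"{frames_of_delay:.2f} frames")  # 11.52 -> about 11 full frames
```

So the two figures are consistent: 160 ms corresponds to a little over 11 rendered frames at that refresh rate.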
This announcement can be considered a prelude to Meta Connect, the annual event dedicated to augmented and virtual reality, which will take place in a couple of weeks.