According to well-known analyst Ming-Chi Kuo, the mixed reality headset that Apple may announce within the next year will include several highly sensitive 3D sensing modules, enabling an innovative user interface based on advanced detection of hand movements and gestures.
More specifically, unlike the iPhone, which has only one, the headset is expected to integrate four sets of 3D sensors, allowing it to capture gestures and detect objects more accurately than the current TrueDepth camera, which can also detect motion, as happens when we use Animoji.
We foresee that the structured light of the AR/MR headset can detect not only changes in the position of the user's hands, or of other people's hands and objects in front of the user's eyes, but also dynamic changes in the details of the hand (much as the iPhone's Face ID can detect dynamic changes in the user's expression). Capturing the details of hand movement can provide a more intuitive human-machine interface.
The analyst believes that the 3D sensors in Apple's headset will also offer a wider field of view (FOV), allowing them to detect objects up to 200% farther away than current Face ID sensors. In addition to hand gesture detection, the headset could also support eye detection, iris recognition, voice control, skin detection, facial expression detection and spatial detection.
According to Kuo, the quality of this human-machine interface will be the key to the success of Apple's AR headset. We recall that earlier this year, Patently Apple reported an Apple patent application titled "Devices, Methods and Graphical User Interfaces for Interacting with Three-Dimensional Environments", which describes this very concept in detail.
According to reports so far, Apple's headset should weigh about 350 grams, while the second generation will be lighter. The device is expected to be announced within the next year and shouldn't require a paired iPhone to work.