The news comes from a recent WWDC session outlining updates to the Vision framework. The new functionality allows apps to analyze poses, movements and gestures, which could enable a variety of unique app features. Apple listed some examples: fitness apps that track users' movements to make sure they're performing an exercise correctly, safety training apps, and even media apps that find videos based on pose similarity.
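For developers curious what this looks like in practice, a minimal sketch of a body pose request might look like the following. This assumes iOS 14/macOS Big Sur or later and a `CGImage` supplied by the caller; the confidence threshold is an illustrative choice, not an Apple recommendation.

```swift
import Vision

// Sketch: ask Vision for body pose landmarks in a still image.
// `image` would typically come from the camera feed or photo library.
func detectBodyPose(in image: CGImage) throws {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    for body in request.results ?? [] {
        // Each observation exposes named joints in normalized image coordinates.
        let joints = try body.recognizedPoints(.all)
        if let wrist = joints[.leftWrist], wrist.confidence > 0.3 {
            print("Left wrist at \(wrist.location)")
        }
    }
}
```

From here, an app could compare joint positions across frames to classify an exercise or measure a pose.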
Further, hand pose detection brings a lot of promise for new ways to interact with apps. In the session, Apple demoed an example where a user held their index finger and thumb together and was able to draw in an app without touching their iPhone’s display.
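A rough sketch of that pinch gesture could be built on the hand pose request: check whether the index fingertip and thumb tip land close together in the frame. The 0.05 distance threshold here is a hypothetical value chosen for illustration.

```swift
import Foundation
import Vision

// Sketch: detect a "pinch" (index tip near thumb tip) in a single frame.
func isPinching(in image: CGImage) throws -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let hand = request.results?.first else { return false }
    let thumb = try hand.recognizedPoint(.thumbTip)
    let index = try hand.recognizedPoint(.indexTip)
    guard thumb.confidence > 0.3, index.confidence > 0.3 else { return false }

    // Landmarks are in normalized (0–1) image coordinates, so the
    // threshold is independent of the camera's resolution.
    let distance = hypot(thumb.location.x - index.location.x,
                         thumb.location.y - index.location.y)
    return distance < 0.05
}
```

Run per frame on a live feed, a check like this could toggle "drawing" on and off, much like Apple's demo.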
Another way developers could use the Vision framework is to overlay emoji on a user's hands. For example, if you make the peace sign, an app could display the peace emoji on screen. This could be a fun addition to any video call or camera software.
The new framework can handle multiple hands or bodies in one scene, but it may not work well with people who are wearing gloves, bent over, upside down or dressed in flowing, robe-like clothing. Accuracy can also suffer if a person is close to the edge of the frame or partially obstructed.
It’s also worth noting that Apple offers similar functionality in ARKit, but it’s limited to AR applications and works only with the rear-facing camera on compatible iPhone and iPad models.
You can learn more about the new Vision framework and body detection features here.