News: Google Update Adds Real-World Occlusion to ARCore with Depth API

Apple's ARKit has built a considerable feature lead over Google's ARCore, but Google's latest update to ARCore adds a capability that makes the platform a bit more competitive with ARKit.

On Monday, Google unveiled its new Depth API for ARCore, an algorithm that creates depth maps using a standard mobile camera rather than a dedicated depth sensor.

The Depth API accomplishes this feat by capturing multiple images from varying angles as the mobile device moves through space. The algorithm then compares those images, using the apparent shift of common points between them to calculate each point's distance from the camera.
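In essence, this is stereo triangulation, with the device's own movement supplying the baseline. As a rough illustration (not Google's actual implementation, and using made-up names rather than ARCore API calls), here is a minimal Kotlin sketch of the similar-triangles relation at the heart of depth-from-motion:

```kotlin
// Illustrative depth-from-motion math, not ARCore API code: two frames of
// the same scene point act as a stereo pair, and depth follows from the
// classic relation depth = focalLength * baseline / disparity.

/**
 * Estimate the distance to a scene point, in meters.
 *
 * @param focalLengthPx  camera focal length, in pixels
 * @param baselineMeters how far the camera moved between the two frames
 * @param disparityPx    how far the matched feature shifted, in pixels
 */
fun depthFromMotion(focalLengthPx: Double, baselineMeters: Double, disparityPx: Double): Double {
    require(disparityPx > 0.0) { "The feature must shift between frames to triangulate." }
    return focalLengthPx * baselineMeters / disparityPx
}

fun main() {
    // A feature that shifts 12 px after the phone moves 6 cm sideways,
    // seen through a 500 px focal length, sits about 2.5 m away.
    println(depthFromMotion(focalLengthPx = 500.0, baselineMeters = 0.06, disparityPx = 12.0))
}
```

Repeating that calculation for every matched point across many frames is what yields a dense depth map of the scene.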

This data gives mobile apps environmental understanding and enables developers to add real-world occlusion to AR experiences, with 3D content appearing in front of or behind physical objects in a scene. As a result, Depth API solves one of the long-standing shortcomings of ARCore, where 3D content floats awkwardly on top of real-world objects.
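Conceptually, occlusion then reduces to a per-pixel depth comparison. The Kotlin sketch below (illustrative names only, not the ARCore API) shows the core test: a virtual pixel is drawn only when no real-world surface sits closer to the camera at that screen position.

```kotlin
// Illustrative occlusion test, not ARCore API code: compare the virtual
// object's depth against the real-world depth map at the same pixel.
// Whichever surface is closer to the camera wins.

fun isVirtualPixelVisible(virtualDepthMeters: Float, realDepthMeters: Float): Boolean =
    virtualDepthMeters <= realDepthMeters

// Applied across a whole frame: build a mask hiding every virtual pixel
// that a closer real-world surface occludes.
fun occlusionMask(virtualDepth: FloatArray, realDepth: FloatArray): BooleanArray =
    BooleanArray(virtualDepth.size) { i -> isVirtualPixelVisible(virtualDepth[i], realDepth[i]) }
```

In a real renderer this comparison happens per fragment on the GPU, but the decision rule is the same.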

Image via Google

"Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene," said Shahram Izadi, director of research and engineering at Google, in a blog post. "In addition to enabling occlusion, having a 3D understanding of the world on your device unlocks a myriad of other possibilities. Our team has been exploring some of these, playing with realistic physics, path planning, surface interaction, and more."

Google is debuting the new depth-tracking capability today through its AR Quick View feature, which enables users to view 3D content in AR via Google Search and is available on approximately 200 million ARCore-compatible mobile devices currently in the wild.

(1) Without occlusion, (2) With occlusion via Depth API. Images via Google

The first commercial partner to update its ARCore app with the Depth API will be Houzz. Google gave the home design app maker early access to the API for the View in My Room AR feature in its mobile app. Developers interested in accessing the API can sign up via the call for collaborators form.

"Using the ARCore Depth API, people can see a more realistic preview of the products they're about to buy, visualizing our 3D models right next to the existing furniture in a room," says Sally Huang, visual technologies lead at Houzz. "Doing this gives our users much more confidence in their purchasing decisions."

Houzz with Depth API. Image via Google

The key achievement of the Depth API is simulating, with a standard camera, the kind of depth measurement and environmental understanding delivered by the time-of-flight sensors in devices like the HoloLens 2 and the Magic Leap One. That doesn't mean such hardware, already found in the Samsung Galaxy S10 5G and Galaxy Note 10+ smartphones, is moot, though.

"The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion — the ability to occlude behind moving objects," said Izadi.

Images via Google

Along with the Environmental HDR feature that blends natural light into AR scenes, ARCore now rivals ARKit with exclusive features of its own. While ARKit 3 offers People Occlusion and Body Tracking on compatible iPhones, the Depth API gives ARCore apps a level of environmental understanding that ARKit can't yet match.

"Accurate depth data is essential for AR use cases like environmental scanning, user input, and more. Until now, the ecosystem has been fragmented with different implementations trying to determine how to best expose depth data to developers," said Ralph Hauwert, vice president of platforms at Unity Technologies, in a statement provided by Google to Next Reality. "And developers need help with how to best use the depth data for features such as occlusions. Unity is proud to be working with partners like Google to allow developers to build powerful AR experiences that interact intelligently with the real world."

Image via Google

With the Depth API, Google also strikes a blow at third-party AR cloud platform makers, such as Niantic and Ubiquity6 (both of which have financial backing from Google), as well as 6D.ai. Those platforms also offer world-mapping capabilities for multi-user experiences, persistent content, and real-world occlusion.

Now, Google can offer developers the same capabilities without a separate SDK, with the cross-platform Cloud Anchors supplying multi-user experiences and persistent content, and the Depth API handling real-world occlusion.

Images via Google

This doesn't mean that third-party AR cloud platforms won't have something unique to offer. For instance, Ubiquity6 launched Display.land as a social photogrammetry app with real-world contextual understanding. And 6D.ai already has its sights set on supplying spatial computing for AR headsets running on Qualcomm Snapdragon chips.

But by integrating next-generation AR capabilities into its mobile AR toolkit, Google has just made it a bit tougher for the aforementioned AR players to compete.

Cover image via Google
