For most mainstream users, LiDAR sensors for precision depth sensing remain the exclusive domain of Apple iPhones and iPads, but Google is helping Android device makers close the depth gap on the software side via its ARCore toolkit.
After introducing new AR features for Search and Maps and previewing the mind-blowing Project Starline 3D video conferencing concept during its I/O keynote presentation on Tuesday, Google unceremoniously announced ARCore 1.24, which brings two new AR capabilities: the Raw Depth API and the Recording and Playback API. The former enables more realistic AR experiences and more accurate occlusion in the absence of LiDAR.
The Raw Depth API builds on the existing Depth API by capturing additional depth data so that apps can render realistic AR experiences via a standard smartphone camera. However, Android devices with time-of-flight sensors for depth measurement will deliver higher-quality experiences.
"The new ARCore Raw Depth API provides more detailed representations of the geometry of objects in the scene by generating 'raw' depth maps with corresponding confidence images," said Google AR product managers Ian Zhang and Zeina Oweis. "These raw depth maps include unsmoothed data points, and the confidence images provide the confidence of the depth estimate for each pixel in the raw depth map."
The aggregated data yields improved geometry recognition, which means greater precision in depth measurement and better environmental understanding for realistically anchoring AR content in physical environments.
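To make the confidence-image idea concrete, here is a minimal sketch of how an app might mask out low-confidence depth pixels. ARCore delivers raw depth as 16-bit millimeter values and confidence as a 0–255 value per pixel; the plain-array function below stands in for reading those buffers (which in the real API would come from calls like `Frame.acquireRawDepthImage()` and `Frame.acquireRawDepthConfidenceImage()`). This is an illustrative assumption about usage, not official sample code.

```kotlin
// Sketch: keep a raw depth value only where the per-pixel confidence
// meets a threshold; low-confidence pixels are zeroed out (treated as
// "no reliable depth"), which is roughly what an effect like TikTok's
// Green Screen Projector needs to decide where to project imagery.
//
// depthMm:     raw depth per pixel, in millimeters (16-bit range)
// confidence:  per-pixel confidence, 0 (lowest) to 255 (highest)
fun maskByConfidence(depthMm: IntArray, confidence: IntArray, threshold: Int): IntArray {
    require(depthMm.size == confidence.size) { "buffers must be the same size" }
    return IntArray(depthMm.size) { i ->
        if (confidence[i] >= threshold) depthMm[i] else 0
    }
}
```

In a real app the threshold is a quality/coverage trade-off: a high threshold keeps only well-observed geometry, while a low one fills more of the scene at the cost of noisier edges.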
One of the first apps to take advantage of the Raw Depth API is TikTok. The app's Green Screen Projector effect wraps photos from the user's camera roll around objects where the Raw Depth API reports a high degree of confidence.
In addition, with the Recording and Playback API, ARCore gives apps the ability to capture inertial measurement unit (IMU) and depth data alongside video footage. For developers, this means they can test AR apps without venturing out into the field to capture various real-world environments.
The Recording and Playback API also creates a new type of AR experience for end users, allowing them to add virtual content to videos. For instance, JumpAR from SK Telecom enables users to interact with video of locations in South Korea and add AR content. Meanwhile, VoxPlop! by Nexus Studios gives users the ability to add 3D characters to videos and share them with others, who can, in turn, edit them with AR content.
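The testing benefit comes from determinism: a recorded session replays the same camera, IMU, and depth stream on every run. The sketch below illustrates that concept with a toy recorder and timestamp-ordered playback in plain Kotlin; the types and functions here are hypothetical stand-ins, not the ARCore API (which instead exposes entry points along the lines of starting a recording on the session and later pointing the session at the saved dataset).

```kotlin
// Conceptual stand-in for session recording: capture timestamped IMU
// samples so they can be replayed identically in later test runs.
data class ImuSample(val timestampNs: Long, val accel: Triple<Double, Double, Double>)

class SessionRecorder {
    private val samples = mutableListOf<ImuSample>()

    // Append one sensor reading to the in-memory "dataset".
    fun record(sample: ImuSample) { samples.add(sample) }

    // Snapshot of everything recorded so far.
    fun dataset(): List<ImuSample> = samples.toList()
}

// Playback feeds the recorded stream back in timestamp order, so an AR
// app under test sees the same input sequence on every run.
fun playback(dataset: List<ImuSample>, onSample: (ImuSample) -> Unit) {
    dataset.sortedBy { it.timestampNs }.forEach(onSample)
}
```

Replaying a fixed dataset turns flaky, location-dependent AR behavior into a repeatable unit test, which is the core of what the Recording and Playback API offers developers.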
Google will cover these capabilities in greater...ahem...depth during the New Capabilities in ARCore session, which debuts on Wednesday.
Ironically, it was Google that first introduced depth sensors via its Tango hardware platform, but adoption was limited to just two commercial devices. Apple responded with ARKit, whose 1.0 release could detect horizontal surfaces through a standard iPhone camera. Google pivoted away from the hardware approach and adapted Tango's software into ARCore.
Now, Apple has taken depth sensors mainstream with its iPhone Pro and iPad Pro lineups, while Google is relying on software to capture depth data.
Despite the continued iterations of ARKit and ARCore, mobile AR apps have not quite taken the world by storm. Instead, the AR platforms for Snapchat (which also supports the ARCore Depth API) and Facebook/Instagram, with their multitudes of bite-sized AR experiences, are the ones drawing attention from developers, creators, and brands.
Of course, the four companies and their respective mobile platforms are essential public beta tests for the next era in mobile computing -- smartglasses.