Geo Week News

December 10, 2019

Google adds Depth API to ARCore, seeks collaborators to try it out

ARCore – Google’s platform for building augmented reality experiences – has been updated with new features, and Google is looking for developers to try out its new Depth API and use it to create new ways of interacting in augmented reality.

ARCore uses a combination of motion tracking, environmental understanding (detecting the size, distance, and location of surfaces), and light estimation to integrate virtual content into the camera’s view. Essentially, it knows where the camera is in space and uses what the camera sees to interpret the scene in 3D. ARCore was originally released in 2017 as a replacement for Tango, Google’s hardware-limited AR development platform, and as an answer to Apple’s ARKit.
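For developers, these capabilities map directly onto the session configuration in ARCore’s Android SDK. Below is a minimal Kotlin sketch (camera permission and ARCore availability checks omitted for brevity) showing how a session might be configured to enable surface detection and light estimation; motion tracking is always on and needs no flag.

```kotlin
import android.content.Context
import com.google.ar.core.Config
import com.google.ar.core.Session

// Minimal sketch: create an ARCore session and enable the capabilities
// described above. Permission and availability checks are omitted.
fun createConfiguredSession(context: Context): Session {
    val session = Session(context)
    val config = Config(session).apply {
        // Environmental understanding: detect horizontal and vertical surfaces.
        planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
        // Light estimation: match virtual lighting to real conditions.
        lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
    }
    session.configure(config)
    return session
}
```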

According to Google, ARCore’s motion tracking technology uses the phone’s camera to identify interesting points, called features, and track how those points move over time. By combining the movement of these points with readings from the phone’s inertial sensors, ARCore determines both the position and orientation of the phone as it moves through space.
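In the SDK, the result of that sensor fusion is surfaced as a camera pose on every frame. A hedged Kotlin sketch, assuming a session whose camera texture has already been set up for rendering:

```kotlin
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch: each frame, ARCore exposes the pose it has computed from
// feature tracking fused with the inertial sensors.
fun logCameraPose(session: Session) {
    val frame = session.update() // called on the GL thread after setup
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        val pose = camera.pose
        // Position in world coordinates (meters) ...
        val (x, y, z) = pose.translation
        // ... and orientation as a quaternion (x, y, z, w).
        val q = pose.rotationQuaternion
        println("camera at ($x, $y, $z), rotation ${q.contentToString()}")
    }
}
```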

This “understanding” lets a user place 3D objects, annotations, or other information in the 3D view in a way that more realistically fits the actual conditions. Environmental HDR, an update from earlier this year, brought more realistic lighting to AR objects and scenes, including accurate shadows and lighting that better matches real-world conditions.
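In practice, a renderer reads these lighting terms from each frame’s light estimate. A minimal Kotlin sketch, assuming Environmental HDR mode was enabled in the session configuration as shown earlier:

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate

// Sketch: per frame, ARCore reports a main directional light and ambient
// terms that a renderer can feed into its shading and shadow passes.
fun applyHdrLighting(frame: Frame) {
    val estimate = frame.lightEstimate
    if (estimate.state != LightEstimate.State.VALID) return
    val direction = estimate.environmentalHdrMainLightDirection // float[3]
    val intensity = estimate.environmentalHdrMainLightIntensity // RGB, float[3]
    val ambient = estimate.environmentalHdrAmbientSphericalHarmonics // float[27]
    // Hand these to the renderer, e.g. orient the shadow-casting light
    // along `direction` and scale its color by `intensity`.
}
```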

The most recent update to ARCore, announced Monday, is a new Depth API, the part of the software that gives the platform its three-dimensional understanding of a scene.

According to a blog post by Shahram Izadi, Director of Research and Engineering at Google, the new technology allows 3D information to be gathered from a single camera, rather than the stereo pair of cameras usually required to recover depth.

“The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.”
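In the Android SDK, this is exposed as a per-frame depth image. The Kotlin sketch below is illustrative (the exact surface may differ across SDK versions): it enables depth on a session, then samples the estimated distance, in millimeters, at a single pixel of the depth map.

```kotlin
import android.media.Image
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Enable depth only where the device supports it.
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
        session.configure(config)
    }
}

// Returns the estimated distance in millimeters at (x, y), or null if the
// depth map is not ready yet.
fun depthAtPixel(frame: Frame, x: Int, y: Int): Int? = try {
    frame.acquireDepthImage().use { depthImage: Image ->
        val plane = depthImage.planes[0]
        val buffer = plane.buffer.order(ByteOrder.LITTLE_ENDIAN)
        val index = y * plane.rowStride + x * plane.pixelStride
        // Each sample is an unsigned 16-bit distance in millimeters.
        buffer.getShort(index).toInt() and 0xFFFF
    }
} catch (e: NotYetAvailableException) {
    null // depth needs a few frames of motion before it is available
}
```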

For Google, the priority here is to solve a major issue with AR: AR objects don’t seem “real.” In some cases, such as annotations or other static overlays, this may be fine. But for anything a user might interact with, it is crucial that virtual objects react to the environment in which they are actually placed.

One challenge that has been difficult to overcome is occlusion – making an AR object respond correctly when a real-world object is in front of or behind it. According to Izadi, solving the problem of occlusion will help digital objects “feel as if they are actually in your space” by blending them with the scene.
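Conceptually, occlusion reduces to a per-pixel depth comparison between the depth map and the rendered virtual object, normally performed in a fragment shader. The Kotlin sketch below is purely illustrative, not ARCore API; the 50 mm soft edge is an assumed smoothing value for blending at occlusion boundaries.

```kotlin
// Illustrative only: a virtual fragment is hidden when the real scene is
// closer to the camera than the virtual object at that pixel.
fun occlusionAlpha(realDepthMm: Int, virtualDepthMm: Int): Float {
    if (realDepthMm == 0) return 1f          // no depth estimate: draw fully
    val delta = realDepthMm - virtualDepthMm // positive: real scene is behind
    // Soft edge over ~50 mm avoids hard flicker at occlusion boundaries.
    return (delta / 50f + 0.5f).coerceIn(0f, 1f)
}
```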

Occlusion will be available in Scene Viewer, the developer tool, across some 200 million ARCore-enabled Android devices.

Google Seeks Depth API Developers

Alongside the ARCore update announcements came a call for developers to work with the new occlusion and depth features. The call for collaborators invites applications from anyone who wants to build projects that highlight new use cases for the Depth API and that can begin implementation and testing “within a few weeks or months” with a clear launch timeline.

“We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature,” adds Izadi.

For more information on ARCore, visit the ARCore development website.
