Geo Week News

May 6, 2020

uDepth sensor continues Google's push towards better 3D depth sensing


Following up on the addition of the Depth API to ARCore, Google continues to explore real-time 3D depth sensing with the Google Pixel 4 smartphone.

Google developed uDepth, a real-time infrared (IR) active stereo depth sensor designed to be accurate and metric. It works at high speed and in darkness to power the Pixel 4’s face unlock feature, helping the authentication system identify users while protecting against spoof attacks. The same depth data supports after-the-fact photo retouching, depth-based segmentation of a scene, background blur, portrait effects, and 3D photos. The system produces both a depth stream at 30Hz and smooth, post-processed depth maps for post-capture photography effects such as bokeh and 3D photos for social media.
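Active stereo sensors like uDepth project an IR pattern and triangulate correspondences between two IR cameras; once a disparity map is in hand, metric depth follows from the standard rectified-stereo relation Z = f·B/d. Below is a minimal sketch of that conversion; the focal length and baseline values are illustrative, not the Pixel 4’s actual calibration.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Standard rectified-stereo relation: Z = f * B / d.
    Zero or negative disparities are marked invalid (depth = 0).
    """
    depth = np.zeros_like(disparity_px, dtype=np.float32)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative values only -- not the Pixel 4's actual calibration.
disparity = np.random.uniform(1.0, 64.0, size=(480, 640)).astype(np.float32)
depth_m = disparity_to_depth(disparity, focal_px=500.0, baseline_m=0.05)
```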

To achieve this, Google trained an end-to-end deep-learning architecture that enhances the raw uDepth data, inferring a complete, dense 3D depth map from a combination of RGB images, people segmentation, and raw depth. The company also synchronized Pixel 4 phones with the lights and cameras of a volumetric capture system that can produce near-photorealistic models of people. Built from a geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors, the system helped generate training data that combines real images with synthetic renderings from the Pixel 4 camera viewpoint.
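Google has not published the architecture itself, but the input/output contract it describes (RGB, a people-segmentation mask, and raw depth in; a dense depth map out) can be sketched as a toy PyTorch model. Everything below, from the layer sizes to the name DepthCompletionNet, is a hypothetical illustration:

```python
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    """Toy sketch of a depth-completion network. The real uDepth model
    is not public; this only illustrates the described inputs (RGB,
    person segmentation, raw depth) and output (dense depth)."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 segmentation channel + 1 raw-depth channel
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.ReLU(),  # non-negative depth
        )

    def forward(self, rgb, seg_mask, raw_depth):
        # Stack all cues channel-wise, then predict a dense depth map.
        x = torch.cat([rgb, seg_mask, raw_depth], dim=1)
        return self.decoder(self.encoder(x))

net = DepthCompletionNet()
rgb = torch.rand(1, 3, 240, 320)
seg = torch.rand(1, 1, 240, 320)
raw = torch.rand(1, 1, 240, 320)
dense_depth = net(rgb, seg, raw)  # shape (1, 1, 240, 320)
```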

“When a phone experiences a severe drop, it can result in the factory calibration of the stereo cameras diverging from the actual position of the cameras,” wrote Michael Schoenberg, uDepth Software Lead, and Adarsh Kowdle, uDepth Hardware/Systems Lead at Google Research, in a blog post describing the technology.

“To ensure high-quality results during real-world use, the uDepth system is self-calibrating. A scoring routine evaluates every depth image for signs of miscalibration, and builds up confidence in the state of the device. If miscalibration is detected, calibration parameters are regenerated from the current scene. This follows a pipeline consisting of feature detection and correspondence, subpixel refinement (taking advantage of the dot profile), and bundle adjustment.”
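The post doesn’t include code for this pipeline, but the scoring idea can be illustrated: in a correctly rectified stereo pair, matched features land on the same image row, so systematic vertical offsets signal miscalibration. The following OpenCV sketch is a hypothetical stand-in; Google’s actual routine exploits the IR dot profile and feeds a full bundle adjustment rather than this simple ORB-based score.

```python
import cv2
import numpy as np

def miscalibration_score(left_ir, right_ir, max_matches=200):
    """Hypothetical scoring routine in the spirit of the blog's description:
    in a rectified stereo pair, matched features should lie on the same
    image row, so vertical offsets between matches indicate miscalibration.
    left_ir / right_ir are grayscale uint8 images from the two IR cameras."""
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(left_ir, None)
    kp_r, des_r = orb.detectAndCompute(right_ir, None)
    if des_l is None or des_r is None:
        return None  # not enough texture to judge this frame

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

    # Median vertical offset across the best matches, in pixels.
    dy = [abs(kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1])
          for m in matches[:max_matches]]
    return float(np.median(dy)) if dy else None

# If the score stays high across many frames, recalibrate from the scene:
# refine match locations to subpixel precision, then run bundle adjustment
# over the camera extrinsics (e.g., with scipy.optimize.least_squares).
```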

While smartphone face-unlock features may not yet be relevant to the commercial and professional space, these advancements showcase the potential of depth sensors for the future of 3D applications. Depth sensing gives developers and users the ability to determine 3D information about a scene. Depth sensors on a smartphone can help not only in photography but also in augmented reality, 3D scanning applications, and more. Instead of carrying a big handheld 3D scanner around, you may be one step closer to having one in your pocket.
