Geo Week News

April 19, 2016

5 Trends at SPAR 3D 2016: 3D is Moving Past Laser Scanning

Wearality's David Smith presents on the past, present, and future of mixed reality.

I’m just back from SPAR 3D 2016—catching up on sleep and slowly gathering my wits about me. As I look over the notes I took from the show floor and in the various presentations I attended, I have started to notice a few themes.

1. It’s not (just) about laser scanning

I don’t mean to say that 3D scanning is unimportant to the 3D technology space—far from it. It’s still the most precise and accurate way to capture a 3D environment, and a foundational technology for everything else we do. However, it’s important that we see it as the starting line rather than the finish, as part of a bigger 3D technology ecosystem.

As Sam Billingsley noted in his own conference wrap-up, we’ll see the most growth in the future coming from beyond the laser-scanning tech we know so well. Think drones, photogrammetry, dynamic sensors, and software.

Speaking of which…

2. Software is in growth mode

For a while now, users of 3D tech have been saying the same thing: We are really good at capturing a ton of 3D data, but we still need to work on turning it into actionable information.

The good news is that this portion of the 3D workflow isn’t being ignored by software developers. Arithmetica’s new Pointfuse V2, for instance, takes your point cloud and automatically generates a vector model you can use in your favorite modeling software, turning raw data into something that slots into your workflow more easily.
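As a rough illustration of what that kind of conversion involves, here’s a minimal sketch using the open-source Open3D library. This shows the general point-cloud-to-surface idea only, not Pointfuse’s actual algorithm, and the file names are placeholders:

    # Sketch: turn a raw point cloud into a surface mesh.
    # Illustrative only; this is NOT Pointfuse's algorithm.
    # Requires: pip install open3d. File names are placeholders.
    import open3d as o3d

    # Load a scan (PLY is a common interchange format for point clouds)
    pcd = o3d.io.read_point_cloud("scan.ply")

    # Estimate per-point normals, which surface reconstruction needs
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
    )

    # Poisson reconstruction fits a watertight surface to the points
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9
    )

    # Export to a mesh format that modeling packages can import
    o3d.io.write_triangle_mesh("model.obj", mesh)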

On a different path, SKUR has a solution of the same name developed specifically for reporting on the differences between 3D data sets, whether that be a design model and an as-built scan, or a scan taken a week ago and a scan taken today. It analyzes your data and gives you actionable information.
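The core of that kind of deviation reporting can be sketched in a few lines, again with Open3D. This is purely illustrative and not SKUR’s method; the file names and tolerance are assumptions:

    # Sketch of scan-to-scan deviation analysis. Purely illustrative;
    # not SKUR's actual method. File names are placeholders.
    import numpy as np
    import open3d as o3d

    # The two data sets to compare, e.g. last week's scan vs. today's
    reference = o3d.io.read_point_cloud("scan_last_week.ply")
    current = o3d.io.read_point_cloud("scan_today.ply")

    # For each point in the current scan, the distance to the nearest
    # reference point; large values flag areas that changed
    distances = np.asarray(current.compute_point_cloud_distance(reference))

    # Turn raw distances into a simple, actionable summary
    tolerance = 0.01  # 1 cm, assuming coordinates are in meters
    moved = distances > tolerance
    print(f"{moved.sum()} of {len(distances)} points deviate by more than {tolerance} m")
    print(f"Maximum deviation: {distances.max():.3f} m")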

Autodesk and FARO have both made space in their ecosystems for software developers to build solutions that help us turn our 3D data into something we can use for our own specific needs. The more software we see that does this, the better.

3. New sensors are starting to attract attention

At this year’s conference I saw a strong showing for photogrammetry applications (like ContextCapture and Stockpile Reports) that create 3D imagery from prosumer digital cameras and even smartphones. NCTech got in the game, too, with its iSTAR 360° camera, which lets users take measurements straight from the spherical images it captures.

SPAR 3D also showed hints that we might move beyond our intense focus on 3D data to the exclusion of other kinds of data. Both Leica and FARO made big moves this year to incorporate HDR cameras into their static LiDAR packages. Simply put, these cameras (like Spheron’s bonkers SpheronLite solution) capture images with a very high dynamic range, so a dark area renders just as clearly as a very, very bright one. Laid over a point cloud captured by a traditional scanner, these images help produce a photo-realistic deliverable that combines the benefits of high-definition photography and 3D scanning, bringing 3D capture to the next level as a visualization tool.
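To make the HDR idea concrete, here’s a minimal sketch that merges bracketed exposures with OpenCV. It shows the general principle only, not how Leica, FARO, or Spheron implement it, and the file names and exposure times are placeholders:

    # Sketch of HDR imaging: merge bracketed exposures into one image
    # where dark and bright areas are both well exposed. General
    # principle only; not any vendor's implementation.
    import cv2
    import numpy as np

    # Bracketed shots of the same scene at different shutter speeds
    images = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]
    exposure_times = np.array([1/250, 1/60, 1/15], dtype=np.float32)

    # Recover the camera response curve, then merge to a radiance map
    response = cv2.createCalibrateDebevec().process(images, exposure_times)
    hdr = cv2.createMergeDebevec().process(images, exposure_times, response)

    # Tone-map the radiance map back to a displayable 8-bit image
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("fused.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))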

More than one presentation showed that we’re starting to think of 3D data as a base layer not just for light sensors like HDR cameras, but also for pressure sensors, infrared sensors, and even concrete-curing sensors, among others.

4. 3D end users are getting serious about new visualization tools

Though a majority of the exhibitors left their augmented and mixed-reality solutions at home, these technologies were a common fixture in the keynotes and presentations I saw. The consensus is they’re an important part of how we’ll create and consume 3D data in the future.

Paul Davies of Boeing, for instance, spoke about how augmented reality solutions can make a total newcomer an “expert” at assembling extremely complex airplane wings in virtually no time. JP Suarez of Walmart spoke about how these same technologies could help transport executives from the boardroom to the site of a new construction project, giving them an intuitive understanding of the complex spatial work being performed there.

Wearality’s Sky, along with the Microsoft HoloLens, the forthcoming Magic Leap, Daqri’s Smart Helmet, the Oculus Rift, and others, all help people understand 3D data by presenting it in a way that is intuitive and simple to grasp. Given how often the 3D technology space talks about how complex 3D data can get, and how much we need to make it easier to understand, it’s a wonder we’re not throwing all of our money at virtual, augmented, and mixed reality technologies. Ignore them at your peril.

5. New demographics are bringing new questions

3D technologies are mature enough to catch the eye of asset owners and a growing number of large enterprises. This means their attendance is increasing at conferences like SPAR.

The benefit is that, when asset owners and larger enterprises show up at SPAR 3D and declare their needs, they will help drive the 3D space in the right direction. They’ll ask questions like: How do I incorporate 3D into my workflow? How do I educate people across my workforce to ensure this 3D data isn’t misused? How do I keep it from becoming stuck in a silo? How do I extract actionable information from all these data sets? Why isn’t there software that does this? Why doesn’t anyone make a sensor for this use case?

Expanding our cozy little 3D-imaging space to welcome attendees and users beyond the vendors and service providers is a good thing. Getting our traditional attendee-base in the room with end-users of 3D data, talking about where 3D technologies stand and where they need to improve, is an important step for this space, and ultimately a key to its long-term survival.

We’re still far from a sea change, but SPAR 3D 2016 showed some signs that we’re starting to head in the right direction.
