Five Things I Learned about 3D Capture at #YII2016


I have just returned from London, where I attended Bentley’s Year in Infrastructure conference for the third time. A lot has changed since my first YII in 2014, and the presentations this year gave me the sense that 3D capture might have completed its journey from cutting-edge technology to the “mainstream.”

As a result, I thought now could be a good time to take stock and see where 3D technology stands. Here are five things I learned about the state of 3D capture today, as observed at Bentley’s conference.

A smart 3D model of Helsinki

1. 3D is now an integral part of big projects

If we take the presentations at Bentley’s conference as any indication, a large (and growing) number of big projects now use 3D capture technology. I judged dozens of submissions for Bentley’s Be Inspired awards, and a substantial share of them used 3D capture as a central part of their process.

This might be partially because the big companies like Bentley have been working hard to integrate 3D capture into their software. These days, Bentley users don’t have to go outside their main design software to use 3D capture—it’s right there. If you use OpenRoads Designer, for instance, you can clip a DTM from your 3D capture in Descartes and use it as the basis for a roadway design.

2. The smartphone is democratizing 3D capture

Sure, the UAV helped popularize 3D by providing a low-cost platform for 3D capture. But let’s not forget what the smartphone is doing for 3D.

It’s also helping make the capture process more widely available. While at the conference, I tried out a beta version of a new ContextCapture mobile application. I took 12 pictures of a piping unit, uploaded them to the cloud, and had a surprisingly good model within five minutes.

If I can do this, imagine a field worker taking out his smartphone and snapping an update of a valve he just replaced. It’s easy, useful, and anyone can use it.

3. The smartphone is democratizing 3D consumption

Perhaps more importantly, the smartphone is making it easy for anyone to consume 3D data.

It used to be that it was difficult to give non-experts a sense for how a project will look or feel, or how it might affect them. You could show them 2D plans or a 3D model and you might still not get the point across. However, with the advent of bigger smartphones and technologies like Google Cardboard, you can now show those non-experts a simple but effective VR experience of the project and get your point across beautifully.

At YII, I explored a LumenRT model of a highway project in VR, and was shocked by how real it felt. I’ve looked at a lot of 3D models in my day, but this quick look through a Google Cardboard gave me a better sense for how the highway would look and operate in its surrounding landscape than anything I’ve seen before.

A 3D smart model of Helsinki that requires a lot of graphics processing power.

4. It’s not CPUs we need, it’s GPUs

This brings me to my next point. We have, for a long time, focused on developing better central processing units (CPUs) in our machines. These units process data in a way that is great for sequential information like text, but not so great for anything that requires processing in multiple dimensions, like, say, 3D/4D/5D data.

What we need, I learned at the conference, is better graphics processing units (GPUs). As Nvidia explains, “a CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.”

GPUs were originally developed to render graphics for 3D video games, but now we’re using them for much more complex calculations, like financial projections, deep learning, analytics, and, yes, the fast and efficient processing of 3D data.

The better our GPUs, the better we’ll get at processing 3D data and the more we can do with it.
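The idea Nvidia describes is data parallelism: each point in a scan can be processed independently, so the work splits cleanly across many cores. Here is a toy Python sketch of that principle, nothing to do with Bentley’s actual software; the hypothetical `rotate_z` step stands in for any per-point calculation, and the thread pool stands in for the thousands of GPU cores that would run such a kernel via CUDA or OpenCL.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def rotate_z(point, theta):
    # Rotate one (x, y, z) point about the z-axis.
    # Stands in for any independent per-point calculation.
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def transform_serial(points, theta):
    # CPU-style: a single worker walks the whole point cloud in order.
    return [rotate_z(p, theta) for p in points]

def transform_parallel(points, theta, workers=4):
    # GPU-style (in miniature): split the cloud into independent chunks,
    # process each chunk concurrently, then stitch the results back together.
    size = math.ceil(len(points) / workers)
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda chunk: [rotate_z(p, theta) for p in chunk], chunks)
    return [p for part in parts for p in part]
```

Because no point depends on any other, both versions produce identical output; the parallel one simply lets more workers share the load, which is exactly why point-cloud processing maps so well onto GPU hardware.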

5. 3D capture technology may be mature, but we still have a long way to go

We’ve figured out how to include 3D capture in our workflows, what kind of computers we need to process it, and how to make the data easier to capture and consume. But we’re still just getting started.

One thing Bentley repeated throughout the conference is that we’re still figuring out how to help industries make use of 3D in the best way possible. In other words, the question isn’t how we can make 3D available to people, but how we can make that data more useful to them.


About Author

SPAR 3D Editor Sean Higgins produces SPAR 3D's weekly newsletters for 3D-scanning professionals, and spar3d.com. Sean has previously worked as a technical writer, a researcher, a freelance technology writer, and an editor for various arts publications. He has degrees from Hampshire College in Amherst, Massachusetts and the University of Aberdeen in Scotland, where he studied the history of sound-recording technologies. Sean is a native of Maine and lives in Portland.


© Diversified Communications. All rights reserved.