There’s been a great deal of interest in how the newest photogrammetry software, creating point clouds and 3D models from digital photographs, compares with traditional laser scanning. In general, the verdict from industry long-timers is that the two technologies are mostly complementary. It’s not an either/or discussion. Except when it is.
I’m confident 3D data capture has big potential in the entertainment and gaming marketplace, so I try to stay on top of the FX web sites. Still, I was surprised to see the photogrammetry/lidar discussion play out so starkly in this article about the CGI used for the movie Lincoln.
See, essentially, they were using the Virginia Capitol building (please ignore that the linked article doesn’t know the difference between “capital” and “capitol”) in place of the federal Capitol building in Washington. Their fronts look the same, and so it was much easier to film in Richmond and have live shots of Lincoln giving his second inaugural address there, rather than in Washington, where there would be many more security hassles, etc. That meant CG extensions to make the Virginia building become the Washington building.
And for that, they used photogrammetry. This is Lincoln visual effects supervisor Ben Morris and CG supervisor Mark Wilson talking about it:
To aid in the digital add-ons, Morris visited the Capitol Building in Washington to acquire still photography for photogrammetry reconstruction, since the real building could not easily be scanned. “Historically, lidar has played a big part in everyone’s lives,” says Morris, “but we’ve got some new in-house tools that let us actually go and shoot flat or spherical images for photogrammetric scene reconstruction. We used a combination of Photoscan and ImageModeler to reconstruct the Washington Capitol.”
The CG build began with an initial test using a Capitol Building stock model to see if it would line up with the Virginia version. When that looked promising, Framestore embarked on a fuller build with the photo reference. “What’s great about the photogrammetry approach is that you can photograph as much as you like with as many close-ups, essentially a load of pictures,” explains Wilson. “Then based on the shots you need, you can process the parts you need rather than going through the lidar which requires dense data. But with photogrammetry to actually capture your source material, you’re just clicking away with a camera. It’s very quick and easy to do.”
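For anyone curious what the “photogrammetric scene reconstruction” Wilson describes actually involves under the hood, the core geometric step is triangulation: once the software has matched the same feature across two photographs and worked out where the cameras were, it intersects the two viewing rays to recover a 3D point. Here’s a minimal sketch of that step in Python – the camera matrices and coordinates are made-up toy values of my own, not anything from the Lincoln pipeline, and real packages like Photoscan do all of this (plus camera recovery) automatically across thousands of matches:

```python
# Toy illustration of linear (DLT) triangulation, the core step of
# photogrammetric reconstruction: recover a 3D point from its 2D
# projections in two photographs with known camera matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of
    # V^T from the SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.2, -0.1, 4.0])  # a point 4 units in front of camera 1

def project(P, X):
    """Project a 3D point through camera P to 2D pixel coordinates."""
    x = P @ np.append(X, 1)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))  # recovers [0.2, -0.1, 4.0]
```

With clean, noiseless matches like these the original point comes back exactly; with real photographs the same math runs over huge numbers of noisy matches, which is why shooting “a load of pictures” with plenty of overlap is exactly the right instinct.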
Well, yeah, that would be the benefit, wouldn’t it? Also the fact that you don’t have to have a $40,000+ laser scanner on hand. (I also really like that “historically, lidar has played a big part in everyone’s lives,” when, of course, commercial laser scanning has only really been around for about 15 years. I mean, historically, the iPad has been really fun to play with, right?)
Now, no one’s saying that computer graphics for movies represents a massive part of the laser scanning market, but it’s something, and here’s a very real case where photogrammetry software has taken lidar’s place. Is this a sign of things to come? Or is it actually an example of how complementary the two technologies really are? Really, lidar was overkill, wasn’t it? Photogrammetry in this case – where, when it comes down to it, you only need imagery, and 2D imagery at that – is the appropriate technology, and it makes 3D data capture both affordable and practical.
If that increases the use of 3D data capture in general, it only makes it more likely that professionals dabbling in photogrammetry will eventually turn to laser scanning for more exacting jobs, where precision matters or where light isn’t available.