Big d@mn post: Photogrammetrically Derived Point Clouds

I chatted with Howard Butler (@howardbutler) today about a project he’s working on with Uday Verma (@udayverma @udaykverma) called Greyhound (https://github.com/hobu/greyhound), a point cloud querying and streaming framework over WebSockets for the web and your native apps. It’s a really promising project, and I hope to kick its tires really soon.

The conversation inspired this post, which I’ve been meaning to write for a while, summarizing Free and Open Source software for photogrammetrically derived point clouds. Why? Because storage. It’s so easy now to take so many pictures, use a structure from motion (SfM) approach to reconstruct the camera positions and a sparse point cloud, and then feed that into a multi-view stereo approach to construct dense point clouds in order to… okay. Getting ahead of myself.

PDPCs (photogrammetrically derived point clouds; I hate the term but haven’t found a better one) are 3D reconstructions of the world built from 2D pictures taken from multiple perspectives. This is the ViewMaster on steroids. Scratch that. This is the ViewMaster on blood transfusions and Tour de France level micro-doping. This is amazing technology, FTW.

Image of Viewmaster

So, imagine taking a couple thousand unreferenced tourist images and using them to reconstruct the original camera positions and a sparse cloud of colorized points representing the shell of the Colosseum:

Magical. This is step one. For this we use Bundler:

Colosseum images on the left, reconstructed point cloud and camera positions on the right
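If you’d rather script that step than babysit a terminal, here is a minimal sketch that shells out to Bundler’s stock RunBundler.sh helper from Python. The install location and image directory are placeholders to swap for your own setup:

```python
# Minimal sketch: shell out to Bundler's stock RunBundler.sh helper, which
# runs feature extraction, matching, and sparse SfM over a directory of
# JPEGs. The install location and image directory are placeholders.
import subprocess
from pathlib import Path

BUNDLER_BIN = Path("/opt/bundler/bin")               # assumption: your Bundler install
IMAGE_DIR = Path("~/colosseum_photos").expanduser()  # directory of source JPEGs

def run_bundler(image_dir: Path) -> None:
    """Run Bundler's default pipeline from inside the image directory;
    the sparse cloud and camera poses end up in bundle/bundle.out."""
    subprocess.run([str(BUNDLER_BIN / "RunBundler.sh")], cwd=image_dir, check=True)

if __name__ == "__main__":
    run_bundler(IMAGE_DIR)
    print("Sparse reconstruction written to", IMAGE_DIR / "bundle" / "bundle.out")
```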

Or maybe OpenMVG.

OpenMVG icon
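OpenMVG ships its SfM pipeline as a handful of small command-line binaries, so the same step can be scripted too. A hedged sketch of the classic four-step sequence follows; binary names and flags vary a bit between OpenMVG releases, so check them against your build, and every path here is a placeholder:

```python
# Hedged sketch of driving OpenMVG's incremental SfM pipeline from Python.
# Binary names and flags below follow the classic four-step tutorial and
# vary between OpenMVG releases, so check them against your build; every
# path is a placeholder.
import subprocess
from pathlib import Path

IMAGES = Path("images")          # input photos
MATCHES = Path("matches")        # working directory for features and matches
RECON = Path("reconstruction")   # SfM output (cameras + sparse cloud)
SENSOR_DB = Path("sensor_width_camera_database.txt")  # ships with OpenMVG

steps = [
    # 1. List images and pull focal lengths from EXIF + the sensor database.
    ["openMVG_main_SfMInit_ImageListing", "-i", str(IMAGES), "-o", str(MATCHES), "-d", str(SENSOR_DB)],
    # 2. Detect and describe features in every image.
    ["openMVG_main_ComputeFeatures", "-i", str(MATCHES / "sfm_data.json"), "-o", str(MATCHES)],
    # 3. Match features between image pairs.
    ["openMVG_main_ComputeMatches", "-i", str(MATCHES / "sfm_data.json"), "-o", str(MATCHES)],
    # 4. Incremental SfM: recover camera poses and a sparse point cloud.
    ["openMVG_main_IncrementalSfM", "-i", str(MATCHES / "sfm_data.json"), "-m", str(MATCHES), "-o", str(RECON)],
]

for cmd in steps:
    subprocess.run(cmd, check=True)  # abort the pipeline if any step fails
```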

Ok, now that we know where all our cameras are and have a basic sense of the structure of the scene, we can go deeper and reconstruct dense point clouds with a multi-view stereo approach. But let’s wait a second: this can be memory intensive, so first let’s split our scene into chunks we can process separately and put back together at the end. Enter Clustering Views for Multi-view Stereo (CMVS).

Image of CMVS clustering output

Honestly, the above image shows the calculations from our final step, multi-view stereo (MVS), broken into bite-size chunks for us by the previous step. Here we apply PMVS2:

Image of fully reconstructed dense point cloud of building
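Strung together, and assuming your Bundler output has already been converted into the pmvs/ directory layout (Bundle2PMVS handles that), the clustering and densification steps look roughly like this sketch. Binary names follow the stock CMVS-PMVS distribution; every path and number is a placeholder:

```python
# Rough sketch of the CMVS clustering + PMVS2 densification steps, assuming
# the Bundler output has already been converted into the pmvs/ layout (e.g.
# with Bundle2PMVS) and that the cmvs, genOption, and pmvs2 binaries from
# the CMVS-PMVS distribution are on your PATH.
import subprocess
from pathlib import Path

PMVS_DIR = Path("pmvs")
MAX_IMAGES = 50   # maximum images per cluster; lower this if you run out of RAM
NUM_CPUS = 4

# CMVS: split the scene into clusters small enough to densify independently.
subprocess.run(["cmvs", f"{PMVS_DIR}/", str(MAX_IMAGES), str(NUM_CPUS)], check=True)

# genOption: write one option-XXXX file per cluster for PMVS2 to consume.
subprocess.run(["genOption", f"{PMVS_DIR}/"], check=True)

# PMVS2: densify each cluster; colorized .ply clouds land in pmvs/models/.
for option_file in sorted(PMVS_DIR.glob("option-*")):
    subprocess.run(["pmvs2", f"{PMVS_DIR}/", option_file.name], check=True)
```

Each cluster writes its own .ply into pmvs/models/; stitch those back together and you have your full dense cloud.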

Now, if you’re like me, you want binaries. We have many options here.

If you are willing to depart from pure open source, VisualSFM is an option:

http://ccwu.me/vsfm/

But you’ll have to pay for commercial use. The same is true for CMPMVS:

http://ptak.felk.cvut.cz/sfmservice/websfm.pl?menu=cmpmvs

But it has the bonus of returning textured meshes, which is mighty handy. If you are a hobbyist UAS/UAV (drone) person, this might be a good option. See FlightRiot’s directions for it here:

http://flightriot.com/post-processing-software/cmpmvs/

Me, I’m a bit of a FOSS purist. For this you can roll your own, or get binaries from the Python Photogrammetry Toolbox (the image is the link):

Image of archeology mesh and point cloud reconstructions.

Finally, Mike James is working on some image-level georeferencing for point clouds, “coming soon”, so stay tuned for that:

Screen shots of georeferencing interface for sfm_georef

PDPCs for the win!
