Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Posts Tagged ‘CMVS’

Moar kite flight pics

Posted by smathermather on April 27, 2015


Posted in 3D, Bundler, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS, UAS | Leave a Comment »

Kite flight (too windy for balloons, ahem “aerostats”)

Posted by smathermather on April 20, 2015

Inflation of aerostat

The aerostat hangar.

The end of the string.

The 9-footer is just so stable. But not enough wind to lift the cameras this day.

And so we send up the 16-foot workhorse. See that little dot? That’s the camera array.

The 16-footer flew nice and vertical, but pulled really hard. Processed images to follow soon.

Canon S100s from Kaptery: the silver one is NIR-adapted; the black one is regular RGB color.

Edit: forgot the camera array:

CIR image from balloon:

IR image from the flight.

IR image from the flight.

Posted in 3D, Bundler, Image Processing, OpenDroneMap, Photogrammetry, PMVS, UAS | Leave a Comment »

Announcing OpenDroneMap — Software for civilian (and humanitarian?) UAS post processing

Posted by smathermather on September 15, 2014

OpenDroneMap logo

This past Friday at FOSS4G in Portland, I announced the (early) release of OpenDroneMap, a software toolchain for civilian (and humanitarian?) UAS/UAV image processing. The software is currently a simple fork of an upstream project, and will process unreferenced overlapping photos into an unreferenced point cloud. Directions are included in the repo to create a mesh and UV textured mesh as the subsequent steps, but the aim is to have this all automated in a single workflow.

Projects like Google Streetview, Mapillary, PhotoSynth, and most small-UAS (drone) post-processing software, such as that offered by senseFly, share a commonality: they all use computer vision techniques to create spatial data from unreferenced photography.

Screenshot of drone image thumbnails

OpenDroneMap is an open source project to unlock these computer vision techniques and make them easy to use, so that, whether from street-level photos or from civilian drone imagery, the average user will be able to generate point clouds, 3D surface models, DEMs, and orthophotography by processing unreferenced photos.




Screen shot of textured mesh as viewed in MeshLab

To those who may be wondering: wow, cool, but what happens to the data at the end of the day? How do we share it back to a common community? The aim is for the toolchain also to be able to optionally push to a variety of online data repositories: hi-resolution aerials to OpenAerialMap, point clouds to OpenTopography, and digital elevation models to an emerging global repository (yet to be named…). That leaves only digital surface model meshes and UV textured meshes with no global repository home. (If anyone is working on global storage of geographically referenced meshes and textured meshes, please get in touch…)


So, try it out: will point you to the repo. Clone it, fork it, try it out. Let me know what you think.

Test data can be found here: (credit Fred Judson, Ohio Department of Transportation)

Presentations on it can be found here: and eventually here:


PostScript: Re: meshes and point clouds on the web, Howard Butler and others are working on some pretty cool tools for handling just this problem technologically. Check out, for example, and

Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS, UAS | 3 Comments »

Short follow up: Photogrammetrically Derived Point Clouds

Posted by smathermather on February 5, 2014

In my previous post, I briefly cover software for creating photogrammetrically derived point clouds. I didn’t summarize it like this, but PDPCs can be created in three easy steps:

  1. Structure from Motion for unordered image collections
  2. Clustering Views for Multi-view Stereo
  3. Multi-view stereo (dense point cloud reconstruction)
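Those three steps map almost one-to-one onto the classic Bundler/CMVS/PMVS2 command line. Here’s a minimal Python wrapper sketch; the binary names and argument order follow the stock source distributions as I remember them, so treat them (and the paths) as assumptions to adjust for your own build:

```python
"""Sketch of the three-step PDPC pipeline as a tiny wrapper script."""
import subprocess
from pathlib import Path

def pdpc_commands(max_cluster_size=50):
    # One command list per step, in the order described above.
    return [
        ["RunBundler.sh"],                         # 1. Structure from Motion
        ["cmvs", "pmvs/", str(max_cluster_size)],  # 2. cluster views for MVS
        ["genOption", "pmvs/"],                    # 2b. write per-cluster PMVS options
        ["pmvs2", "pmvs/", "option-0000"],         # 3. dense MVS (repeat per option file)
    ]

def run_pipeline(image_dir, max_cluster_size=50):
    # Run each step inside the image directory; stop on the first failure.
    for cmd in pdpc_commands(max_cluster_size):
        subprocess.run(cmd, cwd=Path(image_dir), check=True)
```

The `max_cluster_size` argument is the knob that keeps step 3 from eating all your RAM, which is the whole point of step 2.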

But, unfairly, I gloss over some of the complications of creating meaningful data from PDPC processing. Truth be told, it’s probably a 9-step process:

  1. Optimize image order according to geography to optimize step 2
  2. Structure from Motion
  3. Clustering Views for Multi-view Stereo
  4. Multi-view stereo
  5. Georeference
  6. Create breaklines or disparity mapping or find some other process for refining our surface model
  7. Generate surface model (mesh)
  8. Texture mesh
  9. Render mesh as ortho
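Step 5 deserves a quick illustration. The SfM reconstruction lives in an arbitrary model frame, so georeferencing it against ground control points is a 7-parameter similarity (Helmert) transform: one scale, a rotation, and a translation. Here’s a sketch in Python/numpy using the standard Umeyama/Procrustes closed form; the function name and interface are mine, not any particular tool’s:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t so that dst ~= s * src @ R.T + t (points are rows)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n, dim = src.shape
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Xc, Yc = src - mu_s, dst - mu_d
    cov = Yc.T @ Xc / n                      # cross-covariance of the two frames
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(dim)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[-1, -1] = -1.0                     # guard against a reflection
    R = U @ D @ Vt
    var_src = (Xc ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_src   # least-squares scale
    t = mu_d - s * mu_s @ R.T
    return s, R, t
```

Feed it three or more non-collinear control points (model coordinates in, world coordinates out) and apply the result to the whole cloud.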


Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, Optics, Photogrammetry, PMVS, UAS | Leave a Comment »

Big d@mn post: Photogrammetrically Derived Point Clouds

Posted by smathermather on February 4, 2014

I chatted with Howard Butler (@howardbutler) today about a project he’s working on with Uday Verma (@udayverma @udaykverma) called Greyhound, a point cloud querying and streaming framework over websockets for the web and your native apps. It’s a really promising project, and I hope to kick the tires of it really soon.

The conversation inspired this post, which I’ve been meaning to write for a while, summarizing Free and Open Source software for photogrammetrically derived point clouds. Why? Because storage. It’s so easy now to take so many pictures, use a structure from motion (SfM) approach to reconstruct camera positions and a sparse point cloud, and then use that with a multi-view stereo approach to construct dense point clouds in order to… okay. Getting ahead of myself.

PDPCs (photogrammetrically derived point clouds; hate the term, but I haven’t found a better one) are 3D reconstructions of the world based on 2D pictures from multiple perspectives. This is ViewMaster on steroids. Scratch that. This is ViewMaster on blood transfusions and Tour de France-level micro-doping. This is amazing technology, FTW.

Image of Viewmaster

So, imagine taking a couple of thousand unreferenced tourist images and using them to reconstruct the original camera positions and a sparse cloud of colorized points representing the shell of the Colosseum:

Magical. This is step one. For this we use Bundler:

Colosseum images on the left, reconstructed point cloud and camera positions on the right

Or maybe OpenMVG.

OpenMVG icon
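Whichever tool you pick, the geometric kernel of that sparse cloud is triangulation: once two (or more) camera matrices are known, each matched feature pair pins down a 3D point. Here’s a minimal linear (DLT) sketch in Python/numpy, assuming the projection matrices are already estimated — estimating them is the hard part Bundler and OpenMVG actually do:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same feature in each image.
    Returns the 3D point in the cameras' common frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous solution
    return X[:3] / X[3]        # dehomogenize
```

A real SfM pipeline does this for tens of thousands of matched features, then polishes everything with bundle adjustment.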

Ok, now that we know where all our cameras are, and have a basic sense of the structure of the scene, we can go deeper and reconstruct dense point clouds with a multi-view stereo approach. But let’s wait a second: this can be memory intensive, so first let’s split our scene into chunks we can process separately, in a way that lets us put it all back together at the end. Enter Clustering Views for Multi-view Stereo (CMVS).
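Real CMVS clusters on the image-match graph and point visibility, but a toy sketch shows the idea: carve the camera set into overlapping, memory-sized chunks so each dense reconstruction fits in RAM and the pieces still stitch back together. Everything named here is mine, not CMVS’s:

```python
import numpy as np

def cluster_cameras(positions, max_size=50, overlap=5):
    """Toy stand-in for CMVS clustering.

    Splits camera indices into overlapping chunks of at most max_size,
    ordered along the dominant axis of the camera track, so each chunk
    can be densified independently and merged afterward.
    """
    pos = np.asarray(positions, float)
    centered = pos - pos.mean(axis=0)
    # Principal direction of the camera positions (first right singular vector).
    axis = np.linalg.svd(centered, full_matrices=False)[2][0]
    order = np.argsort(centered @ axis)
    step = max_size - overlap          # stride; overlap glues chunks together
    clusters = [order[i:i + max_size].tolist()
                for i in range(0, len(order), step)]
    # Drop a trailing chunk that is fully contained in the previous one.
    if len(clusters) > 1 and set(clusters[-1]) <= set(clusters[-2]):
        clusters.pop()
    return clusters
```

The overlap is what lets the per-chunk dense clouds line up when you merge them; CMVS gets the same effect from shared visible points rather than shared camera positions.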


Honestly, the above image shows the calculations from our final step, multi-view stereo (MVS), but broken into bite-size chunks for us in the previous step. Here we apply PMVS2:

Image of fully reconstructed dense point cloud of building

Now, if you’re like me, you want binaries. We have many options here.

If you are willing to depart from the pure open source, VisualSFM is an option:

But you’ll have to pay for commercial use. The same is true for CMPMVS:

But it has the bonus of returning textured meshes, which is mighty handy. If you are a hobbyist UAS/UAV (drone) person, this might be a good option. See FlightRiot’s directions for it here:

Me, I’m a bit of a FOSS purist. For this you can roll your own, or get binaries from the Python Photogrammetry Toolbox (the image is the link):

Image of archeology mesh and point cloud reconstructions.

Finally, Mike James is working on some image-level georeferencing for point clouds “coming soon”, so stay tuned for that:

Screen shots of georeferencing interface for sfm_georef

PDPCs for the win!

Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, Optics, Photogrammetry, PMVS, UAS | 4 Comments »