Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Posts Tagged ‘SfM’

Announcing OpenDroneMap — Software for civilian (and humanitarian?) UAS post processing

Posted by smathermather on September 15, 2014

OpenDroneMap logo

This past Friday at FOSS4G in Portland, I announced the (early) release of OpenDroneMap, a software toolchain for civilian (and humanitarian?) UAS/UAV image processing. The software is currently a simple fork of an existing project, and will process unreferenced overlapping photos into an unreferenced point cloud. Directions are included in the repo for creating a mesh and UV textured mesh as the subsequent steps, but the aim is to have this all automated in a single workflow.

Projects like Google Streetview, Mapillary, PhotoSynth, and most small UAS (drone) post-processing software, such as that offered by senseFly, share a commonality: they all use computer vision techniques to create spatial data from unreferenced photography.

Screenshot of drone image thumbnails

OpenDroneMap is an open source project to unlock these computer vision techniques and make them easy to use, so that whether from street-level photos or from civilian drone imagery, the average user will be able to generate point clouds, 3D surface models, DEMs, and orthophotography from unreferenced photos.




Screen shot of textured mesh as viewed in MeshLab

To those who may be wondering — wow, cool, but what happens to the data at the end of the day? How do we share it back to a common community? — the aim is for the toolchain to also be able to optionally push to a variety of online data repositories: hi-resolution aerials to OpenAerialMap, point clouds to OpenTopography, and digital elevation models to an emerging global repository (yet to be named…). That leaves only digital surface model meshes and UV textured meshes with no global repository home. (If anyone is working on global storage of geographically referenced meshes and textured meshes, please get in touch….)


So, try it out: will point you to the repo. Clone it, fork it, try it out. Let me know what you think.

Test data can be found here: (credit Fred Judson, Ohio Department of Transportation)

Presentations on it can be found here: and eventually here:


PostScript: Re: meshes and point clouds on the web, Howard Butler and others are working on some pretty cool tools for handling just this problem. Check out, for example and

Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS, UAS | 3 Comments »

FOSS4G Korea 2014, poor GPS photos, and mapillary

Posted by smathermather on August 31, 2014

As I have been moving around, whether traveling to Seoul or within Seoul, I have taken a lot of pictures. Some have GPS data, and those I have processed and sent to Mapillary, like the few hundred I took on a day wandering Seoul:

Screen shot of Mapillary overview of Seoul

I've taken a lot of strange videos too. I took a couple of videos of my feet in the subway train just to get the music that plays to notify passengers of an approaching stop. Walking around Bukhansan National Park, I have taken many sign pictures. As I work for a park district, how signage and wayfinding are handled here is fascinating, both in the elements I can understand, e.g. choice of material, color, frequency, how the letters are carved, and in those elements that I cannot yet understand, i.e. exactly how the Korean-language wayfinding portions work.

But mostly I have been cataloging as much as I can in order to give my children a sense and feel for the city. I am realizing this imperative has given me a child-like view of the city. (Of course, my enthusiasm for the mundane does get me the occasional funny look from locals… . But hey! What could feel more like home than people thinking I am a little strange.)

This blog wouldn’t be mine without a technical twist to the narrative, so let’s dive into some geographic problems worth solving: the camera I have has built-in GPS and compass, which makes it seemingly ideal for Mapillary uploads. Except the GPS isn’t that accurate, doesn’t always update from photo to photo, struggles in urban areas in general, etc. And so it is that I am working on a little solution for that problem. First let me illustrate the problem better.

A sanity check on the GPS of the data can easily be done in QGIS using the Photo2Shape plugin:

Screen snapshot of photo2shape plugin install screen

Screenshot of distribution of camera GPS points in QGIS
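Under the hood, plugins like Photo2Shape just read the EXIF GPS tags, which store each coordinate as degrees/minutes/seconds rationals plus an N/S/E/W hemisphere reference. A minimal Python sketch of that conversion (the function name is mine, not the plugin's):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal degrees
    return -value if ref in ("S", "W") else value
```

Feed each photo's latitude and longitude tags through this and you have plottable points.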

Let’s do two things to improve our map. For old times’ sake, we’ll add a little red-dot fever, and use one of the native tile maps, Naver, via the TMS for Korea plugin.

Naver map with photo locations overlayed as red dots

We can see our points are both unevenly distributed and somewhat clumped. How clumped? Well, according to my fellow GeoHipsters on Twitter, hex bin maps are so 2013, so instead we’ll just use some feature blending (multiply) plus low saturation on our red (i.e. use pink) to show intensity of overlap:

Capture of map showing overlap of points with saturation of color increasing where overlaps exist.

Ok, that’s a lot of overlaps for pictures that were taken in a straight-line series. Also, note the line isn’t so straight. Yes, I was sober. No, not even with soju can I walk through so many buildings.
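That clumping is easy to quantify without a map at all. A small Python sketch (the function name and rounding precision are my choices) that counts photos sharing an effectively identical GPS fix, i.e. a receiver that failed to update between shots:

```python
from collections import Counter

def clumped_fixes(coords, ndigits=5):
    """Given (lon, lat) pairs from photo EXIF, return the fixes that
    more than one photo shares, rounded to ~1 m precision by default.
    Repeated fixes suggest the GPS did not update between exposures."""
    rounded = Counter((round(lon, ndigits), round(lat, ndigits))
                      for lon, lat in coords)
    return {fix: n for fix, n in rounded.items() if n > 1}
```

Running this over the day's photos gives a quick severity score for the stale-fix problem before any cartography.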

As with all problems when I’m obsessed with a particular technology, “the solution here is to use <particular technology with which I am currently obsessed>”. In this case, we substitute <particular technology with which I am currently obsessed> with ‘Structure from Motion’, i.e. OpenDroneMap. ODM would give us the correct relative locations of the original photos. Combined with the absolute locations (as bad as they are), perhaps we could get a better estimate. Here’s a start (confession — mocked up in Agisoft PhotoScan. Sssh. Don’t tell) showing in blue the correct relative camera positions:

Image of sparse point cloud and relative camera positions

See how evenly spaced the camera positions are? You can also see the sparse point cloud which hints at the tall buildings of Gangnam and the trees in the boulevard.

  • Next step: Do this in OpenDroneMap.
  • Following Step: Find appropriate way to correlate with GPS positions.
  • Then: Correct model to match real world.
  • Finally: Update GPS ephemeris in original photos.
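The middle two steps boil down to estimating a similarity transform (scale, rotation, translation) that best maps the relative SfM camera positions onto the noisy GPS fixes. A hedged numpy sketch of the standard least-squares solution (Umeyama's method; this is my illustration, not what ODM ships):

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform mapping src points onto dst:
    returns (scale, R, t) such that dst ~= scale * src @ R.T + t.
    src: relative SfM camera positions; dst: (noisy) GPS positions."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0                      # guard against a reflection
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

With real data you would solve this robustly (e.g. RANSAC over camera/GPS pairs) since the GPS outliers are exactly the problem being corrected.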

So, Korea has inspired another multi-part series. Stay tuned.

Posted in 3D, Analysis, Conference, Conferences, FOSS4G Korea | 2 Comments »

Getting Bundler and friends running — part deux

Posted by smathermather on May 6, 2014

In my previous post on Getting Bundler and friends running, I suggested how to modify an existing script to get Bundler and other structure from motion parts and pieces up and running.  Here’s my follow-up.

Install Vagrant and VirtualBox.

Download (or clone) this repo:

Navigate into the cloned or unzipped directory (on the command line), run

vagrant up

Go have a cup of coffee. Come back. Run

vagrant ssh

Congratulations. You have an Ubuntu machine capable of all sorts of StructureFromMotion / OpenDroneMap goodness.
Next: A tutorial on how to use the tools you just compiled… .

Posted in 3D, Bundler, Drone, Photogrammetry, PMVS, UAS | 5 Comments »

Getting Bundler and friends running

Posted by smathermather on April 27, 2014

Anyone who has jumped down the rabbit hole of computer vision has run into dependency h*ll getting software to run.  I jumped down that hole again today with great success that I don’t want to forget (these directions are for Ubuntu, fyi).

First, clone BundlerTools:

This will download and compile (almost) everything for you, which is a wonderful thing.  The one exception is graclus.  This doesn’t have direct download access anymore; you have to register, and then you will receive an e-mail with the download. So, to get the BundlerTools to work, you will need to post this someplace web accessible.  Then open and modify the install script, changing the following line (128)


to match your new download location.
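In script terms, the fix amounts to rewriting one download line. A hypothetical Python helper to that effect (the marker and URL arguments are purely illustrative; the real script's line 128 may differ in shape):

```python
def repoint_download(script_text, marker, new_url):
    """Rewrite any line containing `marker` (e.g. the original graclus
    download host) so that it fetches from `new_url` instead.
    Illustrative only: a one-line manual edit works just as well."""
    out = []
    for line in script_text.splitlines():
        if marker in line:
            # keep the command prefix, swap in our own download location
            line = line.split(marker)[0] + new_url
        out.append(line)
    return "\n".join(out)
```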

Now change to executable:

chmod 700

and run


Posted in 3D, Bundler, Drone, Photogrammetry, PMVS, UAS | 1 Comment »

Inventorying linear assets– really high resolution orthos

Posted by smathermather on April 5, 2014

I have been contemplating all sorts of varied uses of Structure from Motion techniques. One of those uses, in addition to flying sUAVs (drones), is simply to orthorectify and generate 3D meshes from ordinary photos. This has really great potential for linear assets like streams and rivers, trails and roads. We’ll have to begin to contemplate how we’ll use (and summarize!) the incredible amount of data and information that will suddenly be available to us, e.g. this subset of an orthophoto of a multi-purpose trail:

Screen shot of orthophoto showing readable text: "Be courteous give notice when passing"


Posted in 3D, Drone, Image Processing, Optics, Photogrammetry, UAS | Leave a Comment »

Cool write up on Structure from Motion (SfM)

Posted by smathermather on March 5, 2014

A really cool SfM workflow write up:

which includes texture mapping (yay!)… .

Pic from the end of the workflow:

Thanks to Fred Judson for the tip on this one.

Posted in 3D, Optics, Photogrammetry | Leave a Comment »

Short follow up: Photogrammetrically Derived Point Clouds

Posted by smathermather on February 5, 2014

In my previous post, I briefly covered software for creating photogrammetrically derived point clouds.  I didn’t summarize it like this, but PDPCs can be created in three easy steps:

  1. Structure from Motion for unordered image collections
  2. Clustering Views for Multi-view Stereo
  3. Multi-view stereo (dense point cloud reconstruction)

But, unfairly, I glossed over some of the complications of creating meaningful data from PDPC processing. Truth be told, it’s probably a nine-step process:

  1. Optimize image order according to geography to optimize step 2
  2. Structure from Motion
  3. Clustering Views for Multi-view Stereo
  4. Multi-view stereo
  5. Georeference
  6. Create breaklines or disparity mapping or find some other process for refining our surface model
  7. Generate surface model (mesh)
  8. Texture mesh
  9. Render mesh as ortho
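The nine steps chain naturally into a pipeline. As a shape sketch only (every stage name here is a placeholder of mine, not a wrapper around a real tool):

```python
# Placeholder skeleton of the nine-step PDPC workflow.  Each stage takes
# the accumulating state dict and returns it updated; real stages would
# shell out to Bundler, CMVS, PMVS2, a mesher, and so on.
def make_stage(name):
    def stage(state):
        state.setdefault("log", []).append(name)  # record that the step ran
        return state
    return stage

PIPELINE = [make_stage(n) for n in (
    "order_images", "sfm", "cmvs", "mvs", "georeference",
    "refine_surface", "mesh", "texture", "render_ortho",
)]

def run(state):
    """Run every stage in order over a shared state dict."""
    for stage in PIPELINE:
        state = stage(state)
    return state
```

The point of the skeleton is the ordering: each stage consumes what the previous one produced, which is why skipping the unglamorous middle steps (georeferencing, surface refinement) shows up in the final ortho.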


Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, Optics, Photogrammetry, PMVS, UAS | Leave a Comment »

Big d@mn post: Photogrammetrically Derived Point Clouds

Posted by smathermather on February 4, 2014

I chatted with Howard Butler (@howardbutler) today about a project he’s working on with Uday Verma (@udayverma @udaykverma) called Greyhound, a point cloud querying and streaming framework over websockets for the web and your native apps. It’s a really promising project, and I hope to kick the tires on it really soon.

The conversation inspired this post, which I’ve been meaning to write for a while, summarizing Free and Open Source software for photogrammetrically derived point clouds. Why? Because storage. It’s so easy now to take so many pictures, use a structure from motion (SfM) approach to reconstruct camera positions and a sparse point cloud, and then use that with a multi-view stereo approach to construct dense point clouds in order to… okay. Getting ahead of myself.

PDPCs (photogrammetrically derived point clouds; hate the term, but I haven’t found a better one) are 3D reconstructions of the world based on 2D pictures from multiple perspectives. This is ViewMaster on steroids. Scratch that. This is ViewMaster on blood transfusions and Tour de France-level micro-doping. This is amazing technology FTW.

Image of Viewmaster

So, imagine taking a couple thousand unreferenced tourist images and using them to reconstruct the original camera positions and a sparse cloud of colorized points representing the shell of the Colosseum:

Magical.  This is step one.  For this we use bundler:

Colosseum images on the left, reconstructed point cloud and camera positions on the right

Or maybe OpenMVG.

OpenMVG icon

Ok, now that we know where all our cameras are and have a basic sense of the structure of the scene, we can go deeper and reconstruct dense point clouds with a multi-view stereo approach. But let’s wait a second– this can be memory intensive, so first let us split our scene into chunks we can process in a way that lets us put it all back together at the end.  Enter Clustering Views for Multi-view Stereo (CMVS).


Honestly, the above image shows the calculations from our final step, multi-view stereo (MVS), but broken into bite-size chunks for us by the previous step. Here we apply PMVS2:

Image of fully reconstructed dense point cloud of building
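The chunking idea can be sketched crudely in Python. CMVS actually clusters on the image/view graph with visibility constraints; this toy sequential splitter (names and defaults are mine) only conveys the memory-bounding shape of the step:

```python
def chunk_views(image_ids, max_cluster=50, overlap=5):
    """Greedy split of an ordered image set into overlapping clusters so
    that each MVS run stays within memory.  The overlap lets adjacent
    dense reconstructions be merged back together afterwards."""
    clusters, start = [], 0
    while start < len(image_ids):
        end = min(start + max_cluster, len(image_ids))
        clusters.append(image_ids[start:end])
        if end == len(image_ids):
            break
        start = end - overlap  # step back so neighboring chunks share views
    return clusters
```

Each chunk then gets its own dense reconstruction pass, and the shared views stitch the pieces back into one cloud.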

Now, if you’re like me, you want binaries. We have many options here.

If you are willing to depart from the pure open source, VisualSFM is an option:

But, you’ll have to pay for commercial use.  The same is true for CMPMVS:

But it has the bonus of returning textured meshes, which is mighty handy.  If you are a hobbyist UAS/UAV (drone) person, this might be a good option.  See FlightRiot’s directions for it here:

Me, I’m a bit of a FOSS purist. For this you can roll your own, or get binaries from the Python Photogrammetry Toolbox (the image is the link):

Image of archeology mesh and point cloud reconstructions.

Finally, Mike James is working on some image level georeferencing for point clouds “coming soon”, so stay tuned for that:

Screen shots of georeferencing interface for sfm_georef

PDPCs for the win!

Posted in 3D, Bundler, Camera Calibration, Drone, Image Processing, Optics, Photogrammetry, PMVS, UAS | 4 Comments »