Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Archive for the ‘OpenDroneMap’ Category

Taking Slices from ~~LiDAR~~ OpenDroneMap data: Part X

Posted by smathermather on February 23, 2017

Part 10 of N… , wait. This is a lie. This post is actually about optical drone data, not LiDAR data. It’s about the next phase of features for OpenDroneMap — automated and semi-automated classification of point clouds, creation of DTMs, and other such fun stuff.

To date, we’ve only extracted Digital Surface Models from ODM — the top surface of everything in the scene. As it is useful for hydrological modeling and other purposes to have a Digital Terrain Model estimated, we’ll be including PDAL’s Progressive Morphological Filter for the sake of DEM extraction. Here’s a small preview:
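In the meantime, the filter itself is simple at heart: morphologically open the surface with progressively larger windows, and drop any point that rises above the opened surface by more than a growing threshold. Here is a toy 1-D sketch; the real thing, PDAL's filters.pmf, operates on a gridded 2-D cloud, and the window sizes, thresholds, and sample profile below are invented for illustration:

```python
# Toy 1-D progressive morphological filter. PDAL's filters.pmf implements
# the real 2-D version (Zhang et al., 2003); windows and thresholds here
# are invented for illustration.

def opening(z, w):
    """Morphological opening: erosion (min) then dilation (max), half-width w."""
    n = len(z)
    eroded = [min(z[max(0, i - w):i + w + 1]) for i in range(n)]
    return [max(eroded[max(0, i - w):i + w + 1]) for i in range(n)]

def pmf_ground(z, windows=(1, 2), thresholds=(0.5, 1.0)):
    """Flag points as non-ground when they rise above the progressively
    opened surface by more than the (growing) elevation threshold."""
    ground = [True] * len(z)
    surface = list(z)
    for w, dh in zip(windows, thresholds):
        opened = opening(surface, w)
        for i in range(len(surface)):
            if surface[i] - opened[i] > dh:
                ground[i] = False
        surface = opened
    return ground

z = [100, 100, 100, 105, 105, 100, 100, 100]  # flat ground with a "building"
ground = pmf_ground(z)  # → [True, True, True, False, False, True, True, True]
```

The bump at indices 3–4 is eaten by the opening and flagged as non-ground, while gentle slopes survive; that asymmetry is what makes the filter useful for DTM extraction.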

Posted in 3D, Docker, OpenDroneMap, PDAL

Viewing Sparse Point Clouds from OpenDroneMap — GeoKota

Posted by smathermather on June 30, 2016

This is a post about OpenDroneMap, an open-source project I am a maintainer for. ODM is a toolchain for post-processing drone imagery to create 3D and mapping products. It’s currently in beta and under pretty heavy development. If you’re interested in contributing to the project, head over here. The Problem So for most of the […]

via Viewing Sparse Point Clouds from OpenDroneMap — GeoKota

Posted in 3D, Image Processing, OpenDroneMap, Optics, Other, Photogrammetry

OpenDroneMap — Paris Code Sprint

Posted by smathermather on February 29, 2016

I failed to make it to the Paris Code Sprint. It just wasn’t in the cards. But, my colleague Dakota and I sprinted anyway, with some help and feedback from the OpenDroneMap community.

So, what did we do? Dakota did most of the work. He hacked away at the cmake branch of ODM, a branch set up by Edgar Riba to substantially improve the installation process for ODM.

  • Fixed odm_orthophoto in the branch so that it produces GeoTIFFs
  • Fixed PMVS so that it is multithreaded again
  • Added rerun-all and rerun-from functions
  • Integrated @lupas78’s additions for an xyz point cloud output
  • Added benchmarking, an important soft number to track when code changes land
  • (Technically before the sprint) wrote the first test for OpenDroneMap
  • Cleaned code
What did I do? Mostly, I got caught up with the project. I haven’t been very hands-on since the Python port, let alone the cmake branch, so I became a little more pythonistic just by trying to successfully modify the code.
  • I also added PDAL to the build process
  • And I inserted PDAL into the point cloud translation process.

Currently, this means we’ve dropped support for LAZ output, as I haven’t successfully built PDAL with LAZ support, but it stages the project for LAZ support through PDAL, and allows us to tap into additional PDAL functionality in the future.

It was an intensive couple of days that would have been improved with French wine, but we were in Parma (Ohio). So, a shout out to the coders in Paris at the same time, and cheers to all.

Posted in 3D, Drone, OpenDroneMap, Photogrammetry, PMVS, UAS

OpenDroneMap — the future that awaits (part 삼)

Posted by smathermather on October 27, 2015

Two posts precede this one, ODM — the future that awaits, and ODM — the future that awaits (part 이)

Ben Discoe has a good point on the first post, specifically:

As I see it, the biggest gap is not in smoother uploading or cloud processing in the cloud. The biggest gap is Ground Control Points. Until there’s a way to capture those accurately at a prosumer price point, we are doomed to a patchwork of images that don’t align, which is useless for most purposes, like overlaying other geodata.

Ben’s right of course. If drone data is produced, analyzed, and combined in isolation, especially while prosumer and consumer grade drones don’t have verifiable ground control, the data can’t be combined with other geodata.

The larger framework that I’m proposing here side-steps those issues in two ways:

  1. Combine drone data with other data from the start. Drones are a platform and a choice. Open aerial imagery, the best available, should always be used in a larger mosaic. If Landsat is the best you’ve got… Use it. If a local manned flight has better data… use it. If an existing open dataset from a photogrammetric / engineering company is available… use it. And if the drone data gets you those extra pixels… use it. But if you don’t have ground control (which you likely don’t), tie it into the larger mosaic. Use that mosaic as the consistency check.
  2. The above isn’t always practical. Perhaps the existing data are really old, or are too low in resolution. Maybe the campaign is so big and other data sources so poor that the above is impractical. In this case, internal consistency is key. Since OpenDroneMap now leverages OpenSfM, we have the option of doing incremental calculation of camera positions and sparse point clouds. If we have 1000 images and need to add 50, we don’t have to reprocess the first 1000.

Posted in 3D, OpenDroneMap

OpenDroneMap — the future that awaits (part 이)

Posted by smathermather on October 25, 2015

In my previous post, ODM — the future that awaits, I started to chart out OpenDroneMap beyond the toolchain. Here’s a bit more, in outline form. More narrative and breakdown to come. (this is the gist)


Take OpenDroneMap from simple toolchain to an online processing tool + open aerial dataset. This would be distinct from and complementary to OpenAerialMap:

  1. Explicitly engage and provide a platform for drone enthusiasts to contribute imagery in return for processing services.
  2. Address and serve:
    • Aerial imagery
    • Point clouds
    • Surface models
    • Terrain models
  3. Additionally, as part of a virtuous circle, digitizing to OSM from this aerial imagery would refine the surface models and thus the final aerial imagery
    • More on this later: in short, digitizing OSM features on top of this dataset would result in 3D photogrammetric breaklines, which would in turn refine the quality of surface and terrain models and thus the aerial imagery.

Outputs / Data products:

  • Aerial basemap
    • (ultimately with filter for time / season?)
  • Point cloud (see e.g.
  • Digital surface model (similar to Open Terrain)
  • Digital elevation model (in conjunction with Open Terrain)

Likely Software / related projects

Back of the envelope calculations — Mapping a city with drones

If ODM is to take submissions of large portions of data, data collection campaigns may come into play. Here are some back-of-the-envelope calculations for flying whole cities, albeit the medium-size cities of San Francisco and Cleveland. This ignores time needed for all sorts of things, including coordinating with local air traffic control. As such, these are best-case estimates.

| Drone | Flight Height | Pixel Size | Overlap | Area per Flight | City          | City Area   | Total Flights | Total Flight Time |
|-------|---------------|------------|---------|-----------------|---------------|-------------|---------------|-------------------|
| E384  | 400 ft        | 3 cm       | 60%     | 1.5 sq miles    | San Francisco | 50 sq miles | 33            | 66 hours          |
| E384  | 400 ft        | 5 cm       | 90%     | 0.5 sq miles    | San Francisco | 50 sq miles | 100           | 200 hours         |
| E384  | 400 ft        | 3 cm       | 60%     | 1.5 sq miles    | Cleveland     | 80 sq miles | 54            | 108 hours         |
| E384  | 400 ft        | 5 cm       | 90%     | 0.5 sq miles    | Cleveland     | 80 sq miles | 160           | 320 hours         |
| Iris+ | 400 ft        | 3 cm       | 60%     | 0.078 sq miles  | San Francisco | 50 sq miles | 640           | 213 hours         |
| Iris+ | 400 ft        | 5 cm       | 90%     | 0.026 sq miles  | San Francisco | 50 sq miles | 1920          | 640 hours         |
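These totals follow from simple arithmetic: flights needed is city area divided by area covered per flight, rounded up, times an assumed per-flight duration (roughly 2 hours for the E384 and 20 minutes for the Iris+). The function and figures below are back-of-the-envelope assumptions, and the table rounds a little differently in places:

```python
import math

# Back-of-the-envelope campaign estimate. flight_hours per sortie is an
# assumption (~2 h for the E384 fixed wing, ~20 min for the Iris+ quad);
# this ignores transit, battery swaps, and airspace coordination entirely.
def campaign(city_sq_miles, sq_miles_per_flight, flight_hours):
    flights = math.ceil(city_sq_miles / sq_miles_per_flight)
    return flights, flights * flight_hours

# Cleveland at 5 cm / 90% overlap with the E384:
flights, hours = campaign(80, 0.5, 2.0)  # → (160, 320.0)
```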

Posted in 3D, OpenDroneMap

OpenDroneMap — the future that awaits

Posted by smathermather on October 24, 2015

Do you recall this 2013 post on GeoHipster?

Screen shot of geohipster write-up

Later on, I confessed my secret to making accurate predictions:

screen shot of 2014 predictions

In all this, however, we are only touching the surface of what is possible. After all, while we have a solid start on a drone imagery processing toolchain, we still have gaps. For example, when you are done producing your imagery from ODM, how do you add it to OpenAerialMap? There’s no direct automatic workflow here; there isn’t even a guide yet.

Screenshot of openaerialmap

And then once this is possible, is there a hosted instance of ODM to which I can just post my raw imagery, and the magical cloud takes care of the rest? Not yet. Not yet.

So, this is the dream. But the dream is bigger and deeper:

I remember first meeting Liz Barry of PublicLab at CrisisMappers in New York in 2014. She spoke about how targeted (artisanal?) PublicLab projects are. They aren’t trying to replace Google Maps one flight at a time, but focus on specific problems and documenting specific truths in order to empower community. She said it far more articulately and precisely, of course, with all sorts of sociological theory and terms woven into the narrative. I wish I had been recording.

Then, Liz contrasted PublicLab with OpenDroneMap. OpenDroneMap could map the world. OpenDroneMap could piece together from disparate parts all the pixels for the world:

  • At a high resolution (spatial and temporal)
  • For everywhere we can fly
  • One drone, balloon, and kite flight at a time
  • And all to be pushed into a common and public dataset, built on open-source software commonly shared and developed.

Yes. Yes it could, Liz. Exactly what I was thinking, but trying hard to focus on the very next steps.

This future ODM vision (the “How do we map the world with ODM?” question) relies on a lot of different communities and technologies, from PublicLab’s MapKnitter, to Humanitarian OpenStreetMap Team’s (HOT’s) OpenAerialMap / OpenImageryNetwork, to KnightFoundation / Stamen’s OpenTerrain, plus work by Howard Butler’s team on point clouds in the browser (Greyhound, PDAL, etc.).

Over the next while, I am going to write more about this, and the specifics of where we are now in ODM, but I wanted to let you all know, that while we fight with better point clouds, and smoother orthoimagery, the longer vision is still there. Stay tuned.

Posted in 3D, OpenDroneMap

Reflections on Goldilocks, Structure from Motion, near scale remote sensing, and the special problems therein

Posted by smathermather on October 19, 2015

Goldilocks and getting your reflection just right…

I have been reading a bit about drone remote sensing of agricultural fields. On one hand, it’s amazing, world-changing technology. On the other hand, some part of all of it is bunk. What do I mean? Well, applying techniques created for continent-scale analyses may not scale down well. Why? For one, all those clever techniques (like the Normalized Difference Vegetation Index, as well as its non-normalized siblings) rely heavily on two things: 1) being on average right over a large area; and 2) painting with such a broad brush as to be difficult to confirm or refute.
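For concreteness, the index in question is just a per-pixel band ratio, which is exactly why a view-dependent hot spot in either band skews it. A minimal sketch with invented reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
# Reflectance values below are invented for illustration; note how a small
# directional bump in either band moves the index noticeably.
def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

row_nir = [0.50, 0.45, 0.10]  # healthy leaf, healthy leaf, bare soil (hypothetical)
row_red = [0.08, 0.10, 0.09]
row_ndvi = [ndvi(n, r) for n, r in zip(row_nir, row_red)]
```

Being "on average right over a large area" means these per-pixel wobbles wash out at Landsat scales; at drone scales, each wobble is a visible artifact.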

There. I said it.

Ok, tangible example: you fly a drone over your ag field, stitch the images together, calculate a vegetation index of your choice, and you get a nice map of productivity, or plant stress, or whatever it is that some vendor is selling. One problem: which camera view do you use for each spot on the ground?

Diagram of reflectance gradient on leaf.


I call this the Goldilocks problem in remote sensing — directional reflectance messing with what you hope are absolute(ish) reflectance values:

If you use the forward image (away from the sun), you are going to get a hot spot because the light from the sun reflects more strongly in this direction. If you take the image in line with the sun, you are going to get something a little too dark, because of lack of backscatter. But if you use the image just above, you’ll get something just right.

Fix this problem (or only fly on cloudy days), and you will eliminate a lot of bias in your data. Long-term, addressing this when there are adequate data / images is on my mental wish list for texturing in OpenDroneMap. BTW, the big kids with satellites at their command have to deal with this too. They call it all sorts of things, but the Bidirectional Reflectance Distribution Function, or BRDF, is the common moniker.

Meshing — Why do we build a mesh after we build a point cloud?

Ok, another problem I have been giving some thought to… . In my previous post, I addressed some of the issues with point cloud density as well as appropriate (as opposed to generic) meshing techniques. We take a point cloud (exhibit A):

Dense point cloud

And we convert it to a mesh:

Dense Mesh


As we established yesterday, if we look too closely at the mesh, it’s disappointing:

Un-textured mesh of buildings


And so I asserted that the problem is that we aren’t dealing with different types of objects in different ways when building a mesh. I stand by that assertion.

But… why are we doing a point cloud independently of the mesh? Why not build them at the same time? Here, maybe these crude and inaccurate figures will help me communicate the idea:

Diagram of leaf with three camera observations


Diagram of building roof with 3 camera observations


Why aren’t we building that whole surface, rather than just the points we find as we go? Is this something that LSD-SLAM can help with? We would have to establish gradient cut-offs to decide where, e.g., the roof line ends and the ground begins, but that seems a reasonable compromise. (Perhaps while that’s happening we detect / region-grow that geometry, classify it as roof, and wrap it in a set of breaklines.)

The advantage here is that if we build the structure of the mesh directly from the images, then when we texture the mesh, we don’t have to make any guesses about which cameras to use for the mesh. More importantly, we are making minimal a priori assumptions about structures when building the mesh. I think this will lead to superior vegetation meshes.  One disadvantage is that we can’t guarantee our mesh is ever complete, and it will likely never be continuous, but hopefully as a trade-off becomes a much better approximation of structure which will help its use in, e.g. generating orthophotos.

Too abstract? Too dumb? IDK. Curious what you think.

Posted in 3D, OpenDroneMap

Geocoding from Structure from Motion — Integrating MicroMappers and OpenDroneMap

Posted by smathermather on October 17, 2015

There are many automated solutions that solve really interesting problems, but at this point in time, it is the semi-automated solutions that really fascinate me. These are solutions that combine the subtlety and intelligence of the human mind with the scale of automation.

In this context, I have been thinking a lot about OpenDroneMap — what are the parts of the toolchain that should be automated and improved, and where can a human touch help? Mostly I have been thinking about how ODM can be improved by a human touch, especially where such interpretation can aid in creating better 3-dimensional structure. However, recently, as I thought about MicroMappers, I coined (I think I coined it) the phrase “geocoding from structure from motion”.

Imagine, if you will, a system that allows a small army of volunteers to easily code images with information in order to help make sense of the world through those images. Imagine this system is tied to aerial imagery and used in a humanitarian crisis. Put these things together and you have MicroMappers.

Screen shot of micromappers video

So, what if anything digitized or circled was automatically geocoded in 3D space based on the 3D information derived from Structure from Motion? By geocoded, I don’t mean geocoded to the location of the camera, but to the location of the feature itself in 3D space, based on deriving implicit 3D information from the video combined with the GPS position of the camera.
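A minimal sketch of that idea, assuming a plain pinhole camera with a known pose and an SfM-derived depth for the marked pixel (the intrinsics, pose, and coordinates below are all hypothetical):

```python
# Back-project a marked pixel (u, v) into world coordinates using a pinhole
# model: camera intrinsics (fx, fy, cx, cy), a rotation matrix R from camera
# to world frame, the camera position, and a depth estimate recovered by SfM.
def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, cam_pos):
    # Ray through the pixel in camera coordinates, scaled to the SfM depth.
    ray = [(u - cx) / fx * depth, (v - cy) / fy * depth, depth]
    # Rotate into the world frame, then translate by the camera position.
    return [sum(R[i][j] * ray[j] for j in range(3)) + cam_pos[i]
            for i in range(3)]

# A feature at the principal point, 12 m in front of an un-rotated camera:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pt = pixel_to_world(320, 240, 12.0, fx=800, fy=800, cx=320, cy=240,
                    R=identity, cam_pos=[100.0, 200.0, 50.0])
# → [100.0, 200.0, 62.0]
```

The hard part, of course, is that SfM has to supply the pose and depth; once it does, every circled feature comes along for free.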

OpenDroneMap and a host of other tools already generate 3D info based on image inputs. Below is a video of LSD-SLAM, a technique that does this in real time that might make it a little clearer what this magic is.

(Enter, selfish side of this thought)

If a workflow like this worked out, then we could geocode anything by marking up individual stills in an image series. Further, the information we derive from this markup could then be used to help classify and improve other Structure from Motion outputs (digital surface models, digital elevation models, etc.). Finally, even before OpenDroneMap is feature-complete as a drone imagery processing tool, we would have an easy-to-use tool for deriving good-enough secondary products, i.e. geocoding, with the primary products — orthophoto, mesh, and point cloud — slated for improvement.

Post script note: this is not a funded project, but an interesting thought experiment. I’ll have a future of OpenDroneMap (as I see it) post up here shortly.

Posted in micromappers, OpenDroneMap

MSF Canada Drone Day follow-up

Posted by smathermather on July 13, 2015

Dirk’s MSF Canada Drone Day is officially the first blog post I have “re-blogged”. Please read:

or better yet here:

I had the pleasure of co-presenting with Dirk and Ivan, and the rest is well covered in Dirk’s post. It came together as an excellent day and I think you would be hard pressed to have had a better introduction to drones.

The day was valuable to me as an emerging practitioner. I learned more about the state of the art in hardware, software, regulations, philosophy, and RC control from this day, and it was inspiring to inhabit the same space with such dedicated practitioners for a short time.

Beyond the value of the workshop to the participants, the outcomes were the following, quoted from Dirk’s post:

As a first milestone we are looking to pull together a proposal to the Humanitarian Innovation Fund in collaboration with OpenDroneMap and supported by the Missing Maps consortium.

I love the extension of ODM into this space. This is the real value of open source, the opportunity to collaborate across the world, across industries and use cases, and across organizations. Expect to see improvements to ODM in usability, performance, and output qualities from this initiative. More on this later.

Another outcome / learning for me was observing Ivan’s OpenUAV. From his repo:

This is intended to be a repository for design files, instructions, photos, documentation, and everything else needed for people wishing to build and operate a UAV (drone) in a low-income, resource-poor environment. This is not about cutting-edge UAVs; it’s about democratizing the technology and getting it into the hands of more people, particularly in poorer countries and humanitarian settings.

Photo of OpenUAV example

Ivan undersells it. This is a pro quality quad copter on a very nice price diet — a brilliant piece of pragmatic engineering.

This little quad copter will make its way into drone-building workshops I’ll be offering in Cleveland and Columbus, Ohio, and Seoul, South Korea, in August and September. More details forthcoming.

If you are in Cleveland, plan to be at FOSS4G Seoul, or Ohio GIS, come build Ivan’s capable quad.


(BTW, Ivan says with a couple of 4C 8000mAh batteries, this sucker flies for 50 minutes… .)

Posted in 3D, Bundler, Camera Calibration, FOSS4G, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS

OpenAerialMap, OpenImageryNetwork, MapKnitter, OpenTerrain, and OpenDroneMap (cont. 1)

Posted by smathermather on June 7, 2015

Citing my previous post, let’s move on to more specifics on my thoughts regarding the integration of OpenAerialMap, OpenDroneMap, and MapKnitter as projects.

Image from kite over Seneca Golf Course

OpenDroneMap ❤ OpenAerialMap.

OpenAerialMap will become a platform by which drone users can share their imagery under an open license.

So, as the metadata spec for OpenAerialMap and OpenImageryNetwork matures, and as soon as a publicly available place for drone users to push their data comes online, ODM will write appropriate metadata and GeoTIFFs to go into OIN and be indexed by OAM. As an added bonus, ODM should be able to optionally auto-upload outputs to the appropriate node on the OpenImageryNetwork.

Lincoln Peak Vineyard

OpenDroneMap ❤ MapKnitter.

MapKnitter / ODM integration is pretty straightforward in my mind too. There are ways that MapKnitter complements ODM, and vice versa. ODM does not have a graphical user interface at this time; MapKnitter promises to fill that role in a future OpenDroneMap implementation. MapKnitter has no image blending or auto-matching tools; OpenDroneMap will soon have both.

  • Ways MapKnitter may help OpenDroneMap:
    • MapKnitter’s clever use of Leaflet to handle affine transformation of images is really exciting, and may help with improving final georeferencing for ODM datasets.
    • Regarding the above, one really useful thing for fliers launching balloons, drones, and kites without GPS would be the ability to quickly and easily perform really approximate georeferencing. I would envision a workflow where a user moves an image to its approximate position and size relative to a background aerial. ODM would be able to take advantage of this approximate georeferencing to optimize matching.
  • Ways OpenDroneMap could benefit MapKnitter
    • For large image datasets, matching images by hand can be very tedious. Automatic feature extraction and matching can help: OpenDroneMap could be adapted to serve match information back to MapKnitter to ease this process. This will become increasingly important as MapKnitter raises the ~60-image limit on images that it can process.
    • A near future version of ODM will have image blending / smoothing / radiometric matching. For the server portion of the MapKnitter infrastructure, this feature could be a really useful addition for production of final mosaics.
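The affine placement MapKnitter does with Leaflet boils down to scale, rotate, and translate applied to image pixel coordinates. A sketch of the idea, with all parameter values invented for illustration:

```python
import math

# Build a 2-D affine transform (uniform scale, rotation, translation) that
# maps image pixel coordinates to approximate map coordinates: the kind of
# rough georeference a user produces by dragging an image into place.
def affine(scale, theta_deg, tx, ty):
    t = math.radians(theta_deg)
    a, b = scale * math.cos(t), -scale * math.sin(t)
    c, d = scale * math.sin(t),  scale * math.cos(t)
    return lambda x, y: (a * x + b * y + tx, c * x + d * y + ty)

# Place a 640x480 image: 0.05 map units per pixel, no rotation, shifted origin.
place = affine(scale=0.05, theta_deg=0.0, tx=1000.0, ty=2000.0)
corners = [place(x, y) for x, y in [(0, 0), (640, 0), (640, 480), (0, 480)]]
```

Even this crude placement would give ODM a useful prior: matching can be restricted to image pairs whose transformed footprints overlap.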

These projects (plus OpenTerrain…) are really exciting in their own right. Together, they represent amazing opportunities to foster, cultivate, process, and serve a large community of imagery providers, from individuals and small entities capturing specific datasets using kites, drones, and balloons, to satellite imagery providers hosting their own “image buckets” of open imagery data. Exciting times.

Image over Groth Memorial from kite

Posted in 3D, Bundler, Camera Calibration, FOSS4G, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS