Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Archive for the ‘OpenDroneMap’ Category

OpenDroneMap on the road part II

Posted by smathermather on March 28, 2017

Thinking a little more about moderately large compute resources and their container (see previous post), I revised my analysis to see whether we can fit these 10 NUCs, plus switch and outlets, into a carry-on-sized case. At first blush, it seems feasible:

pelican_cloud_1535

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 5 Comments »

OpenDroneMap on the road

Posted by smathermather on March 27, 2017

Contemplation

This is a theoretical post. Imagine for a moment that OpenDroneMap can scale to the compute resources that you have in an elastic and sane way (we are short weeks away from the first work on this), and so, if you are a typical person in the high-speed internet world, you might be thinking, “Great! Let’s throw this up on the cloud!”

But imagine for a moment you are in a network limited environment. Do you process on a local laptop? Do you port around a desktop? The folks in the humanitarian space think about this a lot — depending on the project, one could spend weeks or months in network limited environments.

Enter POSM

Folks at the American Red Cross (ARC) have been thinking about this a lot. The result, built to aid mapping in places like rural West Africa, is Portable OpenStreetMap, or POSM: a tool for doing all the OpenStreetMap stuff, but totally and temporarily offline.

The software for this is critical, but I’ve been increasingly interested in the hardware side of things. OpenDroneMap, even with its upcoming processing, memory, and scaling improvements, will still require more compute resources than, say, OpenMapKit and Field Papers. So I’ve been contemplating: once the improvements are in place, what kind of compute center could you haul into the field with you?

I’m not just thinking humanitarian and development use cases either — what can we do to make processing drone imagery in the field faster? Can we make it fast enough to get results before leaving the field? Can we modify our flight planning based on the stream of data being processed and adapt while we are there? Our real costs for flying are often finding staff and weather windows that are good, and sometimes we miss opportunities in the delay between imagery capture and processing. How can we close that loop faster?

The NUC

On the hardware side of the house, the folks at ARC are using Intel NUC kits. For ODM, as I understand it, they go a step up in processing power from their standard specs to something with an i7. So I got to thinking: can we put together a bunch of these, running on a generator, without breaking the bank on weight (keeping it under 50 lbs)? It turns out, maybe we can. For around $10,000, you might assemble 10 of these 4-core NUCs with a network switch, stuff it all into a Pelican Air 1605 case, and have 320 GB of RAM and 2.5 TB of storage. More storage can be added if necessary.
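The arithmetic behind those totals is easy to sketch. The per-node figures below are illustrative assumptions; only the cluster-level numbers (roughly $10,000, 320 GB of RAM, 2.5 TB of storage) come from the estimate above:

```python
# Back-of-the-envelope totals for the hypothetical field cluster.
# Per-node values are assumptions for illustration; only the cluster
# totals (~$10,000, 320 GB RAM, 2.5 TB storage) come from the post.
NODES = 10

per_node = {
    "cost_usd": 950,     # assumed price of one i7 NUC kit + RAM + SSD
    "ram_gb": 32,        # 10 x 32 GB matches the 320 GB total
    "storage_tb": 0.25,  # 10 x 0.25 TB matches the 2.5 TB total
    "cores": 4,
}

totals = {key: value * NODES for key, value in per_node.items()}
# Whatever is left of the $10,000 covers the switch, case, and cabling.
overhead_usd = 10_000 - totals["cost_usd"]

print(totals["ram_gb"], "GB RAM,", totals["storage_tb"], "TB storage,",
      totals["cores"], "cores")
```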

This is a thought experiment so far, and may not be the best way to get compute resources into the field (your mileage may vary), but it’s an interesting thought.

field_compute.PNG

Cost Breakdown

field_compute1

Follow up

Any thoughts? Anyone deployed serious compute resources to the field for drone image processing? I’d love to hear what you think.

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 2 Comments »

Time for localization?

Posted by smathermather on March 26, 2017

Just saw this great blog post by my friend Mr. Yu at Korea National Park on using OpenDroneMap. If you need it in English, Google seems to translate it rather well:


Maybe it’s time to look at localization for WebODM… .

Posted in 3D, OpenDroneMap, Other | Tagged: | Leave a Comment »

Scaling OpenDroneMap, necessary (and fun!) next steps

Posted by smathermather on March 8, 2017

Project State

OpenDroneMap has really evolved since I first put together a concept project presented at FOSS4G Portland in 2014 and hacked with my first users (Michele M. Tobias & Alex Mandel). At this stage, we have a really nicely functioning tool that can take drone images and output high-quality geographic products. The project has 45 contributors, hundreds of users, and a really great community (special shout-out to Piero Toffanin and Dakota Benjamin, without whom the project would be nowhere near as viable, active, or wonderful). Recent improvements can be roughly categorized into data quality improvements and usability improvements. Data quality was aided by the inclusion of better point cloud creation from OpenSfM and better texturing from mvs-texturing. Usability improvements have largely come through the development of WebODM as an easy-to-use, easy-to-deploy front end for OpenDroneMap.

With momentum behind these two directions (improved usability and improved data output), it’s time to think a little about how we scale OpenDroneMap. It works great for individual flights (up to a few hundred images at a time), but a promise of open source projects is scalability. We regularly get questions from the community about how to run ODM on larger and larger datasets in a sustainable and elastic way. To answer these questions, let me outline where we are going.

Project Future

Incremental optimizations

When I stated that scalability is one of the promises of open source software, I mostly meant scaling up: if I need more computing resources with an open source project, I don’t have to purchase more software licenses; I just need to rent or buy more computing resources. But an important element of scalability is per-unit use of computing resources as well. If we are not efficient and thoughtful about how we use resources at the small scale, then we are not maximizing our scaled-up resources. Are we efficient in memory usage? Is our matching algorithm no more expensive than the accuracy we need, making efficient use of the processor resources we have? I think of this as improving OpenDroneMap’s ability to efficiently digest data.

Magic School Bus going down the digestive system

Incremental toolchain optimizations are thus part of this near future for OpenDroneMap (and by consequence OpenSfM, the underlying computer vision tools for OpenDroneMap), focusing on memory and processor resources. The additional benefit here is that small projects and small computing resources also benefit. For humanitarian and development contexts where compute and network resources are limiting, these incremental improvements are critical. Projects like American Red Cross’ Portable OpenStreetMap (POSM) will benefit from these improvements, as will anyone in the humanitarian and development communities that need efficient processing of drone imagery offline.

To this end, three approaches are being considered for incremental improvements. Matching speed could be improved by the use of Cascade Hashing matching or a Bag of Words based method. Memory improvements could come via improved correspondence graph data structures, and possibly SLAM-like pose-graph methods for globally adjusting camera positions in order to avoid full global bundle adjustment.
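As a rough illustration of why a Bag of Words style method speeds up matching: instead of exhaustively matching every image pair, each image is first summarized as a histogram of “visual words,” and only the most similar pairs go on to full feature matching. A minimal sketch with random stand-in histograms (not OpenSfM’s actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each image is summarized by a normalized histogram of
# visual-word counts (in a real pipeline the words come from clustering
# SIFT/ORB descriptors into a vocabulary).
n_images, vocab_size = 50, 256
hists = rng.random((n_images, vocab_size))
hists /= hists.sum(axis=1, keepdims=True)

def candidate_pairs(hists, k=5):
    """Return, per image, the k most similar other images by cosine
    similarity of their bag-of-words histograms. Only these candidate
    pairs proceed to full (expensive) feature matching."""
    unit = hists / np.linalg.norm(hists, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -1.0)  # never pair an image with itself
    top_k = np.argsort(-sim, axis=1)[:, :k]
    return {(min(i, j), max(i, j))
            for i in range(len(hists)) for j in top_k[i]}

pairs = candidate_pairs(hists, k=5)
exhaustive = n_images * (n_images - 1) // 2  # 1225 pairs if done exhaustively
print(len(pairs), "candidate pairs instead of", exhaustive)
```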

Figure from Bag of Words paper

Figure from Bag of Words paper

Large-scale pipeline

In addition to incremental improvements, for massive datasets we need an approach to splitting up our dataset into manageable chunks. If incremental improvements help us better and more quickly process datasets, the large-scale pipeline is the teeth of this approach — we need to cut and chew up our large datasets into smaller chunks to digest.

Image of Dr. Teeth of the Muppets.

Dr. Teeth

If, for a given node, I can process 1000 images efficiently but I have 80,000 images, I need a process that splits my dataset into 80 manageable chunks and works through them sequentially or in parallel until done. Maybe I have 9000 images? Then I need it split into 9 chunks.

Image over island showing grid of 9 for splitting an aerial dataset

Eventually, I want to synthesize the outputs back into a single dataset. Ideally I split the dataset with some overlap as follows:

Image over island showing grid of 9 for splitting an aerial dataset, shown with overlap
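That kind of overlapping split can be sketched in a few lines, assuming we only know each image’s ground position; the function and grid layout below are illustrative, not ODM’s actual splitter:

```python
def split_with_overlap(points, rows, cols, overlap=0.1):
    """Assign image positions (x, y) to a rows x cols grid of chunks,
    where each chunk's bounds are grown by `overlap` (as a fraction of
    the chunk size) so neighboring chunks share images along seams.
    A toy sketch; a real splitter would use image footprints and the
    flight plan, not bare center points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    minx, miny = min(xs), min(ys)
    w = (max(xs) - minx) / cols
    h = (max(ys) - miny) / rows
    chunks = [[] for _ in range(rows * cols)]
    for x, y in points:
        for r in range(rows):
            for c in range(cols):
                x0 = minx + c * w - overlap * w
                x1 = minx + (c + 1) * w + overlap * w
                y0 = miny + r * h - overlap * h
                y1 = miny + (r + 1) * h + overlap * h
                if x0 <= x <= x1 and y0 <= y <= y1:
                    chunks[r * cols + c].append((x, y))
    return chunks

# 9000 image centers on a rectangular flight grid -> 3 x 3 = 9 chunks
pts = [(i % 100, i // 100) for i in range(9000)]
chunks = split_with_overlap(pts, rows=3, cols=3, overlap=0.05)
print([len(c) for c in chunks])
```

Because the chunk bounds overlap, images near the seams land in more than one chunk, which is exactly what we need later for stitching the reconstructions back together.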

Problems with splitting SfM datasets

We do run into some very real problems when splitting our datasets into chunks for processing. There are a variety of issues, but the starkest is consistency of the resulting products: quite often our X, Y, and Z values won’t match across the final reconstructions. This becomes critical when performing, e.g., hydrologic analyses on the resultant Digital Terrain Models.

Water flow on patched DEM showing pooling effects around discontinuities

Water flow on patched DEM showing pooling effects around discontinuities (credit: Anna Petrasova et al)

Anna Petrasova et al. address merging disparate DEMs in GRASS with Seamless fusion of high-resolution DEMs from multiple sources with r.patch.smooth.

Water flow on fused DEM

Water flow on fused DEM showing corrected flow (credit: Anna Petrasova et al)
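The core idea behind this kind of seam smoothing can be sketched in a few lines: within the overlap, weight each DEM by its distance from the seam so the surface ramps smoothly from one to the other. This is a heavily simplified illustration of the concept, not r.patch.smooth’s actual algorithm:

```python
import numpy as np

def feather_blend(dem_a, dem_b, overlap_cols):
    """Blend two DEMs that overlap along adjoining columns: dem_a's last
    `overlap_cols` columns cover the same ground as dem_b's first
    `overlap_cols`. Weights ramp linearly from all-A to all-B across the
    overlap, so the fused surface has no step at the seam."""
    a_ov = dem_a[:, -overlap_cols:]
    b_ov = dem_b[:, :overlap_cols]
    w = np.linspace(1.0, 0.0, overlap_cols)  # weight given to DEM A
    blended = a_ov * w + b_ov * (1.0 - w)
    return np.hstack([dem_a[:, :-overlap_cols], blended, dem_b[:, overlap_cols:]])

# Two flat DEMs that disagree by a 2 m vertical offset in the overlap.
a = np.full((4, 10), 100.0)
b = np.full((4, 10), 102.0)
fused = feather_blend(a, b, overlap_cols=4)
print(fused[0])  # ramps smoothly from 100 to 102 across the seam
```

As the post notes, this only works when the disagreement is a small, smoothly correctable offset; it cannot undo rotation or skew between chunks.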

What Anna describes and solves is the problem of matching LiDAR and drone data, assuming the differences between the datasets are small enough that smoothing the transition between them is adequate. Unfortunately, when we process drone imagery in chunks, we can get translation, rotation, skewing, and a range of other differences that often cannot be corrected when we’re producing the digital terrain model at the end.

What follows is a small video of a dataset split and processed in two chunks. Notice offsets, rotations, and other issues of mismatch in the X and Y dimensions, and especially Z.

When we see these differences in the resultant digital terrain model, the problem can be quite stark:

Elevation differences along seamline of merged OpenDroneMap DTMs

Elevation differences along seamline of merged OpenDroneMap DTMs

To address these issues we need both the approach Anna proposes, which corrects and smooths out small differences, and a deeper approach specific to matching drone imagery datasets to address the larger problems.

Deeper approach to processing our bites of drone data

To get the most out of stitching these pieces of data back together at the end, we need a matching approach very similar to the one we use to match images to each other. Our steps will be something like the following:

  • Split our images to groups
  • Run reconstruction on each group
  • Align and transform those groups to each other using matching features between the groups
  • For secondary products, like Digital Terrain Models, blend the outputs using an approach similar to r.patch.smooth.
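The alignment step (step 3) can be illustrated with a classic technique: estimate a similarity transform (scale, rotation, translation) from tie points reconstructed in both groups, e.g. via Umeyama’s method. This is a toy sketch of the idea, not ODM’s eventual implementation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t mapping src -> dst
    (Umeyama's method). src, dst: (N, 3) tie points seen in both chunks."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # guard against a reflection solution
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: rotate/scale/shift one chunk's tie points, then recover.
rng = np.random.default_rng(1)
pts = rng.random((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = 1.2 * pts @ R_true.T + np.array([5.0, -3.0, 0.5])
s, R, t = similarity_transform(pts, moved)
print(round(s, 3))  # recovers the 1.2x scale
```

With real data the tie points carry noise and outliers, so a robust wrapper (RANSAC over this transform) would be needed, but the core alignment is exactly this.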

In close

I hope you enjoyed a little update on some of the upcoming features for OpenDroneMap. In addition to the above, we’ll also be wrapping in reporting and robustness improvements. More on that soon, as that is another huge piece that will help the entire community of users.

(This post CC BY-SA 4.0 licensed)

(Shout-out to Pau Gargallo Piracés of Mapillary for the technical aspects of this write-up. He is not responsible for any of the mistakes, generalities, or distortions in the technical aspects; those are all mine.)

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 2 Comments »

Viewing Sparse Point Clouds from OpenDroneMap — GeoKota

Posted by smathermather on June 30, 2016

This is a post about OpenDroneMap, an opensource project I am a maintainer for. ODM is a toolchain for post-processing drone imagery to create 3D and mapping products. It’s currently in beta and under pretty heavy development. If you’re interested in contributing to the project head over here. The Problem So for most of the […]

via Viewing Sparse Point Clouds from OpenDroneMap — GeoKota

Posted in 3D, Image Processing, OpenDroneMap, Optics, Other, Photogrammetry | Tagged: , , , | Leave a Comment »

OpenDroneMap — texturing improvements

Posted by smathermather on March 27, 2016

Great news on OpenDroneMap. We now have a branch that has MVS-Texturing integrated, thanks to continuing work by Spotscale, and of course continuing integration work by @dakotabenjamin.

The MVS-Texturing branch isn’t fully tested yet, nor fully integrated, but the initial results are promising. MVS-Texturing handles the problem of choosing the best photos for a given facet of a textured model, in order to do a great job texturing a complex scene. This bears the promise of vastly improved textured models and very nice orthophotos. It seems an ideal drop-in for the texturing limitations of OpenDroneMap. From the project site:

Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.
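To give a flavor of the view-selection problem, here is a toy per-facet scoring heuristic: prefer photos that see a facet head-on and from close range. This is only an illustration; mvs-texturing’s actual data term also accounts for blur, occluders, and seam consistency:

```python
import numpy as np

def best_view(face_center, face_normal, cam_positions):
    """Pick the photo that views a facet most head-on and from closest
    range: score = cos(viewing angle) / distance. A toy stand-in for
    mvs-texturing's per-face view selection."""
    to_cams = cam_positions - face_center
    dists = np.linalg.norm(to_cams, axis=1)
    cosines = (to_cams @ face_normal) / dists
    # Cameras behind the facet can never see it; cull them outright.
    scores = np.where(cosines > 0, cosines / dists, -np.inf)
    return int(np.argmax(scores))

face_center = np.array([0.0, 0.0, 0.0])
face_normal = np.array([0.0, 0.0, 1.0])  # facet faces straight up
cams = np.array([[0.0, 0.0, 10.0],       # directly overhead
                 [20.0, 0.0, 10.0],      # oblique and farther away
                 [0.0, 0.0, -10.0]])     # below the facet (back-facing)
print(best_view(face_center, face_normal, cams))  # picks the overhead view, index 0
```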

When we apply this approach to one of our more difficult datasets, which was taken during a partially cloudy part of the day…

IMG_1347_RGB.jpg

we get very promising results:

 

Posted in 3D, OpenDroneMap | 1 Comment »

OpenDroneMap — Paris Code Sprint

Posted by smathermather on February 29, 2016

I failed to make it to the Paris Code Sprint. It just wasn’t in the cards. But, my colleague Dakota and I sprinted anyway, with some help and feedback from the OpenDroneMap community.

So, what did we do? Dakota did most of the work. He hacked away at the cmake branch of ODM, a branch set up by Edgar Riba to substantially improve the installation process for ODM.

  • Fixed odm_orthophoto in the branch so that it produces geotiffs
  • Fixed PMVS so that it is multithreaded again
  • Added rerun-all and rerun-from function
  • Integrated @lupas78’s additions for an xyz point cloud output
  • Added benchmarking, which gives us an important soft number to watch when we make code changes
  • (Technically before the sprint) wrote the first test for OpenDroneMap
  • Cleaned code
What did I do? Mostly, I got caught up with the project. I haven’t been very hands-on since the Python port, let alone the cmake branch, so I became a little more Pythonistic by just trying to successfully modify the code.
  • I also added PDAL to the build process
  • And I inserted PDAL into the point cloud translation process.

Currently, this means we’ve dropped support for LAZ output, as I haven’t yet successfully built PDAL with LAZ support, but it stages the project for LAZ support through PDAL and allows us to tap into additional PDAL functionality in the future.
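To give a sense of what the PDAL hook buys us, a point cloud format conversion can be expressed as a tiny PDAL pipeline. The filenames below are hypothetical placeholders:

```python
import json

# Minimal PDAL pipeline converting a PLY point cloud to LAS. A bare path
# in a PDAL pipeline array infers the reader/writer stage from the file
# extension; the filenames here are placeholders, not real ODM paths.
pipeline = {
    "pipeline": [
        "odm_georeferenced_model.ply",  # hypothetical input
        {
            "type": "writers.las",
            "filename": "odm_georeferenced_model.las",
        },
    ]
}

pipeline_json = json.dumps(pipeline, indent=2)
print(pipeline_json)
# Save the JSON and run:  pdal pipeline pipeline.json
# Once PDAL is built with LAZ support, a .laz filename is all that
# LAZ output would need -- exactly the path left open above.
```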

It was an intensive couple of days that would have been improved with French wine, but we were in Parma (Ohio). So, a shout out to the coders in Paris at the same time, and cheers to all.

Posted in 3D, Drone, OpenDroneMap, Photogrammetry, PMVS, UAS | Tagged: , | Leave a Comment »

OpenDroneMap — the future that awaits (part 이)

Posted by smathermather on October 25, 2015

In my previous post, ODM — the future that awaits, I started to chart out OpenDroneMap beyond the toolchain. Here’s a bit more, in outline form. More narrative and breakdown to come. (This is the gist.)

Objectives:

Take OpenDroneMap from simple toolchain to an online processing tool + open aerial dataset. This would be distinct from and complementary to OpenAerialMap:

  1. Explicitly engage and provide a platform for drone enthusiasts to contribute imagery in return for processing services.
  2. Address and serve:
    • Aerial imagery
    • Point clouds
    • Surface models
    • Terrain models
  3. Additionally, as part of a virtuous circle, digitizing to OSM from this aerial imagery would refine the surface models and thus the final aerial imagery
    • More on this later: in short digitizing OSM on this dataset would result in 3D photogrammetric breaklines which would in turn refine the quality of surface and terrain models and aerial imagery.

Outputs / Data products:

  • Aerial basemap
    • (ultimately with filter for time / season?)
  • Point cloud (see e.g. http://plas.io)
  • Digital surface model (similar to Open Terrain)
  • Digital elevation model (in conjunction with Open Terrain)

Likely Software / related projects

Back of the envelope calculations — Mapping a city with drones

If ODM is to take submissions of large portions of data, data collection campaigns may come into play. Here are some back-of-the-envelope calculations for flying whole cities, albeit the medium-sized cities of San Francisco and Cleveland. This ignores time needed for all sorts of things, including coordinating with local air traffic control. As such, this is a best-case set of estimates.

| Drone | Flight Height | Pixel Size | Overlap | Area per Flight | City | City Area | Total Flights | Total Flight Time |
|-------|---------------|------------|---------|-----------------|------|-----------|---------------|-------------------|
| E384  | 400 ft | 3 cm | 60% | 1.5 sq mile | San Francisco | 50 sq miles | 33 flights | 66 hours |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq mile | San Francisco | 50 sq miles | 100 flights | 200 hours |
| E384  | 400 ft | 3 cm | 60% | 1.5 sq mile | Cleveland | 80 sq miles | 54 flights | 108 hours |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq mile | Cleveland | 80 sq miles | 160 flights | 320 hours |
| Iris+ | 400 ft | 3 cm | 60% | 0.078 sq mile | San Francisco | 50 sq miles | 640 flights | 213 hours |
| Iris+ | 400 ft | 5 cm | 90% | 0.026 sq mile | San Francisco | 50 sq miles | 1920 flights | 640 hours |
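The table’s arithmetic is straightforward to reproduce. The per-flight durations below are inferred from the table itself (roughly 2 hours per E384 flight, 20 minutes per Iris+ flight); the San Francisco E384 row appears to round flights down rather than up:

```python
import math

# flights = ceil(city area / area covered per flight);
# total time = flights x flight duration (inferred: ~2 h for the E384).
def flights_needed(city_sq_miles, per_flight_sq_miles):
    return math.ceil(city_sq_miles / per_flight_sq_miles)

cleveland_3cm = flights_needed(80, 1.5)  # Cleveland, E384 @ 3 cm
cleveland_5cm = flights_needed(80, 0.5)  # Cleveland, E384 @ 5 cm
print(cleveland_3cm, "flights,", cleveland_3cm * 2.0, "hours")
print(cleveland_5cm, "flights,", cleveland_5cm * 2.0, "hours")
```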

Posted in 3D, OpenDroneMap | Tagged: , | Leave a Comment »

OpenDroneMap — Improvements Needed

Posted by smathermather on October 18, 2015

Talking about the future sometimes requires critiquing the present. The wonderful thing about an open source project is we can be quite open about limitations, and discuss ways forward. OpenDroneMap is a really interesting and captivating project… and there’s more work to do.

To understand what work needs to be done, we need to understand OpenDroneMap and structure from motion in general. Some of the limitations of ODM are specific to its maturity as a project. Some are shared with the commercial closed-source industry leaders. I’ll highlight each as I do the walk-through.

A simplified version of the Structure from Motion (SfM) workflow, as it applies to drone image processing, is as follows:

Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products

This misses some steps, but gives the major themes. Let’s visualize these as drawings and screen shots. (In the interest of full disclosure, the screen shots are from a closed source solution so that I can demonstrate the problems endemic across all software I have tested to date.)

Diagrams / screenshots of the toolchain parts:

Find features & Match features --> Find scene structure

Find features & Match features –> Find scene structure


 

Create Dense Point Cloud

Create Dense Point Cloud


 

Create mesh

Create mesh


 

Texture mesh

Texture mesh


 

And then generate orthophoto and secondary products (no diagram)

Problem space:

Of these, let’s highlight in bold known deficiencies in ODM:

Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products

(These highlights assume that the new texturing engine being written will address the deficiencies there. Time and testing will tell. This also assumes that the inclusion of OpenSfM in the toolchain fixes the scene structure / camera issues. That assumption also requires more testing.)

Each portion of the pipeline depends on the one before it: if, for example, the camera positions are poor, the point cloud won’t be great, and the texturing will be very problematic. If the dense point cloud isn’t as dense as possible, features will be lost, and the mesh, textured mesh, orthophoto, and other products will be degraded as well. For example, compare these two different densities of point clouds:

Create Dense Point Cloud

More sparse point cloud


 

Less sparse (dense) point cloud

Less sparse point cloud


It becomes clear that the density and veracity of the point cloud lay the groundwork for the remainder of the pipeline.

ODM Priority 1: Improve density / veracity of point cloud

So what about the mesh issues? The meshing process for ODM and its closed source siblings (with possible exceptions) is problematic. Take for example this mesh of a few buildings:

Textured mesh of building

Textured mesh of building

The problems with this mesh become quite apparent when we view the un-textured counterpart:

Un-textured mesh of buildings

Un-textured mesh of buildings

We can see many issues with this mesh. This is a problem with all drone image processing tools I have tested to date: geometric surfaces are not treated as planar, and meshing processes treat vegetation, ground, and the built environment equally, and thus model none of them well.

ODM Priority 2: Improve meshing process

Priority 2 is a difficult space. It probably requires automated or semi-automated classification of the point cloud and/or input imagery, and while simple in the case of buildings, it may be quite complicated in the case of vegetation. Old-school photogrammetry would have hand-digitized hard and soft breaklines for built environments. How we handle this in ODM is an area we have yet to explore.
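One ingredient of such a semi-automated classification could be as simple as RANSAC plane fitting: flag the points belonging to dominant planes (roofs, ground) so a mesher could treat them differently from vegetation. A toy sketch on synthetic points, not a proposal for ODM’s actual pipeline:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, rng=None):
    """Find the dominant plane in a point cloud by RANSAC: repeatedly fit
    a plane through 3 random points and keep the fit with the most
    inliers. Planar inliers (roofs, ground) could then be meshed or
    regularized differently from the rest (vegetation, clutter)."""
    rng = rng or np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic scene: a flat roof plus scattered "vegetation" noise above it.
rng = np.random.default_rng(2)
roof = np.column_stack([rng.random((300, 2)) * 10, np.full(300, 5.0)])
veg = np.column_stack([rng.random((60, 2)) * 10, 5.5 + rng.random(60) * 3])
pts = np.vstack([roof, veg])
planar = ransac_plane(pts)
print(planar.sum(), "of", len(pts), "points flagged as planar")
```

Real scenes would need multiple plane extractions and a vegetation-aware classifier, but even this crude separation hints at how a mesher could stop treating roofs and trees identically.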

Conclusions

I am optimistic that ODM’s Find features & Match features –> Find scene structure / camera positions step is much improved with the integration of OpenSfM (please comment if you’ve found otherwise and have test cases to demonstrate it). I am hopeful that the upcoming Texture mesh –> Generate orthophoto improvements will be a good solution. Where we will need to improve in the near future is the Create dense point cloud step. Where every software package I have tested needs improvement, closed source and open source alike, is the Create mesh step.

Posted in 3D, OpenDroneMap | Tagged: , | 1 Comment »

Humanitarian UAV Experts Meeting — first blush.

Posted by smathermather on October 13, 2015


UAViators, MIT Lincoln Labs, UNOCHA, and others organized and hosted the UAViators Experts Meeting on MIT’s campus this weekend. It was a remarkable event, if only for the thoughtfulness and knowledge base of the people in the room. The meeting brought together UAV operators, manufacturers, humanitarians, and a few folks at the intersection of these.

For me, it was revelatory with respect to all the non-mapping drone applications, from the advancement of technologies for last-mile logistics to basic tactical / observational applications.

What was also interesting was some of the insights into regulatory issues and questions that are on the horizon.

I gave a short presentation on OpenDroneMap, and therefore much of my time was spent thinking and listening in order to understand the potential application of OpenDroneMap in the humanitarian and development space. This extends my understanding of its role, and potential role, beyond environmental applications.

More soon!

Posted in 3D, OpenDroneMap | Leave a Comment »