Talking about the future sometimes requires critiquing the present. The wonderful thing about an open source project is we can be quite open about limitations, and discuss ways forward. OpenDroneMap is a really interesting and captivating project… and there’s more work to do.
To understand what work needs to be done, we need to understand OpenDroneMap and structure from motion in general. Some of the limitations of ODM are specific to its maturity as a project. Others are shared with the commercial, closed-source industry leaders. I'll highlight each as I walk through the pipeline.
A simplified version of a Structure from Motion (SfM) workflow, as it applies to drone image processing, is as follows:
Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products
This misses some steps, but gives the major themes. Let's visualize these as drawings and screenshots. (In the interest of full disclosure, the screenshots are from a closed-source solution so that I can demonstrate the problems endemic across all software I have tested to date.)
[Diagrams / screenshots of the toolchain steps above; the final step, generating the orthophoto and secondary products, has no diagram.]
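For readers who prefer code to diagrams, here is a minimal Python sketch of the same sequence of stages. The function names are hypothetical placeholders chosen for this post; they are not actual ODM or OpenSfM modules.

```python
# A purely illustrative restatement of the simplified pipeline above.
# These stub functions only record the order of operations; they are
# NOT real ODM or OpenSfM calls.

def detect_and_match_features(images):      # Find features & match features
    return {"stage": "features", "input": images}

def solve_scene_structure(features):        # Scene structure / camera positions
    return {"stage": "reconstruction", "input": features}

def densify_point_cloud(reconstruction):    # Dense point cloud
    return {"stage": "dense_point_cloud", "input": reconstruction}

def build_mesh(dense_cloud):                # Mesh
    return {"stage": "mesh", "input": dense_cloud}

def texture_mesh(mesh):                     # Textured mesh
    return {"stage": "textured_mesh", "input": mesh}

def generate_products(textured_mesh):       # Orthophoto and other products
    return {"stage": "orthophoto_and_products", "input": textured_mesh}

stages = [detect_and_match_features, solve_scene_structure, densify_point_cloud,
          build_mesh, texture_mesh, generate_products]

data = ["IMG_0001.JPG", "IMG_0002.JPG"]     # hypothetical input image names
for stage in stages:
    data = stage(data)
    print(data["stage"])
```

Each stub consumes the output of the one before it, which is the dependency chain that matters for the rest of this discussion.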
Problem space:
Of these, let's highlight the steps with known deficiencies in ODM, namely Create dense point cloud and Create mesh:
Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products
(These highlights assume that the new texturing engine being written will address the deficiencies there; time and testing will tell. They also assume that the inclusion of OpenSfM in the toolchain fixes the scene structure / camera position issues, an assumption that likewise requires more testing.)
Each portion of the pipeline depends on the one before it: if, for example, the camera positions are poor, the point cloud won't be great, and the texturing will be very problematic. If the dense point cloud isn't as dense as possible, features will be lost, and the mesh, textured mesh, orthophoto, and other products will be degraded as well. For example, see these two point clouds of different densities:
[Screenshots of two point clouds at different densities.]
It becomes clear that the density and veracity of that point cloud lay the groundwork for the remainder of the pipeline.
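To give a rough, concrete sense of what "dense" means, here is a hedged sketch that estimates points per square meter from a plain N x 3 array of coordinates. It assumes a metric projection and a simple rectangular footprint; dedicated tools such as PDAL or CloudCompare measure density far more carefully.

```python
# A rough way to quantify point cloud density: points per square meter of
# ground footprint. Assumes xyz is an N x 3 array in a metric coordinate
# system; real tools handle irregular footprints and local density better.
import numpy as np

def points_per_square_meter(xyz: np.ndarray) -> float:
    x, y = xyz[:, 0], xyz[:, 1]
    footprint = (x.max() - x.min()) * (y.max() - y.min())  # 2D bounding box
    return len(xyz) / footprint

# Synthetic example: 100,000 points scattered over a 100 m x 100 m block.
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.0], [100.0, 100.0, 30.0], size=(100_000, 3))
print(f"{points_per_square_meter(cloud):.1f} points per square meter")  # ~10
```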
ODM Priority 1: Improve density / veracity of point cloud
So what about the mesh issues? The meshing process for ODM and its closed-source siblings (with possible exceptions) is problematic. Take, for example, this mesh of a few buildings:
[Screenshot of a textured mesh of a few buildings.]
The problems with this mesh become quite apparent when we view the untextured counterpart:
[Screenshot of the same mesh without texturing.]
We can see many issues with this mesh. This is a problem with all drone image processing tools I have tested to date: geometric surfaces are not treated as planar, and the meshing processes treat vegetation, ground, and the built environment equally, and thus don't model any of them well.
ODM Priority 2: Improve meshing process
Priority 2 is a difficult space. It probably requires automated or semi-automated classification of the point cloud and/or input imagery, and while conceptually simple in the case of buildings, it may be quite complicated in the case of vegetation. Old-school photogrammetry would have hand-digitized hard and soft breaklines for built environments. How we handle this for ODM is an area we have yet to explore.
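As a concrete illustration of the building case, here is a minimal sketch of the kind of geometric test a smarter meshing step could use: fit a plane to a patch of points and measure how planar it is. This is illustrative only and uses synthetic data; it is not how ODM's meshing currently works.

```python
# Illustrative only: a least-squares plane fit as a crude "is this patch
# planar?" test. A building roof patch should fit with small residuals; a
# vegetated patch would not. Not how ODM's meshing currently works.
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane via SVD; return (centroid, unit normal, RMS deviation)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                              # direction of least variance
    deviations = (points - centroid) @ normal    # signed distance to plane
    return centroid, normal, float(np.sqrt(np.mean(deviations ** 2)))

# Synthetic "roof" patch: a gently sloped plane plus a little noise.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
z = 0.3 * xy[:, 0] + 5.0 + rng.normal(scale=0.02, size=500)
roof = np.column_stack([xy, z])

_, normal, rms = fit_plane(roof)
print(f"RMS deviation from best-fit plane: {rms:.3f} m")  # small => planar
```

In such a scheme, patches with small deviations could be simplified to planar facets, while high-deviation patches (vegetation, complex structures) would need different handling, which is exactly the classification problem described above.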
Conclusions
I am optimistic that ODM's Find features & Match features –> Find scene structure / camera positions steps are much improved with the integration of OpenSfM (please comment if you've found otherwise and have test cases to demonstrate). I am hopeful that the upcoming Texture mesh –> Generate orthophoto improvements will be a good solution. Where we need to improve in the near future is the Create dense point cloud step. Where every piece of software I have tested, closed source and open source, needs improvement is the Create mesh step.