Thinking a little more about moderately large compute resources and their container (see previous post), I revised my analysis to see whether we can fit these 10 NUCs plus switch and outlets into a carry-on sized case. At first blush, it seems feasible:
Posted by smathermather on March 28, 2017
Posted by smathermather on March 27, 2017
This is a theoretical post. Imagine for a moment that OpenDroneMap can scale to the compute resources that you have in an elastic and sane way (we are short weeks away from the first work on this), and so, if you are a typical person in the high-speed internet world, you might be thinking, “Great! Let’s throw this up on the cloud!”
But imagine for a moment you are in a network limited environment. Do you process on a local laptop? Do you port around a desktop? The folks in the humanitarian space think about this a lot — depending on the project, one could spend weeks or months in network limited environments.
Folks at the American Red Cross (ARC) have been thinking about this a lot. The result, built to aid in mapping e.g. rural areas in West Africa, is Portable OpenStreetMap, or POSM, a tool for doing all the OpenStreetMap stuff, but totally and temporarily offline.
The software for this is critical, but I’ve been increasingly interested in the hardware side of things. OpenDroneMap, even with its upcoming processing, memory, and scaling improvements, will still require more compute resources than, say, OpenMapKit and Field Papers. So I’ve been contemplating: once the improvements are in place, what kind of compute center could you haul into the field with you?
I’m not just thinking humanitarian and development use cases either — what can we do to make processing drone imagery in the field faster? Can we make it fast enough to get results before leaving the field? Can we modify our flight planning based on the stream of data being processed and adapt while we are there? Our real costs for flying are often finding staff and weather windows that are good, and sometimes we miss opportunities in the delay between imagery capture and processing. How can we close that loop faster?
On the hardware side of the house, the folks at ARC are using Intel NUC kits. For ODM, as I understand it, they go a step up in processing power from their specs to something with an i7. So, I got to thinking — can we put together a bunch of these, running on a generator, and not break the bank on weight (keep it under 50 lbs)? It turns out, maybe we can. For a round $10,000, you might assemble 10 of these 4-core NUCs with a network switch, stuff it into a Pelican Air 1605 case, with 320 GB RAM, and 2.5 TB of storage. More storage can be added if necessary.
Any thoughts? Anyone deployed serious compute resources to the field for drone image processing? I’d love to hear what you think.
Posted by smathermather on March 26, 2017
Just saw this great blog post by my friend Mr. Yu at Korea National Park on using OpenDroneMap. If you need it in English, Google seems to translate it rather well:
Posted by smathermather on March 8, 2017
OpenDroneMap has really evolved since I first put together a concept project presented at FOSS4G Portland in 2014, and hacked with my first users (Michele M. Tobias & Alex Mandel). At this stage, we have a really nicely functioning tool that can take drone images and output high-quality geographic products. The project has 45 contributors, hundreds of users, and a really great community (special shout-out to Piero Toffanin and Dakota Benjamin, without whom the project would be nowhere near as viable, active, or wonderful). Recent improvements can be roughly categorized into data quality improvements and usability improvements. Data quality improvements were aided by the inclusion of better point cloud creation from OpenSfM and better texturing from mvs-texturing. Usability improvements have largely been in the development of WebODM as an easy-to-use and easy-to-deploy front end for OpenDroneMap.
With momentum behind these two directions — improved usability and improved data output — it’s time to think a little about how we scale OpenDroneMap. It works great for individual flights (up to a few hundred images at a time), but a promise of open source projects is scalability. Regularly we get questions from the community about how they can run ODM on larger and larger datasets in a sustainable and elastic way. To answer these questions, let me outline where we are going.
When I stated that scalability is one of the promises of open source software, I mostly meant scaling up: if I need more computing resources with an open source project, I don’t have to purchase more software licenses, I just need to rent or buy more computing resources. But an important element of scalability is the per-unit use of computing resources as well. If we are not efficient and thoughtful about how we use things on the small scale, then we are not maximizing our scaled-up resources. Are we efficient in memory usage? Is our matching algorithm as efficient with processor resources as it can be for the accuracy we need? I think of this as improving OpenDroneMap’s ability to efficiently digest data.
Incremental toolchain optimizations are thus part of this near future for OpenDroneMap (and by consequence OpenSfM, the underlying computer vision tools for OpenDroneMap), focusing on memory and processor resources. The additional benefit here is that small projects and small computing resources also benefit. For humanitarian and development contexts where compute and network resources are limiting, these incremental improvements are critical. Projects like American Red Cross’ Portable OpenStreetMap (POSM) will benefit from these improvements, as will anyone in the humanitarian and development communities that need efficient processing of drone imagery offline.
To this end, several approaches are being considered for incremental improvements. Matching speed could be improved by the use of Cascade Hashing matching or a Bag of Words-based method. Memory improvements could come via improved correspondence graph data structures, and possibly SLAM-like pose-graph methods for global adjustment of camera positions in order to avoid full global bundle adjustment.
In addition to incremental improvements, for massive datasets we need an approach to splitting up our dataset into manageable chunks. If incremental improvements help us better and more quickly process datasets, the large-scale pipeline is the teeth of this approach — we need to cut and chew up our large datasets into smaller chunks to digest.
If for a given node I can process 1000 images efficiently, but I have 80,000 images, I need a process that splits my dataset into 80 manageable chunks and processes through them sequentially or in parallel until done. Maybe I have 9000 images? Then I need it split into 9 chunks.
Eventually, I want to synthesize the outputs back into a single dataset. Ideally I split the dataset with some overlap as follows:
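The chunking arithmetic can be sketched quickly. Here is a hypothetical splitter (the function name, chunk size, and overlap are illustrative, not ODM's actual implementation):

```python
import math

def split_into_chunks(images, max_per_chunk=1000, overlap=50):
    """Split a list of image names into chunks of roughly
    max_per_chunk images, with `overlap` extra images shared
    between neighboring chunks so they can be matched back
    together later."""
    n = len(images)
    n_chunks = math.ceil(n / max_per_chunk)
    chunks = []
    for i in range(n_chunks):
        start = max(i * max_per_chunk - overlap, 0)
        end = min((i + 1) * max_per_chunk + overlap, n)
        chunks.append(images[start:end])
    return chunks

images = [f"IMG_{i:05d}.jpg" for i in range(9000)]
chunks = split_into_chunks(images)
print(len(chunks))  # 9 chunks for 9000 images
```

The overlap gives neighboring chunks shared images to match on when we stitch the results back together.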
Problems with splitting SfM datasets
We do run into some very real problems when splitting our datasets into chunks for processing. There are a variety of issues, but the starkest is inconsistency in the resultant products: quite often the X, Y, and Z values won’t match between the final reconstructions. This becomes critical when performing, e.g., hydrologic analyses on the resultant Digital Terrain Models.
What Anna describes and solves is the problem of matching LiDAR and drone data; it assumes the differences between the datasets are sufficiently small that smoothing the transition between them is adequate. Unfortunately, when we process drone imagery in chunks, we can get translation, rotation, skew, and a range of other differences that often cannot be accounted for when we’re processing the digital terrain model at the end.
What follows is a small video of a dataset split and processed in two chunks. Notice offsets, rotations, and other issues of mismatch in the X and Y dimensions, and especially Z.
When we see these differences in the resultant digital terrain model, the problem can be quite stark:
To address these issues we require both the approach that Anna proposes, which corrects and smooths out small differences, and a deeper approach specific to matching drone imagery datasets to address the larger problems.
Deeper approach to processing our bites of drone data
To get the most out of stitching these pieces of data back together at the end, we need a matching approach very similar to the one we use for matching images to each other. Our steps will be something like the following:
- Split our images to groups
- Run reconstruction on each group
- Align and transform those groups to each other using matching features between the groups
- For secondary products, like Digital Terrain Models, blend the outputs using an approach similar to r.patch.smooth.
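Step 3, aligning the reconstructed groups, amounts to estimating a similarity transform (scale, rotation, translation) between them. Below is a minimal numpy sketch of the standard Umeyama-style least-squares solution, assuming we already have corresponding 3D points from the two reconstructions (this is an illustration of the idea, not ODM's actual code):

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, and translation t such that
    s * R @ src_i + t ~= dst_i, from matched 3D points (N x 3 arrays).
    Least-squares (Umeyama-style) solution via SVD."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_src, dst - mu_dst
    U, S, Vt = np.linalg.svd(Y.T @ X)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * [1.0, 1.0, d]).sum() / (X ** 2).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Applying `s * R @ p + t` to every point of one group brings it into the frame of its neighbor; the small residuals that remain are what the r.patch.smooth-style blending in step 4 then handles.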
I hope you enjoyed a little update on some of the upcoming features for OpenDroneMap. In addition to the above, we’ll also be wrapping in reporting and robustness improvements. More on that soon, as that is another huge piece that will help the entire community of users.
(This post CC BY-SA 4.0 licensed)
(Shout out to Pau Gargallo Piracés of Mapillary for the technical aspects of this write up. He is not responsible for any of the mistakes, generalities, and distortions in the technical aspects. Those are all mine).
Posted by smathermather on February 23, 2017
Part 10 of N… , wait. This is a lie. This post is actually about optical drone data, not LiDAR data. It’s about next-phase features for OpenDroneMap — automated and semi-automated filtering of point clouds, creation of DTMs, and other such fun stuff.
To date, we’ve only extracted Digital Surface Models from ODM — the top surface of everything in the scene. As it is useful for hydrological modeling and other purposes to have a Digital Terrain Model estimated, we’ll be including PDAL’s Progressive Morphological Filter for the sake of DEM extraction. Here’s a small preview:
The test data above is Midpines, flown by the NextGen Air Transportation Center (NGAT); access to the data comes through collaboration with the Center for Geospatial Analytics at NCSU.
Posted by smathermather on February 20, 2017
Part 9 of N… , see e.g. my previous post on the topic.
We’ve been working to reduce the effect of overlapping samples on statistics we run on LiDAR data, and to do so, we’ve been using PDAL’s filters.sample approach. One catch: this handles the horizontal sampling problem well, but we might want to intentionally retain samples from high locations — after all, I want to see the trees for the forest and vice versa. So, it might behoove us to sample within each of our desired height classes to retain as much vertical information as possible.
Posted by smathermather on February 18, 2017
Part 8 of N… , see e.g. my previous post on the topic.
I didn’t think my explanation of sampling problems with LiDAR data in my previous post was adequate. Here are a couple more figures for clarification.
We can take this dataset over trees, water, fences, and buildings that is heavily sampled in some areas and sparsely sampled in others and use PDAL’s filters.sample (Poisson dart-throwing) to create an evenly sampled version of the dataset.
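For intuition, Poisson dart-throwing is simple to sketch: visit the points in random order and keep one only if no already-kept point lies within the sampling radius. Here is a naive O(n²) 2D toy version (PDAL's filters.sample is, of course, far more efficient and works in 3D):

```python
import random

def dart_throw(points, radius):
    """Naive Poisson 'dart throwing' in 2D: visit points in random
    order and keep one only if every already-kept point is at least
    `radius` away. O(n^2); real implementations use spatial indexes."""
    r2 = radius * radius
    kept = []
    for p in random.sample(points, len(points)):
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r2 for q in kept):
            kept.append(p)
    return kept
```

Run on a densely and unevenly sampled cloud, this thins the dense areas while leaving sparse areas nearly untouched, which is exactly the evening-out we want.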
An extra special thanks to the PDAL team for not only building such cool software, but being so responsive to questions!
Posted by smathermather on February 16, 2017
This blog post is from a series of posts on gorilla and biodiversity research in Rwanda. I have introduced the people, the place, and a little on the beasties there. Now we’ll talk some R-code for doing home range estimation.
Home range estimation is a pretty deep and also abstract concept. Heuristically, it is the process of looking at where an animal or group of animals moves in the world. If one were to create a home range map for me, it’d be a pretty simple bimodal map of home and work.
Where it gets funky is what one does with the unusual places an animal travels. For my home range, I am mostly at work, home, church on Sundays, yoga, and the grocery store. But I did spend two weeks in Rwanda, one week in Tanzania, one week in Belgium and the Netherlands, one week in Seattle, one in Raleigh, etc. Should these places be part of my home range?
All this to say, home ranges are usually calculated with some means of excluding the less common places. I would be happy if East Africa were part of my home range, but I think it’s arguable that it is not yet so.
Also, depending on the approach we use, travel between places may or may not be considered part of the home range. Back to my home range: ideally even if we concluded that 2 weeks living in Rwanda expanded my home range to include Musanze, the flight there and back probably shouldn’t be included in my home range. For our work today, we are not going to be excluding travel from our home range calculations, but understand that it can be relevant to some home range calculations.
For our home range calculation today, the following assumptions will be made:
- We won’t be explicitly excluding travel from our home range calculations.
- We’ll use simple techniques to exclude ephemeral portions of the home range
For basic home range analysis, we’ll use R’s adehabitat home range (adehabitatHR) package.
# Load the adehabitatHR library and other libraries
# for loading and manipulating data
library(sp)            # Spatial data objects in R
library(rgdal)         # Geospatial Data Abstraction Library
library(adehabitatHR)  # Adehabitat HomeRange
library(readr)         # File read capacity
library(rgeos)         # Geometric calculation to be used later
library(maptools)      # more spatial stuff
Now that we have every library we need loaded (and maybe then some) let’s load the data.
# We need to add a data filter here...
# For now, we assign just the columns we need for HR calculation
loc_int_totf <- loc_int_tot[,c('X','Y', 'id')]
We’ll want to explicitly turn these data into geospatial data.
# Use sp library to assign coordinates and projection
coordinates(loc_int_totf) <- c("X", "Y")

# Our projection is UTM Zone 35S
# proj4string acquired at spatialreference.org
proj4string(loc_int_totf) <- CRS("+proj=utm +zone=35 +south +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
Now we are ready to do some home range calculations. We’ll use the kernelUD function to convert our data into a surface representing our home range estimate. The total of all the pixels in this surface (as represented by a raster) will sum to 1, or 100% of the home range.
# Estimate the utilization distribution using "reference" bandwidth
kud <- kernelUD(loc_int_totf)

# Display the utilization distribution
image(kud)
Recall that the total of all the pixel values here is 1, meaning this image represents 100 percent of the calculated range of the Golden Monkeys. If we want to calculate the 70% home range (the area in which we estimate the golden monkeys spend 70% of their time), we do so as follows:
# Estimate the home range from the utilization distribution
homerange <- getverticeshr(kud, 70)
plot(homerange)
Now it would be useful to convert this to a data frame so that we can further manipulate and understand the data. For example, what is the home range size for any given percentile?
# Calculate home range sizes
as.data.frame(homerange)

# Calculate home range sizes for every percentile from 1% to 99%
ii <- kernel.area(kud, percent=seq(1, 99, by=1))
plot(ii)
Finally, it would be nice to be able to get these data out of R and display them alongside other GIS data. We’ll use writeOGR, part of rgdal, to do so.
# Write out data
writeOGR(homerange, getwd(), "homerange", driver="ESRI Shapefile")
That’s it for today. This bit of R will serve as the core code for a range of different analyses. Stay tuned!
Posted by smathermather on February 15, 2017
Part 7 of N… , see e.g. my previous post on the topic.
More work on taking LiDAR slices. This time, the blog post is all about data preparation. LiDAR data, in its raw form, often has scan line effects when we look at density of points.
This can affect statistics we are running, as our sampling effort is not even. To ameliorate this effect a bit, we can decimate our point cloud before doing further work with it. In PDAL, we have three choices for decimation: filters.decimation, which samples every Nth point from the point cloud; filters.voxelgrid, which does volumetric pixel based resampling; and filters.sample, or “Poisson sampling via ‘Dart Throwing'”.
filters.decimation won’t help us with the above problem. Voxelgrid sampling could help, but it’s very regular, so I reject this on beauty grounds alone. This leaves filters.sample.
The nice thing about both the voxelgrid and the poisson sampling is that they retain much of the shape of the point cloud while down sampling the data:
We will execute the poisson sampling in PDAL. As many things in PDAL are best done with a (json) pipeline file, we construct a pipeline file describing the filtering we want to do, and then call that from the command line:
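The pipeline file might look something like the following sketch (the input/output names and the sampling radius are placeholders, not the values used for the dataset shown here):

```json
[
    "input.laz",
    {
        "type": "filters.sample",
        "radius": 0.4
    },
    "output_sampled.laz"
]
```

Saved as e.g. sample.json, this runs with `pdal pipeline sample.json`.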
We can slice our data up as in previous posts, and then look at the point density per slice. R code for doing this is forthcoming (thanks to Chris Tracey at the Western Pennsylvania Conservancy and the lidR project), but below is a graphic as a teaser. For the record, we will probably pursue a fully PDAL solution in the end, but the interim results are really interesting:
More to come. Stay tuned.
Posted by smathermather on January 30, 2017
I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But, it’s such a beautiful landscape, I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.
First, some legacy. I read three great blog posts on hillshades. One was from ESRI revealing their “Next Generation Hillshade”. Drawing on Swiss cartographic traditions, these are some nice-looking hillshades using lighting sources from multiple directions (more on this later):
Next, we look to Peter Richardson’s recent post on Mapzen’s blog regarding terrain simplification.
I tried (not nearly as hard as I should have) to understand their code, when I saw a link to Daniel Huffman’s blog post from 2011 on terrain generalization: On Generalization Blending for Shaded Relief.
That’s when I saw the equation:
((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax
I’ll let you read these posts, rather than rehashing, but here’s what I did toward adding to them. The gist of Daniel and Peter’s approach is to blend together a high resolution and lower resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high resolution DEM vs resampled version — if the location is much higher or lower than it’s neighbors, it is considered an important feature, and given detail, otherwise the low resolution version is used (actually, I suspect Mapzen’s approach is only highlighting top features based on their diagrams, but I haven’t dived into the code to verify).
Excuse the colors, we’ll fix those at the end, but this allows us to simplify something that looks like this:
Into something that looks like this:
See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.
I developed a pure GDAL approach for the simplification. It is purely command line, has hardcoded file names, etc., but could be driven from Python or another API and turned into a proper function. TL;DR: this is not yet refined, but quite effective.
If you’ve been following my blog for a while, you may recall a series of blog posts on determining landscape position using gdal.
This, with small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate standard deviation from our simple difference data.
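Computing such a local standard deviation raster is straightforward even with numpy alone. Here is a sketch that takes the standard deviation of each pixel's neighborhood (the window size is illustrative, and edges are trimmed for simplicity; my actual workflow does this with GDAL on the command line):

```python
import numpy as np

def local_std(dem, size=5):
    """Standard deviation of each pixel's size x size neighborhood.
    Returns only the 'valid' interior region (edges trimmed)."""
    windows = np.lib.stride_tricks.sliding_window_view(dem, (size, size))
    return windows.std(axis=(-2, -1))
```

Where this raster is high, the terrain is locally variable and we retain the detailed DEM; where it is low, the generalized DEM takes over.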
Back to those ugly colors on my hillshade version of the map. They go deeper than just color choice — it’s hard not to get a metallic look to digital hillshades. We see it in ESRI’s venerable map and in Mapbox’s Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses — I’m not sure.
To avoid this with our process (HT Brandon Garmin), I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me for long enough knows where this is going: straight to POV-Ray… :
The results? Stunning (am I allowed to say that?):
The color is very simple here, as we’ll be overlaying data. Please stay tuned.