Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Orthophotos from orthophotos?

Posted by smathermather on August 21, 2017

An increasingly common desired use case for OpenDroneMap is using it to make maps from non-drone imagery. For example:

  • I have a bucket of scanned images from the 1950’s. I want to make them into a mosaic. I have the geographic center for each image. Can I process these in OpenDroneMap?
  • I have a bucket of individually orthorectified images from a sensor like from the Wildfire Airborne Sensor Program (WASP). I’d like to mosaic them into one image. Can I process these in OpenDroneMap?

For the first question, the answer is unknown, and I’m looking forward to testing it. For the second question, I have done some testing with great success.

The WASP sensor, as flown for the mission I’m testing, outputs a whole bucket of individually orthorectified imagery.

Image of aerials with lines showing boundaries of overlapping independently orthorectified imagery.


The problem that we encounter is both radiometric inconsistencies between images and geometric offsets.


Screen shot showing radiometric inconsistencies and geometric offsets.

Fortunately, OpenDroneMap is good at dealing with both. The questions for this dataset were simply: is there enough overlap to allow ODM to do its magic, and does it create usable output?

Looking at overlap was promising:


Image showing the high level of overlap in overlapping image footprints.

The best thing to do next is to try. One problem: ODM doesn’t read GeoTIFFs. Oops. Two problems: ODM also assumes that your data has EXIF headers describing the location of the camera when each image was taken.

The first problem is easy. If we want JPEGs, we might run something like the following:

for f in *.tif; do
  gdal_translate -of JPEG -co QUALITY=100 -co WORLDFILE=YES -scale 0 1024 0 255 "$f" "$f.jpg"
done

The second problem is a little trickier. We can extract info from our new JPEGs pretty easily with GDAL:

gdalinfo orthoVNIR2549_flatfield.tif.jpg

Resulting in something like the following:

Driver: JPEG/JPEG JFIF
Files: orthoVNIR2549_flatfield.tif.jpg
       orthoVNIR2549_flatfield.tif.jpg.aux.xml
       orthoVNIR2549_flatfield.tif.wld
Size is 3233, 4303
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.2572229328696,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433],
    AUTHORITY["EPSG","4326"]]
Origin = (-72.222122858200009,18.545027305599998)
Pixel Size = (0.000001126600000,-0.000001126600000)
Metadata:
  AREA_OR_POINT=Point
  EXIF_GPSLatitude=(18) (32) (33.3722)
  EXIF_GPSLatitudeRef=N
  EXIF_GPSLongitude=(72) (13) (13.0861)
  EXIF_GPSLongitudeRef=W
  EXIF_GPSVersionID=0x2 0x3 00 00
  EXIF_ResolutionUnit=1
  EXIF_XResolution=(1)
  EXIF_YCbCrPositioning=1
  EXIF_YResolution=(1)
Image Structure Metadata:
  COMPRESSION=JPEG
  INTERLEAVE=PIXEL
  SOURCE_COLOR_SPACE=YCbCr
Corner Coordinates:
Upper Left  ( -72.2221229,  18.5450273) ( 72d13'19.64"W, 18d32'42.10"N)
Lower Left  ( -72.2221229,  18.5401795) ( 72d13'19.64"W, 18d32'24.65"N)
Upper Right ( -72.2184806,  18.5450273) ( 72d13' 6.53"W, 18d32'42.10"N)
Lower Right ( -72.2184806,  18.5401795) ( 72d13' 6.53"W, 18d32'24.65"N)
Center      ( -72.2203017,  18.5426034) ( 72d13'13.09"W, 18d32'33.37"N)
Band 1 Block=3233x1 Type=Byte, ColorInterp=Red
  Overviews: 1617x2152, 809x1076, 405x538
  Image Structure Metadata:
    COMPRESSION=JPEG
Band 2 Block=3233x1 Type=Byte, ColorInterp=Green
  Overviews: 1617x2152, 809x1076, 405x538
  Image Structure Metadata:
    COMPRESSION=JPEG
Band 3 Block=3233x1 Type=Byte, ColorInterp=Blue
  Overviews: 1617x2152, 809x1076, 405x538
  Image Structure Metadata:
    COMPRESSION=JPEG

From this we can just extract what we need from the image center info:

gdalinfo orthoVNIR2549_flatfield.tif.jpg | grep Center

Which results in the following:

Center      ( -72.2203017,  18.5426034) ( 72d13'13.09"W, 18d32'33.37"N)

OK, this will require a little filtering to extract what we need. Then we’ll use exiftool to write it back to the file.
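The post doesn’t show the actual filtering and exiftool invocation, so here is a hypothetical sketch of that step, assuming decimal degrees are acceptable: pull the Center coordinate out of the gdalinfo output and hand it to exiftool as GPS tags. The GPS tag names are real exiftool tags; the parsing pipeline and file layout are my assumptions, not necessarily what was used originally.

```shell
# Parse a gdalinfo "Center" line into "lon lat" in decimal degrees.
parse_center() {
  grep '^Center' | tr -d '(),' | awk '{print $2, $3}'
}

# Guarded so this is a no-op where GDAL or exiftool are not installed.
if command -v gdalinfo >/dev/null && command -v exiftool >/dev/null; then
  for f in *.jpg; do
    [ -f "$f" ] || continue
    coords=$(gdalinfo "$f" | parse_center)
    lon=${coords% *}
    lat=${coords#* }
    # exiftool wants unsigned values plus hemisphere references.
    latref=N; lonref=E
    case "$lat" in -*) latref=S; lat=${lat#-};; esac
    case "$lon" in -*) lonref=W; lon=${lon#-};; esac
    exiftool -GPSLatitude="$lat" -GPSLatitudeRef="$latref" \
             -GPSLongitude="$lon" -GPSLongitudeRef="$lonref" "$f"
  done
fi
```

exiftool converts decimal degrees to the EXIF degrees/minutes/seconds representation itself, which is why no DMS math appears above.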

Voilà! Now we can process in OpenDroneMap. Next, to find an older historical dataset to attempt this on.


OpenDroneMap derived mosaic of input orthophotos with remaining image boundaries to process in the background.

Posted in Other | 1 Comment »

Gorilla Food Plants Biomass and Ranging

Posted by smathermather on July 4, 2017

The last two days, I have been working with Olivier Jean Leonce Manzi here at Karisoke Research Center on the relationship between gorilla food plant biomass and the ranging patterns of gorillas outside Volcanoes National Park (VNP) in Rwanda.

A view from the edge of VNP looking toward Mounts Mgahinga and Muhabura


Even though Volcanoes National Park is set aside and gorillas thrive inside, they don’t strictly stay within the bounds of the park — often leaving the park to forage in the adjacent farming communities. This tendency is an opportunity for human / wildlife conflict, and thus it is important to understand the relationship between gorilla browsing habits outside the park and their food resources in these farming communities.

Among the food plants in the study, eucalyptus, a non-native, is planted widely in Rwanda as a fast growing timber source. Gorillas like to climb up eucalyptus, stripping the outer bark, and use their teeth to scrape the inner bark from the tree. For larger trees, this seems to have little effect. For smaller trees, it can kill the tree easily, either by the weight of the gorilla on the tree or the damage to the vascular system of the tree.

 

Eucalyptus trees planted near VNP. Eating the inner bark of Eucalyptus is a favorite food of gorillas. For larger trees, this seems to have no effect. For smaller trees, this can maim or kill the non-native tree plantings.


Bamboo, native to the region, is widely planted outside VNP (and grows in forests inside VNP). As it sprouts during rainy season, it is a favorite food of gorillas both inside and outside the park.

Small bamboo stand outside VNP.


Olivier’s study area is the farmland along the base of Karisimbi and Bisoke Volcanoes just outside VNP. (Quick aside: it was a combination of the names Karisimbi and Bisoke that formed Karisoke Research Center’s name.)

Map of study area near Karisimbi and Bisoke Volcanoes outside Volcanoes National Park.


The two datasets we want to compare for the study are counts of gorillas ranging outside the park and transects of gorilla plant food biomass, also outside the park.

Gorilla from the Amahoro (Peace) group in a bamboo stand during rainy season. (Amahoro group is not part of this study.)


 


Map of gorilla sightings outside VNP within our study area.

The measurements that Olivier made of gorilla food plant biomass are of herbs (overall), trees (overall), eucalyptus, bamboo, and rubus (a group commonly known as raspberries, blackberries, dewberries, etc.) — favored foods for gorillas, especially outside the park.


Vegetation plot locations along border of VNP

For the purposes of analysis, Olivier grouped the vegetation plots and gorilla counts into 18 approximately equal zones. Gorilla counts and vegetation plots were summarized per zone and compared.


I remember his project as one of the more difficult ones to wrap my head around when I was here in December. In summary, here’s the problem that we wanted to solve: given a dataset of gorilla food biomass outside the park, and counts of gorillas outside the park, can we establish a relationship between the two? It should be an easy problem: on one side of the equation we have count data, on the other ratio data. My first thought would be to apply a Poisson approach.

Gorilla.count ~ Herbs.biomass + Tree.biomass + Eucalyptus.biomass + Bamboo.biomass + Rubus.biomass

But we quickly run into a problem: our data are not independent in time or in space, and so a simple Poisson approach is likely not appropriate.

For as much work as I’ve done in the geospatial space, I have done precious little with spatial statistics, so this posed a conundrum for me. I can’t remember now if it was through much googling, or the great stewardship of Dr. Patrick Lorch, the research manager at my institution, that we settled upon using a Markov chain Monte Carlo (MCMC) approach to the problem. The advantage of this approach is that we can do the analysis using a Poisson distribution without regard to autocorrelation. In the R statistical package, this is an easy analysis to set up.

Finally, we’ll close with a picture of Olivier working on his literature review for the methods section:


Posted in Ecology, Gorillas, Karisoke, National Park, R | Tagged: , , , , , , | 1 Comment »

ZMI — Zanzibar Mapping Initiative Level 1

Posted by smathermather on June 12, 2017

The Zanzibar Mapping Initiative is the largest civilian drone mapping project in the world — an ambitious project to map the Zanzibar Archipelago using a whole host of eBee drones.

Khadija Abdulla Ali demonstrating launch.

The project is nearing completion of mapping Unguja, the larger of the two main islands.

Yussuf Said Yussuf showing how the camera seats in an eBee drone.

Yves Barthelemy talking to the ZMI team.


Unguja Island from Landsat 5, 2009

Because of the large area to be covered, ZMI required an approach to partition the data into manageable flight areas.


Zone grid

In practice, these were flown as overlapping areas, with some areas flown at a higher resolution:


Overlapping flight areas


Now here’s the problem: how do we put these back together with just GDAL and a Windows command prompt? I had the privilege of testing out my ideas on these data:

https://gist.github.com/smathermather/d948a252f5e417334244adc05c10790b
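The gist holds the Windows-prompt version; the same idea in bash, with hypothetical directory and file names, is roughly as follows. gdalbuildvrt makes a no-cost virtual mosaic over the tiles, and gdal_translate then bakes it into a single compressed GeoTIFF.

```shell
VRT=mosaic.vrt   # output names here are hypothetical

# Guarded so this is a no-op where GDAL (or the tiles) are absent.
if command -v gdalbuildvrt >/dev/null && [ -d tiles ]; then
  # Virtual mosaic over all tiles; later tiles paint over earlier ones.
  gdalbuildvrt "$VRT" tiles/*.tif
  # Materialize the mosaic as one tiled, JPEG-compressed GeoTIFF.
  gdal_translate -of GTiff -co COMPRESS=JPEG -co TILED=YES "$VRT" mosaic.tif
fi
```

The VRT step is what makes this workable at archipelago scale: nothing is copied until the final translate.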


Cookie cutter versions of the imagery.


Mosaic

Posted in GDAL, Other | Tagged: , , , , | 1 Comment »

A little Gorilla Time

Posted by smathermather on June 12, 2017

I miss my mountain gorilla friends in Rwanda. Let’s write a little more code to support them. I’ll be visiting Karisoke again next week, so it seems timely to post a little more code (HT Jean Pierre Samedi Mucyo for working with me on this one).

The problem today is simple: given a time series of gorilla locations and dates, can we calculate rate of travel using PostgreSQL? Well, of course we can. We can do anything in Postgres.

We have two tricks here:

  1. The first is to order our data so we can just compare one row to the next.
  2. Once we do that, we need simply to use PostGIS to calculate distance, and ordinary time functions from Postgres to calculate time difference.

This is my first use of WITH RECURSIVE, and it’s probably unnecessary (it could be replaced with windowing functions), but I was very proud to finally get over my fear of WITH RECURSIVE. (We actually use one windowing function in our prep of the data. But there we are… )

Screenshot of the SQL query.

For the record, WITH RECURSIVE isn’t recursive, but it is useful here in allowing us to compare the current row with the previous.
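As a concrete illustration of the windowing alternative mentioned above, here is a hypothetical sketch of the same two tricks with LAG instead of WITH RECURSIVE. Table and column names are invented, and it assumes the geometry is in a projected CRS (or cast to geography) so ST_Distance returns meters. It writes the query to a file you could feed to psql.

```shell
# LAG fetches the previous row within each group, ordered by time,
# which gives us both the distance moved and the time elapsed.
cat > travel_rate.sql <<'SQL'
SELECT gorilla_group,
       obs_time,
       ST_Distance(geom, LAG(geom) OVER w)        AS dist_m,
       obs_time - LAG(obs_time) OVER w            AS elapsed,
       ST_Distance(geom, LAG(geom) OVER w)
         / NULLIF(EXTRACT(EPOCH FROM obs_time - LAG(obs_time) OVER w), 0)
                                                  AS meters_per_second
FROM gorilla_locations
WINDOW w AS (PARTITION BY gorilla_group ORDER BY obs_time);
SQL
# Run against your database, e.g.:
# psql -d your_database -f travel_rate.sql
```

The NULLIF guards against dividing by zero when two observations share a timestamp.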

Posted in Database, Ecology, Gorillas, Karisoke, National Park, PostgreSQL, SQL | Tagged: , , , , | Leave a Comment »

OpenDroneMap on the road part II

Posted by smathermather on March 28, 2017

Thinking a little more about moderately large compute resources and their container (see previous post), I revised my analysis to see whether we can fit these 10 NUCs plus switch and outlets into a carry-on sized case. At first blush, it seems feasible:

Image of a carry-on Pelican case layout for 10 NUCs, switch, and outlets.

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 5 Comments »

OpenDroneMap on the road

Posted by smathermather on March 27, 2017

Contemplation

This is a theoretical post. Imagine for a moment that OpenDroneMap can scale to the compute resources that you have in an elastic and sane way (we are short weeks away from the first work on this), and so, if you are a typical person in the high-speed internet world, you might be thinking, “Great! Let’s throw this up on the cloud!”

But imagine for a moment you are in a network limited environment. Do you process on a local laptop? Do you port around a desktop? The folks in the humanitarian space think about this a lot — depending on the project, one could spend weeks or months in network limited environments.

Enter POSM

Folks at American Red Cross (ARC) have been thinking about this a lot. The result, built to aid in mapping rural areas in West Africa and the like, is Portable OpenStreetMap, or POSM, a tool for doing all the OpenStreetMap stuff, but totally and temporarily offline.

The software for this is critical, but I’ve been increasingly interested in the hardware side of things. OpenDroneMap, even with its upcoming processing, memory, and scaling improvements, will still require more compute resources than, say, OpenMapKit and Field Papers. I’ve been contemplating: once the improvements are in place, what kind of compute center could you haul into the field with you?

I’m not just thinking humanitarian and development use cases either — what can we do to make processing drone imagery in the field faster? Can we make it fast enough to get results before leaving the field? Can we modify our flight planning based on the stream of data being processed and adapt while we are there? Our real costs for flying are often finding staff and weather windows that are good, and sometimes we miss opportunities in the delay between imagery capture and processing. How can we close that loop faster?

The NUC

On the hardware side of the house, the folks at ARC are using Intel NUC kits. For ODM, as I understand it, they go a step up in processing power from their specs to something with an i7. So, I got to thinking — can we put together a bunch of these, running on a generator, and not break the bank on weight (keep it under 50 lbs)? It turns out, maybe we can. For a round $10,000, you might assemble 10 of these 4-core NUCs with a network switch, stuff them into a Pelican Air 1605 case, for a total of 320 GB RAM and 2.5 TB of storage. More storage can be added if necessary.

This is a thought experiment so far, and may not be the best way to get compute resources in the field, your mileage may vary, etc., but it’s an interesting thought.

Cost Breakdown

Image of the cost breakdown for the 10-NUC field kit.

Follow up

Any thoughts? Anyone deployed serious compute resources to the field for drone image processing? I’d love to hear what you think.

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 2 Comments »

Time for localization?

Posted by smathermather on March 26, 2017

Just saw this great blog post by my friend Mr. Yu at Korea National Park on using OpenDroneMap. If you need it in English, Google seems to translate it rather well:


Maybe it’s time to look at localization for WebODM… .

Posted in 3D, OpenDroneMap, Other | Tagged: | Leave a Comment »

Scaling OpenDroneMap, necessary (and fun!) next steps

Posted by smathermather on March 8, 2017

Project State

OpenDroneMap has really evolved since I first put together a concept project presented at FOSS4G Portland in 2014, and hacked with my first users (Michele M. Tobias & Alex Mandel). At this stage, we have a really nicely functioning tool that can take drone images and output high-quality geographic products. The project has 45 contributors, hundreds of users, and a really great community (special shout-out to Piero Toffanin and Dakota Benjamin, without whom the project would be nowhere near as viable, active, or wonderful). Recent improvements can be roughly categorized into data quality improvements and usability improvements. Data quality improvements were aided by the inclusion of better point cloud creation from OpenSfM and better texturing from mvs-texturing. Usability improvements have largely come in the development of WebODM as an easy-to-use and easy-to-deploy front end for OpenDroneMap.

With momentum behind these two directions, improved usability and improved data output, it’s time to think a little about how we scale OpenDroneMap. It works great for individual flights (up to a few hundred images at a time), but a promise of open source projects is scalability. Regularly we get questions from the community about how they can run ODM on larger and larger datasets in a sustainable and elastic way. To answer these questions, let me outline where we are going.

Project Future

Incremental optimizations

When I stated that scalability is one of the promises of open source software, I mostly meant scaling up: if I need more computing resources with an open source project, I don’t have to purchase more software licenses; I just need to rent or buy more computing resources. But an important element of scalability is the per-unit use of computing resources as well. If we are not efficient and thoughtful about how we use things at the small scale, then we are not maximizing our scaled-up resources. Are we efficient in memory usage? Is our matching algorithm as efficient as it can be for the accuracy we need, and thus efficient with the processor resources we have? I think of this as improving OpenDroneMap’s ability to efficiently digest data.

Magic School Bus going down the digestive system

Incremental toolchain optimizations are thus part of this near future for OpenDroneMap (and by consequence OpenSfM, the underlying computer vision tools for OpenDroneMap), focusing on memory and processor resources. The additional benefit here is that small projects and small computing resources also benefit. For humanitarian and development contexts where compute and network resources are limiting, these incremental improvements are critical. Projects like American Red Cross’ Portable OpenStreetMap (POSM) will benefit from these improvements, as will anyone in the humanitarian and development communities that need efficient processing of drone imagery offline.

To this end, three approaches are being considered for incremental improvements. Matching speed could be improved by the use of Cascade Hashing matching or a Bag of Words based method. Memory improvements could come via improved correspondence graph data structures, and possibly SLAM-like pose-graph methods for global adjustment of camera positions in order to avoid full global bundle adjustment.

Figure from Bag of Words paper


Large-scale pipeline

In addition to incremental improvements, for massive datasets we need an approach to splitting up our dataset into manageable chunks. If incremental improvements help us better and more quickly process datasets, the large-scale pipeline is the teeth of this approach — we need to cut and chew up our large datasets into smaller chunks to digest.

Image of Dr. Teeth of the Muppets.

Dr. Teeth

If for a given node I can process 1000 images efficiently, but I have 80,000 images, I need a process that splits my dataset into 80 manageable chunks and processes through them sequentially or in parallel until done. Maybe I have 9000 images? Then I need it split into 9 chunks.
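The chunking step itself is trivial bookkeeping. A sketch, assuming the images sit in an images/ directory (the layout and the chunk size of 1000 are illustrative):

```shell
# Write the full image list, then split it into chunks of at most 1000
# file names each: chunks/chunk_00, chunks/chunk_01, ...
# (split -d for numeric suffixes is a GNU coreutils option.)
mkdir -p chunks
printf '%s\n' images/*.JPG > all_images.txt
split -l 1000 -d all_images.txt chunks/chunk_
# Each chunk_* file is then the submission list for one processing node.
```

The hard part, as the figures show, is that the chunks need overlap with their neighbors so they can be matched back together; that bookkeeping is omitted here.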

Image over island showing grid of 9 for splitting an aerial dataset

Eventually, I want to synthesize the outputs back into a single dataset. Ideally I split the dataset with some overlap as follows:

Image over island showing grid of 9 for splitting an aerial dataset, shown with overlap

Problems with splitting SfM datasets

We do run into some very real problems with splitting our datasets into chunks for processing. There are a variety of issues, but the most stark is consistency issues from the resultant products. Quite often our X, Y, and Z values won’t match in the final reconstructions. This becomes critical when performing, e.g. hydrologic analyses on resultant Digital Terrain Models.


Water flow on patched DEM showing pooling effects around discontinuities (credit: Anna Petrasova et al)

Anna Petrasova et al. address merging disparate DEMs in GRASS with Seamless fusion of high-resolution DEMs from multiple sources with r.patch.smooth.


Water flow on fused DEM showing corrected flow (credit: Anna Petrasova et al)

What Anna describes and solves is the problem of matching LiDAR and drone data and assumes that the problems between the datasets are sufficiently small that smoothing the transition between the datasets is adequate. Unfortunately, when we process drone imagery in chunks, we can get translation, rotation, skewing, and a range of other differences that often cannot be accounted for when we’re processing the digital terrain model at the end.

What follows is a small video of a dataset split and processed in two chunks. Notice offsets, rotations, and other issues of mismatch in the X and Y dimensions, and especially Z.

When we see these differences in the resultant digital terrain model, the problem can be quite stark:

Elevation differences along seamline of merged OpenDroneMap DTMs

Elevation differences along seamline of merged OpenDroneMap DTMs

To address these issues we require both the approach that Anna proposes that fixes for and smooths out small differences, and a deeper approach specific to matching drone imagery datasets to address the larger problems.

Deeper approach to processing our bites of drone data

To ensure we are getting the most out of stitching these pieces of data back together at the end, we require a matching approach very similar to the one we use to match images to each other. Our steps will be something like the following:

  • Split our images to groups
  • Run reconstruction on each group
  • Align and transform those groups to each other using matching features between the groups
  • For secondary products, like Digital Terrain Models, blend the outputs using an approach similar to r.patch.smooth.

In close

I hope you enjoyed a little update on some of the upcoming features for OpenDroneMap. In addition to the above, we’ll also be wrapping in reporting and robustness improvements. More on that soon, as that is another huge piece that will help the entire community of users.

(This post CC BY-SA 4.0 licensed)

(Shout out to Pau Gargallo Piracés of Mapillary for the technical aspects of this write up. He is not responsible for any of the mistakes, generalities, and distortions in the technical aspects. Those are all mine).

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | 2 Comments »

Taking Slices from ~~LiDAR~~ OpenDroneMap data: Part X

Posted by smathermather on February 23, 2017

Part 10 of N… wait, this is a lie. This post is actually about optical drone data, not LiDAR data. It is about next phase features for OpenDroneMap: automated and semi-automated filtering of the point clouds, creation of DTMs, and other such fun stuff.

To date, we’ve only extracted Digital Surface Models from ODM — the top surface of everything in the scene. As it is useful for hydrological modeling and other purposes to have a Digital Terrain Model estimated, we’ll be including PDAL’s Progressive Morphological Filter for the sake of DEM extraction. Here’s a small preview:
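As a hedged sketch of what that could look like, here is a PDAL pipeline using filters.pmf, PDAL’s Progressive Morphological Filter. The input file name, resolution, and writer settings are illustrative, not ODM’s actual configuration.

```shell
# Classify ground with the Progressive Morphological Filter, keep only
# ground returns (class 2), and rasterize them to a DTM.
cat > dtm_pipeline.json <<'EOF'
[
    "odm_georeferenced_model.laz",
    { "type": "filters.pmf" },
    { "type": "filters.range", "limits": "Classification[2:2]" },
    { "type": "writers.gdal",
      "filename": "dtm.tif",
      "resolution": 1.0,
      "output_type": "idw" }
]
EOF

# Run it where PDAL (and the input file) exist; a no-op otherwise.
if command -v pdal >/dev/null && [ -f odm_georeferenced_model.laz ]; then
  pdal pipeline dtm_pipeline.json
fi
```

filters.pmf has tuning knobs (window size, slope, thresholds); the defaults are used above for brevity.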

Posted in 3D, Docker, OpenDroneMap, PDAL | Tagged: , , | Leave a Comment »

Taking Slices from LiDAR data: Part IX

Posted by smathermather on February 20, 2017

Part 9 of N… , see e.g. my previous post on the topic.

We’ve been working to reduce the effect of overlapping samples on statistics we run on LiDAR data, and to do so, we’ve been using PDAL’s filters.sample approach. One catch: this handles the horizontal sampling problem well, but we might want to intentionally retain samples from high locations — after all, I want to see the trees for the forest and vice versa. So, it might behoove us to sample within each of our desired height classes to retain as much vertical information as possible.
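For instance, purely as a sketch with made-up class breaks and sampling radius, one could slice the cloud by elevation, Poisson-sample each slice with filters.sample, and merge the results, so that horizontal thinning doesn’t erase the vertical structure:

```shell
# Hypothetical height classes (metres) and per-class sampling below.
classes="Z[0:2] Z[2:10] Z[10:60]"

# Guarded so this is a no-op where PDAL (or the input) is absent.
if command -v pdal >/dev/null && [ -f input.laz ]; then
  i=0
  for range in $classes; do
    # Keep one height class, then thin it with Poisson sampling.
    pdal translate input.laz "class_$i.laz" range sample \
      --filters.range.limits="$range" \
      --filters.sample.radius=0.5
    i=$((i+1))
  done
  # Recombine the per-class samples into one cloud.
  pdal merge class_0.laz class_1.laz class_2.laz sampled.laz
fi
```

Note this slices on raw Z; slicing on height above ground would be more faithful to the height classes we actually analyze, at the cost of an extra normalization step.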

Posted in 3D, Database, Docker, LiDAR, Other, PDAL, pointcloud, PostGIS, PostgreSQL | Tagged: , , , , , | 1 Comment »