Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Taking Slices from ~~LiDAR~~ OpenDroneMap data: Part X

Posted by smathermather on February 23, 2017

Part 10 of N…, wait. This is a lie. This post is actually about optical drone data, not LiDAR data. It’s about next-phase features for OpenDroneMap — automated and semi-automated classification of the point clouds, creation of DTMs, and other fun such stuff.

To date, we’ve only extracted Digital Surface Models from ODM — the top surface of everything in the scene. Since an estimated Digital Terrain Model is useful for hydrological modeling and other purposes, we’ll be including PDAL’s Progressive Morphological Filter for DTM extraction. Here’s a small preview:
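The exact pipeline is still taking shape, but a sketch of what this might look like as a PDAL (json) pipeline follows; the filter parameters, file names, and writer stage here are illustrative placeholders, not tuned values:

```json
{
  "pipeline": [
    "input.laz",
    {
      "type": "filters.pmf",
      "max_window_size": 33,
      "cell_size": 1.0,
      "slope": 1.0,
      "initial_distance": 0.15
    },
    {
      "type": "filters.range",
      "limits": "Classification[2:2]"
    },
    {
      "type": "writers.gdal",
      "filename": "dtm.tif",
      "output_type": "min",
      "resolution": 1.0
    }
  ]
}
```

filters.pmf marks ground returns as class 2, filters.range keeps only those, and the writer grids them into a terrain raster.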

Posted in 3D, Docker, OpenDroneMap, PDAL

Taking Slices from LiDAR data: Part IX

Posted by smathermather on February 20, 2017

Part 9 of N…; see, e.g., my previous post on the topic.

We’ve been working to reduce the effect of overlapping samples on statistics we run on LiDAR data, and to do so, we’ve been using PDAL’s filters.sample approach. One catch: this handles the horizontal sampling problem well, but we might want to intentionally retain samples from high locations — after all, I want to see the trees for the forest and vice versa. So, it might behoove us to sample within each of our desired height classes to retain as much vertical information as possible.
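For example, something like the following (a sketch, with hypothetical file names and a made-up height class; it assumes HeightAboveGround has already been computed, e.g. with filters.hag):

```json
{
  "pipeline": [
    "hag.laz",
    {
      "type": "filters.range",
      "limits": "HeightAboveGround[2:5]"
    },
    {
      "type": "filters.sample",
      "radius": 1.0
    },
    {
      "type": "writers.las",
      "filename": "slice_2m_5m.laz"
    }
  ]
}
```

Running one such pipeline per height class and merging the outputs would thin each slice horizontally while keeping vertical structure across slices.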

Posted in 3D, Database, Docker, LiDAR, Other, PDAL, pointcloud, PostGIS, PostgreSQL

Taking Slices from LiDAR data: Part VIII

Posted by smathermather on February 18, 2017

Part 8 of N…; see, e.g., my previous post on the topic.

I didn’t think my explanation of sampling problems with LiDAR data in my previous post was adequate. Here are a couple more figures for clarification.

We can take this dataset over trees, water, fences, and buildings that is heavily sampled in some areas and sparsely sampled in others and use PDAL’s filters.sample (Poisson dart-throwing) to create an evenly sampled version of the dataset.

Figure showing overlap of LiDAR scanlines

Figure showing data resampled for evenness

An extra special thanks to the PDAL team for not only building such cool software, but being so responsive to questions!

Posted in 3D, Database, Docker, LiDAR, Other, PDAL, pointcloud, PostGIS, PostgreSQL

Quantitative analysis of gorilla and monkey movement and R-Stat (part 3)

Posted by smathermather on February 16, 2017

This blog post is from a series of posts on gorilla and biodiversity research in Rwanda. I have introduced the people, the place, and a little on the beasties there. Now we’ll walk through some R code for home range estimation.

Home range estimation is a pretty deep and also abstract concept. Heuristically, it is the process of looking at where an animal or group of animals moves in the world. If one were to create a home range map for me, it’d be a pretty simple bimodal map of home and work.

Where it gets funky is what one does with the unusual places an animal travels. So, for my home range, I am mostly at work, home, church on Sundays, yoga, and the grocery store. But I did spend 2 weeks in Rwanda, one week in Tanzania, one week in Belgium and the Netherlands, one week in Seattle, one in Raleigh, etc., etc. Should these places be part of my home range?

All this to say: home ranges are usually calculated with some means of excluding the less common places. I would be happy if East Africa were part of my home range, but I think it’s arguable that it is not yet so.

Also, depending on the approach we use, travel between places may or may not be considered part of the home range. Back to my home range: ideally even if we concluded that 2 weeks living in Rwanda expanded my home range to include Musanze, the flight there and back probably shouldn’t be included in my home range. For our work today, we are not going to be excluding travel from our home range calculations, but understand that it can be relevant to some home range calculations.

For our home range calculation today, the following assumptions will be made:

  • We won’t be explicitly excluding travel from our home range calculations.
  • We’ll use simple techniques to exclude ephemeral portions of the home range.

For basic home range analysis, we’ll use R’s adehabitat home range (adehabitatHR) package.

# Load the libraries we need for reading, manipulating, and analyzing spatial data
library(sp) # Spatial data objects in R
library(rgdal) # Geospatial Data Abstraction Library
library(adehabitatHR) # Adehabitat HomeRange
library(readr) # File read capacity
library(rgeos) # Geometric calculation to be used later
library(maptools) # more spatial stuff

Now that we have every library we need loaded (and maybe then some) let’s load the data.

# We need to add a data filter here...
# For now, we assign just the columns we need for HR calculation
loc_int_totf <- loc_int_tot[,c('X','Y', 'id')]

We’ll want to explicitly turn these data into geospatial data.

# Use sp library to assign coordinates and projection
coordinates(loc_int_totf) <- c("X", "Y")
# Our projection is UTM Zone 35S
# proj4string acquired at
proj4string(loc_int_totf) <- CRS("+proj=utm +zone=35 +south +ellps=WGS84 +datum=WGS84 +units=m +no_defs") 

Now we are ready to do some home range calculations. We’ll use the kernelUD function to convert our data into a surface representing our home range estimate. The pixel values of this surface (represented as a raster) sum to 1, or 100% of the home range.

# Estimating the utilization distribution using "reference" bandwidth
kud <- kernelUD(loc_int_totf)

# Display the utilization distribution
image(kud)

Kernel density map

Recall that the total of all the pixel values here is 1, meaning this image represents 100% of the calculated range of the golden monkeys. If we want to calculate the 70% home range (the area in which we estimate the golden monkeys spend 70% of their time), we do so as follows:

# Estimate the homerange from the utilization distribution
homerange <- getverticeshr(kud, 70)

Outline of 70% home range

Now it would be useful to convert this to a data frame so that we can further manipulate and understand the data. For example, what is the home range size at any given percentile?

# Calculate home range sizes for every percentile from 1-99%
ii <- kernel.area(kud, percent=seq(1, 99, by=1))

Plot comparing home range % vs hectares

Finally, it would be nice to be able to get these data out of R and display them alongside other GIS data. We’ll use writeOGR from the rgdal package to do so.

# Write out data
writeOGR(homerange, getwd(),
         "homerange", driver="ESRI Shapefile")

That’s it for today. This bit of R will serve as the core code for a range of different analyses. Stay tuned!

Posted in Gorillas, Karisoke, R

Taking Slices from LiDAR data: Part VII

Posted by smathermather on February 15, 2017

Part 7 of N…; see, e.g., my previous post on the topic.

More work on taking LiDAR slices. This time, the blog post is all about data preparation. LiDAR data, in its raw form, often shows scan-line artifacts when we look at point density.


This can affect statistics we are running, as our sampling effort is not even. To ameliorate this effect a bit, we can decimate our point cloud before doing further work with it. In PDAL, we have three choices for decimation: filters.decimation, which samples every Nth point from the point cloud; filters.voxelgrid, which does volumetric pixel based resampling; and filters.sample, or “Poisson sampling via ‘Dart Throwing’”.

filters.decimation won’t help us with the above problem. Voxelgrid sampling could help, but it’s very regular, so I reject this on beauty grounds alone. This leaves filters.sample.

The nice thing about both the voxelgrid and the Poisson sampling is that they retain much of the shape of the point cloud while downsampling the data:



We will execute the Poisson sampling in PDAL. As many things in PDAL are best done with a (json) pipeline file, we construct a pipeline file describing the filtering we want to do, and then call that from the command line:
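A minimal version of such a pipeline (input and output file names and the sampling radius are placeholders) might look like:

```json
{
  "pipeline": [
    "input.laz",
    {
      "type": "filters.sample",
      "radius": 0.5
    },
    {
      "type": "writers.las",
      "filename": "sampled.laz"
    }
  ]
}
```

Saved as, say, `sample.json`, it runs with `pdal pipeline sample.json`.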

We can slice our data up as in previous posts and then look at the point density per slice. R code for doing this is forthcoming (thanks to Chris Tracey at the Western Pennsylvania Conservancy and the lidR project), but below is a graphic as a teaser. For the record, we will probably pursue a fully PDAL solution in the end, but there are really interesting results in the interim:


More to come. Stay tuned.

Posted in 3D, Database, Docker, LiDAR, Other, PDAL, pointcloud, PostGIS, PostgreSQL

Gorilla research in Musanze, Rwanda: Hillshades continued

Posted by smathermather on January 30, 2017

I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But it’s such a beautiful landscape that I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.


First, some background. I read three great blog posts on hillshades. One was from ESRI revealing their “Next Generation Hillshade”. Drawing on Swiss cartographic traditions, these are some nice-looking hillshades using lighting sources from multiple directions (more on this later):

ESRI’s Hillshades

Next, we look to Peter Richardson’s recent post on Mapzen’s blog regarding terrain simplification.

Terrain Generalization example from Mapzen

I tried (not nearly as hard as I should have) to understand their code, and then I saw a link to Daniel Huffman’s blog post from 2011 on terrain generalization: On Generalization Blending for Shaded Relief.

That’s when I saw the equation:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax

I’ll let you read these posts rather than rehashing them, but here’s what I did to add to them. The gist of Daniel and Peter’s approach is to blend together a high-resolution and a lower-resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high-resolution DEM vs. the resampled version: if a location is much higher or lower than its neighbors, it is considered an important feature and given detail; otherwise the low-resolution version is used. (Actually, I suspect Mapzen’s approach only highlights top features, based on their diagrams, but I haven’t dived into the code to verify.)
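Numerically, the blend is straightforward. Here is a minimal numpy sketch of the equation above (the function and array names are mine, not from either post):

```python
import numpy as np

def blend_dems(generalized, detailed, weight, weight_max=100.0):
    """Blend a generalized and a detailed DEM using a per-pixel weight.

    Where weight is near weight_max the generalized surface dominates;
    where it is near zero the detailed surface shows through.
    """
    generalized = np.asarray(generalized, dtype=float)
    detailed = np.asarray(detailed, dtype=float)
    weight = np.asarray(weight, dtype=float)
    return (generalized * weight + detailed * (weight_max - weight)) / weight_max

# Weight 100 returns the generalized value; weight 0 returns the detailed one.
g = np.array([[10.0, 10.0]])
d = np.array([[12.0, 8.0]])
w = np.array([[100.0, 0.0]])
blended = blend_dems(g, d, w)  # [[10., 8.]]
```

The same expression drops into a gdal_calc.py --calc argument for a pure command-line version.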

The Need

Excuse the colors, we’ll fix those at the end, but this allows us to simplify something that looks like this:

Multicolor hillshade with no simplification

Into something that looks like this:

Multicolor hillshade with simplification

See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.

I developed a pure GDAL approach for the simplification. It is purely command line, has hardcoded file names, etc., but could be done with a Python or other API and turned into a proper function. TL;DR: this is not yet refined but quite effective.

Landscape Position

If you’ve been following my blog for a while, you may recall a series of blog posts on determining landscape position using GDAL.

Landscape position as calculated on a DEM

This, with small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate standard deviation from our simple difference data.
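To make that modification concrete, here is a minimal sketch of a focal (neighborhood) standard deviation in numpy; the window size is illustrative, and the actual approach in the post is GDAL-based rather than this:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def focal_std(dem, size=3):
    """Per-pixel standard deviation over a size x size neighborhood.

    Only the valid interior is returned in this sketch; edge pixels
    would need padding in a real implementation.
    """
    windows = sliding_window_view(np.asarray(dem, dtype=float), (size, size))
    return windows.std(axis=(-2, -1))

# Flat terrain has zero local deviation; varying terrain does not.
flat = focal_std(np.zeros((5, 5)))               # 3x3 array of zeros
bumpy = focal_std(np.arange(25.0).reshape(5, 5))
```

Thresholding such a surface gives the weight raster that decides where to keep detail and where to generalize.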

The Tools


Back to those ugly colors on my hillshade version of the map. They go deeper than just color choice — it’s hard not to get a metallic look to digital hillshades. We see it in ESRI’s venerable map and in Mapbox’s Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses — I’m not sure.

HDRI (Shading)

To avoid this with our process (HT Brandon Garmin), I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me for long enough knows where this is going: straight to POV-Ray…:


The results? Stunning (am I allowed to say that?):

Example of simplified and HDRI rendered hillshade.

Hillshade over the hills of Rwanda

The color is very simple here, as we’ll be overlaying data. Please stay tuned.

Posted in GDAL, Gorillas, Karisoke, Optics, POV-Ray, QGIS, R

On Generalization Blending for Shaded Relief

Posted by smathermather on January 29, 2017


I have nearly recovered sufficiently from an amazing NACIS conference, and I think I’m ready to get back to a little blogging. This time around, I’d like to present you all with an unfinished concept, and to ask you for your help in carrying it to completion. Specifically, I’d like to show you some attempts I’ve made at improving digital hillshades (I’ll be randomly switching terminology between ‘hillshade’ and ‘shaded relief’ throughout).

Automated, straight-out-of-your-GIS hillshades are usually terrible, and it generally takes some extra cleanup work to get them to the point where they aren’t embarrassing or won’t burst into flames simply by being put next to a well-executed manual shaded relief. Here’s an example I stole which illustrates the problem:

The computer doesn’t see the big picture — that every little bump in elevation can sum to a large mountain, or that some bumps are more critical…


Posted in Other

Gorilla research in Musanze, Rwanda: Hillshades!

Posted by smathermather on January 22, 2017

In previous posts here1, here2, and here3, I discussed a then-future trip to Rwanda to help with GIS and gorilla research.

No more in-depth write-up yet, but I’ve been working on some of the cartographic products to show in the background of maps. Since Rwanda is so beautifully hilly (read: mountainous) and the research is focused on the Virunga Mountains (volcanoes) themselves, this is a huge opportunity for some hillshades to figure in the background.

So… the following image probably won’t make its way into production maps, but it is very painterly and beautiful, so I thought I’d share it:

Hillshade of the Virungas and surrounding areas.

Posted in Gorillas, Karisoke, QGIS, R

Gorilla research in Musanze, Rwanda

Posted by smathermather on January 18, 2017

In previous posts here1 and here2, I discussed a (then future) trip to Rwanda to help with GIS and gorilla research.

I cannot say enough good about the experience. The people of Rwanda are warm and welcoming, and the research team at Karisoke (Dian Fossey Gorilla Fund International) is hard working, brilliant, and fun. For now we’ll do some pictures to give flavor. Then I’ll build out the narrative and code in the next few blog posts:

Mount Mgahinga over Musanze Town

Mount Sabinyo over Musanze Town


Class on QGIS, First Day

A little Karisoke Soccer/Football. No GIS Managers were injured in the filming of this video:


Samedi Mucyo working on some QGIS2ThreeJS for visualizing Golden Monkey ranging data

Potato fields below Mount Bisoke

Stay tuned for more!

Posted in Gorillas, Karisoke, QGIS, R

FOSS4G 2018: Dar es Salaam

Posted by smathermather on January 10, 2017

This will be a different FOSS4G. As the first in a developing country, our vision is for an accessible and inclusive FOSS4G, a FOSS4G for All.

Source: FOSS4G 2018: Dar es Salaam

Posted in Other