Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Taking Slices from LiDAR data: Part VII

Posted by smathermather on February 15, 2017

Part 7 of N… (see, e.g., my previous post on the topic).

More work on taking LiDAR slices. This time, the blog post is all about data preparation. LiDAR data, in its raw form, often shows scan-line effects when we look at point density.

lidar_flightlines

This can affect the statistics we are running, as our sampling effort is not even. To ameliorate this effect a bit, we can decimate our point cloud before doing further work with it. In PDAL, we have three choices for decimation: filters.decimation, which samples every Nth point from the point cloud; filters.voxelgrid, which does volumetric-pixel-based resampling; and filters.sample, which does "Poisson sampling via 'dart throwing'".

filters.decimation won’t help us with the above problem. Voxelgrid sampling could help, but it’s very regular, so I reject this on beauty grounds alone. This leaves filters.sample.

The nice thing about both voxel grid and Poisson sampling is that they retain much of the shape of the point cloud while downsampling the data:

subsample-ex1

subsample-ex2

We will execute the Poisson sampling in PDAL. As many things in PDAL are best done with a (JSON) pipeline file, we construct a pipeline file describing the filtering we want, and then call that from the command line:
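The original pipeline embed did not survive here, so the following is a minimal sketch of what such a Poisson-sampling pipeline might look like (the input/output filenames and the `radius` value are placeholders, not the originals):

```json
{
  "pipeline": [
    "input.las",
    {
      "type": "filters.sample",
      "radius": 1.0
    },
    "output.las"
  ]
}
```

Saved as, say, `poisson.json`, this would be run with `pdal pipeline poisson.json`; a larger `radius` means more aggressive thinning.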

We can slice our data up similar to previous posts, and then look at the point density per slice. R code for doing this is forthcoming (thanks to Chris Tracey at Western Pennsylvania Conservancy and the lidR project), but below is a graphic as a teaser. For the record, we will probably pursue a fully PDAL solution in the end, but there are really interesting results in the interim:

image001

More to come. Stay tuned.

Posted in 3D, Database, Docker, LiDAR, Other, PDAL, pointcloud, PostGIS, PostgreSQL | 2 Comments »

Gorilla research in Musanze, Rwanda: Hillshades continued

Posted by smathermather on January 30, 2017

I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But, it’s such a beautiful landscape, I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.

Background

First, some prior art. I read three great blog posts on hillshades. One was from ESRI revealing their "Next Generation Hillshade". Drawing on Swiss cartographic traditions, these are some nice-looking hillshades using light sources from multiple directions (more on this later):

ESRI's Hillshades

Next, we look to Peter Richardson's recent post on Mapzen's blog regarding terrain simplification.

Terrain Generalization example from Mapzen

I tried (not nearly as hard as I should have) to understand their code, when I saw a link to Daniel Huffman’s blog post from 2011 on terrain generalization: On Generalization Blending for Shaded Relief.

That’s when I saw the equation:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax

I'll let you read these posts rather than rehashing them, but here's what I did toward adding to them. The gist of Daniel and Peter's approach is to blend a high-resolution and a lower-resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high-resolution DEM vs. the resampled version: if a location is much higher or lower than its neighbors, it is considered an important feature and given detail; otherwise the low-resolution version is used. (Actually, I suspect Mapzen's approach only highlights top features, based on their diagrams, but I haven't dived into the code to verify.)

The Need

Excuse the colors (we'll fix those at the end), but this allows us to simplify something that looks like this:

Multicolor hillshade with no simplification

Into something that looks like this:

Multicolor hillshade with simplification

See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.

I developed a pure GDAL approach for the simplification. It is purely command line, has hardcoded file names, etc., but it could be done with a Python or other API and turned into a proper function. TL;DR: this is not yet refined but quite effective.
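The core of the blend is just Daniel's equation above, applied cell by cell. As a sketch (plain Python rather than the GDAL command-line calls actually used; the function and variable names are mine, not from the original workflow):

```python
def blend_dem(generalized, detailed, weight, weight_max=100):
    """Blend a generalized and a detailed DEM cell value.

    Where weight is high (a flat, unimportant area) the generalized
    value dominates; where weight is low (an important feature such
    as a peak or valley) the detailed value dominates.
    """
    return (generalized * weight + detailed * (weight_max - weight)) / weight_max

# A cell flagged as important (weight 0) keeps full detail:
print(blend_dem(1200.0, 1250.0, 0))    # 1250.0
# An unimportant cell (weight 100) is fully generalized:
print(blend_dem(1200.0, 1250.0, 100))  # 1200.0
```

The same arithmetic could be run over whole rasters with `gdal_calc.py` or any raster API; the point here is only the weighting scheme.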

Landscape Position

If you've been following my blog for a while, you may recall a series of blog posts on determining landscape position using GDAL.

Landscape position as calculated on a DEM

This, with small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate the standard deviation of our simple difference data.
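By way of illustration (a from-scratch sketch, not the actual GDAL command-line steps; the windowing logic and names are mine): the weight layer is essentially a moving-window standard deviation, where flat neighborhoods get low values and peaks or valley edges get high ones.

```python
import statistics

def focal_stddev(grid, radius=1):
    """Moving-window standard deviation over a 2D grid (list of lists).

    Edge cells use whatever neighbors exist inside the grid.
    """
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Gather the neighborhood, clipped to the grid bounds
            window = [
                grid[rr][cc]
                for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                for cc in range(max(0, c - radius), min(cols, c + radius + 1))
            ]
            row.append(statistics.pstdev(window))
        out.append(row)
    return out

flat = [[5.0] * 3 for _ in range(3)]
print(focal_stddev(flat)[1][1])  # 0.0 -- a flat area carries no detail worth keeping
```

Cells where this value is high would keep the detailed DEM; cells where it is low would take the generalized one.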

The Tools

Generalization

Back to those ugly colors on my hillshade version of the map. They go deeper than just color choice: it's hard not to get a metallic look in digital hillshades. We see it in ESRI's venerable map and in Mapbox's Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses; I'm not sure.

HDRI (Shading)

To avoid this with our process (HT Brandon Garmin), I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me long enough knows where this is going: straight to POV-Ray:

Results

The results? Stunning (am I allowed to say that?):

Example of simplified and HDRI-rendered hillshade.

Hillshade over the hills of Rwanda

The color is very simple here, as we’ll be overlaying data. Please stay tuned.

Posted in GDAL, Gorillas, Karisoke, Optics, POV-Ray, QGIS, R | Leave a Comment »

On Generalization Blending for Shaded Relief

Posted by smathermather on January 29, 2017

somethingaboutmaps

I have nearly recovered sufficiently from an amazing NACIS conference, and I think I’m ready to get back to a little blogging. This time around, I’d like to present you all with an unfinished concept, and to ask you for your help in carrying it to completion. Specifically, I’d like to show you some attempts I’ve made at improving digital hillshades (I’ll be randomly switching terminology between ‘hillshade’ and ‘shaded relief’ throughout).

Automated, straight-out-of-your-GIS hillshades are usually terrible, and it generally takes some extra cleanup work to get them to the point where they aren’t embarrassing or won’t burst into flames simply by being put next to a well-executed manual shaded relief. Here’s an example I stole from shadedreliefarchive.com which illustrates the problem:

The computer doesn’t see the big picture — that every little bump in elevation can sum to a large mountain, or that some bumps are more critical…

View original post 1,411 more words

Posted in Other | Leave a Comment »

Gorilla research in Musanze, Rwanda: Hillshades!

Posted by smathermather on January 22, 2017

In previous posts here1, here2, and here3, I discussed a (then future) trip to Rwanda to help with GIS and gorilla research.

No more in-depth write-up yet, but I've been working on some of the cartographic products to show in the background of maps. Since Rwanda is so beautifully hilly (read: mountainous) and the research is focused on the Virunga Mountains (volcanoes) themselves, this is a huge opportunity for some hillshades to figure in the background of those maps.

So… the following image probably won’t make its way into production maps, but it is very painterly and beautiful, so I thought I’d share it:

Hillshade of the Virungas and surrounding areas.

Posted in Gorillas, Karisoke, QGIS, R | Leave a Comment »

Gorilla research in Musanze, Rwanda

Posted by smathermather on January 18, 2017

In previous posts here1 and here2, I discussed a (then future) trip to Rwanda to help with GIS and gorilla research.

I cannot say enough good about the experience. The people of Rwanda are warm and welcoming, and the research team at Karisoke (Dian Fossey Gorilla Fund International) is hard-working, brilliant, and fun. For now, we'll do some pictures to give flavor. Then I'll build out the narrative and code in the next few blog posts:

Mount Mgahinga over Musanze Town

Mount Sabinyo over Musanze Town

Class on QGIS, First Day

A little Karisoke Soccer/Football. No GIS Managers were injured in the filming of this video:

 

Samedi Mucyo working on some QGIS2ThreeJS for visualizing Golden Monkey ranging data

Potato fields below Mount Bisoke

Stay tuned for more!

Posted in Gorillas, Karisoke, QGIS, R | 1 Comment »

FOSS4G 2018: Dar es Salaam

Posted by smathermather on January 10, 2017

This will be a different FOSS4G. As the first in a developing country, our vision is for an accessible and inclusive FOSS4G, a FOSS4G for All.

Source: FOSS4G 2018: Dar es Salaam

Posted in Other | Leave a Comment »

Quantitative analysis of gorilla movement and R-Stat (part 2)

Posted by smathermather on November 23, 2016

In my previous post, I did a bit of setup on the who and what for analyzing gorilla data. Now let’s move into R-Stat a little bit, specifically installation and configuration.

For R, we’ll install whatever package is the correct one for your machine, and then also install R-Studio. This gives us a nice development environment for R.

Screen shot of R-Studio

We're going to leverage a few packages in our work. The first is the "sp" package. This gives us "Classes and Methods for Spatial Data" in R:

install.packages("sp")

We'll also install "rgdal". This package gives us "Bindings for the Geospatial Data Abstraction Library". For our use case here, this allows us to read shapefiles, GeoTIFFs, and other spatial data.

install.packages("rgdal")

We'll use "rmarkdown" also. This allows us to build a markdown file (think HTML, but much simpler) that includes our code in code blocks, and can be compiled to a document, whether a web page, PDF, etc. This encourages us to write code and document it in one place, and ultimately will allow us to build some really great reporting functionality while getting our analysis work done.

install.packages("rmarkdown")

Finally, we’ll need the adehabitatLT package for aiding in our analysis of animal movement and habitat use.

install.packages("adehabitatLT")

We can, of course install all this in one swell foop as follows:

install.packages("sp")
install.packages("rgdal")
install.packages("rmarkdown")
install.packages("adehabitatLT")

Now we have all the building blocks we need to start analyzing our data. Stick around for part 3.
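Once installed, the packages just need to be loaded at the top of each session or script. A minimal sketch of what that looks like (the trajectory call at the end uses made-up coordinates purely to illustrate adehabitatLT's entry point, not real gorilla data):

```r
library(sp)
library(rgdal)
library(rmarkdown)
library(adehabitatLT)

# Toy example: three fixes for one animal, converted to an ltraj
# trajectory object (the coordinates and dates here are invented)
xy <- data.frame(x = c(0, 10, 25), y = c(0, 5, 5))
dates <- as.POSIXct(c("2016-11-01 08:00", "2016-11-01 09:00", "2016-11-01 10:00"))
traj <- as.ltraj(xy, date = dates, id = "animal1")
summary(traj)
```

The `ltraj` object is the workhorse of adehabitatLT, and we'll be feeding real movement data into it in later posts.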

Posted in Gorillas, Karisoke, R | 2 Comments »

Quantitative analysis of gorilla movement and R-Stat

Posted by smathermather on November 21, 2016

At this point in my career, it is a rare request that intimidates me. Most requests are either solved, largely solved (and I know how to find the answer), or broadly improbable / excessively expensive.

XKCD cartoon demonstrating the difficulty of distinguishing difficult problems in computer science

This is a nice place to be. I have a bit of calmness with every request, a vision for how to execute, or the escape valve of saying, “That’s harder / more expensive than you think. Perhaps we should look at these alternatives.”

Recently, I got a request that was tougher — a theoretically possible project at the edge of my knowledge domain, a desire to deliver at any reasonable cost, and no clear sense of who to ask for help.

Specifically, how does one take volumes of gorilla tracking data, and make sense of the behaviors and intentions implicit in their movement? Many know the story of Dian Fossey, the primatologist who lived with the mountain gorillas in Rwanda. What many may not know, is her research continues in Rwanda at Karisoke Research Center (and has continued for 50 years), following the troops of gorillas in Rwanda, and the project has expanded quite a bit under auspices of the Dian Fossey Gorilla fund:

Map of Africa

Map of East Africa centered on Rwanda

Map of Rwanda

Zoomed in map of Rwanda centered near Karisoke

Image of Karisoke then and now

I have the privilege of being sponsored by the Cleveland Zoological Society to go to Karisoke, observe the research, and help where needed with GIS infrastructure development and geospatial analyses. I am excited beyond belief.

So, what's the test here? I know how to do geospatial infrastructure. Ask me to improve infrastructure and I have a decade of experience chasing down these problems, trying out different technologies, developing new ones. I also know how to perform advanced analyses. I pride myself on my analytic skills, even as I've moved from analyst to programmer to manager, etc. But I've never done the more advanced quantitative analyses of movement necessary to deepen understanding beyond work already done at Karisoke. Lots of great minds have already been looking at the data for years and decades. The basic question in my mind: how can I meaningfully help?

Thanks to my colleague Dr. Patrick Lorch, I have some sources. Pat did his thesis work studying the movements and behavior of Mormon crickets in the western US, and his experience with quantitative analysis of movement is directly applicable.

First we looked to Peter Turchin’s Quantitative Analysis of Movement. Then we found the adehabitat R-Stat module.

In short, the next few blog posts will be about the apparently incomparably beautiful nation of Rwanda, mountain gorillas, R-Stat, and adehabitat. There will be PostGIS mixed in for good measure. Stay tuned.

Posted in Gorillas, Karisoke, R | 3 Comments »

Filtering buildings and vegetation from drone-derived surface models

Posted by smathermather on October 21, 2016

I've been playing for the last few days with PDAL and its ability to filter point clouds. More code and details later, but here's a teaser of my results to date:

Image of unfiltered surface model.

Image of surface model derived from points filtered using approximate progressive morphological filter.

Image of surface model derived from points filtered using progressive morphological filter.

Image of surface model derived from points filtered using progressive morphological filter plus building footprint and outlier removal.

Posted in Other | Leave a Comment »

Finding peace, finding ground: Drone flights for hydrologic modeling

Posted by smathermather on October 15, 2016

Š-L-M

Lately I've been helping with a project to predict flooding in Dar es Salaam ("Home of Peace"), the former capital of Tanzania in East Africa. This is a really fun project for me, in part because most of the hard work is already done, but there still remain some tricky problems to solve.

This project is a part of Dar Ramani Huria, a “community-based mapping project… training university students and local community members to create highly accurate maps of the most flood-prone areas of the city using OpenStreetMap.”

Dar es Salaam (or Dar for short) is the fastest-growing city in Africa, making the identification of flood-prone zones during rainy seasons absolutely critical. The Ramani Huria crew has mapped the streams, ditches, and other waterways of Dar, as well as flown imagery (via drone) over critical parts of the city.

screen-shot-2016-10-15-at-3-01-04-pm

Problem Space:

The results are stunning, but using drone imagery and photogrammetric point clouds (instead of LiDAR) has its limitations.

One problem, that I’m not going to get into in any depth today, is the difficulty of conflating (vertically and horizontally) different datasets flown at different times with commodity GPS.

Another problem is the difficulty of turning photogrammetrically derived point clouds into digital terrain models. There is proprietary software that does this well (e.g., LAStools and others), but we seek a free and open source alternative. Let's visualize the problem. See below a photogrammetrically derived point cloud (processed in Pix4D, visualized at https://plas.io):

screen-shot-2016-10-15-at-3-15-44-pm

We can directly turn this into a digital surface model (DSM):

screen-shot-2016-10-15-at-3-24-23-pm


We can see the underlying topography, but buildings and trees are much of the signal in the surface model. If we, for example, calculate height above the nearest stream as a proxy for flooding, we get a noisy result.

OSM to the Rescue

If you recall, at the beginning of this post I said this is "a really fun project for me" because "most of the hard work is already done". Dar is a city with pretty much every building digitized. We can take advantage of this work to remove buildings from our point cloud. We'll use the utilities associated with the Point Data Abstraction Library (PDAL). Specifically, we'll use PDAL's ability to clip point cloud data with 2D OGR-compatible geometries (there's a great tutorial on this here).

The first thing we need to do is extract the buildings from OSM into shapefiles. For this, an easy way is to use Mapzen's Metro Extracts:

screen-shot-2016-10-15-at-3-58-35-pm

I chose Datasets grouped into individual layers by OpenStreetMap tags (IMPOSM) as this gives me the capacity to just choose “Buildings”. I loaded those buildings into a PostGIS database for the next steps. What I seek is a shapefile of all the areas which are not buildings, essentially from this:

building

To this:

no_building

For this we’ll need to use ST_Difference + ST_ConvexHull. ST_ConvexHull will create a shape of the extent of our data, and we’ll use ST_Difference to cookie-cutter out all the building footprints:

DROP TABLE IF EXISTS "salaam-buff-diff-split1";
CREATE TABLE "salaam-buff-diff-split1" AS
WITH subset AS (
	SELECT ST_Transform(geom, 32737) AS geom FROM "salaam-buildings" LIMIT 20000
	),
extent AS (
	SELECT ST_ConvexHull(ST_Union(geom)) AS geom FROM subset
	),
unioned AS (
	SELECT ST_Union(geom) AS geom FROM subset
	),
differenced AS (
	SELECT ST_Difference(a.geom, b.geom) AS geom FROM
	extent a, unioned b
	)
SELECT 1 AS id, geom FROM differenced;

This gives us a reasonable result. But, when we use this in PDAL, (I assume) we want a subdivided shape to ensure we don’t have to access the entire point cloud at any given time to do our clipping. We’ll add to this process ST_SubDivide, which will subdivide our shape into parts not to exceed a certain number of nodes. In this case we’ll choose 500 nodes per shape:

DROP TABLE IF EXISTS "salaam-buff-diff-split1";
CREATE TABLE "salaam-buff-diff-split1" AS
WITH subset AS (
	SELECT ST_Transform(geom, 32737) AS geom FROM "salaam-buildings" LIMIT 20000
	),
extent AS (
	SELECT ST_ConvexHull(ST_Union(geom)) AS geom FROM subset
	),
unioned AS (
	SELECT ST_Union(geom) AS geom FROM subset
	),
differenced AS (
	SELECT ST_Difference(a.geom, b.geom) AS geom FROM
	extent a, unioned b
	)
SELECT 1 AS id, ST_Subdivide(geom, 500) AS geom FROM
differenced;

subdivided

Finally, if we want to be sure to remove the points from the edges of buildings (we can assume the digitized buildings won’t perfectly match our point clouds), then we should buffer our shapes:

DROP TABLE IF EXISTS "salaam-buff-diff-split1";
CREATE TABLE "salaam-buff-diff-split1" AS
WITH subset AS (
	SELECT ST_Transform(geom, 32737) AS geom FROM "salaam-buildings" LIMIT 20000
	),
buffered AS (
	SELECT ST_Buffer(geom, 2, 'join=mitre mitre_limit=5.0') AS geom FROM subset
	),
extent AS (
	SELECT ST_ConvexHull(ST_Union(geom)) AS geom FROM subset
	),
unioned AS (
	SELECT ST_Union(geom) AS geom FROM buffered
	),
differenced AS (
	SELECT ST_Difference(a.geom, b.geom) AS geom FROM
	extent a, unioned b
	)
SELECT 1 AS id, ST_Subdivide(geom, 500) AS geom FROM
differenced;

If we recall our before point cloud:

screen-shot-2016-10-15-at-3-15-44-pm

screen-shot-2016-10-15-at-4-38-53-pm

Now we have filtered out most buildings:

screen-shot-2016-10-15-at-4-36-15-pm

screen-shot-2016-10-15-at-4-36-31-pm

In order to do this filtering in PDAL, we do two things. First, we create a JSON file that defines the filter:

{
  "pipeline":[
    "/data/2015-05-20_tandale_merged_densified_point_cloud_part_1.las",
    {
      "type":"filters.attribute",
      "dimension":"Classification",
      "datasource":"/data/salaam-buff-diff-split.shp",
      "layer":"salaam-buff-diff-split",
      "column":"id"
    },
    {
      "type":"filters.range",
      "limits":"Classification[1:1]"
    },
    "/data/2015-05-20_tandale_merged_densified_point_cloud_part_1_nobuild_buff.las"
  ]
}

Then we use this JSON definition to apply the filter:

nohup sudo docker run -v /home/gisuser/docker/:/data pdal/pdal:1.3 pdal pipeline /data/shape-clip.json

Problems continue

This looks pretty good, but as we interrogate the data, we can see the artifacts of trees and some buildings still linger in the dataset.

screen-shot-2016-10-15-at-4-37-38-pm

screen-shot-2016-10-15-at-4-38-03-pm

 

Enter the Progressive Morphological Filter

It is possible to use the shape of the surface model to filter out buildings and trees. To do so, we start with the assumption that building and vegetation shapes are distinct from the shape of the underlying ground. We have already used the hard work of the OpenStreetMap community to filter out most of the buildings, but we still have some buildings and plenty of trees. PDAL has another great tutorial for applying this filter, which we'll leverage.

Again we need a JSON file to define our filter:

{
  "pipeline": {
    "name": "Progressive Morphological Filter with Outlier Removal",
    "version": 1.0,
    "filters": [{
        "name": "StatisticalOutlierRemoval",
        "setMeanK": 8,
        "setStddevMulThresh": 3.0
      }, {
        "name": "ProgressiveMorphologicalFilter",
        "setCellSize": 1.5
    }]
  }
}

And then use that filter to remove all the morphology and statistical outliers we don’t want:

nohup sudo docker run -v /home/gisuser/docker/:/data pdal/pdal:1.3 pdal pcl -i /data/2015-05-20_tandale_merged_densified_point_cloud_part_1_nobuild_buff.las -o /data/nobuild_filtered2.las -p /data/sor-pmf.json

This command removes the colorization from the points, so we'll see coloring by height only:

screen-shot-2016-10-15-at-4-53-55-pm

screen-shot-2016-10-15-at-4-54-49-pm

 

What remains is to interpolate this into a digital terrain model. That is a project for another day.
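For what it's worth, later PDAL versions can do that interpolation directly with the writers.gdal stage. A sketch of what that pipeline might look like (the filenames and the resolution value are placeholders, and this stage postdates the PDAL 1.3 image used above):

```json
{
  "pipeline": [
    "/data/nobuild_filtered2.las",
    {
      "type": "writers.gdal",
      "filename": "/data/dtm.tif",
      "output_type": "idw",
      "resolution": 1.0
    }
  ]
}
```

This rasterizes the remaining ground points to a GeoTIFF using inverse-distance weighting; gaps left by the removed buildings and trees would still need filling, e.g. with gdal_fillnodata.py.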

Posted in 3D, Drone, Other, PDAL, Photogrammetry, UAS | 1 Comment »