Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Archive for the ‘GDAL’ Category

Gorilla research in Musanze, Rwanda: Hillshades continued

Posted by smathermather on January 30, 2017

I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But, it’s such a beautiful landscape, I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.

Background

First, some legacy. I read three great blog posts on hillshades. One was from ESRI revealing their “Next Generation Hillshade”. Drawing on Swiss Cartographic Traditions, these are some nice looking hillshades using lighting sources from multiple directions (more on this later):

ESRI’s Hillshades

Next, we look to Peter Richardson’s recent post on Mapzen’s blog regarding terrain simplification.

Terrain Generalization example from Mapzen

I tried (not nearly as hard as I should have) to understand their code, when I saw a link to Daniel Huffman’s blog post from 2011 on terrain generalization: On Generalization Blending for Shaded Relief.

That’s when I saw the equation:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax

I’ll let you read these posts rather than rehashing them, but here’s what I did toward adding to them. The gist of Daniel and Peter’s approach is to blend together a high-resolution and a lower-resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high-resolution DEM vs. the resampled version: if a location is much higher or lower than its neighbors, it is considered an important feature and given detail; otherwise the low-resolution version is used. (Actually, I suspect Mapzen’s approach only highlights top features, based on their diagrams, but I haven’t dived into the code to verify.)
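For the curious, the blend itself is easy to express with gdal_calc.py once the three rasters exist. This is only a sketch: the file names and the 0–100 weight scale are illustrative assumptions, not values from either post, and all three rasters are assumed to share the same dimensions (gdal_calc requires this).

# A = generalized (smoothed) DEM, B = detailed (original) DEM
# C = weight raster on a 0-100 scale (WeightMax = 100); high C favors the generalized DEM
gdal_calc.py -A generalized_dem.tif -B detailed_dem.tif -C weight.tif --outfile=blended_dem.tif --calc="(A*C + B*(100 - C))/100.0"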

The Need

Excuse the colors, we’ll fix those at the end, but this allows us to simplify something that looks like this:

Multicolor hillshade with no simplification

Into something that looks like this:

Multicolor hillshade with simplification

See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.

I developed a pure GDAL approach for the simplification. It is purely command line, with hardcoded file names, etc., but it could be driven from Python or another API and turned into a proper function. TL;DR: this is not yet refined, but it is quite effective.

Landscape Position

If you’ve been following my blog for a while, you may recall a series of blog posts on determining landscape position using GDAL.

Landscape position as calculated on a DEM

This, with a small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate a standard deviation from our simple difference data.
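As a sketch of that modification (the file names are placeholders, and I’m substituting a simple rescale of the absolute difference for a true focal standard deviation), the difference raster from the landscape position workflow can be turned into the 0–100 weight used in the blend above:

# diff.tif = original DEM minus the smoothed DEM, as in the landscape position posts
# Large local relief (here, more than ~20 units) gets weight 0, i.e. full detail is kept;
# flat areas get weight 100, i.e. the generalized DEM is used
gdal_calc.py -A diff.tif --outfile=weight.tif --calc="100 - minimum(abs(A)*5, 100)"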

The Tools

Generalization

Back to those ugly colors on my hillshade version of the map. They go deeper than just color choice — it’s hard not to get a metallic look to digital hillshades. We see it in ESRI’s venerable map and in Mapbox’s Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses — I’m not sure.

HDRI (Shading)

To avoid this with our process (HT Brandon Garmin), I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me for long enough knows where this is going: straight to POV-Ray…

Results

The results? Stunning (am I allowed to say that?):

Example of simplified and HDRI rendered hillshade.

Hillshade over the hills of Rwanda

The color is very simple here, as we’ll be overlaying data. Please stay tuned.

Posted in GDAL, Gorillas, Karisoke, Optics, POV-Ray, QGIS, R | Tagged: , , , , , , , | Leave a Comment »

Efficient delivery of raster data part 4

Posted by smathermather on May 1, 2016

Guest post from my colleague Patrick Lorch, who wrote up what we did the other day in order to view a whole bunch of tiled images in a directory in QGIS (I did some mild editing to his post; mistakes are mine). The great thing about the approach is that it generalizes to most tools that use GDAL for their raster API. This is part of a series. You can view this series in reverse with this post:

Building virtual datasets from a bunch of tiffs

What do you do when someone gives you aerial images stored as tiles in different directories representing different zoom levels? The goal is to make them easy to use as base layers in QGIS. The answer is to reference them in a virtual dataset (VRT).

gdalbuildvrt is the ticket

First make lists of tiffs

If the directory structure is something like this:

total 8047
drwxr-xr-x 7 pdl    4096 Apr 29 14:22 ./
drwxr-xr-x 5 pdl    4096 Apr 27 22:10 ../
drwxr-xr-x 2 pdl 1310720 Apr 22 21:37 0/
drwxr-xr-x 2 pdl  393216 Apr 22 22:54 1/
drwxr-xr-x 2 pdl   98304 Apr 28 14:44 2/
drwxr-xr-x 2 pdl   32768 Apr 28 14:44 3/
drwxr-xr-x 2 pdl    8192 Apr 28 14:44 4/

Then you first need a set of list files listing tiffs in each directory.

ls 0/*.tif > list0.txt
ls 1/*.tif > list1.txt
ls 2/*.tif > list2.txt
ls 3/*.tif > list3.txt
ls 4/*.tif > list4.txt

Now make the VRTs

gdalbuildvrt -input_file_list list0.txt aerial_2015_0.vrt
gdalbuildvrt -input_file_list list1.txt aerial_2015_1.vrt
gdalbuildvrt -input_file_list list2.txt aerial_2015_2.vrt
gdalbuildvrt -input_file_list list3.txt aerial_2015_3.vrt
gdalbuildvrt -input_file_list list4.txt aerial_2015_4.vrt
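
With more zoom levels, the list-making and VRT-building can be rolled into one small loop (a sketch, assuming the 0–4 directory layout above):

# Build one list file and one VRT per zoom-level directory
for i in 0 1 2 3 4; do
    ls "$i"/*.tif > "list$i.txt"
    gdalbuildvrt -input_file_list "list$i.txt" "aerial_2015_$i.vrt"
done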

Now you can open these in QGIS depending on what zoom level you need.

These VRTs may now be loaded as ordinary rasters in QGIS or whatever you please. In this case, we retiled with multiple resample levels (see this post for more info), so we’ll have to define the scale ranges at which the different image collections are visible.

Thanks for the write up Pat!

Posted in GDAL | Tagged: , , , | Leave a Comment »

(Whichever tiler you use) and efficient delivery of raster data (image pyramid layer) (update2)

Posted by smathermather on April 15, 2016

Subdivision of geographic data is a panacea to problems you didn’t know you had.

Maybe you deal with vector data, so you pre-tile your vector data to ship to the browser to render: you’re makin’ smaller data. Maybe you use cutting-edge PostGIS, so you apply ST_Subdivide to keep your data smaller than the database page size, like Paul Ramsey describes here. Smaller’s better… Or perhaps you are forever reprojecting your data in strange ways, across problematic boundaries, or need to buffer in an optimal coordinate system to avoid distortion. Regardless of the reason, smaller is better.

Maybe you aren’t doing vector work, but this time raster. What’s the equivalent tiling process?  I wrote about this for GeoServer almost 5 (eep!) years ago now (with a slightly more recent follow up) and much of what I wrote still applies:

  • Pre-tile your raw data in modest chunks
  • Use GeoTIFF so you can use internal data structures to have even smaller tiles inside your tiles
  • Create pyramids / pre-summarized data as tiles too.

Fortunately, while these posts were written for GeoServer, they apply to any tiler. Pre-process with gdal_retile.

gdal_retile.py -v -r bilinear -levels 4 -ps 6144 6144 -co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256" -s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt

Let’s break this down a little:

First we choose our resampling method for our pyramids (bilinear). Lanczos would also be fine here.

-r bilinear

Next we set the number of resampling levels. This will depend on the size of the dataset.

-levels 4

Next we specify the pixel and line size of the output GeoTIFFs. These can be pretty large, though we probably want to avoid a size that forces the use of BigTIFF (i.e., tiles over 4GB).

-ps 6144 6144

Now we get into the geotiff data structure — we internally tile the tifs, and make them 256×256 pixels. We could also choose 512. We’re just aiming to have our tile size near to the size that we are going to send to the browser.

-co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256"

Finally, we specify our coordinate system (this is State Plane Ohio), our output directory (which needs to be created ahead of time), and our input file list.

-s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt

That’s it. Now you have a highly optimized raster dataset that can:

  • get the level of detail necessary for a given request,
  • and extract only the data necessary for a given request.

Pretty much any geospatial solution which uses GDAL can leverage this work to make for very fast rendering of raster data to a tile cache. If space is an issue, apply compression options that match your use case, as sketched below.
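
A minimal sketch of what that might look like for 3-band RGB aerials, simply adding creation options to the gdal_retile command above (the JPEG settings are illustrative assumptions, not recommendations from the original post):

gdal_retile.py -v -r bilinear -levels 4 -ps 6144 6144 -co "TILED=YES" -co "BLOCKXSIZE=256" -co "BLOCKYSIZE=256" -co "COMPRESS=JPEG" -co "PHOTOMETRIC=YCBCR" -co "JPEG_QUALITY=85" -s_srs EPSG:3734 -targetDir aerial_2011 --optfile list.txt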

    Posted in GDAL | Tagged: , , , | 2 Comments »

    GDAL tools of the trade: GDAL, MrSid, and nearblack revisited

    Posted by smathermather on February 11, 2016

Back in 2011 I wrote a post about GDAL, translating MrSID files and trimming off the artifacts so mosaics work. I’m revisiting some similar work this week and realized I had lost my copy of FWTools that can deal with MrSIDs. The Internet Archive came to the rescue.

At some point LizardTech licensed a DLL that allowed for compilation compatible with FWTools licensing, so we could do things like use gdal_translate to convert a MrSID file to a TIFF. Later on, the licensing changed, but binaries created prior to the change retain that original license. The Wayback Machine archived one such copy here: https://web.archive.org/web/20130127013649/http://home.gdal.org/fwtools/FWTools202.exe .

Download, install, and now for some batch scripting fun…

    W:\Aircraft\naip_ortho>for %I in (*.sid) do gdal_translate %I %~nI.tif

W:\Aircraft\naip_ortho>gdal_translate ortho_1-1_1n_s_oh035_2015_1.sid ortho_1-1_1n_s_oh035_2015_1.tif
    Input file size is 58222, 49140
    0…10…20.
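
Since the title promises nearblack revisited: once the MrSIDs are converted, the collar trimming from the 2011 post looks roughly like this (a sketch only; the -near tolerance and the output name are assumptions, not values from that post):

nearblack -near 10 -setalpha -of GTiff -o ortho_1-1_1n_s_oh035_2015_1_trim.tif ortho_1-1_1n_s_oh035_2015_1.tif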

    Posted in GDAL, Image Processing | Tagged: , , , | 2 Comments »

    Landscape Position using GDAL — PT 3

    Posted by smathermather on November 25, 2014

    More landscape position pictures — just showing riparianess. See also

    https://smathermather.wordpress.com/2014/11/22/landscape-position-using-gdal/

    and

    https://smathermather.wordpress.com/2014/11/24/landscape-position-using-gdal-pt-2/

    Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL

    Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray | Tagged: , , , , , , , , , , | Leave a Comment »

    Landscape Position using GDAL — PT 2

    Posted by smathermather on November 24, 2014

    Just pretty pics today of estimated riparianess. If you prefer a bit of code, see previous post https://smathermather.wordpress.com/2014/11/22/landscape-position-using-gdal/

    Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray | Tagged: , , , , , , , , , , | 4 Comments »

    Landscape Position using GDAL

    Posted by smathermather on November 22, 2014

    Hat tip again to Seth Fitzsimmons. I’ve been looking for a good, easy to use smoothing algorithm for rasters. Preferably something so easy, I don’t even need to write a little python, and so efficient I can run it on 30GB+ datasets and have it complete before I get distracted again by the next shiny project (a few hours).

    Seth’s solution? Downsample to a low resolution using GDAL, then sample back up to a higher resolution in order to smooth the raster. My innovation to his approach? Use Lanczos resampling to keep location static, and get a great smooth model:

Unsmoothed DEM

Smoothed DEM

Code to do this in GDAL follows. “-tr” sets our resampling resolution, “-r lanczos” sets our resampling algorithm, and the “-co” flags are not strictly necessary, but I’ve got a 30GB dataset, so it helps to chop up the inside of the TIFF into little squares to optimize subsequent processing.

gdalwarp -tr 50 50 -srcnodata "0 -32767" -r lanczos -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem.tif oh_leap_dem_50.tif
gdalwarp -tr 10 10 -srcnodata "0 -32767" -r lanczos -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem_50.tif oh_leap_dem_10-50.tif
    

    At first this excited me for cartographic reasons. We can use this to simplify contours, and then use simplified contours at different zoom levels for maps:

    But, we can also use this for analyses. For example, if we difference these smoothed images with our original digital elevation model, we get a measurement of local elevation difference, the first step in establishing where valleys, ridges, and other land forms are.

# Resample to lower resolution
gdalwarp -tr 328.0523587211646 328.0523587211646 -srcnodata "0 -32767" -r lanczos -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem.tif oh_leap_dem_328.tif
# Upsample again to get nicely smoothed data
gdalwarp -tr 3.048293887897243 3.048293887897243 -srcnodata "0 -32767" -r lanczos -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem_328.tif oh_leap_dem_3-328.tif
# Merge the two datasets into a single image as separate bands to ensure they are the same dimensions
# (gdal_calc, as a wrapper for numpy, requires this)
gdal_merge.py -separate -o oh_leap_dem_3-328_m.tif oh_leap_dem.tif oh_leap_dem_3-328.tif
# And now we'll use gdal_calc to difference our elevation model with the smoothed one to get relative elevation
gdal_calc.py -A oh_leap_dem_3-328_m.tif -B oh_leap_dem_3-328_m.tif --A_band=1 --B_band=2 --outfile=oh_leap_dem_lp_328.tif --calc="A-B"
    

So, if we want a good proxy for riparian zones, we can use a technique like this instead of buffering our streams and rivers a fixed distance (in this case, I’ve used 4 different distances):

    Map of landscape position estimated valleys in Cuyahoga County, Ohio

    Map of landscape position estimated valleys in Cuyahoga County, Ohio

Pretty snazzy riparian finder. It seems to also find upland headwater wetlands (most of them historic and drained, in Cuyahoga County’s case). I am now running it on 4 million acres of Ohio at a 10 ft (~3 meter) resolution. It’s that efficient.
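
To actually pull the estimated valleys out of the relative elevation raster, a simple threshold with gdal_calc works. This is only a sketch: the -10 cutoff and the output name are illustrative assumptions, not the values behind the map above.

# Cells more than 10 units below their smoothed surroundings are flagged as valleys; everything else becomes nodata
gdal_calc.py -A oh_leap_dem_lp_328.tif --outfile=oh_leap_valleys.tif --calc="A < -10" --NoDataValue=0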

    Addendum: It also finds escarpment edges, like the Portage Escarpment in the above, so it is a mix of a few landforms. Darn handy nonetheless.

    Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray | Tagged: , , , , , , , , , , | 2 Comments »

    Custom projections using spatialreference.org and gdal

    Posted by smathermather on March 17, 2014

    Every now and then I get the urge to define my own projection. Usually, I sit down for a while, hit my head on the wall, and the urge passes. For a few years I have worked with the Lake Erie and Allegheny Partnership for Biodiversity on various projects. Now we are getting deep into region-wide data collection, and so I decided to define an Albers Equal Area projection for the task. Specifically, I sought to refine an existing projection to match the bounds and center of the LEAP region. Yes, I know. I will be cursed one day for this decision, but it sure beats the alternatives at the moment.

    Map of LEAP Region

    Beyond defining the projection, I wanted it as an ESRI WKT and in Proj4 format. Here are the steps I took. I used spatialreference.org to help me do the format translation.

http://spatialreference.org needs the projection input in a form called Well Known Text (WKT), specifically the Open Geospatial Consortium’s form of WKT. First, I tried uploading the description in ESRI’s WKT: this doesn’t work. So here’s ESRI’s WKT for the LEAP boundary as I created it a few days ago:

    PROJCS["LEAP_Contiguous_Albers_Equal_Area_Conic",GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.017453292519943295]],PROJECTION["Albers"],PARAMETER["False_Easting",0.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",-80.75],PARAMETER["Standard_Parallel_1",39],PARAMETER["Standard_Parallel_2",43],PARAMETER["Latitude_Of_Origin",41],UNIT["Meter",1.0]]
    

    I changed 4 things from the North America version of this:

    "Central_Meridian"
    "Standard_Parallel_1"
    "Standard_Parallel_2"
    "Latitude_Of_Origin"
    

    OGC WKT has different names for these, specifically:

    "longitude_of_center"
    "Standard_Parallel_1"
    "Standard_Parallel_2"
    "latitude_of_center"
    

    So, we can take USA Contiguous Albers Equal Area Conic from http://spatialreference.org/ref/esri/102003/

Go to its OGC WKT page:

    http://spatialreference.org/ref/esri/102003/ogcwkt/

    And thus extract the OGC WKT we want to modify:

    PROJCS["USA_Contiguous_Albers_Equal_Area_Conic",GEOGCS["GCS_North_American_1983",DATUM["North_American_Datum_1983",SPHEROID["GRS_1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]],PROJECTION["Albers_Conic_Equal_Area"],PARAMETER["False_Easting",0],PARAMETER["False_Northing",0],PARAMETER["longitude_of_center",-96],PARAMETER["Standard_Parallel_1",29.5],PARAMETER["Standard_Parallel_2",45.5],PARAMETER["latitude_of_center",37.5],UNIT["Meter",1],AUTHORITY["EPSG","102003"]]
    

    Modifying those parameters thusly:

    PARAMETER["longitude_of_center",-80.75],
    PARAMETER["Standard_Parallel_1",39],
    PARAMETER["Standard_Parallel_2",43],
    PARAMETER["latitude_of_center",41]
    

    Resulting in the following:

    PROJCS["USA_Contiguous_Albers_Equal_Area_Conic",GEOGCS["GCS_North_American_1983",DATUM["North_American_Datum_1983",SPHEROID["GRS_1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]],PROJECTION["Albers_Conic_Equal_Area"],PARAMETER["False_Easting",0],PARAMETER["False_Northing",0],PARAMETER["longitude_of_center",-80.75],PARAMETER["Standard_Parallel_1",39],PARAMETER["Standard_Parallel_2",43],PARAMETER["latitude_of_center",41],UNIT["Meter",1]]
    

    Which I have uploaded to a custom LEAP projection.

OK. Final step. gdalwarp was the original tool I wanted to use. It requires that we define our projection in a format called Proj4, or with an EPSG code. Since we invented this projection, it has no EPSG code. Now that we have a definition for it loaded into http://spatialreference.org, we can use the web site to give us the Proj4 definition:

    http://spatialreference.org/ref/sr-org/7915/proj4/

    This site now gives us the following:

    +proj=aea +lat_1=39 +lat_2=43 +lat_0=41 +lon_0=-80.75 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs
    
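As an aside, gdalsrsinfo can do the same translation locally: if the ESRI WKT above is saved to a .prj file (leap_albers.prj is an assumed name, not one from the original post), the following prints a Proj4 definition for it:

gdalsrsinfo -o proj4 leap_albers.prj
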

    The sample reprojection code from the gdalwarp website is as follows:

    gdalwarp -t_srs '+proj=utm +zone=11 +datum=WGS84' raw_spot.tif utm11.tif
    

    To use it for our own data, we’ll do something like this:

gdalwarp -t_srs '+proj=aea +lat_1=39 +lat_2=43 +lat_0=41 +lon_0=-80.75 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs' input.tif output.tif
    

    Boom! Data reprojected. And so another projection is born. Oops.

Postscript: Yes, I know about epsg.io; as far as I can tell, it does not yet have the ability to receive uploads.

    Edit from Howard Butler:

    ala:

    gdalwarp -t_srs "http://spatialreference.org/ref/sr-org/7915/proj4/" input.tif output.tif
    

    Brilliant!

    Posted in GDAL | Tagged: , , , , , | 3 Comments »

    Photogrammetry using Bundler, PMVS, GDAL, and PostGIS.

    Posted by smathermather on January 8, 2013

Photogrammetric-like orthophotos rendered with GDAL/PostGIS from 3D point clouds created in Bundler and PMVS? Yes indeedeedoo:

Point cloud in meshlab:

And an “orthophoto” derived from Voronoi polygons in PostGIS, rendered to a TIFF with GDAL:

    Hat tip to my3dscanner for making it so easy to generate the point cloud.  More on this later.

    Posted in Bundler, GDAL, Photogrammetry, PMVS, PostGIS | Tagged: , , , | Leave a Comment »

    Reprocessing imagery with GDAL and BASH — and then cleaning up afterward

    Posted by smathermather on August 13, 2012

I show a simple example today of using GDAL tools to convert from raster to polygons with Bash. That’s right: still no Pythonista here. I have poked ever so gently at the surface of Ruby and Rails recently, while watching another coder re-tune his brain to a new set of principles of least surprise.

    By the way, for my regular readers, please be patient with me over the next few quarters.  I am endeavoring to undertake a much larger writing project for the next few months, which will leave little time for this blog.  But, hopefully at the end, it will be worth everyone’s while.

    
#!/bin/bash

# Convert each matching GeoTIFF to a polygon shapefile, removing the
# original raster only if the conversion succeeded.

shopt -s nullglob                      # skip the loop entirely if nothing matches

for name in ????_???_gt.tif; do

    basename1="${name%.*}"             # Split off non-extension portion of name

    if [ -f "$basename1.shp" ]; then
        echo "$basename1.shp exists"
    else
        echo "Converting $name to $basename1.shp"

        gdal_polygonize.py "$name" -f "ESRI Shapefile" "$basename1.shp"

        if [ $? -eq 0 ]; then
            echo "Conversion a success."
            echo "Removing original file."
            rm "$name"
        else
            echo "mehmeh.  try again."
        fi
    fi

done
    

    Posted in GDAL | Tagged: , | Leave a Comment »