Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Posts Tagged ‘POV-Ray’

Gorilla research in Musanze, Rwanda: Hillshades continued

Posted by smathermather on January 30, 2017

I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But, it’s such a beautiful landscape, I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.

Background

First, some prior art. I read three great blog posts on hillshades. The first was from ESRI, revealing their “Next Generation Hillshade”. Drawing on Swiss cartographic traditions, these are some nice-looking hillshades that use light sources from multiple directions (more on this later):

ESRI’s Hillshades

Next, we look to Peter Richardson’s recent post on Mapzen’s blog regarding terrain simplification.

Terrain Generalization example from Mapzen


I tried (not nearly as hard as I should have) to understand their code, and then I saw a link to Daniel Huffman’s 2011 blog post on terrain generalization: On Generalization Blending for Shaded Relief.

That’s when I saw the equation:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax - Weight))) / WeightMax

I’ll let you read these posts rather than rehashing them, but here’s what I did to add to them. The gist of Daniel and Peter’s approach is to blend together a high-resolution and a lower-resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high-resolution DEM vs. the resampled version: if a location is much higher or lower than its neighbors, it is considered an important feature and given detail; otherwise the low-resolution version is used. (Actually, I suspect Mapzen’s approach only highlights the top features, based on their diagrams, but I haven’t dived into the code to verify.)
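To make the blend concrete, here is a minimal NumPy sketch of that equation (my own illustration, not Daniel’s or Peter’s code; the array names and the 0-255 weight range are assumptions):

import numpy as np

def blend_dems(detailed, generalized, weight, weight_max=255.0):
    # Where the weight is high, the generalized (smoothed) DEM dominates;
    # where it is low, the detailed DEM shows through.
    detailed = detailed.astype(np.float64)
    generalized = generalized.astype(np.float64)
    weight = weight.astype(np.float64)
    return (generalized * weight + detailed * (weight_max - weight)) / weight_max

The interesting part, of course, is how the weight raster gets built, which is where landscape position comes in below.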

The Need

Excuse the colors (we’ll fix those at the end), but this approach allows us to simplify something that looks like this:

Multicolor hillshade with no simplification


Into something that looks like this:

Multicolor hillshade with simplification


See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.

I developed a pure GDAL approach for the simplification. It is purely command line, has hardcoded file names, etc., but it could be wrapped in a Python or other API and turned into a proper function. TL;DR: this is not yet refined, but it is quite effective.

Landscape Position

If you’ve been following my blog for a while, you may recall a series of blog posts on determining landscape position using GDAL.

Landscape position as calculated on a DEM

Landscape position

This, with a small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate the standard deviation of our simple difference data.
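As a sketch of that modification (my reconstruction in NumPy/SciPy rather than the pure-GDAL commands I actually ran; the window size and the rescaling to a 0-255 weight are assumptions):

import numpy as np
from scipy import ndimage

def stddev_weight(dem, size=15, weight_max=255.0):
    # Simple landscape position: the DEM minus its neighborhood mean.
    dem = dem.astype(np.float64)
    diff = dem - ndimage.uniform_filter(dem, size=size)
    # Standard deviation of that difference within the same window.
    mean = ndimage.uniform_filter(diff, size=size)
    mean_sq = ndimage.uniform_filter(diff ** 2, size=size)
    stddev = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    # Invert and rescale so busy terrain (high std. dev.) gets a low weight
    # and therefore keeps its detail in the blend above.
    return weight_max * (1.0 - stddev / stddev.max())

Fed into the blend equation, that keeps the hilltops and valleys crisp while the quieter hillside facets fall back to the resampled surface.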

The Tools

Generalization

Back to those ugly colors on my hillshade version of the map. The problem goes deeper than color choice: it’s hard not to get a metallic look in digital hillshades. We see it in ESRI’s venerable map and in Mapbox’s Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses; I’m not sure.

HDRI (Shading)

To avoid this in our process (HT Brandon Garmin), I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me for long enough knows where this is going: straight to POV-Ray.

Results

The results? Stunning (am I allowed to say that?):

Example of simplified and HDRI rendered hillshade.


Hillshade over the hills of Rwanda

The color is very simple here, as we’ll be overlaying data. Please stay tuned.

Posted in GDAL, Gorillas, Karisoke, Optics, POV-Ray, QGIS, R

Gorilla research in Musanze, Rwanda: Hillshades!

Posted by smathermather on January 22, 2017

In previous posts (here, here, and here) I discussed a past and upcoming trip to Rwanda to help with GIS and gorilla research.

No more in-depth write-up yet, but I’ve been working on some of the cartographic products to show in the background of maps. Since Rwanda is so beautifully hilly (read: mountainous) and the research is focused on the Virunga Mountains (volcanoes) themselves, this is a huge opportunity for some hillshades to figure in the background of some maps.

So… the following image probably won’t make its way into production maps, but it is very painterly and beautiful, so I thought I’d share it:

Hillshade of the Virungas and surrounding areas.


Posted in Gorillas, Karisoke, QGIS, R

Cartographically Pleasing Vegetation Boundaries– with PovRay, PostGIS, and LiDAR

Posted by smathermather on May 18, 2012

Just pretty pictures today.

Posted in Cartography, Database, Optics, Other, PostGIS, PostgreSQL, POV-Ray

Fast Calculation of Voronoi Polygons in PovRay– Applied

Posted by smathermather on January 30, 2012

In a further exploration of using POV-Ray for fast calculation of Voronoi polygons, let’s look at a real stream system as an example. Here’s where the magic comes out, and where medial axes are found.

First, the simple but real example.

Here’s the POV-Ray code:


#include "transforms.inc"
#version 3.6;

#include "edge_coords1.inc"

#declare Camera_Location = <2258076, 10, 659923>;
#declare Camera_Lookat   = <2258076, 0, 659923>;
 #declare Camera_Size = 1800;

camera {
orthographic
location Camera_Location
look_at Camera_Lookat
right Camera_Size*x
up Camera_Size*y
}

background {color <0.5, 0.5, 0.5>}

//union {

#declare Rnd_1 = seed (1153);

#declare LastIndex = dimension_size(edge_coords, 1)-2;
#declare Index = 0;
#while(Index <= LastIndex)

cone {
<0.0, -5, 0.0>, 30,  <0.0, 5, 0.0>, 0.0
     hollow
material {
texture {
pigment {
rgb <rand(Rnd_1)*2 + 1, rand(Rnd_1)*5 + 2, rand(Rnd_1)*5 +5>
}
finish {
diffuse 0.6
                 brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate edge_coords[Index]
}


#declare Index = Index + 1;

#end


// Start of the included coordinate array (presumably edge_coords1.inc), truncated:
#declare edge_coords = array[16842]
{<2256808.71000000000, 0.00000000000, 660094.46988100000> ,
 <2256810.06170000000, 0.00000000000, 660092.99580200000> ,
 <2256811.41308000000, 0.00000000000, 660091.52172400000> ,
 <2256811.68965000000, 0.00000000000, 660091.21988700000> ,
 <2256813.46393000000, 0.00000000000, 660090.29698900000> ,
 <2256815.23820000000, 0.00000000000, 660089.37376200000> ,
 <2256817.01248000000, 0.00000000000, 660088.45086400000> ,
 <2256818.78675000000, 0.00000000000, 660087.52763700000> ,
 <2256820.56103000000, 0.00000000000, 660086.60473900000> ,
 <2256822.33530000000, 0.00000000000, 660085.68151200000> ,
 <2256824.10958000000, 0.00000000000, 660084.75861400000> ,
 <2256825.88352000000, 0.00000000000, 660083.83538800000> ,
 <2256827.65780000000, 0.00000000000, 660082.91248900000> ,
 <2256829.43207000000, 0.00000000000, 660081.98926300000> ,
 <2256831.20635000000, 0.00000000000, 660081.06636400000> ,

Posted in Analysis, Optics, Other, POV-Ray

Fast Calculation of Voronoi Polygons in PovRay

Posted by smathermather on January 20, 2012

Yet further abuse follows in the application of POV-Ray to spatial analyses. Be forewarned…

Calculating Voronoi polygons is a useful step in computing an initial approximation of a medial axis: we densify the vertices of the polygon for which we want a medial axis, then calculate Voronoi polygons on those vertices. I’ll confess: I’ve been using ArcGIS for this step. There. I said it. My name is smathermather, and I use ArcGIS. It’s a darn useful tool, and I’m not ashamed to say that I use it all the time.

But I do like the idea of doing this with non-proprietary, open-source tools. I’ve contemplated the approach Regina Obe blogs about, using PL/R + PostGIS, to calculate them, but the installation of PL/R on my system failed my initial Yak Shaving Test.
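For what it’s worth, the densify-then-Voronoi step itself is easy enough with open-source Python tools. A minimal sketch (the toy polygon and the 0.25-unit spacing are made up; this is not the workflow I actually ran):

import numpy as np
from scipy.spatial import Voronoi

def densify_ring(coords, spacing):
    # Insert evenly spaced vertices along a closed ring of (x, y) coordinates.
    coords = np.asarray(coords, dtype=float)
    out = []
    for a, b in zip(coords, np.roll(coords, -1, axis=0)):
        n = max(int(np.ceil(np.hypot(*(b - a)) / spacing)), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            out.append(a + t * (b - a))
    return np.array(out)

ring = [(0, 0), (10, 0), (10, 6), (4, 6), (4, 3), (0, 3)]  # toy polygon boundary
seeds = densify_ring(ring, spacing=0.25)
vor = Voronoi(seeds)  # Voronoi edges interior to the ring approximate the medial axis

But that still leaves the question of speed on big, dense boundaries, which is where the next idea comes in.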

So, with some google-fu, I stumbled upon articles on using 3D approaches to calculate 2D Voronoi diagrams, and my brain went into high gear. Right cones of equal size, viewed orthographically from above, generate lovely Voronoi diagrams; the only caveat is that you need to know the maximum distance you want represented in your diagram. For my use cases, this isn’t a limitation at all.
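To see why the trick works, here is a tiny NumPy illustration of mine (using the same six seed points as the scene below): every seed gets an identical cone with its apex up, so at any pixel the tallest, and therefore visible, surface belongs to the nearest seed, which is exactly the Voronoi assignment. The cone radius is what caps the maximum distance.

import numpy as np

seeds = np.array([(0.0, 0.0), (-0.5, -0.5), (-0.5, 0.5),
                  (0.3, 0.3), (0.4, -0.2), (0.0, -0.7)])

# With equal cones of slope s and apex height h, the surface height of cone i
# at pixel p is h - s * distance(p, seed_i), so the highest (visible) cone is
# simply the nearest seed.
xs, ys = np.meshgrid(np.linspace(-1.5, 1.5, 300), np.linspace(-1.5, 1.5, 300))
dists = np.sqrt((xs[..., None] - seeds[:, 0]) ** 2 +
                (ys[..., None] - seeds[:, 1]) ** 2)
cells = np.argmin(dists, axis=-1)  # argmax of (h - s*d) == argmin of d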

And so we abuse POV-Ray, yet again, to do our dirty work in analyses. I’ll undoubtedly have some follow-up posts on this, but in the meantime, some diagrams and really dirty code:


global_settings {
	ambient_light rgb <0.723364, 0.723368, 0.723364>
}

camera {
	orthographic 
	location < 0.0, 5, 0>
	right x * 3
	up y * 3
	look_at < 0.0, 0.0, 0.0>
}

light_source {
	< 0.0, 100, 0>
	rgb <0.999996, 1.000000, 0.999996>
	parallel
	point_at < 0.0, 0.0, 0.0>
}


light_source {
	< 0.0, 100, 100>
	rgb <0.999996, 1.000000, 0.999996>
	parallel
	point_at < 0.0, 0.0, 0.0>
}

light_source {
	< 100, 100, 0>
	rgb <0.999996, 1.000000, 0.999996>
	parallel
	point_at < 0.0, 0.0, 0.0>
}

// One identical right cone per seed point, apex up. Viewed from directly
// above with the orthographic camera, the nearest apex is always the
// visible (topmost) surface, so each colored region is that seed's
// Voronoi cell; the cone radius caps how far a cell can reach.
cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <0.089677, 0.000000, 1.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
}

cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <1, 0.000000, 1.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
	translate <-0.5,0,-0.5>
}

cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <1.000000, 0.000000, 0.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
	translate <-0.5,0,0.5>
}

cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <1.000000, 1.000000, 0.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
	translate <0.3,0,0.3>
}

cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <0.000000, 1.000000, 0.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
	translate <0.4,0,-0.2>
}

cone {
	<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
	hollow
	material {
		texture {
			pigment {
				rgb <1, 1.000000, 1.000000>
			}
			finish {
				diffuse 0.6
				brilliance 1.0
			}
		}
		interior {
			ior 1.3
		}
	}
	translate <0,0,-.7>
}

Posted in Analysis, Optics, Other, POV-Ray

Landscape Position: Conclusion? (part 2)

Posted by smathermather on December 7, 2011

From earlier post:

“I’ve managed to pilot most of a fast, high-resolution landscape position workflow with POV-Ray as my magic tool. The final steps I hope to pipe through PostGIS Raster. In the meantime, a screenshot and description: blues are riparian; raw ochre, etc., are upland categories; grey is mostly flat lake plain and mid slopes. All of this is derived from just a high-res DEM input (no hydro lines as input, so it works in areas where you don’t know where the streams may be). There will be more categories in the final version. In the meantime, welcome to December.”

I didn’t use PostGIS Raster, but rather the incredibly useful gdal_calc.py, to finish out my landscape position calculations. I will endeavor to post my code to GitHub soon, but the parts and pieces include GDAL and POV-Ray. POV-Ray helps with the sampling (really!) of nearby neighbors in the raster DEM, and GDAL does the averaging and differencing of those samples to get relative landscape position. I spent some time yesterday scratching my head over how to show all the new landscape position information on a readable and usable map, and after discussion with colleagues, I decided to use it to divide the world into two categories: riparian & lowland vs. headwaters & upland (not reflected yet in the labels). To find out more about landscape position, follow this link, or better yet this one. (BTW, the green is park land, blue is riparian/lowland/stream networks, and purple is the basin boundary.)

Riparian map based on landscape position, calculated using POV-Ray and GDAL.
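For a rough idea of the averaging and differencing, here is a NumPy/SciPy stand-in for the POV-Ray neighbor sampling and gdal_calc.py arithmetic described above (the window size and the zero threshold are assumptions):

import numpy as np
from scipy import ndimage

def relative_landscape_position(dem, size=31):
    # Elevation minus the neighborhood mean: negative values sit below their
    # surroundings, positive values above.
    dem = dem.astype(np.float64)
    return dem - ndimage.uniform_filter(dem, size=size)

def two_categories(position, threshold=0.0):
    # 1 = riparian & lowland, 2 = headwaters & upland
    return np.where(position < threshold, 1, 2)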

Posted in Analysis, Ecology, Landscape Position, Other, POV-Ray

Landscape Position: Conclusion?

Posted by smathermather on November 30, 2011

I’ve managed to pilot most of a fast, high-resolution landscape position workflow with POV-Ray as my magic tool. The final steps I hope to pipe through PostGIS Raster. In the meantime, a screenshot and description: blues are riparian; raw ochre, etc., are upland categories; grey is mostly flat lake plain and mid slopes. All of this is derived from just a high-res DEM input (no hydro lines as input, so it works in areas where you don’t know where the streams may be). There will be more categories in the final version. In the meantime, welcome to December.


Posted in Analysis, Ecology, Landscape Position, Other, POV-Ray

Landscape Position and McNab Indices (cont.)

Posted by smathermather on January 30, 2011

I typed that last one too quickly (too many typos), but my wife says I’m not supposed to revise blog posts, just move on…

So, for clarity, let’s talk a little more about McNab indices. Field-derived McNab indices are a measure of the average angle from the observer to the horizon (the mesoscale landform index), or from the observer to another field person a set distance away, e.g. 10 m or 18 m, or whatever plot size is settled upon (the minor landform index). Typically these are measured in the field in a discrete number of directions, e.g. the 4 cardinal directions, or the 4 cardinal plus the 4 ordinal (NE, NW, SE, SW) directions. The landform image in this post and the last is a calculation of the mesoscale landform index, which is harder to do in a classic GIS than the minor landform index (I’ll have a follow-up post on minor landform, probably using ArcGIS).

To calculate the mesoscale landform values computationally, we need the angle to the horizon in a certain number of directions for each point in the landscape. This has the potential to be computationally intensive and complicated. If, however, we conceive of the problem differently, as a 3D calculation we can perform in POV-Ray, arriving at the answer is simplified by the coding already done by the POV-Ray developers.

Essentially, McNab mesoscale indices are a field proxy for the solid angle of open sky at a site, measured in steradians.

What does that mean? Well, essentially, a map of steradians is a proxy for the shading of a white, uneven surface on a cloudy day, or how much diffuse light is available to a given site. Used in combination with site aspect, this is enough information to determine most of the light conditions of a site, which is why McNab indices, in combination with other factors, are a good predictor of site productivity and correlate with different plant communities across the landscape.

How does an uneven white surface shade on a cloudy day? Steeper areas with more of the sky occluded are darker, while wide-open spaces, like the bottoms of river valleys and the tops of ridges and plateaus, are brighter. If you want to witness this effect, look at snow on the ground on a cloudy day (and what a great winter to do it). (The only difference with snow is subsurface scattering, which evens out the light quite a bit.) You’ll notice that the divots in the snow are darker than the peaks, and the edges of the divots are darkest of all.

The question, then, is how do we compute this within POV-Ray? We could use radiance as a global illumination model, but the calculation of inter-reflectivity at the core of radiance, while an important real factor in the landscape, would fail to replicate the original mesoscale landform index. Instead, we set up a series of lights in a dome to illuminate the whole sky sphere, blur them a bit, and call it a day, a technique developed for HDRI rendering by Ben Weston, whose code I borrowed from heavily. The more lights an element of the landscape is exposed to, the brighter it renders, and vice versa, essentially making the brightness of an element proportional to its exposure to the sky sphere. Unlike Ben, I used a simple white sphere to get even lighting.
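For the curious, here is a sketch of how such a dome of lights might be generated as a POV-Ray include file (my own stand-in, not Ben Weston’s code; the light counts, radius, and file name are all assumptions):

import numpy as np

def write_light_dome(path="light_dome.inc", n_rings=8, lights_per_ring=16,
                     radius=100000.0):
    # Parallel white lights spread over the upper hemisphere, roughly
    # approximating an evenly lit sky. (A more careful version would weight
    # the rings by solid angle.)
    lines = []
    for i in range(1, n_rings + 1):
        alt = (i / (n_rings + 1)) * (np.pi / 2)   # altitude above the horizon
        for j in range(lights_per_ring):
            az = 2 * np.pi * j / lights_per_ring  # azimuth around the dome
            x = radius * np.cos(alt) * np.cos(az)
            y = radius * np.sin(alt)
            z = radius * np.cos(alt) * np.sin(az)
            lines.append("light_source {{ <{:.1f}, {:.1f}, {:.1f}> "
                         "rgb <1, 1, 1> parallel point_at <0, 0, 0> }}"
                         .format(x, y, z))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")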

Rendering at 16-bit depth gives a possible range of values from 0 to 65535. Let’s assume that a linear transformation of these values yields the proportion of the sky sphere visible from each site. From there, transformation to steradians and then to solid angles in degrees is trivial. Once it’s in degrees, it represents the same kind of value as a mesoscale landform index would give us (more later…).
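To make that "trivial" step explicit (my reading of it, so treat the hemisphere assumption and the mapping to an equivalent horizon angle as mine):

import numpy as np

def sky_exposure(pixel_value):
    # Assume the 16-bit brightness maps linearly onto the fraction of the
    # upward hemisphere that is open sky.
    fraction = pixel_value / 65535.0
    steradians = fraction * 2.0 * np.pi  # a full hemisphere is 2*pi steradians
    # A horizon at a uniform elevation angle theta leaves a sky cap of
    # 2*pi*(1 - sin(theta)) steradians, so the equivalent horizon angle is:
    horizon_angle_deg = np.degrees(np.arcsin(1.0 - fraction))
    return fraction, steradians, horizon_angle_deg

That equivalent horizon angle, in degrees, is the kind of quantity that lines up with a field-measured mesoscale landform index.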

Posted in Ecology, Landscape Position, POV-Ray

Landscape Position and McNab Indices

Posted by smathermather on January 29, 2011

Just a quick teaser post for our forestry/ecology readers out there. I have developed a methodology for calculating McNab indices that directly corresponds with the field technique (unlike, as far as I know, any previous GIS-based techniques, which are probably adequate proxies).

What is a McNab index? Well, there are two kinds, minor landform and mesoscale landform; both are field-measured topographic position (terrain shape) indices that inform where ecological processes occur across the landscape. So, for example, some forest plant types like ridges, and some like ravines. The question is, quantitatively, how raviney or ridgey should it be for a given species, association, or alliance, and how is it measured? Basically, the two McNab indices are the average angle to the horizon, and the average angle to the local landscape, e.g. 10 m away. See e.g. http://www.treesearch.fs.fed.us/pubs/1150 and http://www.treesearch.fs.fed.us/pubs/24472.
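As a rough raster translation of the second (minor landform) index, here is a sketch of my own (not the methodology teased in this post; the radius, grid spacing, and eight directions are assumptions):

import numpy as np

def minor_landform_index(dem, cell_size, radius=10.0):
    # Mean angle (degrees) from each cell to the terrain roughly `radius`
    # away in the 8 cardinal/ordinal directions: negative on convex sites
    # (ridges), positive on concave sites (coves and ravines).
    dem = np.asarray(dem, dtype=float)
    step = max(int(round(radius / cell_size)), 1)
    angles = []
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1),
                   (-1, -1), (-1, 1), (1, -1), (1, 1)]:
        shifted = np.roll(dem, shift=(dy * step, dx * step), axis=(0, 1))
        dist = radius * (np.sqrt(2.0) if dy and dx else 1.0)
        angles.append(np.degrees(np.arctan2(shifted - dem, dist)))
    # np.roll wraps at the array edges; a real run would mask a border of
    # width `step` cells.
    return np.mean(angles, axis=0)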

So here’s the output (more to come), including code. The darker the shade, the lower the relative position; the lighter the shade, the higher the relative position, e.g. ridges and plains. I know, it just looks like a hillshade, but there’s deeper stuff happening here:

Posted in Landscape Position

YouTube Canopy Video

Posted by smathermather on February 9, 2010

Earlier, I posted some simple pictures from this short video clip, which uses LiDAR to determine vegetation shape, animated in POV-Ray. Now, I post the video itself (all 7 seconds):

I did what YouTube says not to do and uploaded a vignetted version of the video. Such is life. I’ll upload a better one eventually.

Posted in POV-Ray