# Posts Tagged ‘povray’

## Gorilla research in Musanze, Rwanda: Hillshades continued

Posted by smathermather on January 30, 2017

I’ve been working on base cartography for the research area in Rwanda. Unlike here in Cleveland, we have some great topography to work with, so we can leverage that for basemaps. But, it’s such a beautiful landscape, I didn’t want to sell these hillshades short by doing a halfway job, so I’ve been diving deep.

# Background

First, some background. I read three great blog posts on hillshades. One was from ESRI revealing their “Next Generation Hillshade”. Drawing on Swiss cartographic traditions, these are some nice-looking hillshades using lighting sources from multiple directions (more on this later):

Next, we look to Peter Richardson’s recent post on Mapzen’s blog regarding terrain simplification.

Terrain Generalization example from Mapzen

I tried (not nearly as hard as I should have) to understand their code, when I saw a link to Daniel Huffman’s blog post from 2011 on terrain generalization: On Generalization Blending for Shaded Relief.

That’s when I saw the equation:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax - Weight))) / WeightMax

I’ll let you read these posts rather than rehashing, but here’s what I did toward adding to them. The gist of Daniel and Peter’s approach is to blend together a high resolution and a lower resolution version of the DEM based on a weighting factor. Both use a standard deviation filter to determine where to use the high resolution DEM vs. the resampled version — if a location is much higher or lower than its neighbors, it is considered an important feature and given detail; otherwise the low resolution version is used (actually, I suspect Mapzen’s approach only highlights top features, based on their diagrams, but I haven’t dived into the code to verify).
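That equation is just a per-pixel weighted average of the two DEMs. A minimal sketch in Python (function and variable names are mine, not from either post):

```python
def blend(generalized, detailed, weight, weight_max):
    """Blend a generalized and a detailed DEM value: weight at weight_max
    favors the generalized surface, weight of zero keeps full detail."""
    return (generalized * weight + detailed * (weight_max - weight)) / weight_max

# Per-pixel over two toy elevation rows, with a per-pixel weight row
gen_row, det_row, w_row = [100, 100], [96, 140], [10, 2]
blended = [blend(g, d, w, 10) for g, d, w in zip(gen_row, det_row, w_row)]
```

The first toy pixel takes the generalized value entirely; the second keeps most of its detail because its weight is low.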

## The Need

Excuse the colors, we’ll fix those at the end, but this allows us to simplify something that looks like this:

Into something that looks like this:

See how the hilltops and valleys remain in place and at full detail, but some of the minor facets of the hillsides are simplified? This is our aim.

I developed a pure GDAL approach for the simplification. It is purely command line, has hardcoded file names, etc., but could be wrapped in Python or another API and turned into a proper function. TL;DR: this is not yet refined but quite effective.

## Landscape Position

If you’ve been following my blog for a while, you may recall a series of blog posts on determining landscape position using GDAL.

Landscape position

This, with small modification, is a perfect tool for determining where to retain DEM features and where to generalize. The one modification is to calculate standard deviation from our simple difference data.
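The landscape position posts boil down to “cell minus neighborhood mean,” and the modification here just runs a standard deviation filter over that difference surface to get the blending weight. A pure-Python sketch on a toy grid (function names are mine; the real workflow runs on GDAL rasters):

```python
import statistics

def local_mean(grid, r, c, radius=1):
    """Mean of the square window around (r, c), clipped at the grid edges."""
    vals = [grid[i][j]
            for i in range(max(0, r - radius), min(len(grid), r + radius + 1))
            for j in range(max(0, c - radius), min(len(grid[0]), c + radius + 1))]
    return sum(vals) / len(vals)

def landscape_position(grid, radius=1):
    """Simple difference surface: positive = locally high, negative = locally low."""
    return [[grid[r][c] - local_mean(grid, r, c, radius)
             for c in range(len(grid[0]))]
            for r in range(len(grid))]

def stdev_filter(grid, r, c, radius=1):
    """Standard deviation of the window around (r, c); run on the difference
    surface, this becomes the weight for blending detailed vs. generalized DEMs."""
    vals = [grid[i][j]
            for i in range(max(0, r - radius), min(len(grid), r + radius + 1))
            for j in range(max(0, c - radius), min(len(grid[0]), c + radius + 1))]
    return statistics.pstdev(vals)

# A lone peak scores locally high at its center; a flat plain scores zero everywhere
peak = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
diff = landscape_position(peak)
```

Cells where the filtered standard deviation is near zero are featureless and safe to generalize; high values mark the hilltops and valleys to keep at full detail.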

# The Tools

## Generalization

Back to those ugly colors on my hillshade version of the map. They go deeper than just color choice — it’s hard not to get a metallic look to digital hillshades. We see it in ESRI’s venerable map and in Mapbox’s Outdoor style. Mapzen may have avoided it by muting the multiple-light approach that ESRI lauds and Mapbox uses — I’m not sure.

To avoid this with our process (HT Brandon Garmin) I am using HDRI environment mapping for my lighting scheme. This allows for more complicated and realistic lighting that is pleasing to the eye and easy to interpret. Anyone who has followed me for long enough knows where this is going: straight to Pov-Ray… :

# Results

The results? Stunning (am I allowed to say that?):

Example of simplified and HDRI rendered hillshade.

The color is very simple here, as we’ll be overlaying data. Please stay tuned.

Posted in GDAL, Gorillas, Karisoke, Optics, POV-Ray, QGIS, R

## Gorilla research in Musanze, Rwanda: Hillshades!

Posted by smathermather on January 22, 2017

In previous posts here, here, and here I discussed a then and future trip to Rwanda to help with GIS and gorilla research.

No more in-depth write-up yet, but I’ve been working on some of the cartographic products to show in the background of maps. Since Rwanda is so beautifully hilly (read: mountainous) and the research is focused on the Virunga Mountains (volcanoes) themselves, this is a huge opportunity for some hillshades to figure in the background of some maps.

So… the following image probably won’t make its way into production maps, but it is very painterly and beautiful, so I thought I’d share it:

Hillshade of the Virungas and surrounding areas.

Posted in Gorillas, Karisoke, QGIS, R

## pov-viewshed on GitHub

Posted by smathermather on July 21, 2013

GitHub week for me:

https://github.com/smathermather/pov-viewshed

More to come, but in the meantime, enjoy.

Posted in 3D, Analysis, github, Optics, POV-Ray, Viewshed

## Way beyond red-dot fever– bees hives and overlapping home ranges

Posted by smathermather on April 6, 2013

Last week, post Boston Code Sprint, I spent a couple of hours playing with bee data, specifically bee keeper data for Norfolk County Massachusetts.

My friend Eric (a bee keeper of 4 hives in said county) says that worker bees can fly as far as 3 miles for nectar, but after that approximate limit, they hit diminishing returns relative to the calories they burn.

Proximity of bee hives is important for reasons of competition (although bees are quite friendly with each other on the flower), but also for disease transfer, which, in this age of not-so-easy bee keeping, is an important factor.  So we set out to map bees.

With a list of addresses, QGIS, and the mmqgis extension (see Plugins:Fetch Python Plugins)  which uses Google services for address lookup, we get a nice little set of points for beekeepers in Norfolk County.

Having a good context map for any such set of points is important.  This is the part of the project I worried most about.  Fortunately, we also have the OpenLayers Plugin, which allows us to add a number of web maps as background (BTW, when we want print quality maps down the road, this will cause us suffering, but for now it is a nice stand-in for a good base layer).  We decided the colorless Stamen Toner map was perfect for our purposes.

Now, let’s buffer these points– give those bees 1.5 miles of free rein, and see where the overlaps in range are (we could make these 3 mile radii, but you get the point):
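The overlap question behind that buffered map reduces to circle intersection: two foraging ranges overlap exactly when the hives sit closer together than twice the buffer radius. A toy check in Python (coordinates in miles; names are mine):

```python
import math

def ranges_overlap(hive_a, hive_b, radius_miles=1.5):
    """Two circular foraging ranges overlap when the hive-to-hive
    distance is less than twice the buffer radius."""
    dx = hive_a[0] - hive_b[0]
    dy = hive_a[1] - hive_b[1]
    return math.hypot(dx, dy) < 2 * radius_miles

# Hives 2.9 miles apart overlap at a 1.5-mile radius; 3.1 miles apart do not
close_pair = ranges_overlap((0.0, 0.0), (2.9, 0.0))
far_pair = ranges_overlap((0.0, 0.0), (3.1, 0.0))
```

In practice the buffering and intersection happen in QGIS, but this is all the geometry the map is showing.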

The map tells us something of overlapping ranges, etc., but it seems like we could do something more sophisticated.  Anyone who has followed my blog for a while will not be surprised by the employment of Voronoi polygons… .

Eric, having suffered an explanation of different ways to calculate Voronoi earlier in the week, remembered the relationship between buffers and Voronoi, and asked if we could combine them.   In this case, picture soap bubbles of your youth.  Blow a soap bubble for each bee point, and the walls of bubbles in between form Voronoi polygons.  In our case, we use cones instead of soap bubbles, but after a couple hours of playing in PovRay, we made some code like this:

```
#version 3.6;

// Pov Includes
#include "colors.inc"
#include "transforms.inc"
#include "cone_coords.inc"

#declare Camera_Size = 500000; //freekin huge

background {color <0, 0, 0>}

//Set the center of image
#declare scene_center_x=759496;
#declare scene_center_y=2861454;

#declare Camera_Location = <scene_center_x,5,scene_center_y> ;
#declare Camera_Lookat = <scene_center_x,0,scene_center_y> ;

// Use orthographic camera for true plan view
camera {
orthographic
location Camera_Location
look_at Camera_Lookat
right Camera_Size*x
up Camera_Size*y
}

// Union all the cones together into one object
#declare coner = union {
cone { <0,0,0>,15840,<0,5,0>,0 }
};

union {

#declare LastIndex = dimension_size(cone_coords, 1)-2;
#declare Index = 0;
#while(Index <= LastIndex)
object {
coner
translate cone_coords[Index]
}
#declare Index = Index + 1;
#end

// Pigment cones according to distance from camera
pigment {
gradient x // gradient runs along the camera axis
color_map {
[0 color Blue ] // Blue to orange will help us employ blue as the alpha values later
[1 color Orange ]
}
scale <vlength(Camera_Location-Camera_Lookat),1,1>
Reorient_Trans(x, Camera_Lookat-Camera_Location)
translate Camera_Location
}
finish {ambient 1}
}
```

We get a funky result (yes, this is the result I hoped for):

A little transparency, alternate QGIS blending mode, and using the blue band as a transparency band give us a nice output:

We’ll add in those Voronoi boundaries to emphasize the edge of these areas:

And since this is really only meant for Norfolk County bee keepers, and doesn’t represent folks outside that area, we’ll use a mask with a blending mode of “Subtract” to really offset this map:

I will be interested to see if this helps them look at disease control and spread.  In case you are wondering, Eric is lucky enough to have an almost dedicated area for his bees, which includes a huge wetland complex (think: lots of flowers) that is (in his words) all his own.  Well, all his bees own, anyway.

Posted in Analysis, Optics, Other, POV-Ray, QGIS

## Fast Calculation of Voronoi Polygons in PovRay– Applied

Posted by smathermather on January 30, 2012

In a further exploration of using PovRay to do fast calculation of Voronoi polygons, let’s look to a real stream system as an example.  Here’s where the magic comes out, and where medial axes are found.

First the simple but real example.

Here’s the povray code:

```
#version 3.6;

#include "transforms.inc"

#include "edge_coords1.inc"

#declare Camera_Location = <2258076, 10, 659923>;
#declare Camera_Lookat   = <2258076, 0, 659923>;
#declare Camera_Size = 1800;

camera {
orthographic
location Camera_Location
look_at Camera_Lookat
right Camera_Size*x
up Camera_Size*y
}

background {color <0.5, 0.5, 0.5>}

//union {

#declare Rnd_1 = seed (1153);

#declare LastIndex = dimension_size(edge_coords, 1)-2;
#declare Index = 0;
#while(Index <= LastIndex)

cone {
<0.0, -5, 0.0>, 30,  <0.0, 5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <rand(Rnd_1)*2 + 1, rand(Rnd_1)*5 + 2, rand(Rnd_1)*5 +5>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate edge_coords[Index]
}

#declare Index = Index + 1;

#end

```
```
#declare edge_coords = array[16842]
{<2256808.71000000000, 0.00000000000, 660094.46988100000> ,
<2256810.06170000000, 0.00000000000, 660092.99580200000> ,
<2256811.41308000000, 0.00000000000, 660091.52172400000> ,
<2256811.68965000000, 0.00000000000, 660091.21988700000> ,
<2256813.46393000000, 0.00000000000, 660090.29698900000> ,
<2256815.23820000000, 0.00000000000, 660089.37376200000> ,
<2256817.01248000000, 0.00000000000, 660088.45086400000> ,
<2256818.78675000000, 0.00000000000, 660087.52763700000> ,
<2256820.56103000000, 0.00000000000, 660086.60473900000> ,
<2256822.33530000000, 0.00000000000, 660085.68151200000> ,
<2256824.10958000000, 0.00000000000, 660084.75861400000> ,
<2256825.88352000000, 0.00000000000, 660083.83538800000> ,
<2256827.65780000000, 0.00000000000, 660082.91248900000> ,
<2256829.43207000000, 0.00000000000, 660081.98926300000> ,
<2256831.20635000000, 0.00000000000, 660081.06636400000> ,
// ... remaining coordinates elided ...
}
```

## Fast Calculation of Voronoi Polygons in PovRay

Posted by smathermather on January 20, 2012

Yet further abuse to follow in the application of PovRay for spatial analyses– be forewarned… .

Calculating Voronoi polygons is a useful tool for calculating the initial approximation of a medial axis. We densify the vertices on the polygon for which we want a medial axis, and then calculate Voronoi polygons on said vertices. I’ll confess– I’ve been using ArcGIS for this step. There. I said it. My name is smathermather, and I use ArcGIS. It’s a darn useful tool, and, and, I’m not ashamed to say that I use it all the time.

But, I do like the idea of doing this with non-proprietary, open source tools. I’ve contemplated using the approach that Regina Obe blogs about using PL/R + PostGIS to calculate them, but the installation of PL/R on my system failed my initial Yak Shaving Test.

So, with some google-fu, I stumbled upon articles on using 3D approaches to calculate 2D Voronoi diagrams, and my brain went into high gear. Right cones of equal size, viewed orthogonally, generate lovely Voronoi diagrams, the only caveat being that you need to know the maximum distance you want represented in your diagram. For my use cases, this isn’t a limitation at all.
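Why the trick works: under an orthographic top-down camera, the visible cone at any pixel is the one whose surface is highest there, and for identical right cones that is exactly the cone with the nearest apex. A tiny Python sketch of that equivalence (names are mine), including the maximum-distance caveat — a pixel beyond every cone base just shows the background:

```python
import math

def nearest_site(px, py, sites, max_dist=None):
    """Index of the site whose cone surface is highest at pixel (px, py),
    i.e. the nearest site; None when the pixel lies beyond every cone base
    (max_dist plays the role of the cone base radius)."""
    dists = [math.hypot(px - sx, py - sy) for sx, sy in sites]
    i = min(range(len(sites)), key=dists.__getitem__)
    if max_dist is not None and dists[i] > max_dist:
        return None
    return i

sites = [(0.0, 0.0), (10.0, 0.0)]
left = nearest_site(2.0, 0.0, sites)   # falls in the first site's cell
right = nearest_site(8.0, 0.0, sites)  # falls in the second site's cell
```

Rendering the cones just runs this nearest-site assignment on the GPU-free cheap: one colored cone per site, and the z-buffer does the argmin.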

And so we abuse PovRay, yet again, to do our dirty work in analyses. I’ll undoubtedly have some follow-up posts on this, but in the meantime, some diagrams and really dirty code:

```
global_settings {
ambient_light rgb <0.723364, 0.723368, 0.723364>
}

camera {
orthographic
location < 0.0, 5, 0>
right x * 3
up y * 3
look_at < 0.0, 0.0, 0.0>
}

light_source {
< 0.0, 100, 0>
rgb <0.999996, 1.000000, 0.999996>
parallel
point_at < 0.0, 0.0, 0.0>
}

light_source {
< 0.0, 100, 100>
rgb <0.999996, 1.000000, 0.999996>
parallel
point_at < 0.0, 0.0, 0.0>
}

light_source {
< 100, 100, 0>
rgb <0.999996, 1.000000, 0.999996>
parallel
point_at < 0.0, 0.0, 0.0>
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <0.089677, 0.000000, 1.000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <1, 0.000000, 1.000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate <-0.5,0,-0.5>
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <1, 0.000000, 000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate <-0.5,0,0.5>
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <1, 1.000000, 000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate <0.3,0,0.3>
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <0, 1.000000, 000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate <0.4,0,-0.2>
}

cone {
<0.0, -0.5, 0.0>, 0.5,  <0.0, 0.5, 0.0>, 0.0
hollow
material {
texture {
pigment {
rgb <1, 1.000000, 1.000000>
}
finish {
diffuse 0.6
brilliance 1.0
}
}
interior {
ior 1.3
}
}
translate <0,0,-.7>
}
```

Posted in Analysis, Optics, Other, POV-Ray

## Landscape Position: Conclusion? (part 2)

Posted by smathermather on December 7, 2011

From earlier post:

“I’ve managed to pilot most of a fast high resolution landscape position workflow with PovRay as my magic tool. The final steps I hope to pipe through PostGIS Raster. In the meantime, a screenshot and description: blues are riparian; raw ochre, etc. are upland categories; grey is mostly flat lake plain and mid slopes, all derived from just a high res DEM input (no hydro lines as input, so it works on areas where you don’t know where the streams may be). There will be more categories in the final; in the meantime, welcome to December.”

I didn’t use PostGIS Raster, but the incredibly useful gdal_calc.py to finish out my landscape position calculations.  I will endeavor to post my code to github soon, but the parts and pieces include using gdal and PovRay.  PovRay helps with the sampling (really!) of nearby neighbors in the raster DEM, and gdal does the averaging and differencing of those to get relative landscape position.  I spent some time yesterday scratching my head over how to show all the new landscape position information on a readable and usable map, and after discussion with colleagues, decided to use it to divide the world into two categories– riparian & lowland + headwaters & upland (not reflected yet in the labels).  To find out more about landscape position, follow this link, or better yet this one.  (BTW, the green is park land, blue is riparian/lowland/stream networks, purple is the basin boundary).

Riparian map based on landscape position, calculated using PovRay and GDAL.

## Landscape Position: Conclusion?

Posted by smathermather on November 30, 2011

I’ve managed to pilot most of a fast high resolution landscape position workflow with PovRay as my magic tool. The final steps I hope to pipe through PostGIS Raster. In the meantime, a screenshot and description: blues are riparian; raw ochre, etc. are upland categories; grey is mostly flat lake plain and mid slopes, all derived from just a high res DEM input (no hydro lines as input, so it works on areas where you don’t know where the streams may be). There will be more categories in the final; in the meantime, welcome to December.

## Landscape Position and McNab Indices (cont.)

Posted by smathermather on January 30, 2011

I typed that last one too quickly– too many typos, but my wife says I’m not supposed to revise blogs, but move on… .

So, for clarity, let’s talk a little more about McNab indices.  Field-derived McNab indices are a measure of average angle from the observer to the horizon (mesoscale landform index), or from the observer to another field person a set distance away, e.g. 10m or 18m, or whatever plot size is settled upon (minor landform index).  Typically these are done in the field at a discrete number of directions, e.g. 4 cardinal directions, or 4 cardinal plus 4 ordinal (NE, NW, SE, SW) directions. The landform image in this post and the last is a calculation of mesoscale landform, which is harder to do in a classic GIS than minor landform (I’ll have a follow-up post on minor landform, probably using ArcGIS).

To calculate the values for mesoscale landform computationally, we need to calculate the angle to the horizon in a certain number of directions for each point in the landscape.  This has the potential to be computationally intensive and complicated.  If, however, we conceive of the problem differently, as a 3D calculation we can perform in PovRay, arriving at the answer is simplified by the coding already done by the PovRay programmers.
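For concreteness, the brute-force version of that calculation can be written out directly: march outward from each cell in each direction, track the steepest angle seen, and average over directions. A pure-Python sketch on a toy DEM (function names and the cell size are my own; the point of the post is that PovRay sidesteps this loop entirely):

```python
import math

def horizon_angle(dem, r, c, dr, dc, cell_size=10.0):
    """Maximum elevation angle (degrees) from cell (r, c), marching along
    direction (dr, dc) until the grid edge; negative on open ridge tops."""
    best = -90.0
    step = 1
    while True:
        nr, nc = r + dr * step, c + dc * step
        if not (0 <= nr < len(dem) and 0 <= nc < len(dem[0])):
            break
        dist = cell_size * step * math.hypot(dr, dc)
        best = max(best, math.degrees(math.atan2(dem[nr][nc] - dem[r][c], dist)))
        step += 1
    return best

def mesoscale_index(dem, r, c,
                    directions=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Average angle to the horizon over the chosen directions: the
    computational analogue of the field-measured mesoscale landform index."""
    return sum(horizon_angle(dem, r, c, dr, dc)
               for dr, dc in directions) / len(directions)
```

A flat plain scores zero; a cell at the bottom of a pit, ringed by higher ground, scores high. Doing this for every cell and many directions is the expense PovRay's renderer absorbs for us.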

Essentially, McNab mesoscale indices are a field proxy for the solid angle, in steradians, of open sky at a site.

What does that mean?  Well, essentially, a map of steradians is a proxy for the shading of a white uneven surface on a cloudy day– or how much diffuse light is available to a given site.  Used in combination with site aspect, this is enough information to determine most of the light conditions of a site, which is why McNab indices in combination with other factors are a good predictor of site productivity, and correlated with different plant communities across the landscape.

How does an uneven white surface shade on a cloudy day? Steeper areas with more of the sky occluded are darker while wide open spaces, like the bottom of river valleys and the tops of ridges and plateaus are brighter.  If you want to witness this effect, look to snow on the ground on a cloudy day (and what a great winter to do it).  (The only difference with snow is subsurface scattering which evens out the light quite a bit.)  You’ll notice the divots in the snow to be darker than the peaks, and the edges of the divots to be darkest of all.

The question, then, is how do we compute this within PovRay?  We could use radiosity as a global illumination model, but the calculation of inter-reflectivity at the core of radiosity, while an important real factor in the landscape, would fail to replicate the original mesoscale landform index.  Instead, we set up a series of lights in a dome to illuminate the whole sky sphere, blur them a bit, and call it a day, a technique developed for HDRI rendering by Ben Weston, whose code I borrowed heavily.  The more lights an element of the landscape is exposed to, the brighter it renders, and vice versa, essentially making the brightness of an element proportional to its exposure to the sky sphere.  Unlike Ben, I used a simple white sphere to get even lighting.

Rendered at 16-bit resolution, we have a possible range of values from 0-65535.  Let’s assume that a linear transformation of these values will result in values representing the proportion of the sky sphere.  From there, transformation to steradians and then to solid angles in degrees is trivial.  Once it’s solid angles in degrees, it represents the same kind of value as a mesoscale landform index would give us (more later…).
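That chain of transformations is short enough to sketch. Assuming, as above, a linear mapping of pixel value to proportion of the sky sphere, with the full 16-bit range normalized to the whole sphere (the normalization constant is my assumption), it looks like this in Python:

```python
import math

def dn_to_steradians(dn, dn_max=65535):
    """Linearly map a 16-bit pixel value to visible-sky solid angle,
    normalizing the full range to the whole sphere (4*pi steradians)."""
    return (dn / dn_max) * 4 * math.pi

def steradians_to_square_degrees(sr):
    """One steradian is (180/pi)^2 square degrees."""
    return sr * (180 / math.pi) ** 2

# A fully exposed pixel maps to the whole sphere, about 41253 square degrees
full_sky = steradians_to_square_degrees(dn_to_steradians(65535))
```

With values in square degrees, each pixel carries the same kind of quantity a field-measured mesoscale landform index would give us.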

## Povray Viewshed with Trees (finally) (cont.)

Posted by smathermather on October 9, 2008

Ok, your average terrain-only based viewshed (view is from a road to the southeast, viewshed is in cyan):

Note that based on these estimates, we should be able to see most of the valley walls from this little slice of road.  I don’t buy that.  That section of road is pretty closed in with trees.  Let’s add them:

As you may see, just a little cyan in the southeast, mostly along the road.  Here’s where you can see the trees occluding our view of the viewshed a bit.  But it’s a good first stab, even with such boring results… .  I’ll have to get that section of road in here that peeks out over the gorge.  But first, I’ll have to see if I can get tree stem locations from the lidar, so we can trade up for some veracity in our analysis.  Stay tuned.