Ok, here’s the basic idea: we have an orthographic scene with a height_field object scaled to real-world units. The observers in the viewshed become point lights, so for each observer we “light” the areas visible from that observer, render, and boom, viewshed created. In addition, if we have three or fewer viewpoints, we can render them in, say, cyan, yellow, and magenta, and thus tell which observer points can view a given location. But I digress. I keep leaving the actual code at work, so I’ll start out by giving you the

POV-Ray Viewshed Calculation Meta-Metacode:

In a POV-Ray height_field, the input elevation image is scaled to 1 unit in all 3 dimensions. In our case, the input image is 5000 feet on a side in real-world units, so we multiply the x and y dimensions by 5000 to scale it. The values in the z-direction are scaled to one as well, and since they were input as 16-bit, POV-Ray effectively divides the data by 65536; so we multiply by 65536 to get back to elevation in feet. Now our x, y, and z dimensions are scaled appropriately to each other.
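Sketched as POV-Ray scene code, that scaling might look like this. The filename and the `Terrain` identifier are placeholders, and note one wrinkle: POV-Ray’s height_field puts elevation on its y axis, so the GIS “z” described above lands in POV-Ray’s y component.

```pov
// Sketch: a 16-bit DEM scaled back to real-world feet.
// "dem16.png" and the Terrain name are hypothetical placeholders.
#declare Terrain =
  height_field {
    png "dem16.png"   // 16-bit values are normalized to 0..1 on input
    smooth
    // unit cube -> 5000 ft on a side in plan; the y (elevation)
    // factor undoes the 16-bit normalization, restoring feet
    scale <5000, 65536, 5000>
  }
```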

Since we’re working with geographic data, we’ll put it back in the real world (just in case we want to put other data in our system as well), in this case the Ohio State Plane North NAD83 HARN (feet) projection. So, translate it to its location in real space (the state plane centroid of the image) and set an orthographic camera above it with an image plane equal to the x and y dimensions. The orthographic camera prevents perspective distortion, so we retain a flat map. (Side note: I’m contemplating using an orthographic camera and a DEM to correct for terrain distortions in uncorrected imagery (and maybe shading as well), but that will take more thought, and may become another post.)
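A sketch of the translation and camera setup, assuming the scaled height_field was declared as `Terrain` earlier. The State Plane coordinates below are made-up placeholders; the real values would come from the image’s georeferencing.

```pov
// Hypothetical SW-corner coordinates in State Plane feet;
// the real values would come from the DEM's georeferencing.
#declare EastingSW  = 1800000;
#declare NorthingSW =  650000;

object {
  Terrain   // the scaled height_field, declared earlier
  translate <EastingSW, 0, NorthingSW>
}

camera {
  orthographic
  // centered over the 5000 ft tile, looking straight down
  location <EastingSW + 2500, 50000, NorthingSW + 2500>
  look_at  <EastingSW + 2500, 0,     NorthingSW + 2500>
  right 5000*x   // image plane matches the tile's easting extent
  up    5000*z   // ...and its northing extent (POV-Ray's z)
}
```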

Ok. Now, to calculate the viewshed from a point, place a point light in the scene at the location in question. Render, and boom, we have a viewshed from a point. Want a few points? Add a few more point lights. Want to constrain the effect in the x, y, or z direction? Add some opaque baffles (I haven’t done this in my code yet), or directional lights.
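The point-light trick, sketched with two hypothetical observers at made-up coordinates, each colored so their viewsheds can be told apart. Ambient light is zeroed so terrain not visible from any observer stays black.

```pov
global_settings { ambient_light rgb 0 }  // unlit (unseen) terrain stays black

// Observer #1 (cyan); coordinates are placeholders, with the
// y value set a few feet above the ground for eye height
light_source {
  <1801200, 886, 651400>   // easting, elevation + eye height, northing
  color rgb <0, 1, 1>
}

// Observer #2 (yellow); areas seen by both observers blend colors
light_source {
  <1803700, 912, 653100>
  color rgb <1, 1, 0>
}
```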

Finally, just for kicks, we can drape an aerial image on top, and our viewshed lights will only light the parts of interest, something I’ve never seen done in a GIS. But again (see previous post), the main point is to be able to simulate vegetation, buildings, and other comprehensive aspects of the scene, not just a digital terrain (elevation) model. If that’s all we wanted, a GIS would suffice. So, next step: add buildings and trees. Long term, I’d like to do this in a physically based renderer like PBRT or (now that I know it exists…) LuxRender. Heck, with physically based rendering we can do some inverse modeling of physical/chemical characteristics, or WiFi placement optimizations. But again, I digress…
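The drape itself is just a pigment on the same height_field. A sketch, with “aerial.png” as a placeholder: a POV-Ray image_map tiles the unit square in the x-y plane, so it gets rotated onto the terrain’s x-z footprint; texturing the unit-sized height_field before scaling lets one scale carry the geometry and the imagery together.

```pov
// Sketch: drape an aerial photo over the DEM. "aerial.png" is a
// hypothetical placeholder filename.
height_field {
  png "dem16.png"
  smooth
  pigment {
    image_map { png "aerial.png" once interpolate 2 }
    rotate 90*x   // x-y image plane -> x-z ground plane (flip if mirrored)
  }
  finish { ambient 0 }  // only observer-lit areas reveal the imagery
  scale <5000, 65536, 5000>   // same real-world scaling as before
}
```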