Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Archive for the ‘Analysis’ Category

parallel processing in PDAL

Posted by smathermather on January 28, 2016

“Frankfurt Airport tunnel” by Peter Isotalo, own work. Licensed under CC BY-SA 3.0 via Commons.

In my ongoing quest to process all the LiDAR data for Pennsylvania and Ohio into one gigantic usable dataset, I finally had to break down and learn how to do parallel processing in BASH. Yes, I still need to jump on the Python bandwagon (the wagon is even long in the tooth, if we choose to mix metaphors), but BASH makes me soooo happy.

So, in a previous post, I wanted to process relative height in a point cloud. By relative height, I mean height relative to ground. PDAL has a nice utility for this, and it’s pretty easy to use, if you get PDAL installed successfully.


pdal translate 55001640PAN.las 55001640PAN_height.bpf height --writers.bpf.output_dims="X,Y,Z,Height";

Installing PDAL is not too easy, so I used the dockerized version of PDAL and it worked great. Problem is, the dockerized version complicates my commands on the command line, especially if I want to run it on a bunch of files.

Naturally, the next step is to run it on a whole bunch of LiDAR files. For that I need a little control script which I called pdal_height.sh, and then I need to run that in a “for” loop.

#!/bin/bash
# Get the pathname from the input value
pathname="${1%/*}";

# Get the short name of the file, sans path and sans extension
name=$(basename "$1" .las)

docker run -v "$pathname":/data pdal/master pdal translate //data/"$name".las //data/"$name"_height.bpf height --writers.bpf.output_dims="X,Y,Z,Intensity,ReturnNumber,NumberOfReturns,ScanDirectionFlag,EdgeOfFlightLine,Classification,ScanAngleRank,UserData,PointSourceId";

Now we need a basic for loop that will take care of sending the las files into our pdal_height.sh, thus looping through all available las files:


for OUTPUT in $(ls *.las); do ~/./pdal_height.sh $OUTPUT; done;

This is great, but I calculated it would take 13 days to complete on my 58,366 LiDAR files. We’re talking approximately 41,000 square miles of non-water area for Ohio and approximately 44,000 square miles of non-water area for Pennsylvania. I’m on no particular timeline, but I’m not really that patient. A quick DuckDuckGo search later, and I remember the GNU Parallel project. It’s wicked easy to use for this use case.

ls *.las | parallel -j24 ~/./pdal_height.sh

How simple! First we list our las files, then we “pipe” them as a list to parallel, and we tell parallel to spawn 24 independent processes using that list as the input for our pdal_height script.

Now we can run it on 24 cores simultaneously. Sadly, I have slow disks 😦 so really I ran it like this:

ls *.las | parallel -j6 ~/./pdal_height.sh

Time to upgrade my disks! Finally, I want to process my entire LiDAR dataset irrespective of location. For this, we use the find command, name the starting directory location, and tell it we want to search based on name.

find /home/myspecialuser/LiDAR/ -name "*.las" | parallel -j6 ~/./pdal_height.sh

Estimated completion time: 2 days. I can live with that until I get better disks. Only one problem: I should make sure this doesn’t stop if my network connection drops for any reason. Let’s wrap this in nohup, which will keep the job running through any such hangup:

nohup sh -c 'find /home/myspecialuser/LiDAR -name "*.las" | parallel -j6 ~/./pdal_height.sh {}'

Posted in 3D, Analysis, parallel, PDAL, pointcloud

ST_ApproximateMedialAxis(geom)

Posted by smathermather on January 4, 2016

Trying out PostGIS 2.2’s ST_ApproximateMedialAxis capabilities today. I’m going to use it to label parcels. So here is my parcel fabric:

Map of the parcel fabric

And here’s what the skeleton of that fabric looks like:

Skeleton (medial axis) of the parcel fabric

It gets pretty interesting where the parcels are more complicated:

Medial axes where the parcels are more complicated

more:

More examples of parcel medial axes

Oh, the code you say? Well, it couldn’t be much easier:

SELECT gid, ST_ApproximateMedialAxis(geom) AS geom FROM cuy_parcel_2015;

Or the full version for a new table:

DROP TABLE IF EXISTS cuy_parcel_2015_medial;

CREATE TABLE cuy_parcel_2015_medial AS (
 SELECT gid, ST_ApproximateMedialAxis(geom) AS geom FROM cuy_parcel_2015
 );
ALTER TABLE cuy_parcel_2015_medial ADD PRIMARY KEY (gid);
CREATE INDEX cuy_parcel_2015_medial_geom_idx ON cuy_parcel_2015_medial USING GIST ( geom );
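
Since the point of this exercise is labeling, one possible next step is to pull a single label point out of each skeleton, say the midpoint of its longest segment. Here’s a minimal sketch of that idea (not from the original post; it assumes the medial axis comes back as a MultiLineString and that the longest piece is a reasonable label anchor):

-- Sketch only: one candidate label point per parcel, taken as the midpoint
-- of the longest line in that parcel's medial axis.
SELECT m.gid,
       ST_LineInterpolatePoint(longest.line, 0.5) AS label_pt
FROM cuy_parcel_2015_medial m
CROSS JOIN LATERAL (
    SELECT d.geom AS line
    FROM ST_Dump(m.geom) AS d          -- explode the MultiLineString
    ORDER BY ST_Length(d.geom) DESC    -- keep the longest piece
    LIMIT 1
) AS longest;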

Happy New Year.

Posted in Analysis, Database, PostGIS, PostgreSQL, SQL

Landscape Position using GDAL — PT 3

Posted by smathermather on November 25, 2014

More landscape position pictures — just showing riparianess. See also

https://smathermather.wordpress.com/2014/11/22/landscape-position-using-gdal/

and

https://smathermather.wordpress.com/2014/11/24/landscape-position-using-gdal-pt-2/

Maps of estimated riparian areas

Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL

Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray

Landscape Position using GDAL — PT 2

Posted by smathermather on November 24, 2014

Just pretty pics today of estimated riparianess. If you prefer a bit of code, see previous post https://smathermather.wordpress.com/2014/11/22/landscape-position-using-gdal/

Maps of estimated riparian areas at several locations

Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray

Landscape Position using GDAL

Posted by smathermather on November 22, 2014

Hat tip again to Seth Fitzsimmons. I’ve been looking for a good, easy to use smoothing algorithm for rasters. Preferably something so easy, I don’t even need to write a little python, and so efficient I can run it on 30GB+ datasets and have it complete before I get distracted again by the next shiny project (a few hours).

Seth’s solution? Downsample to a low resolution using GDAL, then sample back up to a higher resolution in order to smooth the raster. My innovation to his approach? Use Lanczos resampling to keep location static, and get a great smooth model:

Unsmoothed DEM

Smoothed DEM

Code to do this in GDAL follows. “-tr” sets our resampling resolution, “-r lanczos” sets our resampling algorithm, and the “-co” flags are not strictly necessary, but I’ve got a 30GB dataset, so it helps to chop up the inside of the TIFF into little squares to optimize subsequent processing.

gdalwarp -tr 50 50 -srcnodata "0 -32767" -r lanczos  -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem.tif oh_leap_dem_50.tif
gdalwarp -tr 10 10 -srcnodata "0 -32767" -r lanczos  -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem_50.tif oh_leap_dem_10-50.tif

At first this excited me for cartographic reasons. We can use this to simplify contours, and then use the simplified contours at different zoom levels for maps.
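
A hedged sketch of that cartographic use (not from the original post; the contour interval and output name are illustrative): generate contours directly from the smoothed DEM, and they come out already generalized, with no separate line-simplification pass needed.

# Sketch only: contours from the smoothed DEM; interval and filenames are illustrative
gdal_contour -a elev -i 20 oh_leap_dem_10-50.tif contours_smoothed.shp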

But, we can also use this for analyses. For example, if we difference these smoothed images with our original digital elevation model, we get a measurement of local elevation difference, the first step in establishing where valleys, ridges, and other land forms are.

# Resample to lower resolution
gdalwarp -tr 328.0523587211646 328.0523587211646 -srcnodata "0 -32767" -r lanczos  -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem.tif oh_leap_dem_328.tif
# Upsample again to get nicely smoothed data
gdalwarp -tr 3.048293887897243 3.048293887897243 -srcnodata "0 -32767" -r lanczos  -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" oh_leap_dem_328.tif oh_leap_dem_3-328.tif
# Merge two datasets together into single image as separate bands to ensure they are the same dimensions
# (gdal_calc, as a wrapper for numpy requires this)
gdal_merge -separate -o oh_leap_dem_3-328_m.tif oh_leap_dem.tif oh_leap_dem_3-328.tif
# And now we'll use gdal_calc to difference our elevation model with the smoothed one to get relative elevation 
gdal_calc -A oh_leap_dem_3-328_m.tif -B oh_leap_dem_3-328_m.tif --A_band=1 --B_band=2 --outfile=oh_leap_dem_lp_328.tif --calc="A-B"

So, if we want a good proxy for riparian zones, we can use a technique like this instead of buffering our streams and rivers a fixed distance (in this case, I’ve used 4 different distances):

Map of landscape position estimated valleys in Cuyahoga County, Ohio

Pretty snazzy riparian finder. It seems to also find upland headwater wetlands (most of them historic and drained in Cuyahoga County). I am now running it on 4 million acres of Ohio at a 10ft (~3 meter) resolution. It’s that efficient.
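
One way to turn the relative-elevation raster into an actual valley mask is a simple threshold. The post doesn’t spell out that step or the cutoffs behind the maps above, so the value here is purely illustrative:

# Sketch only: flag cells more than 10 ft below the smoothed surface as "valley".
# The -10 threshold is illustrative, not the value used for the maps above.
gdal_calc -A oh_leap_dem_lp_328.tif --outfile=valleys_328.tif --calc="A < -10" --type=Byte --NoDataValue=0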

Addendum: It also finds escarpment edges, like the Portage Escarpment in the above, so it is a mix of a few landforms. Darn handy nonetheless.

Posted in Analysis, Ecology, GDAL, Landscape Position, Other, POV-Ray

From Dobongsan to Mount Saint Helens — FOSS4Gs and OpenDroneMap

Posted by smathermather on September 8, 2014

I’ve touched down and started to settle into Tri-Met country not too long after returning from FOSS4G Korea.

Photo of a mountain in the Cascades from an airplane

The public transit is great, the city vibrant with the warmth of a small town and the energy of an enclave in a big city, and it’s also slightly surreal, now that I’m back where I am no longer, as Paul Ramsey put it, functionally illiterate. Functionally illiterate is what I was for a week and a couple of days in Korea, and thinking of it like that, this was the first time since early grade school that one could consider me to be that. Fascinating. I recommend being functionally illiterate every now and then. (Quick aside: I started to learn Hangul, the Korean alphabet, today, and it is not so bad. It’s very clever and logical, and I’m looking forward to getting deeper into it.)

On Friday I unveil the beginnings of OpenDroneMap, an open source project intended to address computer vision in the context of geospatial datasets. This is a project in its infancy, so it is as much a call to action, a plea for help, and a “Hey, does anyone want to come out and play?” sort of moment. And I do it now, before the project is too far along, so that a wide variety of people can, if they are interested, get involved and have a say in the structure, direction, and outcomes associated with it.

So what is it? Well, as a starting place, it’s a fork of an existing computer vision framework (https://github.com/qwesda/BundlerTools) that allows one to go from arbitrary, unreferenced photos to structured information. It’s intended to make computer vision techniques useful and usable for geospatial professionals.

At a moment when PostGIS is so mature that it does most everything, and the waves of invention and reinvention of geospatial are passing again and again through the C, Java, Python, JavaScript, and Go (etc.) communities, we have new possibilities for rich spatial datasets, from crowd-sourced, Google Street View-like projects such as Mapillary to the impending deluge of images from civilian Unmanned Aerial Systems (drones). Sadly, many existing tools in this sphere are hard to use, expensive, and/or don’t scale well to large datasets. The aim with ODM is to create a toolkit that is easy to use, Free and Open Source (and thus free to modify and use), and scales well to large use cases, and moreover to do so within the context of the broader community through a transparent development process.

ODM, then, is a toolset to augment and supplement existing, now conventional, geospatial technologies like GPS, and to give structure to the vast catalog of unstructured data being collected every day. (I won’t delve into it here, but the other part of the puzzle is the common archiving of unstructured data, à la OpenStreetMap. More on that later.)

Pre and post transformed points compared in single figure

In our previous example, we used the implicit information in photos, via multi-view stereo analysis, to correctly restructure bad GPS information. This is a trivial example but mighty powerful stuff.

Let me frame this with a quick diversion: I was blessed with the opportunity to keynote at FOSS4G Korea. I wrote and practiced a clever speech, I flew to Seoul, and on my ride into town from the airport I had a revelatory conversation with Sanghee Shin, my host, as to the nature and future of FOSS4G in Korea. So, I promptly and humbly re-wrote my speech.

In short, this is what I said in my keynote: the South Korean Government is interested in fostering the best endemic resources of the Korean Peninsula, the minds and talents of its people, to do great work in software. We know South Korea for hardware innovations (Samsung, LG, etc.) and world-class internet speeds. They also want to be known for their software innovations. I suggested that endemicity (the characteristic of being endemic), government by the people, and commons-based peer production (like open source) are three arenas heavily overlapping in a common space, and that it is natural for Koreans to embrace Free and Open Source Software as an answer for bringing to bear Korean initiative and creativity in software.

Kombucha stand, Portland, Oregon

Or put another way — Open Source software is Folk software: of, by, and for the people, and endemic to the creators and users of that software. If the Korean Government is interested in endemic Korean software communities, then it is in their interest to sponsor and support Free and Open Source Software development and application.

So, my call to action today is that we as the FOSS4G community embrace and lead the development of tools which structure our unstructured information — that together, we build OpenDroneMap and the host of related tools that will aid us in building common geospatial understanding from unstructured data from the street, from the forest, from the fields, and from the air. Next up — the next generation of GeoSpatial software. I am excited to apply and invent it with you, our next generation of Folk GeoSpatial Software. Please join me.

 

PostScript: Could you hear the crescendo of music in the background? You could? Good. I hope it moved you. If you are at FOSS4G Portland, I hope it moves you to my presentation from 10:30 to 10:55 on Friday in track 5. There will be more crescendoing music. Oh yes.

PostPostScript: Should OpenDroneMap be part of PostGIS long term? Is that a crazy idea? How about a front end as an extension in QGIS? Come, let’s discuss the madness.

PostPostPostScript: I think switching time zones is starting to affect my writing style. Is it better? Worse? Neither better nor worse? Good.

PostPostPostPostScript: Goodnight all. 좋은 밤 (good night).

Posted in 3D, Analysis, Bundler, Camera Calibration, Conference, Conferences, Image Processing, OpenDroneMap

FOSS4G Korea 2014, poor GPS photos, and mapillary (part 2 of n)

Posted by smathermather on September 2, 2014

A classic and age-old problem in GPS is collecting potentially wonderful data in the field, getting back to the office, and realizing a lot of manual scrubbing, data massaging, and other such careful work will need to be done to make the GPS data useful and meaningful. This assumes we can even meaningfully correct it at all.

This is true too (maybe especially) for GPS-enabled cameras in canyons and urban canyons. This is a problem we started to explore in https://smathermather.wordpress.com/2014/08/31/foss4g-korea-2014-poor-gps-photos-and-mapillary/

Let’s return to the problem briefly. Were the GPS readings consistent and accurate, we would see a relatively straight line of points, as the photos were taken along the length of the sidewalk on Teheran-Ro in the Gangnam District of Seoul:

Figure of raw data points showing anything other than a straight line

In addition to not looking straight, though it is supposed to follow a road, we previously demonstrated that there are a lot of duplicated points where, presumably, the camera was using cached GPS data rather than the latest available at the time of the photo. We can see this density of overlapping points even more clearly using the heatmap tool in QGIS:

Heatmap showing clumping of data points

The clump of red shows a clear issue of overlapping points. As these points are the GPS positions of photographs, we can match features between photographs (using structure from motion) to map out the relative locations of the photos to each other. The points in the figure below show points matched in 3 or more photos; the blue shapes represent camera positions within the scene.

Image of sparse point cloud and relative camera positions

If we look at just the camera locations on a map, we see something like the following:

Figure of camera center points in relative space

For the astute student, however, it should not be surprising that the coordinates of these points are not recognizable as any known coordinate system. For example, let’s view the X, Y, and Z of the first few points:

id	X	Y	Z
1	-0.357585	-0.390081	-3.48026
2	-0.326079	-0.367529	-3.24815
3	-0.295885	-0.348935	-2.98469
4	-0.272306	-0.334949	-2.79409

This means we require some equation to convert our unreferenced (but consistent) data to a known coordinate system. To build this equation, we just need to know four things about our data with some certainty — the start point and end point X and Y positions. We will ignore Z for this exercise.

Point 1:
X-Local: -0.357585
X-KUCS: 958632.326047712
Y-Local: 1.29161
Y-KUCS: 958744.221397964

If we remember our trigonometry (or google our trigonometry…) then we’ll be able to solve for our X and Y values independently. For example for X:

X1 = 67.8485 * X + 958657

With that and our Y equation:

Y1 = 27.2400 * Y + 19444469
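
As a small aside, here is a sketch of how those coefficients might be derived and applied in code. This isn’t from the post: the second control point below is a made-up placeholder standing in for the known end of the walk, and rotation is ignored just as it is in the per-axis equations above.

# Sketch: fit kucs = a * local + b per axis from two control points,
# then apply to every camera position. Point 2 values are placeholders,
# not the actual end-of-walk coordinates.
def fit_axis(local1, kucs1, local2, kucs2):
    a = (kucs2 - kucs1) / (local2 - local1)
    b = kucs1 - a * local1
    return a, b

ax, bx = fit_axis(-0.357585, 958632.326047712, 3.5, 958900.0)   # X axis (placeholder end point)
ay, by = fit_axis(1.29161, 958744.221397964, -42.0, 957600.0)   # Y axis (placeholder end point)

cameras = [(-0.357585, 1.29161), (-0.326079, 1.27)]             # (X_local, Y_local) samples
kucs = [(ax * x + bx, ay * y + by) for x, y in cameras]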

Now we can transform our local coordinates into the Korean 2000 Unified Coordinate system, and get a much nicer result:

Figure showing corrected camera position points

If we perform a heat map on this output, we’ll see that we have spread out our duplicate geometries to their correct, non-overlapping spacing:

Figure showing corrected camera position heat map

Whew! Now to write some code which does this for us…

Oh, wait! We forgot the final test. How do they look together (pre and post transformed — post transformed as stars of course):
Pre and post transformed points compared in single figure

But, as we know, Google (or, in the case of Korea, Naver) is the all-knowing authority on where things are. How does this bear out against satellite imagery?

Pre and post transformed points compared in single figure with aerial for comparison

Woah! That works for me. Notice, we can even see where I walked a bit to the left side at intersections to move around people and trees.

Posted in 3D, Analysis, Conference, Conferences, FOSS4G Korea

FOSS4G Korea 2014, poor GPS photos, and mapillary

Posted by smathermather on August 31, 2014

As I have been moving around, whether traveling to Seoul or within Seoul, I have taken a lot of pictures. Some have GPS, and those I’ve processed and sent to Mapillary, like a few hundred I took on a day wandering Seoul:

Screen shot of Mapillary overview of Seoul

I’ve taken a lot of strange videos too. I took a couple of videos of my feet in the subway train just to get the music that plays to notify passengers of an approaching stop. Walking around Bukhansan National Park, I have taken many sign pictures. As I work for a park district, how signage and wayfinding are handled here is fascinating, both in what I can understand, i.e. choice of material, color, frequency, how the letters are carved, and in those elements that I cannot yet understand, i.e. exactly how the Korean-language wayfinding portions work.

But mostly I have been cataloging as much as I can in order to give my children a sense and feel for the city. I am realizing this imperative has given me a child-like view of the city. (Of course, my enthusiasm for the mundane does get me the occasional funny look from locals… but hey! What could feel more like home than people thinking I am a little strange?)

This blog wouldn’t be mine without a technical twist to the narrative, so let’s dive into some geographic problems worth solving. The camera I have has a built-in GPS and compass, which makes it seemingly ideal for Mapillary uploads. Except the GPS isn’t that accurate, doesn’t always update from photo to photo, struggles in urban areas in general, etc., etc. And so it is that I am working on a little solution for that problem. First, let me illustrate the problem better.

A sanity check on the GPS of the data can easily be done in QGIS using the Photo2Shape plugin:

Screen snapshot of photo2shape plugin install screen

Screenshot of distribution of camera GPS points in QGIS

Let’s do two things to improve our map. For old times’ sake, we’ll add a little red-dot fever, and use one of the native tile maps, Naver, via the TMS for Korea plugin.

Naver map with photo locations overlayed as red dots

We can see our points are both unevenly distributed and somewhat clumped. How clumped? Well, according to my fellow GeoHipsters on Twitter, hex bin maps are so 2013, so instead we’ll just use some feature blending (multiply) plus low saturation on our red (i.e. use pink) to show intensity of overlap:

Capture of map showing overlap of points with saturation of color increasing where overlaps exist.

Ok, that’s a lot of overlaps for pictures that were taken in a straight-line series. Also, note the line isn’t so straight. Yes, I was sober. No, not even with soju can I walk through so many buildings.

Like all problems when I’m obsessed with a particular technology, the solution here is to use <particular technology with which I am currently obsessed>. In this case, we substitute <particular technology with which I am currently obsessed> with ‘Structure from Motion’, or OpenDroneMap. ODM would give us the correct relative locations of the original photos. Combined with the absolute locations (as bad as they are), perhaps we could get a better estimate. Here’s a start (confession — mocked up in Agisoft Photoscan. Sssh. Don’t tell), showing in blue the correct relative camera positions:

Image of sparse point cloud and relative camera positions

See how evenly spaced the camera positions are? You can also see the sparse point cloud which hints at the tall buildings of Gangnam and the trees in the boulevard.

  • Next step: Do this in OpenDroneMap.
  • Following Step: Find appropriate way to correlate with GPS positions.
  • Then: Correct model to match real world.
  • Finally: Update GPS ephemeris in original photos.

So, Korea has inspired another multi-part series. Stay tuned.

Posted in 3D, Analysis, Conference, Conferences, FOSS4G Korea

Drivetime analyses, pgRouting

Posted by smathermather on August 6, 2014

Map of overlapping 5-minute drive times from park access points
We’ve got some quick and dirty pgRouting-based code up on github. I say quick and dirty because it directly references the table names in both of the functions. I hope to fix this in the future.

The objective with this code is to input a point, use a nearest neighbor search to find the nearest intersection, and from that calculate a drive time alpha shape.

First, the nearest neighbor search:

CREATE OR REPLACE FUNCTION pgr.nn_search (geom geometry) RETURNS
int8 AS $$

SELECT id FROM pgr.roads_500_vertices_pgr AS r
ORDER BY geom <#> r.the_geom
LIMIT 1;

$$ LANGUAGE SQL VOLATILE;

This function takes one argument: pass it a point geometry, and it will do a KNN search for the nearest point. Since we are leveraging pgRouting, it simply returns the id of the nearest intersection point. We wrote this as a function in order to run it, e.g., on all the points in a table. As I stated earlier, it directly references a table name, which is a little hackerish, but we’ll patch that faux pas later.
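
For a quick sanity check on a single location, a call might look like the following. This is just an illustration; the coordinates are invented, and the SRID has to match whatever the roads vertices layer is actually stored in.

-- Illustrative only: nearest routing vertex to an arbitrary point.
-- Coordinates and SRID are placeholders, not values from the real dataset.
SELECT pgr.nn_search(ST_SetSRID(ST_MakePoint(-81.57, 41.38), 4326));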

Now that we have the ID of the point in question, we can do a driving distance calculation, wrap that up in an alpha shape, and return our polygon. Again, we write this as a function:

CREATE OR REPLACE FUNCTION pgr.alpha_shape (id integer, minutes integer) RETURNS
geometry AS $$

WITH alphashape AS(SELECT pgr_alphaShape('WITH
DD AS (
SELECT seq, id1 AS node, cost
FROM pgr_drivingDistance(''SELECT id, source, target, cost FROM pgr.roads_500'',' || id || ', ' || minutes || ', false, false)),
dd_points AS (
SELECT id_ AS id, x, y
FROM pgr.roads_500_vertices_pgr v, DD d
WHERE v.id = d.node)
SELECT * FROM dd_points')),

alphapoints AS (
SELECT ST_Makepoint((pgr_alphashape).x, (pgr_alphashape).y) FROM alphashape),

alphaline AS (
SELECT ST_MakeLine(ST_MakePoint) FROM alphapoints)

SELECT ST_MakePolygon(ST_AddPoint(ST_MakeLine, ST_StartPoint(ST_MakeLine))) AS the_geom FROM alphaline

$$ LANGUAGE SQL VOLATILE;

Finally, we’ll use these functions in conjunction with a set of park feature access points to map out our 5-minute drive times. Overlaps in 5-minute zones are rendered as a brighter green in the image above.

CREATE TABLE pgr.alpha_test AS
WITH dest_ids AS (
SELECT pgr.nn_search(the_geom) AS id FROM pgr.dest_pts
)
SELECT pgr.alpha_shape(id::int, 5)::geometry, id FROM dest_ids;

Oh, and I should point out that for the driving distance calculation, we have pre-calculated the costs of the roads based on the speed limit, so cost is really travel time in minutes.
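
That pre-calculation isn’t shown here, but it would look something like the following sketch. The speed column name, the units, and the lon/lat storage assumption are all mine, not the actual schema.

-- Sketch only: cost as travel time in minutes, from segment length and speed limit.
-- Assumes the_geom is stored in lon/lat and a speed_mph column exists; adjust to the real schema.
UPDATE pgr.roads_500
SET cost = (ST_Length(the_geom::geography) / 1609.34) / speed_mph * 60;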

Posted in Analysis, Database, pgRouting, PostGIS, PostgreSQL, SQL

Using Spatial Data in R to Estimate Home Ranges (guest blog post)

Posted by smathermather on August 5, 2014

Overlay of daytime and nighttime home ranges and coyote points

(A guest blog post today from Dakota Benjamin, a Case Western Reserve University senior and three-year summer intern we’ve been lucky enough to recruit.)

This post has two big take-aways: using spatial data in R, and applying that knowledge to estimating home ranges. If you are not familiar with the R environment, there are many great resources for familiarizing yourself with this powerful language, such as [here](http://cran.r-project.org/doc/manuals/R-intro.pdf) and [here](http://www.statmethods.net/index.html). You will need a basic understanding of R before proceeding.

The package [rgdal](http://cran.r-project.org/web/packages/rgdal/) is essential for utilizing spatial data in R. There are two functions that we will use. The first is readOGR(). This function will import a shape file into a special type of data frame, in this case a SpatialPointsDataFrame, which contains the data, coordinates, bounding box, etc., and makes it all accessible in R.

 coyote <- readOGR("nc_coyote_centroids_dn","nc_coyote_centroids_dn") 

The second function we will use is writeOGR(), which, as you can probably guess, will write a Spatial*DataFrame to a shape file (or other format).

 writeOGR(ver, "hr_coyote_centroids_dn", "hr_centroids_dn", "ESRI Shapefile") 

Now we can work with the coyote data in R and use some tools to estimate the home range. The previous blog post (https://smathermather.wordpress.com/2014/07/31/cleaning-animal-tracking-data-throwing-away-extra-points/), which shows how to clean up the data using PostGIS, will be useful for framing the data in a useful way. What if we want to look at how the home range changes between night and day? I wrote a quick function to add a column to the data. calcDayNight() uses a function from the [RAtmosphere package](http://cran.r-project.org/web/packages/RAtmosphere/) called suncalc(), which calculates the sunrise and sunset for a given date and location. calcDayNight() compares each data point to the sunrise and sunset times and determines whether the point was taken during the day or at night. Here’s the code:

calcDayNight <- function(x) {
  # If the fix time is after sunrise and before sunset, label it DAY; otherwise NIGHT
  suntime <- suncalc(as.numeric(as.Date(x["LMT_DATE"], format="%m-%d-%y") - as.Date("2013-01-01")), Lat=41.6, Long=-81.4)
  coytime <- as.numeric(strptime(x["LMT_TIME"], "%T") - strptime("00:00:00", "%T"))
 
  if(coytime > suntime$sunrise & coytime < suntime$sunset){
    x["dayornight"] <- "DAY"
  } else x["dayornight"] <- "NIGHT"
}
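
The post doesn’t show the call itself, but applied row-wise it would look something like this sketch (assuming the attribute table carries the LMT_DATE and LMT_TIME columns used above and that the labels should end up in a dayornight column):

# Sketch only: tag each fix as DAY or NIGHT and attach it as a new column.
# Wrapped in factor() since kernelUD() below groups on this column.
coyote$dayornight <- factor(apply(coyote@data, 1, calcDayNight))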

After we get the day and night centroids (from the previous post), we can do our home range estimation. The package we need is [adehabitatHR](http://cran.r-project.org/web/packages/adehabitatHR/). There are two steps here: create the utilization distribution, and produce a home range contour from that distribution.

The function for creating the distribution is kernelUD(). More information about this model can be found [here](http://cran.r-project.org/web/packages/adehabitatHR/adehabitatHR.pdf) on page 33.

ud <- kernelUD(coyote[,2], h="href")

coyote[,2] refers to the “dayornight” column that contains the “DAY” and “NIGHT” values. Two different distributions will be calculated. Then we will produce a SpatialPolygonsDataFrame using getverticeshr(), which will give the 95% estimate of the home range:

ver <- getverticeshr(ud, 95)

Export that as I showed above and we get a shape file with two polygons. Above is the home range estimation with the day/night centroids overlaid (blue for day and purple for night).

Posted in Analysis, R