Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Archive for July, 2011

GDAL, MrSID, and nearblack

Posted by smathermather on July 29, 2011

Translating MrSID lossy compressed files into uncompressed imagery has its drawbacks, including licensing and artifacts.
Old versions of FWTools, which includes the GDAL utilities (and more), were compiled with a license that allowed for translating MrSID into, e.g., Erdas Imagine or GeoTIFF images.  The licensing on that library changed, so FWTools and MS4W no longer do this.

If you have a compiled version of the GDAL utilities with a legal license for converting, you still run into the issue of artifacts, e.g.

Nearly, but not quite, black pixels at the edge of a MrSID lossy compressed image

GDAL’s nearblack is a great utility for correcting this problem.  If we wanted to batch this up on a Windows system, it might look something like this (run from the command prompt; in a .bat file, double the percent signs, e.g. %%f):


FOR %f IN (*.sid) DO nearblack -o "%~nf.img" "%f"

Posted in GDAL, Image Processing | Tagged: , , , | 5 Comments »

Dialog– What qualifies as a benchmark? Part 1

Posted by smathermather on July 25, 2011

Normally, my blog is a bit of a monologue.  That’s not a bad thing, but it can be a little lonely.  Every now and then I get great (and often questioning) comments, which enhance things considerably.

What follows is some dialog about the LiDAR shootout series, largely between Etienne and Pierre, posted with their permission:

Pierre:

“Etienne, Stephen,

“I really appreciate the benchmark work you are doing to compare different technologies to solve your lidar data processing problem.

“However, if you want your tests to have as much credibility as possible, and especially if you wish to publish the results on the web, I would recommend that you compare horses with horses and dogs with dogs: general vector technologies with general vector technologies, raster technologies with raster technologies, specialized lidar technologies with specialized lidar technologies.  Otherwise the dog will always lose against the horses.  You have to be clear on that, and explain well the differences and the pros and cons of all the tech you will be using in your benchmark.  Are you doing the test in ArcGIS using ESRI lidar technology (or a general vector one)?  Same question for R and PostGIS Raster (in the last case it is clear that the answer is no).  In your graph you are comparing very different kinds of animals without giving much chance to the dogs.

“For every technology you will also have to test different approaches in order to find the fastest one for that technology, so you can compare it with the fastest one in the other technologies.  Comparing a slow approach in one tech with the fastest in another would be a bit unfair.  This is what we did for PostGIS Raster.  Did you do the same with R and ArcGIS?

“Basically I think it is just unfair to compare ArcGIS, R and PostGIS Raster with LAStools for working with lidar data, especially if you are not using the ArcGIS and R packages for dealing with lidar.  Tell us if you did not use them.  We will all learn something.”

Etienne:

“As you said, the point is to compare different strategies to get the job done. That’s it. It’s not a question of being fair or not, we report facts. Maybe the bottom line is that pgraster shouldn’t be used to extract lidar height. However it can be used for other things. Well, we all learned something.

“It also emphasizes the fact that there is no point_cloud type in PostGIS.  I guess adding one could improve the PostGIS results a lot, as it seems handling point clouds might be really fast.  I learned that.

“Each solution has its drawbacks: for ArcGIS I think we’ll all agree about the cost of the licence, and furthermore the cost of the lidar plugin.  LAStools requires a licence for commercial use (which is really low, btw, compared to ArcGIS + Spatial Analyst + LP360).  PG Raster requires a Postgres installation and some SQL knowledge, etc.  R requires learning a new language, etc.

“Now I must acknowledge, the tests we did were on only 500k points.  That could be considered a lot, or not.  When dealing with lidar, one could easily expect more than a thousand times this quantity.  But it was the limit imposed by the LP360 version I used, and also a reasonable size for benchmarking.  Note that PG Raster was really stable in its timing (whatever the size).

“Finally, please note that Martin’s approach is new; he just released the tool a week ago.  It is, to the best of my knowledge, the first one to do it.

“Pierre, explain your point better, I’m not sure I get it (or I disagree, as you can see).”

Pierre:
“My point is just that you have to be clear in your post about the fact that LAStools is a technology developed explicitly for dealing with lidar data.  PostGIS Raster is not, and I guess you are not using ESRI lidar technology, and I have no idea how you do it with R.  The point is to compare potatoes with potatoes.

“If I would be from the ESRI side I would quickly post a comment in the post saying “Did you read this article?” http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#/Estimating_forest_canopy_density_and_height/00q8000000v3000000/

“I don’t know how you did it in ArcGIS, but it seems that they are storing lidar data as multipoints.

“A good benchmark should specify all this.

Etienne (Pierre > quoted):

> If I would be from the ESRI side I would quickly post a comment in the post saying “Did you read this article?”

> http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#/Estimating_forest_canopy_density_and_height/00q8000000v3000000/
“I would reply: apart from the fact that it uses lidar data, it has nothing to do with the object of the present article 🙂 But I’d be happy to discover a better way than what I did (for all the tools btw).

“Maybe you should (re-)read the post; it’s written there.  For R as well.  Maybe, Steve, you should specify (this is my fault) that for ArcGIS I used LP360 (evaluation) to load the lidar data, which was limited to 500k points.  But it wasn’t involved in the computation, as I used (as written in the post) «ExtractValuesToPoints».”

Pierre:

> So it’s a good benchmark?

“I just think it would be more fair to create three test categories:

“1) Lidar technologies (apparently your ArcGIS test should be in this category)

“2) Vector/raster technologies (why not use non lidar techs if lidar techs are so much better?)

“3) Nearest neighbor technique using vector only technology

“I won’t comment further…

Etienne:

“I personally would not encourage these categories, as there is some confusion (at least in my mind) about the definition of what a «LiDAR technology» is.  However, I think it would be a good idea to nuance the conclusions, which I think will be done in further posts (or not).  It’s also pretty clear from the RMSE measures what the main drawback of each solution is (along with speed).

“I’m still open for discussion…

Stephen (me):

“Hi Pierre,
     “I agree with you in principle, but ArcGIS aside, there are some very real opportunities for optimizing point-in-raster analyses with tiling, indexing, etc.  In other words, I don’t see any practical reason why PostGIS Raster couldn’t be (nearly) as fast as an NN technique, so long as the tile size is optimized for the density of the analysis in question.  The only thing that makes it a dog is its age as a technology.

(moments later after discovering the lengthy dialog…)

“It looks like I missed all the discussion here…

“The reason such benchmarks are interesting is (in part) that they are not fair.  PostGIS isn’t optimized for LiDAR analyses.  PostGIS Raster isn’t optimized for LiDAR analyses, and it’s a very young technology, which really isn’t fair either.  The PostGIS development folks know this, and were interested in seeing my initial benchmarks with NN analyses for LiDAR height (last year) because they are aware of the optimizations that are not yet there for handling point clouds.  Vector and raster analyses (as we’ve run them here) should not, in principle or in practice, be substantially different in speed, so long as the query analyzer maximizes the efficiency of the techniques.  Add in multipoint optimizations and other voodoo, and, well, that’s a different story.  Another aspect to this is that comparing LAStools to the rest is unfair not just because it’s optimized for lidar, but because it’s working at a much lower level (the file level) than the abstractions a database provides.  The trade-off, of course, is that a database can provide transaction support, a standardized language, automated query analyzers, etc., which don’t apply to file-level processing.

“What I see here, categories aside, is a need for the other 2-3 parts of this 3-4 part series– the breakdown of which techniques are used, how they’re implemented, and the rationale behind them.  Don’t worry, we’ll get there.  That’s the danger of starting with the punchline, but we’ll build the nuance in.

 

So, there we are.  Looks like I’m committed to building this out with a little more nuance.  🙂  Stay tuned.

Posted in Database, Database Optimization, LiDAR Shootout, PostGIS, SQL | Tagged: , , , , , , , | Leave a Comment »

Writing Your Name Big Enough to be Seen from Space

Posted by smathermather on July 23, 2011

Every now and then I delve into tag surfer and find something new and interesting.  This time, it led me to a post on writing your name large enough to be seen from space:

http://andrewlainton.wordpress.com/2011/07/23/writing-your-name-in-the-sand-big-enough-to-be-seen-from-space/

Sometimes though, what you write is not about you personally, but a larger obsession.  In the same venue, but less personal:

True and dedicated fans.

I should note, I’m not a Cavs fan; being a Southeast Michigan boy myself, my allegiances tend toward Detroit teams.  Regardless, I appreciate this little bit of geospatial scrawl.

Posted in Other | 1 Comment »

LiDAR Shootout! — New Chart, Final Results

Posted by smathermather on July 21, 2011

Comparison of lidar height calculations

Reviewing the final numbers and charts from Etienne and Pierre, the results are shown above.  The only revision is a moderate increase in speed for the PG Raster query.

Final results in speed for LAStools– ~350,000 points per second.  In other words– off-the-charts fast.  And the initial RMSE of ~25 feet was a mistake– it is probably closer to 0.2 feet.

Stay tuned for detailed reviews of these techniques (with code).

 

Posted in Database, Database Optimization, LiDAR Shootout, PostGIS, SQL | Tagged: , , , , , , , | Leave a Comment »

LiDAR Shootout! — Revisions

Posted by smathermather on July 18, 2011

Not surprisingly, I made some mistakes in the LiDAR Shootout posts, not the least of which was the location and affiliation of Etienne Racine and Pierre Racine, who are at Laval University in Quebec City (not Montreal).  I jumped the gun on stats too, so we’re working on pounding out the details of the final numbers for the shootout… .  More charts and numbers forthcoming… .

Posted in Other | Leave a Comment »

LiDAR Shootout!

Posted by smathermather on July 18, 2011

For a couple of months now I’ve been corresponding with Etienne Racine and Pierre Racine of Laval University in Quebec City.  They decided to take on the problem of finding the speed and accuracy of a number of different techniques for extracting canopy height from LiDAR data.  They have been kind enough to allow me to post the results here.  This will be a multipart series.  We’ll start with the punchline, and then build out the code behind each of the queries/calculations in turn (the category will be “LiDAR Shootout”) with separate posts.

In the hopper for testing were essentially four different ways to extract canopy height from a LiDAR point cloud and (in three of the cases) a DEM extracted from the LiDAR point cloud.  The four techniques were as follows:

  • Use ArcGIS’s ExtractValuesToPoints.
  • Use R with the raster and rgdal packages, calling the raster::extract() function with in-memory results.
  • Use the PostGIS (2.0 svn) Raster datatype (formerly WKT Raster).
  • Use a simple nearest neighbor approach with ST_DWithin in PostGIS.
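
The nearest neighbor technique, for instance, can be sketched in SQL along these lines (a hypothetical sketch only: the table and column names, lidar_veg, lidar_ground, gid, z, the_geom, and the 3.5-unit search radius are placeholders, not the schema actually used in these tests):

```sql
-- Hypothetical sketch of the ST_DWithin nearest neighbor approach:
-- for each vegetation return, find the closest ground return within
-- a small search radius and difference the elevations.
SELECT DISTINCT ON (v.gid)
       v.gid,
       v.z - g.z AS height
FROM   lidar_veg AS v
JOIN   lidar_ground AS g
  ON   ST_DWithin(v.the_geom, g.the_geom, 3.5)
ORDER  BY v.gid, ST_Distance(v.the_geom, g.the_geom);
```

DISTINCT ON, paired with the ORDER BY, keeps only the nearest ground return per point, while the ST_DWithin predicate lets the spatial index prune the candidate pairs.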

The data are from the Ohio Statewide Imagery Program (OSIP), run by the Ohio Geographically Referenced Information Program (OGRIP).  Ohio is blessed with an excellent 2006-2008 LiDAR dataset and a statewide DEM derived from the (effectively) 3.5-foot horizontal posting data (specs may say 5-ft, I can’t remember…).

Speeeed:

So, first to speed, initial results.  Who wins in the speed category?  Measured as points per second (on consistently the same hardware), nearest neighbor wins by a landslide (bigger is better here):

Application:  Points per Second

ArcGIS:  4,504
R:  3,228
PostGIS Raster:  1,451 (maybe as high as 4,500)
PostGIS NN:  18,208

Now, later on, Pierre discovered that changes to indexing, along with query optimization guided by the EXPLAIN query analyzer, tripled the PostGIS Raster query speed, making it about the same speed as ArcGIS.  More on that later.
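
I don’t know exactly which indexing change Pierre made, but one common PostGIS Raster idiom along these lines (table and column names hypothetical) is a functional GiST index on each tile’s footprint, so the point-in-tile lookup becomes an index scan:

```sql
-- Hypothetical sketch: index the raster tiles' footprints so the
-- query planner can use an index scan for point-in-tile lookups.
CREATE INDEX dem_tiles_hull_idx
    ON dem_tiles
 USING gist (ST_ConvexHull(rast));

-- Check the plan; it should show an index scan rather than a seq scan:
EXPLAIN
SELECT ST_Value(t.rast, p.the_geom)
FROM   dem_tiles t, lidar_points p
WHERE  ST_Intersects(ST_ConvexHull(t.rast), p.the_geom);
```

Note the query filters on the same ST_ConvexHull(rast) expression the index is built on, which is what lets the planner use it.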

Figure removed– to be replaced in another post

Accuracy:

A fast calculation is always better, unless you trade off accuracy in some detrimental way.  With PostGIS NN, we’re not interpolating our LiDAR ground point cloud before calculating the difference, so relative to the other three techniques, we could be introducing some bias/artifacts here, a point Jeff Evans makes here.  Overall error relative to the interpolated solution introduced by using an NN technique on this dataset: 0.268 feet.  Whew!  Sigh of relief for me!  We may spend some time looking at the geographic distribution of that error to see if it’s random or otherwise, but that’s very near the error level for the input data.

Addendum:

Eight days ago, Martin Isenburg let the lasheight tool drop.  LAStools is impressively fast.  Preliminary tests by Etienne: 140,450 points per second.

Oy!  A factor of eight faster than PostGIS NN.  And it uses a DEM calculated on the fly using Delaunay triangulation.  Preliminary results indicate a very different DEM than the one calculated by the state:

RMSE: 25.4227446600656

More research will be needed to find the source of the differences: they are not random.

In other news:

PostGIS will get a true KNN technique in the near future.  Such a project is funded, so stay tuned for faster results:

http://www.postgis.org/pipermail/postgis-devel/2011-June/013931.html

Also, index changes to PostGIS could come down the pike, if funded, that would result in bounding box queries that are six times faster, à la:

http://blog.opengeo.org/2011/05/27/pgcon-notes-3/

Between the two optimizations, we could give LAStools a run for its money on speed 🙂 .  In the meantime, preliminary RMSE aside, LAStools (the fifth tool, not formally part of this test) may win hands down.

Posted in Database, Database Optimization, LiDAR Shootout, PostGIS, SQL | Tagged: , , , , , , , , | 9 Comments »

HTML Tags for code pre & code tags

Posted by smathermather on July 16, 2011

I’ve spent little time looking at the formatting on my blog.  As the code has gotten longer, it’s become unacceptable that tabs and other formatting don’t show up correctly.  A little Google-fu, and thanks to this blog post:
http://www.sohtanaka.com/web-design/styling-pre-tags-with-css-code-block/
I’ve reformatted 25+ of my posts with more readable code, with tabs and the whole bit.  I could go through and reformat everything, but I probably won’t.

Anyway, the short and long of it is to use <pre> tags and <code> tags in tandem, which circumvents the problem of escaping all the special characters, including tabs.  Yay for a more readable blog.  (Yes, it happens to be a very quiet Saturday night :).

Posted in Other | Tagged: , | Leave a Comment »

Contours– Symbolized from a single table

Posted by smathermather on July 15, 2011

In a previous post, I restructured the contour data for display in GeoServer, e.g.:


UPDATE base.contours_2
	SET 	div_10 = CAST( contours_2.elevation % 10 AS BOOLEAN ),
		div_20 = CAST( contours_2.elevation % 20 AS BOOLEAN ),
		div_50 = CAST( contours_2.elevation % 50 AS BOOLEAN ),
		div_100 = CAST( contours_2.elevation % 100 AS BOOLEAN ),
		div_250 = CAST( contours_2.elevation % 250 AS BOOLEAN );
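
With those flags populated, pulling out a given interval is a one-line predicate, since elevation % 50 casts to FALSE exactly when the elevation is a multiple of 50.  For example (assuming the column names above), selecting the 50-ft contours:

```sql
-- Multiples of 50 have div_50 = FALSE, so NOT div_50
-- selects the 50-ft contour lines.
SELECT *
FROM   base.contours_2
WHERE  NOT div_50;
```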

The SLD styling for the contours based on the new data structure will be forthcoming with my intern’s upcoming guest post, but in the meantime, I have a teaser of a map snapshot, served up through a GeoExt interface (note the correct contour intervals showing in the legend simply and elegantly because of the data structure and a single SLD):

Contour Map, 10ft and 50ft contours

Over-zoom showing 2ft and 10ft Contours

Posted in Database, Database Optimization, GeoServer, PostGIS | Tagged: , , , , , , , | Leave a Comment »

Landscape Position Continued– absolutely relative position calculation <– Pics!

Posted by smathermather on July 14, 2011

Input:

Output:

Posted in Ecology, Image Processing, ImageMagick, Landscape Position, POV-Ray | Tagged: , , , , | Leave a Comment »

Landscape Position Continued– Median and ImageMagick

Posted by smathermather on July 14, 2011

Highlighting ridges with 250ft buffer (on 2.5ft DEM) with just ImageMagick:


# Median filter (radius 100 px, about 250 ft on a 2.5-ft DEM)
convert lscape_posit.png -median 100 median100.png
# Difference against the original to highlight ridges
composite -compose difference lscape_posit.png median100.png difference_median100.png

Input:

Output:

BTW, median calculations of this size are slow, even in ImageMagick.

Posted in Ecology, Image Processing, ImageMagick, Landscape Position | Tagged: , , | Leave a Comment »