Smathermather's Weblog

Remote Sensing, GIS, Ecology, and Oddball Techniques

Posts Tagged ‘3D’

OpenDroneMap — Paris Code Sprint

Posted by smathermather on February 29, 2016

I failed to make it to the Paris Code Sprint. It just wasn’t in the cards. But, my colleague Dakota and I sprinted anyway, with some help and feedback from the OpenDroneMap community.

So, what did we do? Dakota did most of the work. He hacked away at the cmake branch of ODM, a branch set up by Edgar Riba to substantially improve the installation process for ODM.

  • Fixed odm_orthophoto in the branch so that it produces geotiffs
  • Fixed PMVS so that it is multithreaded again
  • Added rerun-all and rerun-from functions
  • Integrated @lupas78’s additions for an xyz point cloud output
  • Added benchmarking, which gives us an important (if soft) number for comparing runs when the code changes
  • (Technically before the sprint) wrote the first test for OpenDroneMap
  • Cleaned code
What did I do? Mostly, I got caught up with the project. I haven't been very hands-on since the Python port, let alone the cmake branch, so I became a little more pythonistic just by trying to modify the code successfully.
  • I also added PDAL to the build process
  • And I inserted PDAL into the point cloud translation process.

Currently, this means we've dropped support for LAZ output, as I haven't successfully built PDAL with LAZ support; but it stages the project for LAZ support through PDAL, and allows us to tap into additional PDAL functionality in the future.
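For the curious, the PDAL insertion boils down to a single translate call. Here is a minimal sketch, with illustrative file names, assuming a PDAL build that can read the PLY point cloud ODM produces:

pdal translate odm_georeferenced_model.ply odm_georeferenced_model.las

Once PDAL is built with LAZ support, pointing the same call at a .laz output name should be all it takes to bring LAZ back.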

It was an intensive couple of days that would have been improved with French wine, but we were in Parma (Ohio). So, a shout out to the coders in Paris at the same time, and cheers to all.

Posted in 3D, Drone, OpenDroneMap, Photogrammetry, PMVS, UAS | Leave a Comment »

parallel processing in PDAL

Posted by smathermather on January 28, 2016

"Frankfurt Airport tunnel" by Peter Isotalo, own work. Licensed under CC BY-SA 3.0 via Commons.

In my ongoing quest to process all the LiDAR data for Pennsylvania and Ohio into one gigantic usable dataset, I finally had to break down and learn how to do parallel processing in BASH. Yes, I still need to jump on the Python bandwagon (the wagon is even long in the tooth, if we choose to mix metaphors), but BASH makes me soooo happy.

So, in a previous post, I wanted to process relative height in a point cloud. By relative height, I mean height relative to ground. PDAL has a nice utility for this, and it’s pretty easy to use, if you get PDAL installed successfully.


pdal translate 55001640PAN.las 55001640PAN_height.bpf height --writers.bpf.output_dims="X,Y,Z,Height";

Installing PDAL is not too easy, so I used the dockerized version of PDAL and it worked great. Problem is, the dockerized version complicates my commands on the command line, especially if I want to run it on a bunch of files.

Naturally, the next step is to run it on a whole bunch of LiDAR files. For that I need a little control script which I called pdal_height.sh, and then I need to run that in a “for” loop.

#!/bin/bash
# Get the path portion of the input filename
pathname="${1%/*}"

# Get the short name of the file, sans path and sans extension
name=$(basename "$1" .las)

# Mount the file's directory into the container at /data and run the dockerized PDAL against it
docker run -v "$pathname":/data pdal/master pdal translate //data/"$name".las //data/"$name"_height.bpf height --writers.bpf.output_dims="X,Y,Z,Intensity,ReturnNumber,NumberOfReturns,ScanDirectionFlag,EdgeOfFlightLine,Classification,ScanAngleRank,UserData,PointSourceId"
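To put the script to work, make it executable and hand it a single file (paths here are just illustrative):

chmod +x pdal_height.sh
./pdal_height.sh /home/myspecialuser/LiDAR/55001640PAN.las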

Now we need a basic for loop that will take care of sending the las files into our pdal_height.sh, thus looping through all available las files:


for OUTPUT in *.las; do ~/./pdal_height.sh "$OUTPUT"; done

This is great, but I calculated it would take 13 days to complete on my 58366 LiDAR files. We’re talking approximately 41,000 square miles of non-water areas for Ohio, and approximately 44,000 square miles of non-water areas for Pennsylvania. I’m on no particular timeline, but I’m not really that patient. Quick duckduckgo search later, and I remember the GNU Parallel project. It’s wicked easy to use for this use case.

ls *.las | parallel -j24 ~/./pdal_height.sh

How simple! First, we list our las files, then we “pipe” them as a list to parallel, and we tell parallel to spawn 24 independent processes using that list as the input for our pdal_height script.
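As an aside, parallel can also take the file list directly as arguments via its ::: syntax, skipping the ls pipe entirely. A sketch of the equivalent invocation:

parallel -j24 ~/./pdal_height.sh ::: *.las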

Now we can run it on 24 cores simultaneously. Sadly, I have slow disks 😦 so really I ran it like this:

ls *.las | parallel -j6 ~/./pdal_height.sh

Time to upgrade my disks! Finally, I want to process my entire LiDAR dataset irrespective of location. For this, we use the find command, name the starting directory location, and tell it we want to search based on name.

find /home/myspecialuser/LiDAR/ -name "*.las" | parallel -j6 ~/./pdal_height.sh

Estimated completion time: 2 days (13 days of serial work divided across 6 concurrent jobs comes out to roughly 2.2 days). I can live with that until I get better disks. Only one problem: I should make sure this doesn't stop if my network drops for any reason. Let's wrap this in nohup, which will keep the run alive if my session hangs up:

nohup sh -c 'find /home/myspecialuser/LiDAR -name "*.las" | parallel -j6 ~/./pdal_height.sh {}'
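One more safeguard worth knowing about, though not something I used here: GNU Parallel can keep a job log and resume an interrupted run without redoing finished files. A sketch (the log file name is illustrative):

nohup sh -c 'find /home/myspecialuser/LiDAR -name "*.las" | parallel -j6 --joblog height.log --resume ~/./pdal_height.sh {}'

If the run dies, re-running the same command picks up where it left off.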

Posted in 3D, Analysis, parallel, PDAL, pointcloud | Leave a Comment »

OpenDroneMap — the future that awaits (part 삼)

Posted by smathermather on October 27, 2015

Two posts precede this one: ODM — the future that awaits and ODM — the future that awaits (part 이).

Ben Discoe has a good point on the first post, specifically:

As I see it, the biggest gap is not in smoother uploading or cloud processing in the cloud. The biggest gap is Ground Control Points. Until there’s a way to capture those accurately at a prosumer price point, we are doomed to a patchwork of images that don’t align, which is useless for most purposes, like overlaying other geodata.

Ben’s right of course. If drone data is produced, analyzed, and combined in isolation, especially while prosumer and consumer grade drones don’t have verifiable ground control, the data can’t be combined with other geodata.

The larger framework that I’m proposing here side-steps those issues in two ways:

  1. Combine drone data with other data from the start. Drones are a platform and a choice. Open aerial imagery, the best available, should always be used in a larger mosaic. If Landsat is the best you’ve got… Use it. If a local manned flight has better data… use it. If an existing open dataset from a photogrammetric / engineering company is available… use it. And if the drone data gets you those extra pixels… use it. But if you don’t have ground control (which you likely don’t), tie it into the larger mosaic. Use that mosaic as the consistency check.
  2. The above isn’t always practical. Perhaps the existing data are really old, or are too low in resolution. Maybe the campaign is so big and other data sources so poor that the above is impractical. In this case, internal consistency is key. Since OpenDroneMap now leverages OpenSfM, we have the option of doing incremental calculation of camera positions and sparse point clouds. If we have 1000 images and need to add 50, we don’t have to reprocess the first 1000.

Posted in 3D, OpenDroneMap | Leave a Comment »

OpenDroneMap — the future that awaits (part 이)

Posted by smathermather on October 25, 2015

In my previous post, ODM — the future that awaits, I started to chart out OpenDroneMap beyond the toolchain. Here's a bit more, in outline form. More narrative and breakdown to come. (This is the gist.)

Objectives:

Take OpenDroneMap from a simple toolchain to an online processing tool + open aerial dataset. This would be distinct from and complementary to OpenAerialMap:

  1. Explicitly engage and provide a platform for drone enthusiasts to contribute imagery in return for processing services.
  2. Address and serve:
    • Aerial imagery
    • Point clouds
    • Surface models
    • Terrain models
  3. Additionally, as part of a virtuous circle, digitizing to OSM from this aerial imagery would refine the surface models and thus the final aerial imagery
    • More on this later: in short, digitizing OSM against this dataset would result in 3D photogrammetric breaklines, which would in turn refine the quality of surface and terrain models and the aerial imagery.

Outputs / Data products:

  • Aerial basemap
    • (ultimately with filter for time / season?)
  • Point cloud (see e.g. http://plas.io)
  • Digital surface model (similar to Open Terrain)
  • Digital elevation model (in conjunction with Open Terrain)

Likely Software / related projects

Back of the envelope calculations — Mapping a city with drones

If ODM is to take submissions of large portions of data, data collection campaigns may come into play. Here are some back-of-the-envelope calculations for flying whole cities, albeit the medium-sized cities of San Francisco and Cleveland. This ignores time needed for all sorts of things, including coordinating with local air traffic control. As such, this is a best-case-scenario set of estimates; the arithmetic is sketched below the table.

| Drone | Flight Height | Pixel Size | Overlap | Per Flight | City Name | City Area | Total # of Flights | Total Flight Time |
|-------|---------------|------------|---------|------------|-----------|-----------|--------------------|-------------------|
| E384  | 400 ft | 3 cm | 60% | 1.5 sq miles | San Francisco | 50 sq miles | 33 flights | 66 hours |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq miles | San Francisco | 50 sq miles | 100 flights | 200 hours |
| E384  | 400 ft | 3 cm | 60% | 1.5 sq miles | Cleveland | 80 sq miles | 54 flights | 108 hours |
| E384  | 400 ft | 5 cm | 90% | 0.5 sq miles | Cleveland | 80 sq miles | 160 flights | 320 hours |
| Iris+ | 400 ft | 3 cm | 60% | 0.078 sq miles | San Francisco | 50 sq miles | 640 flights | 213 hours |
| Iris+ | 400 ft | 5 cm | 90% | 0.026 sq miles | San Francisco | 50 sq miles | 1920 flights | 640 hours |
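The implied arithmetic, as best I can reconstruct it (a sketch, not spelled out in the original):

flights ≈ city area ÷ area per flight      e.g. 50 ÷ 1.5 ≈ 33 flights for the E384 over San Francisco
total time ≈ flights × time per flight     e.g. 33 flights × 2 hours ≈ 66 hours

which works out to roughly 2 hours per E384 flight and roughly 20 minutes per Iris+ flight.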

Posted in 3D, OpenDroneMap | Leave a Comment »

OpenDroneMap — the future that awaits

Posted by smathermather on October 24, 2015

Do you recall this 2013 post on GeoHipster?

Screen shot of geohipster write-up

Later on, I confessed my secret to making accurate predictions:

screen shot of 2014 predictions


In all this, however, we are only touching the surface of what is possible. After all, while we have a solid start on a drone imagery processing toolchain, we still have gaps. For example, when you are done producing your imagery from ODM, how do you add it to OpenAerialMap? There's no direct automatic workflow here; there isn't even a guide yet.

Screenshot of openaerialmap


And then once this is possible, is there a hosted instance of ODM to which I can just post my raw imagery, and the magical cloud takes care of the rest? Not yet. Not yet.


So, this is the dream. But the dream is bigger and deeper:

I remember first meeting Liz Barry of PublicLab at CrisisMappers in New York in 2014. She spoke about how targeted (artisanal?) PublicLab projects are. They aren’t trying to replace Google Maps one flight at a time, but focus on specific problems and documenting specific truths in order to empower community. She said it far more articulately and precisely, of course, with all sorts of sociological theory and terms woven into the narrative. I wish I had been recording.

Then, Liz contrasted PublicLab with OpenDroneMap. OpenDroneMap could map the world. OpenDroneMap could piece together from disparate parts all the pixels for the world:

  • At a high resolution (spatial and temporal)
  • For everywhere we can fly
  • One drone, balloon, and kite flight at a time
  • And all of it to be pushed into a common and public dataset, built on open source software commonly shared and developed.

Yes. Yes it could, Liz. Exactly what I was thinking, but trying hard to focus on the very next steps.


This future ODM vision (the “How do we map the world with ODM?” vision) relies on a lot of different communities and technologies, from PublicLab's MapKnitter, to Humanitarian OpenStreetMap Team's (HOT's) OpenAerialMap / OpenImageryNetwork, to KnightFoundation / Stamen's OpenTerrain, ++ work by Howard Butler's team on point clouds in the browser (Greyhound, PDAL, plas.io, etc.).

Over the next while, I am going to write more about this and the specifics of where we are now in ODM, but I wanted to let you all know that, while we fight for better point clouds and smoother orthoimagery, the longer vision is still there. Stay tuned.

Posted in 3D, OpenDroneMap | 5 Comments »

Reflections on Goldilocks, Structure from Motion, near scale remote sensing, and the special problems therein

Posted by smathermather on October 19, 2015

Goldilocks and getting your reflection just right…

I have been reading a bit about drone remote sensing of agriculture fields. On one hand, it's amazing, world-changing technology. On the other hand, some part of all of it is bunk. What do I mean? Well, applying techniques created for continent-sized analyses may not scale down well. Why? Well, for one, all those clever techniques (like the Normalized Difference Vegetation Index, as well as its non-normalized siblings) rely heavily on two things: (1) being right on average over a large area; and (2) painting with such a broad brush as to be difficult to confirm or refute.

There. I said it.

Ok, tangible example: you fly a drone over your ag field, stitch the images together, calculate a vegetation index of your choice, and you get a nice map of productivity, or plant stress, or whatever it is that some vendor is selling. One problem: which camera view do you use for each spot on the ground?

Diagram of reflectance gradient on leaf.

I call this the Goldilocks problem in remote sensing: viewing and illumination geometry messing with what you hope are absolute(ish) values of reflectance:

If you use the forward image (away from the sun), you are going to get a hot spot, because the light from the sun reflects more strongly in this direction. If you take the image in line with the sun, you are going to get something a little too dark, because of the lack of backscatter. But if you use the image from just above, you'll get something just right.

Fix this problem (or only fly on cloudy days), and you are going to eliminate a lot of bias in your data. Long-term, addressing this when there are adequate data / images is on my mental wish list for texturing in OpenDroneMap. BTW, the big kids with satellites at their command have to deal with this too. They call it all sorts of things, but “Bidirectional Reflectance Distribution Function” or BRDF is a common moniker.
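For reference, the textbook formulation (standard remote sensing, not spelled out in this post): the BRDF relates the radiance reflected toward the viewer to the light arriving from the source, for every pair of directions:

f_r(ω_i, ω_o) = dL_o(ω_o) / ( L_i(ω_i) · cos θ_i · dω_i )

In other words, how bright a surface looks depends jointly on where the light comes from (ω_i) and where you view it from (ω_o), which is exactly why the same patch of leaf reads differently from different camera positions.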


Meshing — Why do we build a mesh after we build a point cloud?

Ok, another problem I have been giving some thought to… . In my previous post, I address some of the issues with point cloud density as well as appropriate (as opposed to generic) meshing techniques. We take a point cloud (exhibit A):

Dense point cloud

And we convert it to a mesh:

Dense Mesh

As we established yesterday, if we look too closely at the mesh, it's disappointing:

Un-textured mesh of buildings

And so I asserted that the problem is that we aren’t dealing with different types of objects in different ways when building a mesh. I stand by that assertion.

But… why are we doing a point cloud independently of the mesh? Why not build them at the same time? Here, maybe these crude and inaccurate figures will help me communicate the idea:

Diagram of leaf with three camera observations

Diagram of building roof with 3 camera observations

Why aren’t we building that whole surface, rather than just the points that we find as we go? Is this something that something like LSD-SLAM can help with? We would have to establish gradient cut-offs for where we decide where the roof line ends and e.g. the ground begins, but that seems a reasonable compromise. (Perhaps while that’s happening we detect / region grow that geometry, classify it as roof, and wrap it in a set of break lines).

The advantage here is that if we build the structure of the mesh directly from the images, then when we texture the mesh, we don't have to make any guesses about which cameras to use. More importantly, we are making minimal a priori assumptions about structures when building the mesh; I think this will lead to superior vegetation meshes. One disadvantage is that we can't guarantee our mesh is ever complete, and it will likely never be continuous, but hopefully, as a trade-off, it becomes a much better approximation of structure, which will help its use in, e.g., generating orthophotos.

Too abstract? Too dumb? IDK. Curious what you think.

Posted in 3D, OpenDroneMap | Leave a Comment »

OpenDroneMap — Improvements Needed

Posted by smathermather on October 18, 2015

Talking about the future sometimes requires critiquing the present. The wonderful thing about an open source project is we can be quite open about limitations, and discuss ways forward. OpenDroneMap is a really interesting and captivating project… and there’s more work to do.

To understand what work needs to be done, we need to understand OpenDroneMap and structure from motion in general. Some of the limitations are specific to ODM's maturity as a project; others exist even in the commercial closed-source industry leaders. I'll highlight each as I do the walkthrough.

A simplified version of the Structure from Motion (SfM) workflow, as it applies to drone image processing, is as follows:

Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products

This misses some steps, but gives the major themes. Let’s visualize these as drawings and screen shots. (In the interest of full disclosure, the screen shots are from a closed source solution so that I can demonstrate the problems endemic across all software I have tested to date.)

Diagrams / screenshots of the toolchain parts:

Find features & Match features --> Find scene structure

Find features & Match features –> Find scene structure


 

Create Dense Point Cloud

Create Dense Point Cloud


 

Create mesh

Create mesh


 

Texture mesh

Texture mesh


 

And then generate orthophoto and secondary products (no diagram)

Problem space:

Of these, let’s highlight in bold known deficiencies in ODM:

Find features & Match features –> Find scene structure / camera positions –> Create dense point cloud –> Create mesh –> Texture mesh –> Generate orthophoto and other products

(These highlights assume that our new texturing engine that's being written will address the deficiencies there. Time and testing will tell… . This also assumes that the inclusion of OpenSfM in the toolchain fixes the scene structure / camera issues. This assumption also requires more testing.)

Each portion of the pipeline depends on the one before it: if, for example, the camera positions are poor, the point cloud won't be great, and the texturing will be very problematic. If the dense point cloud isn't as dense as possible, features will be lost, and the mesh, textured mesh, orthophoto, and other products will be degraded as well. For example, see these two different densities of point clouds:

More sparse point cloud

Less sparse (dense) point cloud


It becomes clear that the density and veracity of that point cloud lay the groundwork for the remainder of the pipeline.

ODM Priority 1: Improve density / veracity of point cloud

So what about the mesh issues? The meshing process for ODM and its closed-source siblings (with possible exceptions) is problematic. Take, for example, this mesh of a few buildings:

Textured mesh of building

The problems with this mesh become quite apparent when we view the un-textured counterpart:

Un-textured mesh of buildings

We can see many issues with this mesh. This is a problem with all drone image processing tools I have tested to date: geometric surfaces are not treated as planar, and meshing processes treat vegetation, ground, and the built environment equally, and thus model none of them well.

ODM Priority 2: Improve meshing process

Priority 2 is difficult space: it probably requires automated or semi-automated classification of the point cloud and/or input imagery, and while that is simple in the case of buildings, it may be quite complicated in the case of vegetation. Old-school photogrammetry would have hand-digitized hard and soft breaklines for built environments. How we handle this for ODM is an area we have yet to explore.

Conclusions

I am optimistic that ODM's Find features & Match features –> Find scene structure / camera positions step is much improved with the integration of OpenSfM (please comment if you've found otherwise and have test cases to demonstrate it). I am hopeful that the upcoming Texture mesh –> Generate orthophoto improvements will be a good solution. Where we will need to improve in the near future is the Create dense point cloud step. Where every software package I have tested needs improvement, closed source and open source alike, is the Create mesh step.

Posted in 3D, OpenDroneMap | 1 Comment »

MSF Canada Drone Day follow-up

Posted by smathermather on July 13, 2015

Dirk’s MSF Canada Drone Day is officially the first blog post I have “re-blogged”. Please read: https://smathermather.wordpress.com/2015/07/13/msf-canada-drone-day/

or better yet here: http://dirkgorissen.com/2015/07/14/msf-canada-drone-day/

I had the pleasure of co-presenting with Dirk and Ivan, and the rest is well covered in Dirk's post. It came together as an excellent day, and I think you would be hard-pressed to have had a better introduction to drones.

The day was valuable to me as an emerging practitioner. I learned more about the state of the art in hardware, software, regulations, philosophy, and RC control from this day, and it was inspiring to inhabit the same space with such dedicated practitioners for a short time.

Beyond the value of the workshop to the participants, the outcomes were the following, quoted from Dirk's post:

As a first milestone we are looking to pull together a proposal to the Humanitarian Innovation Fund in collaboration with OpenDroneMap and supported by the Missing Maps consortium.

I love the extension of ODM into this space. This is the real value of open source, the opportunity to collaborate across the world, across industries and use cases, and across organizations. Expect to see improvements to ODM in usability, performance, and output qualities from this initiative. More on this later.

Another outcome / learning for me was observing Ivan’s OpenUAV. From his repo:

This is intended to be a repository for design files, instructions, photos, documentation, and everything else needed for people wishing to build and operate a UAV (drone) in a low-income, resource-poor environment. This is not about cutting-edge UAVs, it's about democratizing the technology and getting it into the hands of more people, particularly in poorer countries and humanitarian settings.

Photo of OpenUAV example

Ivan undersells it. This is a pro-quality quadcopter on a very nice price diet, and a brilliant piece of pragmatic engineering.

This little quadcopter will make its way into drone-building workshops I'll be offering in Cleveland and Columbus, Ohio, and in Seoul, South Korea, in August and September. More details forthcoming.

If you are in Cleveland, plan to be at FOSS4G Seoul, or Ohio GIS, come build Ivan’s capable quad.


(BTW, Ivan says with a couple of 4C 8000mAh batteries, this sucker flies for 50 minutes… .)

Posted in 3D, Bundler, Camera Calibration, FOSS4G, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS | Leave a Comment »

OpenAerialMap, OpenImageryNetwork, MapKnitter, OpenTerrain, and OpenDroneMap (cont. 1)

Posted by smathermather on June 7, 2015

Citing my previous post, let’s move on to more specifics on my thoughts regarding the integration of OpenAerialMap, OpenDroneMap, and MapKnitter as projects.

Image from kite over Seneca Golf Course

OpenDroneMap ❤ OpenAerialMap.

OpenAerialMap will become a platform by which drone users can share their imagery under an open license.

So, as the metadata spec for OpenAerialMap and OpenImageryNetwork matures, and as soon as a publicly available place for drone users to push their data comes online, ODM will write appropriate metadata and geotiffs to go into OIN and be indexed by OAM. As an added bonus, ODM should probably be able to optionally auto-upload outputs to the appropriate node on the OpenImageryNetwork.

Lincoln Peak Vineyard

OpenDroneMap ❤ MapKnitter.

MapKnitter / ODM integration is pretty straightforward in my mind too. There are ways that MapKnitter complements ODM, and vice versa. ODM does not have a graphical user interface at this time; MapKnitter promises to fill that role in a future OpenDroneMap implementation. MapKnitter has no image blending or auto-matching tools; OpenDroneMap will soon have both.

  • Ways MapKnitter may help OpenDroneMap:
    • MapKnitter’s clever use of Leaflet to handle affine transformation of images is really exciting, and may help with improving final georeferencing for ODM datasets.
    • Regarding the above, one really useful thing for fliers launching balloons, drones, and kites without GPS would be the ability to quickly and easily perform really approximate georeferencing. I would envision a workflow where a user moves an image to its approximate position and size relative to a background aerial. ODM would be able to take advantage of this approximate georeferencing to optimize matching.
  • Ways OpenDroneMap could benefit MapKnitter
    • For large image datasets, matching images can be very tedious. Automatic feature extraction and matching can help. OpenDroneMap could be adapted to serve back match information to Mapknitter to ease this process. This will become increasingly important as MapKnitter raises the ~60 image limit on images that it can process.
    • A near future version of ODM will have image blending / smoothing / radiometric matching. For the server portion of the MapKnitter infrastructure, this feature could be a really useful addition for production of final mosaics.

These projects (plus OpenTerrain…) are really exciting in their own right. Together, they represent amazing opportunities to foster, cultivate, process, and serve a large community of imagery providers, from individuals and small entities capturing specific datasets using kites, drones, and balloons, to satellite imagery providers hosting their own “image buckets” of open imagery data. Exciting times.

Image over Groth Memorial from kite

Posted in 3D, Bundler, Camera Calibration, FOSS4G, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS | Leave a Comment »

OpenAerialMap, OpenImageryNetwork, MapKnitter, OpenTerrain, and OpenDroneMap

Posted by smathermather on May 29, 2015

This tweet:

is the beginning of some fruitful discussion, I suspect. There are some really awesome projects gaining momentum. I’ll give an overview of them as best I am able.

Kite aerial photography image over bridle riding ring.

Let’s start with the one nearest and dearest to my heart (if you’ve been reading my blog, you can skip this part): OpenDroneMap. OpenDroneMap is an open source toolkit for processing drone, balloon, kite imagery into geographic data. It does this by using fully automated feature-matching between images, which create a 3D point cloud. From that, we can create a 3D surface (mesh), textured mesh, and orthophoto. This guy says it better:

But, it’s just a stand alone, Linux (Ubuntu)-based tool. It requires some geekiness to run, and it does not (at least not yet) act as a platform.  By that I mean, generically, you can’t just upload images to it and get the wonderful output from a service, and we don’t have a place to store and share all this wonderful data that comes from and will be coming from drones and other aerial platforms. This is where (from my selfish perspective) the other projects are so well timed… .

Screenshot of DevelopmentSeed's introductory post for OpenAerialMap

Let’s start with OpenAerialMap. From the Development Seed blog post on it (yes, you should follow the link. Don’t worry, I’ll wait until you return):

OpenAerialMap is a set of tools for searching, sharing, and using open satellite and drone imagery. This initial release includes the core infrastructure to catalog petabytes of open imagery. It also includes an extremely usable API and an elegant web interface to submit, search and download available imagery.

This is a reboot of a couple of previous attempts at solving this problem space, and it’s really exciting to watch passionate and brilliant work take place to make this happen. Also, this is not an easy problem space, and is being really thoughtfully simplified and implemented.

(As a side note, I’m not going to get into the distinction between OpenAerialMap and OpenImageryNetwork — not today anyway)

Preview of Open Terrain tumblr page

Open Terrain is a project whose scope, in part, is to do for terrain models what OpenAerialMap and OpenImageryNetwork will do for open aerial datasets. The projects are informing each other and growing together, which is awesome collaboration to observe.

Finally, MapKnitter has recently been rebooted too, and it's now a really elegant tool for taking a few aerial images and knitting them into a usable map (ok, it always was pretty cool; now it's even more elegant). What's great about MapKnitter is that it specifically addresses the problem of georeferencing balloon, kite, or drone images in a simple-to-use interface in the browser.

Snapshot of MapKnitter landing page

So, back to the question:

Bravo, yes. Let's. I have been thinking about these projects, talking to people, and discerning their strengths, overlaps, and complementary fits as they have emerged. We are headed toward some really great things… . More specific thoughts to come.

Posted in 3D, Bundler, Camera Calibration, FOSS4G, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS | 1 Comment »