Posts Tagged ‘pmvs’
Posted by smathermather on April 27, 2015
Posted in 3D, Bundler, Image Processing, OpenDroneMap, Optics, Photogrammetry, PMVS, UAS | Tagged: 3D, bundler, CMVS, KAP, Kite Aerial Photography, opendronemap, Photogrammetry, pmvs, sUAS, UAS
Posted by smathermather on April 20, 2015
Posted by smathermather on September 22, 2014
Apparently travelling for 20 days straight back and forth through 3 time zones across 13 hours of time difference makes me calmer, more rational, and a better presenter than normal. All 27 minutes and 35 seconds.
And then don’t forget to check out the rest: http://vimeo.com/foss4g
Posted by smathermather on September 20, 2014
(Yes, I’m using rsync, not tar. Old dog. New tricks.)
edit: let’s throw some code up there, ugly though it may be:
START=$(date +%s) && cd /media/user/USB\ DISK/ && \
  rsync -avz /home/user/Desktop/* . && \
  END=$(date +%s) && DIFF=$(( END - START )) && \
  echo && echo "Processing took $DIFF seconds" &
START=$(date +%s) && cd /media/user/USB\ DISK1/ && \
  rsync -avz /home/user/Desktop/* . && \
  END=$(date +%s) && DIFF=$(( END - START )) && \
  echo && echo "Processing took $DIFF seconds" &
Posted by smathermather on September 16, 2014
I consider myself an artist and scientist. I’ll confess I have let the art go fallow some in recent years, but these are two sides of one coin. If you like either, and especially if you like both, you should check out Tobias Research.
I met Michele at FOSS4G, where only a few short hours passed between the moment she saw my presentation on OpenDroneMap and her using it to create a point cloud. I sat with Michele and her partner in crime, Alex, for a little while, walking them through the (until then) undocumented steps of creating a mesh and texturing it inside MeshLab (to be fair to MeshLab, there are plenty of docs on this, but there were none yet within the OpenDroneMap repo).
So, here are some quick shots of her Kite Aerial Photography images for studying plant / dune dynamics, processed through OpenDroneMap. Stunning kite aerial photography (KAP) work. The groundwork for great and beautiful science:
Posted by smathermather on September 15, 2014
This past Friday at FOSS4G in Portland, I announced the (early) release of OpenDroneMap, a software toolchain for civilian (and humanitarian?) UAS/UAV image processing. The software is currently a simple fork of https://github.com/qwesda/BundlerTools, and will process from unreferenced overlapping photos to an unreferenced point cloud. Directions are included in the repo to create a mesh and UV textured mesh as the subsequent steps, but the aim is to have this all automated in a single workflow.
Projects like Google Streetview, Mapillary, PhotoSynth, and most small UAS (drone) postprocessing software, such as that offered by senseFly, share a commonality– they all use computer vision techniques to create spatial data from un-referenced photography.
OpenDroneMap is an open source project to unlock these computer vision techniques and make them easy to use, so that whether from street level photos or from civilian drone imagery, the average user will be able to generate point clouds, 3D surface models, DEMs, and orthophotography from processing unreferenced photos.
To those who may be wondering: wow cool, but what happens to the data at the end of the day? How do we share it back to a common community? The aim is for the toolchain to also be able to optionally push to a variety of online data repositories: hi-resolution aerials to OpenAerialMap, point clouds to OpenTopography, and digital elevation models to an emerging global repository (yet to be named…). That leaves only digital surface model meshes and UV textured meshes with no global repository home. (If anyone is working on global storage of geographically referenced meshes and textured meshes, please get in touch…).
So, try it out: http://opendronemap.org will point you to the repo. Clone it, fork it, try it out. Let me know what you think.
Test data can be found here: https://github.com/OpenDroneMap/odm_data (credit Fred Judson, Ohio Department of Transportation)
Presentations on it can be found here: https://github.com/OpenDroneMap/presentations and eventually here:
PostScript: Re: meshes and pointclouds on the web, Howard Butler and others are working on some pretty cool tools for handling just this problem technologically. Check out, for example http://plas.io/ and https://github.com/hobu/greyhound
Posted by smathermather on May 6, 2014
In my previous post on Getting Bundler and friends running, I suggested how to modify an existing script to get Bundler and other structure from motion parts/pieces up and running. Here’s my follow-up.
Download (or clone) this repo:
Navigate into the cloned or unzipped directory (on the command line), run
Go have a cup of coffee. Come back. Run
Congratulations. You have an Ubuntu machine capable of all sorts of StructureFromMotion / OpenDroneMap goodness.
Next: a tutorial on how to use the tools you just compiled…
Posted by smathermather on April 27, 2014
Anyone who has jumped down the rabbit hole of computer vision has run into dependency h*ll getting software to run. I jumped down that hole again today with great success that I don’t want to forget (these directions are for Ubuntu, fyi).
First, clone BundlerTools:
This will download and compile (almost) everything for you, which is a wonderful thing. The one exception is graclus. This doesn’t have direct download access anymore– you have to register, and then you will receive an e-mail with the download. So, to get the BundlerTools to work, you will need to post the graclus archive someplace web accessible. Then open and modify install.sh (https://github.com/qwesda/BundlerTools/blob/master/install.sh) and change the following line (128)
to match your new download location.
Now change install.sh to executable:
chmod 700 install.sh
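If you’d rather script that edit, here’s a hypothetical sketch. Everything below is a stand-in: the dummy install.sh replaces the real one from the repo, and MIRROR is a placeholder for wherever you re-hosted your registered graclus download.

```shell
# Build a dummy install.sh standing in for the real script: 127 filler
# lines, then the graclus download URL on line 128.
for i in $(seq 1 127); do echo "# line $i"; done > install.sh
echo 'wget http://old.example.com/graclus1.2.tar.gz' >> install.sh

# Rewrite line 128 to point at your own mirror, then make it executable.
MIRROR="http://my.server.example/graclus1.2.tar.gz"
sed -i "128s|http[^ ]*|$MIRROR|" install.sh
chmod 700 install.sh
```

Against the real install.sh you’d skip the dummy-file lines and just run the sed and chmod steps.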
Posted by smathermather on February 5, 2014
In my previous post, https://smathermather.wordpress.com/2014/02/04/big-dmn-post-photogrammetrically-derived-point-clouds/, I briefly cover software for creating photogrammetrically derived point clouds. I didn’t summarize it like this, but PDPCs can be created in three easy steps:
- Structure from Motion for unordered image collections
- Clustering Views for Multi-view Stereo
- Multi-view stereo (dense point cloud reconstruction)
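With Bundler, CMVS, and PMVS2 compiled, those three steps map onto an invocation sequence roughly like the classic one below. This is a sketch, not a drop-in script: list.txt, the pmvs/ prefix, and the cluster-size/CPU parameters (100 and 4 here) are conventional values and will vary with your install and machine.

```
# 1. Structure from Motion over the unordered image collection
RunBundler.sh

# Convert Bundler's output into the layout PMVS expects
Bundle2PMVS list.txt bundle/bundle.out

# 2. Cluster views so each chunk fits in memory (max ~100 images
#    per cluster, 4 CPUs), then generate per-cluster option files
cmvs pmvs/ 100 4
genOption pmvs/

# 3. Dense multi-view stereo on the first cluster's option file
pmvs2 pmvs/ option-0000
```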
But, unfairly, I gloss over some of the complications of creating meaningful data from PDPC processing. Truth be told, it’s probably a nine-step process:
- Optimize image order according to geography to optimize step 2
- Structure from Motion
- Clustering Views for Multi-view Stereo
- Multi-view stereo
- Create breaklines, apply disparity mapping, or find some other process for refining the surface model
- Generate surface model (mesh)
- Texture mesh
- Render mesh as ortho
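Step 1 deserves a quick illustration. A real pipeline would pull GPS coordinates from EXIF, but as a toy sketch (the filenames and coordinates below are made up), even a plain sort by latitude then longitude gets geographically nearby frames adjacent in the list, which is what step 2’s matching benefits from:

```shell
# Made-up "filename latitude longitude" records standing in for EXIF GPS
cat > photos.txt <<'EOF'
img_003.jpg 41.50 -81.60
img_001.jpg 41.49 -81.61
img_002.jpg 41.49 -81.60
EOF

# Cheap geographic ordering: sort numerically by latitude, then longitude,
# so neighbors in the list are (roughly) neighbors on the ground.
sort -k2,2n -k3,3n photos.txt
```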
Posted by smathermather on February 4, 2014
I chatted with Howard Butler (@howardbutler) today about a project he’s working on with Uday Verma (@udaykverma) called Greyhound (https://github.com/hobu/greyhound), a point cloud querying and streaming framework over websockets for the web and your native apps. It’s a really promising project, and I hope to kick its tires really soon.
The conversation inspired this post, which I’ve been meaning to do for a while summarizing Free and Open Source software for photogrammetrically derived point clouds. Why? Because storage. It’s so easy now to take so many pictures, use a structure from motion (SfM) approach to reconstruct camera positions and sparse point cloud, and then use that with a Multi-View Stereo approach to construct dense point clouds in order to… okay. Getting ahead of myself.
PDPCs (photogrammetrically derived point clouds; I hate the term but haven’t found a better one) are 3D reconstructions of the world based on 2D pictures taken from multiple perspectives. This is ViewMaster on steroids. Scratch that. This is ViewMaster on blood transfusions and Tour de France level micro-doping. This is amazing technology FTW.
So, imagine taking a couple thousand unreferenced tourist images and using them to reconstruct the original camera positions and a sparse cloud of colorized points representing the shell of the Colosseum:
Magical. This is step one. For this we use bundler:
Or maybe OpenMVG.
Ok, now that we know where all our cameras are at, and have a basic sense of the structure of the scene, we can go deeper and reconstruct dense point clouds from a multi-view stereo approach. But, let’s wait a second– this can be memory intensive, so first let us split our scene into chunks we can process in a way that we can put it all back together at the end. Enter Clustering Views for Multi-view Stereo (CMVS).
Honestly, the above image is the calculations from our final step, multi-view stereo (MVS), but broken into bite-size chunks for us in the previous step. Here we apply PMVS2:
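The chunking idea behind CMVS is simple enough to sketch in shell (a toy, of course: CMVS clusters by view overlap and shared features, not by list order):

```shell
# Toy list of 10 image names
for i in $(seq -w 1 10); do echo "img_$i.jpg"; done > images.txt

# Split into chunks of at most 4 images: chunk_aa, chunk_ab, chunk_ac
split -l 4 images.txt chunk_
wc -l chunk_*
```

Each chunk can then be densified independently, and the pieces merged at the end.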
Now, if you’re like me, you want binaries. We have many options here.
If you are willing to depart from the pure open source, VisualSFM is an option:
But, you’ll have to pay for commercial use. Same is true for CMPMVS:
But it has the bonus of returning textured meshes, which is mighty handy. If you are a hobbyist UAS/UAV (Drone) person, this might be a good option. See FlightRiot’s directions for it here:
Me, I’m a bit of a FOSS purist. For this you can roll your own, or get binaries from the Python Photogrammetry Toolbox:
Finally, Mike James is working on some image level georeferencing for point clouds “coming soon”, so stay tuned for that:
PDPCs for the win!