In previous posts, we’ve scanned whole cities and cliff faces, down to small objects like ukuleles, and even tiny pits to identify difficult-to-identify plants. This post goes in a slightly different direction, doing a bit of a meta-scan of another kind of voucher: animal bones.
I wanted to figure out what other interesting use cases there are for OpenDroneMap. For example, could we use it to scan museum collections? Since I don’t currently have a museum collection to test on, I did the virtual version: I rotated a model posted on Sketchfab by the California Academy of Sciences, capturing video as I rotated it, and used that video for the reconstruction. Let’s step through that process.
In these rotations, I make sure to cover all the angles of this polar bear skull. I think of it like orbiting the skull at multiple latitudes, which helps ensure I capture every angle I need.
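The multi-latitude orbit idea can be sketched in a few lines of Python. This is just an illustration of the coverage pattern, not part of the actual workflow; `orbit_positions` is a hypothetical helper that generates evenly spaced viewpoints on a sphere around the subject.

```python
import math

def orbit_positions(radius=1.0, latitudes=(-45, 0, 45), views_per_orbit=24):
    """Generate camera positions on a sphere around the subject:
    one ring of evenly spaced views at each latitude (in degrees)."""
    positions = []
    for lat in latitudes:
        phi = math.radians(lat)
        for i in range(views_per_orbit):
            theta = 2 * math.pi * i / views_per_orbit
            x = radius * math.cos(phi) * math.cos(theta)
            y = radius * math.cos(phi) * math.sin(theta)
            z = radius * math.sin(phi)
            positions.append((x, y, z))
    return positions

cams = orbit_positions()
print(len(cams))  # 3 latitudes x 24 views = 72 viewpoints
```

Three rings is usually a sensible minimum: one roughly level with the subject and one each looking down and up, so concave areas like eye sockets and the underside of the jaw get seen from more than one direction.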
Now that I have this video after a couple minutes of capture, I need those angles as individual frames. OpenDroneMap does have a video mode, but it is currently broken, so this is an acceptable workaround. As a side note: techniques for video-based structure from motion are surprisingly different from those for still images. We will be grabbing a subset of stills from the video, as image-based structure from motion requires larger changes in angle between views than video approaches do.
To convert the video to stills, we will use my second favorite Free and Open Source project: Blender. We use Blender in video editing mode, load our screen capture of the rotated model, set the output format to JPEG so that individual frames are rendered as images, and set the frame step to 10 to sample every 10th frame.
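If you prefer the command line to Blender’s video editor, the same every-10th-frame extraction can be done with ffmpeg. This is an alternative sketch, not the route used in the post; the input and output filenames are placeholders.

```shell
# Keep every 10th frame of the screen capture as numbered JPEGs.
mkdir -p frames
ffmpeg -i capture.mp4 -vf "select='not(mod(n,10))'" -vsync vfr frames/frame_%04d.jpg
```

The `select` filter passes through only frames whose index is a multiple of 10, and `-vsync vfr` keeps the output numbering compact rather than duplicating frames to fill gaps.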
Now we have a big old pile of images for processing in OpenDroneMap. Only one problem: that gray and white background is going to reconstruct too, so we need to remove it. For this, I used IrfanView, although I probably could have done it in GIMP. I simply set a color to replace, with a tolerance value to select the white and gray colors, and replaced them with a black background. We can also optionally crop the image a little, which in this case allowed me to use a less greedy color replace so that the polar bear’s teeth weren’t adversely affected.
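The tolerance-based color replace works like this toy Python sketch. It is a minimal illustration of the idea behind IrfanView’s color-replace step, not its actual implementation; `replace_background` and its parameters are made up for the example.

```python
def replace_background(pixels, targets=((255, 255, 255), (128, 128, 128)),
                       tolerance=40, fill=(0, 0, 0)):
    """Replace any pixel within `tolerance` (per channel) of a target
    background color with `fill`. `pixels` is a flat list of (R, G, B)
    tuples, standing in for an image's pixel data."""
    def near(p, t):
        return all(abs(pc - tc) <= tolerance for pc, tc in zip(p, t))
    return [fill if any(near(p, t) for t in targets) else p for p in pixels]

# White and gray background pixels go black; the bone-colored pixel survives.
cleaned = replace_background([(250, 250, 250), (130, 130, 130), (220, 200, 160)])
print(cleaned)  # [(0, 0, 0), (0, 0, 0), (220, 200, 160)]
```

The tolerance is the knob that matters: too small and flecks of background survive to pollute the reconstruction; too large (too “greedy”) and near-white subject pixels, like the teeth here, get eaten as well.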
Voila! A nice, clean object. If only things could always be so easy. Now we are ready to load it into OpenDroneMap. We’ll set --camera-lens to fisheye. The brown (Brown–Conrady) model could work here too, except we have no camera info like focal length in the EXIF because our camera is virtual. Fisheye helps us get around this, and might even better match the type of virtual camera the website is using.
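For anyone following along, an invocation via the ODM Docker image looks roughly like this. The project path is a placeholder; the only option specific to this post is the lens override.

```shell
# Run ODM on the extracted frames, forcing the fisheye camera model
# since the virtual camera provides no EXIF to auto-detect from.
docker run -ti --rm -v /path/to/project:/datasets/code \
    opendronemap/odm --project-path /datasets --camera-lens fisheye
```

Everything else can stay at defaults; the point is simply that without EXIF, telling ODM which lens model to assume beats letting it guess.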
How does the model turn out? Rather well, frankly. Check out all that concavity that reconstructs quite nicely: