A complete analytic solution for geometric distortions in remote sensing (ahem, ignoring atmosphere, of course)
Ok, so here is another thought experiment. This time it will take a few months before I have time to write the code, but maybe the thought experiment will inspire someone. A major problem in photogrammetry, remote sensing, and computer vision is the correction of lens distortion. Thanks to almost a century of work on this problem, and to recent developments in computer vision, your average geek can now calibrate her camera with freely available code. See the Camera Calibration Toolbox for code written in MATLAB; there are also links to other free camera calibration projects.
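To make "lens distortion" concrete, here is a small Python sketch (my own illustration, not code from the toolbox) of the standard radial distortion model that calibration tools fit empirically. The coefficients k1 and k2 are made-up example values; a real calibration would estimate them from photographs of a known target:

```python
# Radial distortion model (illustrative): a point at normalized image
# coordinates (x, y) lands at (x*f, y*f), where f = 1 + k1*r^2 + k2*r^4.
# k1 and k2 here are invented example coefficients, not from any real lens.

def distort(x, y, k1=-0.25, k2=0.05):
    """Apply radial distortion to normalized image coordinates."""
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1=-0.25, k2=0.05, iters=20):
    """Invert the model by fixed-point iteration (converges for
    moderate distortion)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

Calibration estimates the coefficients empirically from images; the thought experiment below asks whether we could skip that step entirely when the lens prescription itself is known.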
I have a real interest in using cheap off-the-shelf cameras for solving small remote sensing problems, and it would be nice to be able to correct distortions in the images with my favorite computational tool, PovRay.
So, how to do this? Well, let’s not treat this as an empirical problem, as most corrections do today. The Camera Calibration Toolbox, for example, requires image inputs from the camera to characterize the distortion. What if, instead, we arrive at a completely analytic solution, using known information about the array of lenses, their geometry, and their indices of refraction? What if we created a virtual lens array identical to our camera’s, and passed our image back through it to cancel the lens distortion effects? This would be the virtual version of a late 19th-century technique discussed by Clarke and Fryer, 1998:
For mapping applications the earliest solutions to the problems associated with large radial lens distortions were by direct optical correction whereby the image was re-projected through the camera and lens system which had captured it. This system was termed the Porro-Koppe Principle after the scientists who perfected it in the latter part of the 19th century. In this manner the geometric distortions in the image were canceled.
How do we build such a system? Well, we start with a photon scene, à la Henrik Wann Jensen, take advantage of a constructive solid geometry lens set, à la Don Barron, and project our image back through a virtual version of our lens set, to be captured on a flat surface by an orthographic camera in PovRay. But here’s an opportunity: if the original scene captured by the camera was not flat, and we knew its three-dimensional properties from other information (a stereo pair or lidar), we could project the image back onto its 3D geometry, then capture it with an orthographic camera, correcting for terrain distortion and camera distortion all in one go. Fun stuff, huh? All implemented in free software: a complete analytic solution for geometric distortions in remote sensing.
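As a very rough sketch of what such a rig might look like in PovRay scene language (every radius, spacing, file name, and the ior value below is a placeholder for illustration, not a real camera's prescription):

```pov
// Sketch only: a virtual Porro-Koppe rig. "distorted.png" stands in for
// the photograph to be corrected; lens geometry is invented.
#version 3.7;
global_settings { photons { count 2000000 } }

// Projector light shooting photons through the slide and lens.
light_source {
  <0, 0, -8> rgb 1
  spotlight point_at <0, 0, 4> radius 15 falloff 20
  photons { refraction on }
}

// The distorted photograph as a transparent "slide".
box {
  <-1, -1, -0.01>, <1, 1, 0.01>
  pigment { image_map { png "distorted.png" once filter all 1 } }
  photons { target refraction on }
  translate <0, 0, -4>
}

// Virtual lens element: CSG intersection of two spheres (placeholder).
intersection {
  sphere { <0, 0, -1.8>, 2 }
  sphere { <0, 0,  1.8>, 2 }
  pigment { rgbt 1 }
  interior { ior 1.5 }
  photons { target refraction on }
}

// Capture plane; double_illuminate so the lit side shows through.
plane { z, 4 pigment { rgb 1 } double_illuminate }

// Orthographic camera records the re-projected (corrected) image.
camera { orthographic location <0, 0, 8> look_at <0, 0, 4> up 2*y right 2*x }
```

A real implementation would need one such CSG element per lens in the camera's prescription, with the actual radii, spacings, and glass indices.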
Just one problem: I’m not quite sure how to project an image in PovRay. Oh, I could do it brute force and create a square transparent pane of color for each pixel (which I may end up doing), but if anyone has a better idea, I’m open to it.
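For what it’s worth, the brute-force route is easy to script. Here is a Python sketch that emits one filtering pane per pixel as PovRay scene text; the tiny hardcoded image below stands in for a real photo, which you would load with something like Pillow:

```python
# Sketch of the brute-force fallback: one translucent colored pane per
# pixel, written out as PovRay box statements. The 2x2 "image" here is a
# hardcoded placeholder for an actual photograph.

def pixel_panes(pixels, pane_size=0.1, z=0.0):
    """pixels: rows of (r, g, b) tuples in 0..255. Returns PovRay scene
    text with one filtering pane per pixel, centered on the origin."""
    rows, cols = len(pixels), len(pixels[0])
    sdl = []
    for i, row in enumerate(pixels):
        for j, (r, g, b) in enumerate(row):
            # Center each pane; flip i so row 0 is the top of the image.
            x = (j - (cols - 1) / 2) * pane_size
            y = ((rows - 1) / 2 - i) * pane_size
            h = pane_size / 2
            sdl.append(
                "box {{ <{:.4f},{:.4f},{:.4f}>, <{:.4f},{:.4f},{:.4f}> "
                "pigment {{ rgbf <{:.4f},{:.4f},{:.4f},1> }} }}".format(
                    x - h, y - h, z - 0.001, x + h, y + h, z + 0.001,
                    r / 255, g / 255, b / 255))
    return "\n".join(sdl)

tiny = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)]]
print(pixel_panes(tiny))
```

For a megapixel image this means a million boxes, which PovRay can handle but will parse slowly; a single box with an image_map pigment using filter would likely do the same job far more cheaply.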
T.A. Clarke and J.G. Fryer, Photogrammetric Record, 16(91): 51-66, April 1998. Found at: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/ref.html