Non-parallax image stitching (i.e. stitching images taken from a fixed camera position, so that the views are related by pure rotation) is a very well-known problem. In the past couple of weeks, I’ve been coding up a full panorama stitching pipeline in Python, implementing the well-known algorithm presented by Matthew Brown and David Lowe at ICCV ’03. My code currently includes a few optimizations as well:
- ORB over SIFT. ORB features are faster to compute and easier to match than SIFT, and their image pyramid provides a comparable degree of scale invariance for typical panoramas. The matching speed comes from ORB’s binary descriptors, which can be compared with cheap Hamming-distance operations (XOR and popcount) instead of floating-point L2 distances.
- Faster bundle adjustment. Bundle adjustment directly over homography elements does use roughly 2x the memory and computation per iteration compared with adjusting camera parameters, but it almost always converges in far fewer iterations. It also removes the need to add images into the bundle adjuster one by one, which greatly improves runtime. I’ll provide more details on this process later.
- Cylindrical projection. Purely for viewing purposes, since planar coordinates look increasingly unnatural as the field of view approaches 180 degrees in either direction. To the best of my knowledge, no Python-based open-source stitcher can do this yet.
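To illustrate why binary descriptors make matching so cheap: the entire distance computation reduces to XOR and popcount. Here’s a minimal NumPy sketch of brute-force Hamming matching of ORB-style 256-bit descriptors (the function name and threshold are mine; a real pipeline would use a k-NN matcher with a ratio test):

```python
import numpy as np

def hamming_match(des_a, des_b, max_dist=64):
    """Brute-force match binary descriptors (N x 32 uint8 rows,
    i.e. 256 bits each) by Hamming distance. Illustrative only."""
    # XOR then popcount yields the Hamming distance for every pair at once.
    dist = np.unpackbits(des_a[:, None, :] ^ des_b[None, :, :], axis=2).sum(axis=2)
    nn = dist.argmin(axis=1)  # nearest neighbour in des_b for each row of des_a
    return [(i, int(j)) for i, j in enumerate(nn) if dist[i, j] <= max_dist]
```

With floating-point SIFT descriptors, the same step needs 128-dimensional L2 distances; here it is a handful of bitwise integer operations per pair.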
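As a toy stand-in for the bundle adjustment step, here is a Gauss-Newton refinement of a single homography directly over its eight free elements, with h33 pinned to 1 as the gauge. The function names are mine, the Jacobian is taken by forward differences for brevity, and a real adjuster would optimize all pairwise terms jointly:

```python
import numpy as np

def residuals(h, src, dst):
    """Reprojection residuals for 8 free homography elements (h33 = 1)."""
    H = np.append(h, 1.0).reshape(3, 3)
    p = np.column_stack([src, np.ones(len(src))]) @ H.T
    return (p[:, :2] / p[:, 2:3] - dst).ravel()

def refine_homography(H0, src, dst, iters=30, eps=1e-6):
    """Gauss-Newton over raw homography elements; toy sketch, names mine."""
    h = H0.ravel()[:8] / H0[2, 2]  # normalise so the last element is 1
    for _ in range(iters):
        r = residuals(h, src, dst)
        # Forward-difference Jacobian, one column per free element.
        J = np.empty((r.size, 8))
        for k in range(8):
            hp = h.copy()
            hp[k] += eps
            J[:, k] = (residuals(hp, src, dst) - r) / eps
        h -= np.linalg.lstsq(J, r, rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)
```

Parameterizing over homography elements means eight unknowns per image pair instead of four-ish camera parameters, which is where the roughly 2x cost per iteration comes from.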
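For reference, a cylindrical warp is typically implemented as a backward map: for each output pixel, compute the planar source pixel to sample, assuming a pinhole camera with focal length f in pixels and the optical centre at the image centre. A sketch (function name mine):

```python
import numpy as np

def cylindrical_inverse_map(w, h, f):
    """For every pixel of the cylindrical output image, return the planar
    source coordinates it should sample (standard backward-warping setup)."""
    xc, yc = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w)).astype(np.float64)
    theta = (xs - xc) / f                  # azimuth angle on the cylinder
    x_src = f * np.tan(theta) + xc         # undo the angular compression
    y_src = (ys - yc) / np.cos(theta) + yc
    return x_src, y_src
```

The source coordinates would then be fed to an interpolating remap; note that tan(theta) diverges as theta approaches 90 degrees, which is exactly why the planar projection breaks down near a 180-degree field of view.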
I have some TODOs lined up as well:
- Adaptive local feature matching. Inspired by this paper, I’d like to add an iterative, cascade-like process that searches for local feature matches between images. This should make stitching much more accurate for images that lack globally distinctive features.
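I haven’t built this yet, but a single pass of such a local search might look like the sketch below: project each keypoint through the current alignment estimate, accept only candidates within a small window of the predicted location, then re-fit the alignment and shrink the window on the next pass. Everything here (names, radius) is illustrative, not the paper’s method:

```python
import numpy as np

def local_rematch(pts_a, pts_b, H, radius=20.0):
    """Hypothetical single pass of a local search: project image-A keypoints
    through the current homography estimate and keep the nearest image-B
    keypoint within `radius` pixels of the prediction."""
    p = np.column_stack([pts_a, np.ones(len(pts_a))]) @ H.T
    pred = p[:, :2] / p[:, 2:3]            # predicted locations in image B
    d = np.linalg.norm(pred[:, None, :] - pts_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(nn) if d[i, j] <= radius]
```

The appeal is that weak, locally repeated features (foliage, brick, water) become usable once the search is confined to where the current alignment says they should be.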
You can find the code here. Feel free to use it where you see fit, but please cite it if you do. Feel free to email me or file an issue on GitHub if you find any bugs. Enjoy!
UPDATE 1: I have been notified that the GitHub link is broken. I will fix this momentarily.
UPDATE 2: Link has been fixed.
UPDATE 3: Added some features.
UPDATE 4: Updated link.