Computer Vision: OpenCV, Feature Tracking, and Beyond--from Making Things See by Greg Borenstein
In the 1960s, the legendary Stanford artificial intelligence pioneer John McCarthy famously gave a graduate student the job of “solving” computer vision as a summer project. That problem has instead occupied an entire community of academic researchers for the past 40 years. And, in many ways, the first real breakthroughs have only come in the last decade or so, with the Kinect being one of the crown jewels of these recent developments.
One major product of the last 40 years of computer vision research is an open source library called OpenCV (http://opencv.willowgarage.com).
And, lucky for us, there’s a great library that makes it really easy to use OpenCV with Processing: OpenCV for Processing (http://ubaa.net/shared/processing/opencv/).
The documentation for that library will get you started, and O’Reilly’s book on the topic is the definitive reference: Learning OpenCV by Gary Bradski and Adrian Kaehler (http://shop.oreilly.com/product/9780596516130.do).
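To give you a sense of how little code the library demands, here’s a minimal face-detection sketch modeled on the example in the library’s documentation (it assumes you’ve installed OpenCV for Processing and have a webcam attached; check the docs for the available cascade constants):

import hypermedia.video.*;   // the OpenCV for Processing library
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                   // start reading from the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // load a face classifier
}

void draw() {
  opencv.read();                 // grab the current video frame
  image(opencv.image(), 0, 0);   // draw it to the sketch window
  Rectangle[] faces = opencv.detect();  // run the classifier on the frame
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

Three lines of setup and a single call to detect() do all the computer vision work; everything else is ordinary Processing drawing code.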
OpenCV’s tools are designed to process individual images. While we can use them to analyze recorded footage or live video, very few of them actually account for the movement of objects over time. In the last decade or so, though, researchers have developed new techniques that use the time dimension of moving images to extract additional information. This has led to a number of breakthrough techniques, including camera tracking, panorama stitching, and 3D scene reconstruction. All of these applications are built on a fundamental idea called “feature detection.” The software starts with a single still frame and detects small pieces of that frame that are particularly recognizable, called “features.” Then, when examining subsequent frames, the software looks for the same features in adjacent parts of the image to see if they’ve moved. If these features correspond to parts of the world that are themselves fixed (for example, the corner of a windowsill or the edge of a fence post), then the movement of the features tells you about the movement of the camera itself. If you track enough of these features, you can combine the multiple frames into a single panorama, calculate the movement of the camera, or, if your camera is a depth camera, build a full 3D reconstruction of the entire scene or room.
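To make “recognizable” concrete, here’s a toy Processing sketch of the classic Harris corner measure, one of the standard scores used by feature detectors: it marks points where brightness changes sharply in both directions at once, which is exactly what makes a spot easy to find again in the next frame. The filename here is hypothetical and the threshold needs tuning by eye; OpenCV’s real detectors add smoothing, non-maximum suppression, and far better performance.

PImage img;

void setup() {
  size(640, 480);
  img = loadImage("frame.jpg");  // any still frame from your video (hypothetical file)
  img.resize(width, height);
  img.filter(GRAY);              // work on brightness only
  image(img, 0, 0);
  img.loadPixels();

  float k = 0.04;           // standard Harris sensitivity constant
  float threshold = 5.0e8;  // tune by eye for your image

  noFill();
  stroke(255, 0, 0);
  for (int y = 2; y < height - 2; y++) {
    for (int x = 2; x < width - 2; x++) {
      // Accumulate gradient products over a 3x3 window around (x, y):
      // this "structure tensor" summarizes the local texture.
      float sxx = 0, syy = 0, sxy = 0;
      for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
          int i = (y + dy) * width + (x + dx);
          // central-difference brightness gradients
          float ix = brightness(img.pixels[i + 1]) - brightness(img.pixels[i - 1]);
          float iy = brightness(img.pixels[i + width]) - brightness(img.pixels[i - width]);
          sxx += ix * ix;
          syy += iy * iy;
          sxy += ix * iy;
        }
      }
      // Harris response: large only where brightness varies strongly
      // in both directions, i.e., at corner-like features.
      float response = (sxx * syy - sxy * sxy) - k * (sxx + syy) * (sxx + syy);
      if (response > threshold) {
        ellipse(x, y, 4, 4);  // mark the feature
      }
    }
  }
}

A full tracking pipeline would also give each feature a compact description so it can be matched against candidates in the next frame, which is what the panorama and reconstruction applications above depend on.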
If you want to learn more about feature tracking and the other advanced techniques that have arisen in recent computer vision research, I highly recommend Computer Vision: Algorithms and Applications by Richard Szeliski of Microsoft Research (http://szeliski.org/Book). It is a rigorous treatment of the contemporary state of the art. The book grew out of Szeliski’s teaching in the University of Washington computer science department, so it definitely has some math in it. But if you’re excited about the subject, go slowly, and use the Internet to fill in the gaps in your background, there’s no better way to dive deeply into the field.