Lucas and Kanade published a sparse tracking method. It assumes certain properties of a pixel in motion, derives a velocity equation, and tracks each feature point from one frame to the next by iterative approximation (Newton's method). The assumed properties are: (1) brightness changes little between frames, (2) the motion is small, and (3) neighboring pixels move in the same direction. The last property is what avoids the aperture problem. The 'matching' compares pixel intensities between patch areas; the best match is the one that minimizes the sum of squared differences.
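A minimal NumPy sketch of one solve of that velocity equation (this is my own toy version over a whole patch, not OpenCV's implementation; the linearized brightness-constancy equation Ix*dx + Iy*dy + It = 0 is solved by least squares):

```python
import numpy as np

def lk_flow(I, J):
    # Linearized brightness constancy: Ix*dx + Iy*dy + It ~= 0 at every
    # pixel of the patch; solve the over-determined system by least
    # squares. Lucas-Kanade iterates this step to converge on the motion.
    Iy, Ix = np.gradient(I.astype(float))   # np.gradient: axis 0 is y
    It = J.astype(float) - I.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d  # (dx, dy)
```

For a small sub-pixel shift of a smooth patch, a single solve already recovers the displacement closely; larger motions need the iterations and the pyramid described below.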
An image pyramid is built to handle large movements. The algorithm starts at the top of the pyramid (lowest resolution) and works its way down to the original resolution. The displacement vector d^L computed at each level is passed down to the next level as the initial estimate, so the final vector d is the sum of the per-level vectors, each scaled to the base resolution (d = sum over L of 2^L * d^L in Bouguet's notation).
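The coarse-to-fine loop can be sketched like this (my own toy version: 2x2 average pooling stands in for a proper Gaussian pyramid, and each level refines the doubled estimate from the level above by warping and re-solving):

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling -- a crude stand-in for a Gaussian pyramid level
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def translate(img, dx, dy):
    # Sample img at (x + dx, y + dy) with edge-clamped bilinear interpolation.
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    xs = np.clip(x + dx, 0, W - 1.001)
    ys = np.clip(y + dy, 0, H - 1.001)
    x0, y0 = xs.astype(int), ys.astype(int)
    fx, fy = xs - x0, ys - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def lk_step(I, J):
    # One least-squares solve of Ix*dx + Iy*dy + It = 0 over the whole image.
    Iy, Ix = np.gradient(I)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -(J - I).ravel(), rcond=None)
    return d

def pyramidal_lk(I, J, levels=3, iters=5):
    pI, pJ = [I.astype(float)], [J.astype(float)]
    for _ in range(levels - 1):
        pI.append(downsample(pI[-1]))
        pJ.append(downsample(pJ[-1]))
    d = np.zeros(2)
    for L in reversed(range(levels)):        # coarsest -> finest
        d *= 2                               # estimate scales with resolution
        for _ in range(iters):
            Jw = translate(pJ[L], d[0], d[1])  # warp J back by current guess
            d += lk_step(pI[L], Jw)            # refine with the residual flow
    return d
```

Because the coarse levels see a shrunken version of the motion, a displacement too large for a single linearized solve at full resolution becomes small enough at the top of the pyramid.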
Sample (lkdemo) - Lucas-Kanade Pyramid
- The demo app uses GoodFeaturesToTrack to find feature points and refines their locations with cornerSubPix(). The user presses a key to (re)initialize the set of feature points from the current frame, and can also select points in the video to add to the set. The current frame becomes the reference frame for the next one. Reference feature points that fail to be tracked are dropped. Re-initializing the set of feature points is required to track new objects entering the scene.
- Noticed that feature points were quickly lost with fewer pyramid levels as they moved away from the camera (tested with the road-side camera video).
- Using the default parameters gives pretty good tracking accuracy, though it is too slow for real-time video on this PC.
- Learning OpenCV (Book)
- "Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm", by Jean-Yves Bouguet.