The motion template technique records the locations of blobs at each timestamp. It estimates the direction of a single blob (local, after segmentation) or of all blobs together (global) from the corresponding spatial gradients of the timestamps.
It uses a motion history image (MHI) to keep track of the latest movements together with their timestamps; only a predefined time window of history is kept. The detailed mechanics can be found in Davis99 and Bradski00 (see references below).
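The MHI update rule described in those papers can be sketched in numpy roughly as follows (the function name and `duration` parameter are mine, not the demo's code):

```python
import numpy as np

def update_motion_history(silhouette, mhi, timestamp, duration):
    """Stamp currently-moving pixels with the new timestamp and
    forget pixels that fell outside the time window."""
    mhi = mhi.copy()
    mhi[silhouette > 0] = timestamp                             # newest motion
    mhi[(silhouette == 0) & (mhi < timestamp - duration)] = 0   # too old: clear
    return mhi
```

Each pixel of the MHI thus holds the time it last saw motion, or zero if that was longer ago than `duration`.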
Three functions together provide the motion template technique: one updates the motion history image, one computes its per-pixel gradients, and one estimates the (global or per-segment) orientation from them.
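The gradient and orientation steps can be sketched like this (simplified numpy versions with hypothetical names; the real OpenCV functions also threshold the gradient magnitude to discard silhouette-boundary artifacts, and image y-coordinates pointing down are ignored here):

```python
import numpy as np

def calc_motion_gradient(mhi):
    """Per-pixel direction of the timestamp gradient, in degrees;
    pixels with no gradient at all are masked out."""
    dy, dx = np.gradient(mhi)
    orientation = np.degrees(np.arctan2(dy, dx)) % 360
    mask = (dx != 0) | (dy != 0)
    return orientation, mask

def calc_global_orientation(orientation, mask, mhi, timestamp, duration):
    """Circular mean of the orientations, weighting newer pixels more
    heavily so the estimate follows the latest motion."""
    w = np.clip((mhi - (timestamp - duration)) / duration, 0, 1)[mask]
    ang = np.radians(orientation[mask])
    return np.degrees(np.arctan2((w * np.sin(ang)).sum(),
                                 (w * np.cos(ang)).sum())) % 360
```

Because timestamps grow in the direction the blob moved, the gradient of the MHI points along the motion; the circular mean avoids the 359°-vs-1° averaging trap.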
The demo application uses simple frame differencing and thresholding to obtain the object silhouettes. It keeps the successive frames in a cyclical buffer, and the difference is computed against the Nth previous frame, where N is the buffer size.
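A sketch of that buffering scheme (not the demo's actual code; `N` and `THRESHOLD` are assumed values):

```python
import numpy as np
from collections import deque

N = 4            # cyclical buffer size; the demo ties this to the frame rate
THRESHOLD = 30   # assumed intensity-difference threshold

buf = deque(maxlen=N)  # keeps only the last N frames

def silhouette_from(frame):
    """Threshold the difference against the Nth-previous frame."""
    if len(buf) == N:
        oldest = buf[0]  # the Nth previous frame
        diff = np.abs(frame.astype(int) - oldest.astype(int))
        sil = (diff > THRESHOLD).astype(np.uint8)
    else:
        sil = np.zeros_like(frame, dtype=np.uint8)  # buffer still filling up
    buf.append(frame)
    return sil
```

Differencing against a frame N steps back (rather than the immediately previous one) makes slow-moving objects produce a usable silhouette.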
Visualization: the motion-history-image (MHI) values are converted into the blue channel of an RGB image by scaling the timestamps to fit the 256 discrete levels. The result is a blue-on-black picture showing both newer and older blobs, overlaid with the estimated orientations; older blob locations appear in faded blue. Both the global and the local (per-segment) orientations are drawn.
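The scaling into the blue channel can be sketched as follows (assuming `mhi` holds float timestamps; the function name is mine, and channel 0 is blue per OpenCV's BGR convention):

```python
import numpy as np

def visualize_mhi(mhi, timestamp, duration):
    """Map timestamps in [timestamp - duration, timestamp] to 0..255
    and place them in the blue channel of an otherwise black image."""
    scaled = np.clip((mhi - (timestamp - duration)) / duration, 0, 1)
    scaled[mhi == 0] = 0                           # never-moved pixels stay black
    vis = np.zeros(mhi.shape + (3,), dtype=np.uint8)
    vis[..., 0] = (scaled * 255).astype(np.uint8)  # channel 0 = blue (BGR order)
    return vis
```

Recent motion maps near 255 (bright blue) and motion near the edge of the time window maps near 0, which is exactly the fading effect described above.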
Test Video 1: Road-Side Camera Video
The orientation values fluctuate quite a bit even though the car's direction does not change. Increasing the cyclical buffer size from 4 to 16 helps; a code comment suggests choosing this value according to the video frame rate.
Test Video 2: Aquarium
The fluctuation is less serious than in the roadside-camera video, even with the default values. The relatively smooth movement of the fish is probably the reason.
G. Bradski co-authored Learning OpenCV as well as both motion template papers:
- Bradski, G. and Kaehler, A., Learning OpenCV.
- Bradski, G. and Davis, J., Motion segmentation and pose recognition with motion history gradients.
- Davis, J. and Bradski, G., Real-time motion template gradients using Intel CVLib.