According to the paper, real-time video tracking (augmented reality) was the primary motivation for devising FAST.
The method trains a decision tree on an attribute vector of 16 elements, one per pixel on a circle whose center is the candidate feature point (corner). The resulting tree classifies the candidate as 'corner' or 'non-corner'.
- Segment-Test: Place a Bresenham circle of radius 3 (r=3) around the study point, giving 16 perimeter pixels. The study point is a corner if there are N=12 consecutive perimeter pixels that are all brighter than the center (study pixel) by a threshold 't' — the dark-in-center case — or all darker than the center by 't' — the bright-in-center case.
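The Segment-Test above can be sketched in plain Python. This is a minimal, unoptimized sketch: the offsets are the standard radius-3 Bresenham ring, and t=20 is just an illustrative default (the note does not fix a value).

```python
# Standard 16-pixel Bresenham circle of radius 3, as (dx, dy) offsets,
# clockwise starting from the top.
OFFSETS = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
           (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def _has_run(flags, n):
    # Look for n consecutive True values; doubling the list handles
    # runs that wrap around the circle.
    run = 0
    for f in flags + flags:
        run = run + 1 if f else 0
        if run >= n:
            return True
    return False

def segment_test(img, x, y, t=20, n=12):
    """True if (x, y) passes the segment test: n contiguous ring pixels
    all brighter than center + t (dark-in-center), or all darker than
    center - t (bright-in-center). img is indexed as img[y][x]."""
    c = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in OFFSETS]
    return (_has_run([p > c + t for p in ring], n) or
            _has_run([p < c - t for p in ring], n))
```

A dark pixel surrounded by a bright ring passes (dark-in-center), while a flat patch does not.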
- Label the training data by finding all the corner points in the training images with the Segment-Test.
- Build the decision tree as follows:
- Choose an attribute, say pixel-5 on the perimeter, and classify it as 'Brighter', 'Similar', or 'Darker' with respect to the center. Repeat this test for every candidate point in the training set. At the end of this round, the candidates are divided into 3 groups: Brighter, Similar, and Darker.
- The choice of pixel-5 is based on the fact that it reduces entropy the most (highest information gain), i.e. it produces the most uneven split of classifications going into the next level. This is an ID3 decision tree: each node splits 3-ways, and there are 2 classes (corner: yes/no).
- Repeat the above with other perimeter pixels to grow the decision tree until all leaf nodes are 100% corner or 100% non-corner (entropy of 0).
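The attribute selection described above can be sketched as entropy and information gain over the 3-way split. This is a generic ID3-style sketch, not code from the paper; the (corner, non-corner) count pairs are illustrative.

```python
from math import log2

def entropy(pos, neg):
    """Shannon entropy of a set with pos corner and neg non-corner labels."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(parent, splits):
    """parent: (pos, neg) counts before the split; splits: (pos, neg)
    counts for the Brighter / Similar / Darker subsets produced by
    testing one ring pixel. ID3 picks the pixel maximizing this."""
    total = sum(p + n for p, n in splits)
    remainder = sum((p + n) / total * entropy(p, n) for p, n in splits)
    return entropy(*parent) - remainder
```

A split that perfectly separates corners from non-corners has gain equal to the parent entropy; a split that leaves every subset half-and-half has gain 0.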
- Apply non-maxima suppression to the detected corners to eliminate overlapping features: score each corner by the sum of absolute intensity differences between the center and its circle pixels, and keep only the local maxima.
- The Rosten paper reports that on average 2.2 questions (tree levels?) are needed to classify a point.
- Probably need a DetectorParam struct to group the separate 'threshold' and 'nonMaxSuppression' arguments.
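That struct idea might look like the following dataclass. This is purely hypothetical — the name DetectorParam and the defaults come from the note's own suggestion, not from any existing API.

```python
from dataclasses import dataclass

@dataclass
class DetectorParam:
    """Hypothetical grouping of the two detector arguments mentioned
    above; field names and defaults are illustrative only."""
    threshold: int = 20
    non_max_suppression: bool = True
```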
- The FAST implementation in OpenCV performs a 9-point segment test (FAST-9). It is not the learned method — it does not build a decision tree; it uses the Segment-Test only.
- It's very fast even on the large Obama picture, taking less than 1 second to detect 5000+ corners.
- Using the default threshold (intensity difference), it detects 4 times as many corners as SURF.
- A higher threshold yields fewer detected points, and vice versa.
- Machine learning for high-speed corner detection, Edward Rosten and Tom Drummond
- (see resources)
- Edward Rosten's page: http://mi.eng.cam.ac.uk/~er258/work/fast.html
A few iPhone projects are using FAST.
Edward has submitted a newer and FAST-er method.
- On ID3 Decision Tree http://www.cse.unsw.edu.au/~billw/cs9414/notes/ml/06prop/id3/id3.html