'''Features from accelerated segment test (FAST)''' is a [[corner detection]] method which can be used to extract [[Feature (computer vision)|feature]] points and later to track and map objects in many [[computer vision]] tasks. The FAST corner detector was originally developed by Edward Rosten and Tom Drummond, and was published in 2006.<ref>
{{cite book
|last1=Rosten |first1=Edward
|last2=Drummond |first2=Tom
|title=Computer Vision – ECCV 2006
|s2cid=1388140
|date=2006
|chapter=Machine Learning for High-speed Corner Detection
|series=Lecture Notes in Computer Science
|volume=3951
|pages=430–443
|doi=10.1007/11744023_34
|isbn=978-3-540-33832-1
}}
</ref> The most promising advantage of the FAST [[corner detector]] is its computational efficiency. As its name suggests, it is indeed faster than many other well-known feature extraction methods, such as the [[difference of Gaussians]] (DoG) used by the [[Scale-invariant feature transform|SIFT]], [[Corner detection#The SUSAN corner detector|SUSAN]] and [[Harris affine region detector|Harris]] detectors. Moreover, when machine learning techniques are applied, superior performance in terms of computation time and resources can be realised. The FAST corner detector is therefore very suitable for real-time video processing applications because of this high-speed performance.
 
== Segment test detector ==
[[File:FAST Corner Detector.jpg|thumb|The pixels used by the FAST corner detector]]
The FAST corner detector uses a circle of 16 pixels (a [[Midpoint circle algorithm|Bresenham circle]] of radius 3) to classify whether a candidate point p is actually a corner. Each pixel in the circle is labeled with an integer from 1 to 16, clockwise. If a set of N contiguous pixels in the circle are all brighter than the intensity of the candidate pixel p (denoted by I<sub>p</sub>) plus a threshold value t, or all darker than the intensity of p minus t, then p is classified as a corner. The conditions can be written as:
*Condition 1: A set of N contiguous pixels S, <math>\forall x \in S</math>, <math>I_x > I_p + t</math> (a dark corner on a bright background)
*Condition 2: A set of N contiguous pixels S, <math>\forall x \in S</math>, <math>I_x < I_p - t</math> (a bright corner on a dark background)
So when either of the two conditions is met, the candidate p can be classified as a corner. There is a tradeoff in choosing N, the number of contiguous pixels, and the threshold value t. On the one hand, the number of detected corner points should not be too large; on the other hand, high performance should not be achieved by sacrificing computational efficiency. Without the improvement of [[machine learning]], N is usually chosen as 12, in which case a high-speed test can be applied to exclude non-corner points.
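The segment test above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the published implementation: the image is assumed to be a plain 2D grayscale array indexed as img[y][x], and the offset table lists the 16 pixels of a Bresenham circle of radius 3.

```python
# Offsets (dx, dy) of the 16 circle pixels, listed clockwise starting from
# pixel 1 directly above the candidate pixel.
CIRCLE_OFFSETS = [
    (0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
    (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3),
]

def is_corner(img, x, y, t, n=12):
    """Segment test: is (x, y) a corner for threshold t and run length n?"""
    ip = img[y][x]
    # State of each circle pixel: +1 if brighter than Ip + t (condition 1),
    # -1 if darker than Ip - t (condition 2), 0 if similar.
    states = []
    for dx, dy in CIRCLE_OFFSETS:
        ix = img[y + dy][x + dx]
        states.append(1 if ix > ip + t else (-1 if ix < ip - t else 0))
    # Look for n contiguous pixels, wrapping around the circle, that are
    # all brighter or all darker than the candidate.
    for target in (1, -1):
        run = 0
        for s in states + states[:n - 1]:  # repeat a prefix to handle wrap-around
            run = run + 1 if s == target else 0
            if run >= n:
                return True
    return False
```

For example, a dark pixel surrounded by a uniformly bright circle satisfies condition 1 and is reported as a corner, while a pixel in a flat region is not.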
== High-speed test ==
The high-speed test for rejecting non-corner points is performed by examining 4 example pixels, namely pixels 1, 9, 5 and 13. Because there must be at least 12 contiguous pixels that are either all brighter or all darker than the candidate corner, at least 3 of these 4 example pixels must all be brighter or darker than the candidate corner. First, pixels 1 and 9 are examined: if both I<sub>1</sub> and I<sub>9</sub> are within [I<sub>p</sub> - t, I<sub>p</sub> + t], then the candidate p is not a corner. Otherwise pixels 5 and 13 are further examined to check whether three of the four are brighter than I<sub>p</sub> + t or darker than I<sub>p</sub> - t. If three of them are either brighter or darker, the rest of the pixels are then examined for the final conclusion. According to the inventor in his first paper,<ref>Edward Rosten, [http://edwardrosten.com/work/rosten_2005_annotations.pdf Real-time Video Annotations for Augmented Reality]</ref> on average 3.8 pixels need to be checked per candidate corner pixel. Compared with the 8.5 pixels examined on average per candidate by the full segment test, this is a great reduction which can substantially improve performance.
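The rejection step can be sketched as follows, assuming the same img[y][x] grayscale-array convention as above; the function name is illustrative, not from the original code.

```python
def high_speed_reject(img, x, y, t):
    """Quick rejection for N = 12 using circle pixels 1, 9, 5 and 13.
    Returns False when (x, y) can already be ruled out as a corner;
    True means the full segment test still has to be run."""
    ip = img[y][x]
    # Pixels 1, 9, 5 and 13 lie at the top, bottom, right and left of the
    # radius-3 Bresenham circle around the candidate.
    probes = [img[y - 3][x], img[y + 3][x], img[y][x + 3], img[y][x - 3]]
    p1, p9 = probes[0], probes[1]
    # If pixels 1 and 9 both lie within [Ip - t, Ip + t], there cannot be
    # 12 contiguous circle pixels that are all brighter or all darker.
    if abs(p1 - ip) <= t and abs(p9 - ip) <= t:
        return False
    # Otherwise at least 3 of the 4 probes must agree in one direction.
    brighter = sum(v > ip + t for v in probes)
    darker = sum(v < ip - t for v in probes)
    return brighter >= 3 or darker >= 3
```

A flat patch is rejected immediately by the first check, so the remaining 12 circle pixels are never read for most of the image.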
 
However, there are several weaknesses of this test method:
# The high-speed test does not generalize well for N < 12. If N < 12, a candidate p could be a corner even though only 2 of the 4 example pixels are brighter than I<sub>p</sub> + t or darker than I<sub>p</sub> - t.
# The efficiency of the detector depends on the choice and ordering of the selected test pixels. However, it is unlikely that the chosen pixels are optimal with respect to the distribution of corner appearances.
# Multiple features are detected adjacent to one another.
 
== Improvement with machine learning ==
In order to address the first two weaknesses of the high-speed test, a [[machine learning]] approach is introduced to improve the detection algorithm. This approach operates in two stages. Firstly, corner detection with a given N is run on a set of training images, preferably drawn from the target application ___domain. Corners are detected through the simplest implementation, which literally extracts a ring of 16 pixels and compares the intensity values against an appropriate threshold.
 
For candidate p, each ___location on the circle x ∈ {1, 2, 3, ..., 16} can be denoted by p→x. The state of each pixel, S<sub>p→x</sub> must be in one of the following three states:
* d, I<sub>p→x</sub> ≤ I<sub>p</sub> - t (darker)
* s, I<sub>p</sub> - t < I<sub>p→x</sub> < I<sub>p</sub> + t (similar)
* b, I<sub>p→x</sub> ≥ I<sub>p</sub> + t (brighter)
 
Then choosing an x (the same for all p) partitions P (the set of all pixels of all training images) into 3 different subsets, P<sub>d</sub>, P<sub>s</sub>, P<sub>b</sub> where:
* P<sub>d</sub> = {p ∈ P : S<sub>p→x</sub> = d }
* P<sub>s</sub> = {p ∈ P : S<sub>p→x</sub> = s }
* P<sub>b</sub> = {p ∈ P : S<sub>p→x</sub> = b }
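The three-state classification and the resulting partition can be sketched as follows; the training-data format (a list of (I<sub>p</sub>, circle-intensities) pairs) is a hypothetical illustration, not the format used in the original work.

```python
def pixel_state(ip, ix, t):
    """Classify circle pixel x relative to candidate p: 'd', 's' or 'b'."""
    if ix <= ip - t:
        return 'd'   # darker
    if ix >= ip + t:
        return 'b'   # brighter
    return 's'       # similar

def partition(training, x, t):
    """Split all candidates by the state of circle ___location x (same x for all p).
    `training` is a list of (ip, circle) pairs, where `circle` holds the 16
    circle-pixel intensities."""
    subsets = {'d': [], 's': [], 'b': []}
    for ip, circle in training:
        subsets[pixel_state(ip, circle[x], t)].append((ip, circle))
    return subsets
```

Each choice of x yields a different split of the training set P into P<sub>d</sub>, P<sub>s</sub> and P<sub>b</sub>, which is what the next stage scores by information gain.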
 
Secondly, a [[decision tree]] algorithm, the [[ID3 algorithm]] is applied to the 16 locations in order to achieve the maximum [[information gain]]. Let K<sub>p</sub> be a boolean variable which indicates whether p is a corner, then the [[Entropy (information theory)|entropy]] of K<sub>p</sub> is used to measure the information of p being a corner. For a set of pixels Q, the total [[entropy]] of K<sub>Q</sub> (not normalized) is:
*H(Q) = ( c + n ) log<sub>2</sub>( c + n ) - c log<sub>2</sub>c - n log<sub>2</sub>n
** where c = |{ i ∈ Q: K<sub>i</sub> is true}| (number of corners)
** where n = |{ i ∈ Q: K<sub>i</sub> is false}| (number of non-corners)
The [[information gain]] of the partition is then measured by:
*H<sub>g</sub> = H(P) - H(P<sub>b</sub>) - H(P<sub>s</sub>) - H(P<sub>d</sub>)
 
A recursive process is applied to each subset in order to select the x that maximizes the [[information gain]]. For example, first an x is selected to partition P into P<sub>d</sub>, P<sub>s</sub>, P<sub>b</sub> with the most information; then for each subset P<sub>d</sub>, P<sub>s</sub>, P<sub>b</sub>, another y is selected to yield the most information gain (note that y could be the same as x). This recursive process terminates when the entropy of a subset is zero, meaning that all pixels in that subset are either corners or non-corners.
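The entropy and information-gain formulas above can be written directly as a small sketch; the (corners, non-corners) pair representation of each subset is an assumption made for brevity.

```python
from math import log2

def entropy(c, n):
    """Unnormalised entropy H(Q) = (c+n)log2(c+n) - c*log2(c) - n*log2(n),
    where c and n count the corners and non-corners in subset Q."""
    def xlog2x(v):
        return v * log2(v) if v > 0 else 0.0  # convention: 0*log2(0) = 0
    return xlog2x(c + n) - xlog2x(c) - xlog2x(n)

def information_gain(P, Pd, Ps, Pb):
    """H_g = H(P) - H(Pb) - H(Ps) - H(Pd).
    Each argument is a (corners, non_corners) pair for that subset."""
    return entropy(*P) - entropy(*Pb) - entropy(*Ps) - entropy(*Pd)
```

A split that perfectly separates corners from non-corners achieves the maximum gain H(P), while a split that leaves every subset equally mixed yields zero gain, which is exactly the criterion ID3 uses to pick x at each node.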
 
This generated [[decision tree]] can then be converted into programming code, such as [[C (programming language)|C]] or [[C++]], as a series of nested if-else statements. For optimization purposes, [[profile-guided optimization]] is used to compile the code. The compiled code is later used as a corner detector for other images.
 
Notice that the corners detected using this [[decision tree]] algorithm may be slightly different from the results of the segment test detector, because the [[decision tree model]] depends on the training data, which cannot cover all possible corners.
 
== Non-maximum suppression ==
"Since the segment test does not compute a corner response function, [[non-maximum suppression]] couldcan not be applied directly to the resulting features." However, if N is fixed, for each pixel p the corner strength is defined as the maximum value of t that makes p a corner. Two approaches therefore could be used:
*A [[binary search algorithm]] can be applied to find the largest t for which p is still a corner: each probe runs the decision tree algorithm with a different t, and the largest t that succeeds is regarded as the corner strength.
*Another approach is an iteration scheme, in which t is repeatedly increased to the smallest value that still passes the test.
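The binary-search approach can be sketched as follows; the `is_corner` predicate parameter stands in for any segment-test implementation and its name is illustrative, not part of a published API.

```python
def corner_strength(img, x, y, is_corner, t_max=255):
    """Binary-search the largest threshold t at which p is still a corner.
    `is_corner(img, x, y, t)` is any segment-test predicate; assumes that
    if p is a corner at threshold t, it is also a corner at any t' < t."""
    if not is_corner(img, x, y, 0):
        return 0                       # not a corner at all
    lo, hi = 0, t_max                  # invariant: p is a corner at lo
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_corner(img, x, y, mid):
            lo = mid                   # still a corner: strength >= mid
        else:
            hi = mid - 1               # fails above mid - 1
    return lo
```

The monotonicity assumption (raising t only removes corners) is what makes the binary search valid, and it needs only about log<sub>2</sub>(t_max) evaluations of the detector per pixel instead of one per candidate t.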
 
== FAST-ER: Enhanced repeatability ==
The FAST-ER detector is an improvement of the FAST detector that optimizes the structure of the decision tree with a [[metaheuristic]] algorithm, in this case [[simulated annealing]], making it better suited to points with high repeatability. However, since [[simulated annealing]] is a metaheuristic algorithm, each run generates a different optimized decision tree, so it is better to use a sufficiently large number of iterations to find a solution close to the true optimum. According to Rosten, optimizing the FAST detector takes about 200 hours on a [[Pentium 4]] at 3&nbsp;GHz, corresponding to 100 repeats of 100,000 iterations each.
 
== Comparison with other detectors ==
In Rosten's research,<ref>Edward Rosten, [http://lanl.arXiv.org/pdf/0810.2434 FASTER and better: A machine learning approach to corner detection]</ref> the FAST and FAST-ER detectors are evaluated on several different datasets and compared with the [[DoG]], [[Harris affine region detector|Harris]], [[Harris–Laplace detector|Harris–Laplace]], [[Shi-Tomasi]], and [[Corner detection#The SUSAN corner detector|SUSAN]] corner detectors.
 
 
* Speed tests were performed on a 3.0&nbsp;GHz [[Pentium D]] computer. The dataset is divided into a training set and a test set. The training set consists of 101 [[monochrome]] images with a resolution of 992×668 pixels. The test set consists of 4968 frames of [[monochrome]] 352×288 video.
 
== Bibliography ==
* {{cite book | last=Rosten | first=Edward | author2=Tom Drummond | title=Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1 | chapter=Fusing points and lines for high performance tracking | url=http://edwardrosten.com/work/rosten_2005_tracking.pdf | year=2005 | doi=10.1109/ICCV.2005.104 | volume=2 | pages=1508–1511 | isbn=978-0-7695-2334-7 | citeseerx=10.1.1.60.4715 | s2cid=1505168 }}
* {{cite book | last=Rosten | first=Edward | author2=Tom Drummond | title=Computer Vision – ECCV 2006 | chapter=Machine Learning for High-Speed Corner Detection | series=Lecture Notes in Computer Science | url=http://edwardrosten.com/work/rosten_2006_machine.pdf | year=2006 | doi=10.1007/11744023_34 | volume=3951 | pages=430–443 | isbn=978-3-540-33832-1 | citeseerx=10.1.1.64.8513 | s2cid=1388140 }}
* {{cite journal | last=Rosten | first=Edward | author2=Reid Porter | author3=Tom Drummond | title=FASTER and better: A machine learning approach to corner detection | journal=IEEE Transactions on Pattern Analysis and Machine Intelligence | arxiv=0810.2434 | year=2010 | doi=10.1109/TPAMI.2008.275 | pmid=19926902 | volume=32 | issue=1 | pages=105–119 | s2cid=206764370 }}
 
== External links ==
 
<!--- Categories --->
[[Category:Feature detection (computer vision)]]
 
[[Category:Artificial intelligence]]