Since 1981 RANSAC has become a fundamental tool in the [[computer vision]] and image processing community. In 2006, for the 25th anniversary of the algorithm, a workshop was organized at the International [[Conference on Computer Vision and Pattern Recognition]] (CVPR) to summarize the most recent contributions and variations of the original algorithm, mostly intended to improve the speed of the algorithm and the robustness and accuracy of the estimated solution, and to reduce the dependence on user-defined constants.
RANSAC can be sensitive to the choice of the noise threshold that defines which data points fit a model instantiated with a certain set of parameters. If such a threshold is too large, then all the hypotheses tend to be ranked equally (good). On the other hand, when the noise threshold is too small, the estimated parameters tend to be unstable (i.e. simply adding or removing a datum from the set of inliers may cause the estimate of the parameters to fluctuate). To partially compensate for this undesirable effect, Torr et al. proposed two modifications of RANSAC called MSAC (M-estimator SAmple and Consensus) and MLESAC (Maximum Likelihood Estimation SAmple and Consensus).<ref>P.H.S. Torr and A. Zisserman, [http://www.academia.edu/download/3436793/torr_mlesac.pdf MLESAC: A new robust estimator with application to estimating image geometry]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}, Journal of Computer Vision and Image Understanding 78 (2000), no. 1, 138–156.</ref> The main idea is to evaluate the quality of the consensus set (i.e. the data that fit a model and a certain set of parameters) by calculating its likelihood (whereas in the original formulation by Fischler and Bolles the rank was the cardinality of such a set). An extension of MLESAC that takes into account the prior probabilities associated with the input dataset was proposed by Tordoff and Murray.<ref>B. J. Tordoff and D. W. Murray, [https://ieeexplore.ieee.org/abstract/document/1498749/ Guided-MLESAC: Faster image transform estimation by using matching priors], IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (2005), no. 10, 1523–1535.</ref> The resulting algorithm is dubbed Guided-MLESAC. Along similar lines, Chum proposed to guide the sampling procedure if some a priori information regarding the input data is known, i.e. whether a datum is likely to be an inlier or an outlier.
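The difference between the original consensus-counting rank and the MSAC-style truncated loss can be sketched as follows (an illustrative simplification, not the authors' implementation; function names and the use of a simple squared residual are assumptions for the example):

```python
def ransac_score(residuals, t):
    # Classic RANSAC rank: the cardinality of the consensus set,
    # i.e. the number of points whose residual is below the threshold t
    # (higher is better).
    return sum(1 for r in residuals if abs(r) < t)

def msac_score(residuals, t):
    # MSAC-style rank: a truncated quadratic loss (lower is better).
    # Inliers contribute their squared residual, so a tighter fit is
    # rewarded; outliers all pay the same fixed penalty t**2.
    return sum(min(r * r, t * t) for r in residuals)
```

With this scoring, two models with the same number of inliers are no longer ranked equally: the one whose inliers fit more tightly gets the better (lower) MSAC score.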
The proposed approach is called PROSAC, PROgressive SAmple Consensus.<ref>[https://dspace.cvut.cz/bitstream/handle/10467/9496/2005-Matching-with-PROSAC-progressive-sample-consensus.pdf?sequence=1 Matching with PROSAC – progressive sample consensus], Proceedings of Conference on Computer Vision and Pattern Recognition (San Diego), vol. 1, June 2005, pp. 220–226</ref>
Chum et al. also proposed a randomized version of RANSAC called R-RANSAC<ref>O. Chum and J. Matas, Randomized RANSAC with Td,d test, 13th British Machine Vision Conference, September 2002. http://www.bmva.org/bmvc/2002/papers/50/</ref> to reduce the computational burden of identifying a good consensus set. The basic idea is to initially evaluate the goodness of the currently instantiated model using only a reduced set of points instead of the entire dataset. A sound strategy indicates, with high confidence, when it is worth evaluating the fit of the entire dataset and when the model can be readily discarded. It is reasonable to think that the impact of this approach is more relevant in cases where the percentage of inliers is large. The type of strategy proposed by Chum et al. is called a preemption scheme. Nistér proposed a paradigm called Preemptive RANSAC<ref>D. Nistér, [https://pdfs.semanticscholar.org/e712/35d9e17f13186a4da6ee11eede0b64b01c95.pdf Preemptive RANSAC for live structure and motion estimation], IEEE International Conference on Computer Vision (Nice, France), October 2003, pp. 199–206.</ref> that allows real-time robust estimation of the structure of a scene and of the motion of the camera. The core idea of the approach consists in generating a fixed number of hypotheses so that the
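The preliminary T(d,d) test used by R-RANSAC can be sketched as follows (an illustrative simplification with assumed names; the paper also derives the optimal d and the number of iterations, which this sketch omits):

```python
import random

def passes_tdd_test(residual_of, points, t, d=1):
    """R-RANSAC-style T(d,d) pre-test for a candidate model.

    residual_of: function mapping a data point to its residual under the
    candidate model. The model is passed on to full (expensive) verification
    against the entire dataset only if all d randomly chosen points are
    inliers; otherwise it is discarded immediately. In the paper d is small
    (typically d = 1), so bad hypotheses are rejected at almost no cost.
    """
    probe = random.sample(points, d)
    return all(abs(residual_of(p)) < t for p in probe)
```

Because most hypotheses generated from contaminated data are bad, rejecting them after evaluating only d points (instead of all N) is where the expected savings come from; the gain is largest when the inlier fraction is high enough that good models still reliably pass the test.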