== Algorithms ==
 
Statistical techniques used to approximate the above equations include [[Kalman filter]]s and [[particle filter]]s. They provide an estimation of the [[posterior probability distribution]] for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using [[covariance intersection]] are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications.<ref>{{cite conference|last1= Julier|first1=S.|last2=Uhlmann|first2=J.|title=Building a Million-Beacon Map.|conference=Proceedings of ISAM Conference on Intelligent Systems for Manufacturing|year=2001|doi=10.1117/12.444158}}</ref> Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty.<ref>{{cite conference|last1= Csorba|first1=M.|last2=Uhlmann|first2=J.|title=A Suboptimal Algorithm for Automatic Map Building.|conference=Proceedings of the 1997 American Control Conference|year=1997|doi=10.1109/ACC.1997.611857}}</ref>
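
As an illustration, a particle filter represents the posterior over robot poses by a weighted set of samples. Below is a minimal sketch of one predict–update cycle for a planar pose, assuming a unicycle motion model with Gaussian noise and a single range measurement to a landmark at a known position; the noise values and landmark are illustrative only, not taken from any particular published system.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate each particle (x, y, theta) through a noisy unicycle model."""
    n = len(particles)
    v_n = v + rng.normal(0, noise[0], n)
    w_n = w + rng.normal(0, noise[1], n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def measurement_update(particles, z, landmark, sigma=0.1):
    """Weight particles by the likelihood of a range measurement z to a landmark."""
    d = np.hypot(landmark[0] - particles[:, 0], landmark[1] - particles[:, 1])
    w = np.exp(-0.5 * ((z - d) / sigma) ** 2)
    w /= w.sum()
    # Multinomial resampling keeps the sketch short; systematic resampling
    # is the more common choice in practice.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal([0.0, 0.0, 0.0], [0.1, 0.1, 0.05], size=(500, 3))
particles = motion_update(particles, v=1.0, w=0.1, dt=0.1)
particles = measurement_update(particles, z=4.2, landmark=(3.0, 3.0))
print("pose estimate:", particles.mean(axis=0))
</syntaxhighlight>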
 
=== Mapping ===
 
[[Topological map]]s are a method of environment representation which captures the connectivity (i.e., [[topology]]) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.<ref name=cummins2008>{{cite journal |last1=Cummins |first1=Mark |last2=Newman |first2=Paul |title=FAB-MAP: Probabilistic localization and mapping in the space of appearance |journal=The International Journal of Robotics Research |year=2008 |volume=27 |issue=6 |pages=647–665 |doi=10.1177/0278364908090961}}</ref>
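
As an illustration, a topological map can be stored as a plain adjacency structure over place identifiers, with no coordinates at all; the sketch below (with hypothetical place names) shows how recognizing a revisited place simply adds an edge to the graph.

<syntaxhighlight lang="python">
# A topological map stores places and their connectivity, not metric geometry.
# Node names are hypothetical; an edge means "directly traversable".
topological_map = {
    "corridor": {"office", "lobby"},
    "office": {"corridor"},
    "lobby": {"corridor", "lab"},
    "lab": {"lobby"},
}

def add_loop_closure(graph, a, b):
    """Recognizing that two places connect adds an edge; such topological
    constraints are what enforce global consistency on a metric map."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

add_loop_closure(topological_map, "lab", "office")
</syntaxhighlight>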
 
=== Sensing ===
{{See also|3D scanner}}
 
[[File:Ouster OS1-64 lidar point cloud of intersection of Folsom and Dore St, San Francisco.png|thumb|Accumulated registered point cloud from [[lidar]] SLAM.]]
SLAM systems will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.<ref name="magnabosco13slam">{{cite journal|last1=Magnabosco|first1=M.|last2=Breckon|first2=T.P.|title=Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM) with Sensor Handover|journal=Robotics and Autonomous Systems|date=February 2013|volume=63|issue=2|pages=195–208|doi=10.1016/j.robot.2012.09.023|url=http://www.durham.ac.uk/toby.breckon/publications/papers/magnabosco13slam.pdf|access-date=5 November 2013}}</ref> Statistical independence between measurements is the mandatory requirement to cope with metric bias and with measurement noise. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to those sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via [[image registration]]. At the opposite extreme, [[tactile sensor]]s are extremely sparse as they contain only information about points very close to the agent, so purely tactile SLAM requires strong prior models to compensate. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
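
As an illustration of the registration extreme, when point correspondences between two scans are known, the aligning rigid transform has a closed-form solution (the Kabsch algorithm); real scan matchers such as iterative closest point must additionally estimate the correspondences, which this sketch assumes are given.

<syntaxhighlight lang="python">
import numpy as np

def align_scans(P, Q):
    """Closed-form rigid alignment (Kabsch): find R, t minimizing ||R p + t - q||.
    P, Q: (n, 2) arrays of corresponding 2D points from two scans."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)               # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

# Toy example: the second scan is the first, rotated 30 degrees and shifted.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.default_rng(1).normal(size=(100, 2))
Q = P @ R_true.T + np.array([0.5, -0.2])
R, t = align_scans(P, Q)  # recovers R_true and the shift
</syntaxhighlight>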
 
=== Collaborative SLAM ===
 
''Collaborative SLAM'' combines images from multiple robots or users to generate 3D maps.<ref>{{cite journal |last1=Zou |first1=Danping |last2=Tan |first2=Ping |title=CoSLAM: Collaborative visual SLAM in dynamic environments |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |year=2012 |volume=35 |issue=2 |pages=354–366 |url=http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.463.8135&rep=rep1&type=pdf}}</ref>
 
=== Moving objects ===
 
Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges.<ref>{{Cite journal|last1=Perera|first1=Samunda|last2=Pasqual|first2=Ajith|date=2011|editor-last=Bebis|editor-first=George|editor2-last=Boyle|editor2-first=Richard|editor3-last=Parvin|editor3-first=Bahram|editor4-last=Koracin|editor4-first=Darko|editor5-last=Wang|editor5-first=Song|editor6-last=Kyungnam|editor6-first=Kim|editor7-last=Benes|editor7-first=Bedrich|editor8-last=Moreland|editor8-first=Kenneth|editor9-last=Borst|editor9-first=Christoph|title=Towards Realtime Handheld MonoSLAM in Dynamic Environments|journal=Advances in Visual Computing|volume=6938|series=Lecture Notes in Computer Science|language=en|publisher=Springer Berlin Heidelberg|pages=313–324|doi=10.1007/978-3-642-24028-7_29|isbn=9783642240287}}</ref><ref name=":1">{{Citation|last1=Perera|first1=Samunda|last2=Barnes|first2=Nick|last3=Zelinsky|first3=Alexander|title=Exploration: Simultaneous Localization and Mapping (SLAM)|date=2014|work=Computer Vision: A Reference Guide|pages=268–275|editor-last=Ikeuchi|editor-first=Katsushi|publisher=Springer US|language=en|doi=10.1007/978-0-387-31439-6_280|isbn=9780387314396|s2cid=34686200}}</ref> SLAM with detection and tracking of moving objects (DATMO) is a model which tracks moving objects in a similar way to the agent itself.<ref name=Wang2007>{{cite journal |last1=Wang |first1=Chieh-Chih |last2=Thorpe |first2=Charles |last3=Thrun |first3=Sebastian |last4=Hebert |first4=Martial |last5=Durrant-Whyte |first5=Hugh |title=Simultaneous localization, mapping and moving object tracking |journal=The International Journal of Robotics Research |year=2007 |volume=26 |issue=9 |pages=889–916 |doi=10.1177/0278364907081229}}</ref>
 
=== Loop closure ===
 
Loop closure is the problem of recognizing a previously visited ___location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the ___location. Typical loop closure methods apply a second algorithm to compute some type of sensor-measure similarity and reset the ___location priors when a match is detected. For example, this can be done by storing and comparing [[Bag-of-words model in computer vision|bag-of-words]] vectors of [[scale-invariant feature transform]] (SIFT) features from each previously visited ___location.
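
As an illustration, once features have been quantized against a fixed visual vocabulary, the similarity test reduces to comparing normalized word histograms. The sketch below assumes the SIFT extraction and vocabulary training have already happened; the vocabulary size, threshold, and ___location names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def bow_vector(word_ids, vocab_size):
    """Unit-normalized histogram of quantized descriptors ("visual words")."""
    v = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return v / (np.linalg.norm(v) or 1.0)

def detect_loop_closure(current, visited, threshold=0.8):
    """Compare the current ___location's bag-of-words vector against every stored
    ___location; a high cosine similarity suggests a revisit."""
    best, best_sim = None, threshold
    for loc_id, v in visited.items():
        sim = float(current @ v)  # cosine similarity of unit vectors
        if sim > best_sim:
            best, best_sim = loc_id, sim
    return best

vocab_size = 1000
visited = {"loc_0": bow_vector(
    np.random.default_rng(0).integers(0, vocab_size, 200), vocab_size)}
current = bow_vector(
    np.random.default_rng(0).integers(0, vocab_size, 200), vocab_size)
print(detect_loop_closure(current, visited))  # "loc_0": identical histograms
</syntaxhighlight>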
 
=== Exploration ===
 
''Active SLAM'' studies the combined problem of SLAM with deciding where to move next in order to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the [[entropy]] of the map under hypothetical actions. ''Multi-agent SLAM'' extends this problem to the case of multiple robots coordinating themselves to explore optimally.
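
A minimal sketch of this entropy criterion for an occupancy-grid map follows; the <code>predict_map</code> function, which would have to simulate the measurement updates a candidate action is expected to produce, is assumed rather than implemented here.

<syntaxhighlight lang="python">
import numpy as np

def map_entropy(occupancy):
    """Shannon entropy of an occupancy grid (cell values = P(occupied)).
    Unknown cells (p near 0.5) contribute the most entropy."""
    p = np.clip(occupancy, 1e-6, 1 - 1e-6)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

def choose_action(occupancy, actions, predict_map):
    """Pick the action whose predicted posterior map has the lowest entropy.
    `predict_map(occupancy, action)` is a hypothetical simulator of the
    sensor updates the action would yield; it is not implemented here."""
    return min(actions, key=lambda a: map_entropy(predict_map(occupancy, a)))
</syntaxhighlight>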
 
=== Biological inspiration ===
 
In neuroscience, the [[hippocampus]] appears to be involved in SLAM-like computations,<ref name="Howard">{{cite journal|last1=Howard|first1=MW|last2=Fotedar|first2=MS|last3=Datey|first3=AV|last4=Hasselmo|first4=ME|title= The temporal context model in spatial navigation and relational learning: toward a common explanation of medial temporal lobe function across domains|journal=Psychological Review|volume=112|issue=1|pages=75–116|pmc=1421376|year=2005|pmid=15631589|doi=10.1037/0033-295X.112.1.75}}</ref><ref name="Fox & Prescott">{{cite book|last1=Fox|first1=C|title= The 2010 International Joint Conference on Neural Networks (IJCNN)|pages=1–8|last2=Prescott|first2=T|chapter= Hippocampus as unitary coherent particle filter|doi=10.1109/IJCNN.2010.5596681|year=2010|isbn=978-1-4244-6916-1|s2cid=10838879|url=http://eprints.whiterose.ac.uk/108622/1/Fox2010_HippocampusUnitaryCoherentParticleFilter.pdf}}</ref><ref name="RatSLAM">{{cite book|last1=Milford|first1=MJ|last2=Wyeth|first2=GF|last3=Prasser|first3=D|title=IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004|chapter=RatSLAM: A hippocampal model for simultaneous localization and mapping|year=2004|pages=403–408 Vol.1|doi=10.1109/ROBOT.2004.1307183|isbn=0-7803-8232-3|s2cid=7139556|url=https://eprints.qut.edu.au/37593/1/c37593.pdf}}</ref> giving rise to [[place cells]], and has formed the basis for bio-inspired SLAM systems such as RatSLAM.
 
== Implementation methods ==
{{Further|List of SLAM methods}}
 
Various SLAM algorithms are implemented in the [[open-source software|open-source]] [[Robot Operating System]] (ROS) libraries, often used together with the [[Point Cloud Library]] for 3D maps or visual features from [[OpenCV]].
 
=== EKF SLAM ===
 
In [[robotics]], ''EKF SLAM'' is a class of algorithms which uses the [[extended Kalman filter]] (EKF) for SLAM. Typically, EKF SLAM algorithms are feature based, and use the maximum likelihood algorithm for data association. In the 1990s and 2000s, EKF SLAM was the de facto method for SLAM, until the introduction of [[FastSLAM]].<ref name=Montemerlo2002>{{cite conference |last1=Montemerlo |first1=M. |last2=Thrun |first2=S. |last3=Koller |first3=D. |last4=Wegbreit |first4=B. |title=FastSLAM: A factored solution to the simultaneous localization and mapping problem |conference=Proceedings of the AAAI National Conference on Artificial Intelligence |year=2002 |pages=593–598}}</ref>
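
A minimal sketch of the two EKF SLAM steps on a joint state vector stacking the robot pose and landmark positions, assuming a unicycle motion model, range-bearing measurements, and known data association; the noise matrices are left to the caller, and nothing here is specific to any published implementation.

<syntaxhighlight lang="python">
import numpy as np

# State x = [xr, yr, theta, lx0, ly0, lx1, ly1, ...]; P is the joint covariance.

def predict(x, P, v, w, dt, Q):
    """EKF prediction under a unicycle motion model. Landmarks are static,
    so only the pose entries of x and the pose block of P change."""
    th = x[2]
    x = x.copy()
    x[0] += v * dt * np.cos(th)
    x[1] += v * dt * np.sin(th)
    x[2] += w * dt
    F = np.eye(len(x))
    F[0, 2] = -v * dt * np.sin(th)  # Jacobian of motion w.r.t. heading
    F[1, 2] = v * dt * np.cos(th)
    P = F @ P @ F.T
    P[:3, :3] += Q                  # process noise acts on the pose only
    return x, P

def update(x, P, z, j, R):
    """EKF update with a range-bearing measurement z = [r, phi] of landmark j."""
    lx, ly = x[3 + 2 * j], x[4 + 2 * j]
    dx, dy = lx - x[0], ly - x[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])
    H = np.zeros((2, len(x)))       # sparse measurement Jacobian
    H[0, 0], H[0, 1] = -dx / r, -dy / r
    H[1, 0], H[1, 1], H[1, 2] = dy / q, -dx / q, -1.0
    H[0, 3 + 2 * j], H[0, 4 + 2 * j] = dx / r, dy / r
    H[1, 3 + 2 * j], H[1, 4 + 2 * j] = -dy / q, dx / q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
</syntaxhighlight>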
 
== History ==
 
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.<ref name=Smith1986>{{cite journal |last1=Smith |first1=R.C. |last2=Cheeseman |first2=P. |title=On the Representation and Estimation of Spatial Uncertainty |journal=The International Journal of Robotics Research |year=1986 |volume=5 |issue=4 |pages=56–68 |doi=10.1177/027836498600500404}}</ref>