{{Short description|Computational navigational technique used by robots and autonomous vehicles}}
[[File:Stanley2.JPG|thumb|[[Stanley (vehicle)|2005 DARPA Grand Challenge winner STANLEY]] performed SLAM as part of its autonomous driving system.]]
[[File:RoboCup Rescue arena map generated by robot Hector from Darmstadt at 2010 German open.jpg|thumb|A map generated by a SLAM robot.]]
'''Simultaneous localization and mapping''' ('''SLAM''') is the computational problem of constructing or updating a [[map]] of an unknown environment while simultaneously keeping track of an [[Intelligent agent|agent]]'s ___location within it. While this initially appears to be a [[Chicken or the egg|chicken-or-egg problem]], there are several algorithms known to solve it in, at least approximately, tractable time for certain environments.
SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in [[self-driving car]]s, [[unmanned aerial vehicle]]s, [[autonomous underwater vehicle]]s, [[Rover (space exploration)|planetary rovers]], newer [[domestic robot]]s and even inside the human body.
Applying [[Bayes' rule]] gives a framework for sequentially updating the ___location posteriors, given a map and a transition function <math>P(x_t|x_{t-1})</math>,
:<math>P(x_t | o_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t|x_t, m_t) \sum_{x_{t-1}} P(x_t|x_{t-1}) P(x_{t-1}|m_t, o_{1:t-1}) / Z</math>
Similarly the map can be updated sequentially by
:<math>P(m_t|x_t, o_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t | x_t, m_{t-1}, o_t) P(m_{t-1}, x_t | o_{1:t-1}, m_{t-1})</math>
Like many inference problems, the solutions to inferring the two variables together can be found, to a local optimum solution, by alternating updates of the two beliefs in a form of an [[expectation–maximization algorithm]].
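The alternation can be sketched in a toy one-dimensional world: the ___location belief is updated given the current map estimate, then the map estimate is updated given the ___location belief. This is a caricature, not a published algorithm; the occupancy prior, sensor model, and update rules are illustrative assumptions.

```python
import numpy as np

# Toy coordinate-ascent in a 1D world: alternate between updating the
# ___location belief given the current map estimate and updating the map
# given the ___location belief, in the spirit of expectation-maximization.

true_map = np.array([1, 0, 0, 1, 0])             # 1 = wall, 0 = free (ground truth)
observation = 1                                  # the robot senses a wall

map_est = np.array([0.9, 0.1, 0.1, 0.8, 0.2])    # occupancy prior from earlier mapping
loc_belief = np.full(5, 0.2)                     # uniform ___location prior

for _ in range(10):
    # E-like step: cells whose estimated occupancy matches the
    # observation become more likely robot locations.
    likelihood = np.where(observation == 1, map_est, 1 - map_est)
    loc_belief = likelihood * loc_belief
    loc_belief /= loc_belief.sum()
    # M-like step: pull each cell's occupancy toward the observation,
    # weighted by the probability the robot is actually in that cell.
    map_est = (1 - loc_belief) * map_est + loc_belief * observation

print(loc_belief.round(3), map_est.round(3))
```

The belief converges to a local optimum: the cell with the strongest prior match dominates, as the text notes for alternating updates in general.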
== Algorithms ==
Statistical techniques used to approximate the above equations include [[Kalman filter]]s and [[particle filter]]s. They provide an estimation of the [[posterior probability distribution]] for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using [[covariance intersection]] are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications.<ref>{{cite conference}}</ref>
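The predict/update cycle underlying Kalman-filter approaches can be illustrated with a minimal one-dimensional filter tracking a robot's position; the motion and measurement variances below are illustrative assumptions, not values from any particular system.

```python
# Minimal 1D Kalman filter: the predict/update cycle that filter-based
# SLAM systems run over the pose (and map) state.

def predict(mean, var, motion, motion_var):
    """Motion step: shift the belief and inflate its uncertainty."""
    return mean + motion, var + motion_var

def update(mean, var, z, meas_var):
    """Measurement step: fuse a noisy position reading z."""
    k = var / (var + meas_var)          # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

mean, var = 0.0, 1.0
for motion, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    mean, var = predict(mean, var, motion, motion_var=0.5)
    mean, var = update(mean, var, z, meas_var=0.4)
print(round(mean, 2), round(var, 3))
```

Each measurement shrinks the variance; each motion step grows it, which is why uncertainty stays bounded only while informative observations keep arriving.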
[[Set estimation|Set-membership techniques]] are mainly based on [[interval propagation|interval constraint propagation]].<ref>
url=http://www.ensta-bretagne.fr/jaulin/paper_dig_slam.pdf|doi=10.1109/TRO.2011.2147110|s2cid=52801599}}
</ref>
They provide a set which encloses the pose of the robot and a set approximation of the map. [[Bundle adjustment]], and more generally [[maximum a posteriori estimation]] (MAP), is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity.
New SLAM algorithms remain an active research area,<ref name=":0">{{Cite journal|last1=Cadena|first1=Cesar|last2=Carlone|first2=Luca|last3=Carrillo|first3=Henry|last4=Latif|first4=Yasir|last5=Scaramuzza|first5=Davide|last6=Neira|first6=Jose|last7=Reid|first7=Ian|last8=Leonard|first8=John J.|date=2016|title=Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age|journal=IEEE Transactions on Robotics|language=en-US|volume=32|issue=6|pages=1309–1332|arxiv=1606.05830|bibcode=2016arXiv160605830C|doi=10.1109/tro.2016.2624754|issn=1552-3098|hdl=2440/107554|s2cid=2596787}}</ref> and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
[[Topological map]]s are a method of environment representation which capture the connectivity (i.e., [[topology]]) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.<ref name=cummins2008>
{{cite journal
|last1=Cummins
|last2=Newman
|title=FAB-MAP: Probabilistic localization and mapping in the space of appearance
|journal=The International Journal of Robotics Research
|access-date=23 July 2014}}</ref>
In contrast, [[grid map]]s use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically independent in order to simplify computation.
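Under the cell-independence assumption, each grid cell's occupancy belief is commonly maintained in log-odds form, so Bayes updates become additions. The inverse sensor model probabilities below are illustrative assumptions.

```python
import math

# Log-odds occupancy-grid update for a single cell: each sensor beam that
# hits or passes through the cell adds a constant to its log-odds belief.

L_OCC = math.log(0.7 / 0.3)     # beam hit: cell more likely occupied
L_FREE = math.log(0.3 / 0.7)    # beam passed through: cell more likely free

def update_cell(logodds, hit):
    """Bayes update of one cell from one beam, in log-odds form."""
    return logodds + (L_OCC if hit else L_FREE)

def prob(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                      # log-odds 0 corresponds to prior p = 0.5
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
print(round(prob(cell), 3))
```

Because cells are treated independently, the whole map update is just this scalar operation applied to every cell a beam touches.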
Modern [[self-driving car]]s mostly simplify the mapping problem to almost nothing, by making extensive use of highly detailed map data collected in advance.
=== Sensing ===
[[File:Ouster OS1-64 lidar point cloud of intersection of Folsom and Dore St, San Francisco.png|thumb|Accumulated registered point cloud from [[lidar]] SLAM.]]
SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.<ref name="magnabosco13slam">{{cite journal}}</ref>
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose ___location can be estimated by a sensor, such as Wi-Fi access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the ___location.
Optical sensors may be one-dimensional (single beam) or 2D (sweeping) [[laser rangefinder]]s, 3D high-definition lidar, 3D flash lidar, 2D or 3D [[sonar]] sensors, and one or more 2D [[camera]]s.
For some outdoor applications, the need for SLAM has been almost entirely removed due to high-precision differential [[GPS]] sensors. From a SLAM perspective, these may be viewed as ___location sensors whose likelihoods are so sharp that they completely dominate the inference.
=== Kinematics modeling ===
The <math>P(x_t|x_{t-1})</math> term represents the kinematics of the model, which usually includes information about action commands given to a robot. The [[robot kinematics|kinematics of the robot]] are included in the model to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from the various sensors and partial error models, and finally produces a sharp virtual depiction as a map with the ___location and heading of the robot as a cloud of probability. Mapping is the final depiction of such a model; the map is either that depiction or the abstract term for the model.
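A common way to realize the kinematic term <math>P(x_t|x_{t-1})</math> in a particle filter is to sample each particle's next pose from the commanded motion plus noise. The noise scales below are illustrative assumptions; real systems calibrate them per platform.

```python
import math
import random

# Sampling-based motion model: advance a pose by commanded linear velocity v
# and angular velocity w over time dt, perturbed by pose-dependent noise.

def sample_motion(pose, v, w, dt, a1=0.05, a2=0.01):
    """pose = (x, y, theta); a1, a2 are illustrative noise coefficients."""
    x, y, th = pose
    v_hat = v + random.gauss(0.0, a1 * abs(v) + a2 * abs(w))   # noisy velocity
    w_hat = w + random.gauss(0.0, a1 * abs(w) + a2 * abs(v))   # noisy turn rate
    return (x + v_hat * dt * math.cos(th),
            y + v_hat * dt * math.sin(th),
            th + w_hat * dt)

random.seed(0)
particles = [(0.0, 0.0, 0.0)] * 100
particles = [sample_motion(p, v=1.0, w=0.1, dt=1.0) for p in particles]
xs = [p[0] for p in particles]
print(min(xs), max(xs))   # spread of the cloud reflects motion uncertainty
```

The resulting scatter of particles is exactly the "cloud of probability" over ___location and heading that the text describes.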
=== Acoustic SLAM ===
An extension of the common SLAM problem has been applied to the acoustic ___domain, where environments are represented by the three-dimensional (3D) positions of sound sources, termed acoustic SLAM (aSLAM).<ref>{{Cite journal|last1=Evers|first1=Christine|last2=Naylor|first2=Patrick A.|date=September 2018|title=Acoustic SLAM|journal=IEEE/ACM Transactions on Audio, Speech, and Language Processing|volume=26|issue=9|pages=1484–1498|doi=10.1109/TASLP.2018.2828321|issn=2329-9290|url=https://eprints.soton.ac.uk/437941/1/08340823.pdf|doi-access=free}}</ref> Early implementations of this technique have used direction-of-arrival estimates of the sound source locations.
=== Audiovisual SLAM ===
=== Moving objects ===
Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges.<ref>{{Cite journal|last1=Perera|first1=Samunda|last2=Pasqual|first2=Ajith|date=2011|editor-last=Bebis|editor-first=George|editor2-last=Boyle|editor2-first=Richard|editor3-last=Parvin|editor3-first=Bahram|editor4-last=Koracin|editor4-first=Darko|editor5-last=Wang|editor5-first=Song|editor6-last=Kyungnam|editor6-first=Kim|editor7-last=Benes|editor7-first=Bedrich|editor8-last=Moreland|editor8-first=Kenneth|editor9-last=Borst|editor9-first=Christoph|title=Towards Realtime Handheld MonoSLAM in Dynamic Environments|journal=Advances in Visual Computing|volume=6938|series=Lecture Notes in Computer Science|language=en|publisher=Springer Berlin Heidelberg|pages=313–324|doi=10.1007/978-3-642-24028-7_29|isbn=9783642240287}}</ref><ref name=":1">{{Citation|last1=Perera|first1=Samunda|title=Exploration: Simultaneous Localization and Mapping (SLAM)|date=2014|work=Computer Vision: A Reference Guide|pages=268–275|editor-last=Ikeuchi|editor-first=Katsushi|publisher=Springer US|language=en|doi=10.1007/978-0-387-31439-6_280|isbn=9780387314396|last2=Barnes|first2=Dr.Nick|last3=Zelinsky|first3=Dr.Alexander|s2cid=34686200
}}</ref>
=== Loop closure ===
Loop closure is the problem of recognizing a previously-visited ___location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the ___location. Typical loop closure methods apply a second algorithm to compute some type of sensor measure similarity, and reset the ___location priors when a match is detected.
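Appearance-based loop closure can be sketched as comparing the current scene's descriptor against stored keyframes and flagging a revisit above a similarity threshold. The bag-of-words vectors, threshold, and cosine measure here are illustrative assumptions, much simpler than probabilistic systems such as FAB-MAP.

```python
import math

# Toy appearance-based loop-closure check: cosine similarity between the
# current scene's bag-of-words descriptor and stored keyframe descriptors.

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Keyframe id -> visual-word histogram (illustrative data).
keyframes = {0: [4, 0, 1, 3], 1: [0, 5, 2, 0], 2: [0, 4, 4, 1]}
current = [3, 0, 1, 3]
THRESH = 0.9                     # illustrative similarity threshold

matches = [(k, cosine(current, d)) for k, d in keyframes.items()]
loop = [k for k, s in matches if s >= THRESH]
print(loop)                      # keyframes recognized as the same place
```

A detected match would then trigger the second step the text describes: resetting the ___location priors (or adding a constraint between the two poses in graph-based SLAM).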
=== Exploration ===
=== Biological inspiration ===
In neuroscience, the [[hippocampus]] appears to be involved in SLAM-like computations,<ref name="Howard">{{cite journal|last1=Howard|first1=MW|last2=Fotedar|first2=MS|last3=Datey|first3=AV|last4=Hasselmo|first4=ME|title= The temporal context model in spatial navigation and relational learning: toward a common explanation of medial temporal lobe function across domains|journal=Psychological Review|volume=112|issue=1|pages=75–116|pmc=1421376|year=2005|pmid=15631589|doi=10.1037/0033-295X.112.1.75}}</ref><ref name="Fox & Prescott">{{cite book|last1=Fox|first1=C|title= The 2010 International Joint Conference on Neural Networks (IJCNN)|pages=1–8|last2=Prescott|first2=T|chapter= Hippocampus as unitary coherent particle filter|doi=10.1109/IJCNN.2010.5596681|year=2010|isbn=978-1-4244-6916-1|s2cid=10838879|url=http://eprints.whiterose.ac.uk/108622/1/Fox2010_HippocampusUnitaryCoherentParticleFilter.pdf}}</ref><ref name="RatSLAM">{{cite book|last1=Milford|first1=MJ|last2=Wyeth|first2=GF|last3=Prasser|first3=D}}</ref>
== Implementation methods ==
Various SLAM algorithms are implemented in the [[open-source software]] [[Robot Operating System]] (ROS) libraries, often used together with the [[Point Cloud Library]] for 3D maps or visual features from [[OpenCV]].
=== EKF SLAM ===
In [[robotics]], EKF SLAM is a class of algorithms which uses the [[extended Kalman filter]] (EKF) for SLAM. Typically, EKF SLAM algorithms are feature based, and use the maximum likelihood algorithm for data association.
Associated with the EKF is the Gaussian noise assumption, which significantly impairs EKF SLAM's ability to deal with uncertainty. With a greater amount of uncertainty in the posterior, the linearization in the EKF fails.<ref name=Trun2005>{{cite book
}}</ref>
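The linearization in question can be made concrete: the EKF replaces a nonlinear measurement function with its first-order (Jacobian) approximation about the current estimate, which is accurate only while the posterior stays narrow. The range-bearing setup and numbers below are illustrative.

```python
import numpy as np

# First-order (Jacobian) approximation of a range-bearing measurement h(x)
# about the current pose estimate -- the linearization the EKF relies on.

def h(x, lm):
    """Range and bearing from robot pose x = (px, py, theta) to landmark lm."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx * dx + dy * dy
    return np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])

def jacobian(x, lm):
    """Jacobian of h with respect to the robot pose, evaluated at x."""
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r, 0.0],
                     [ dy / q, -dx / q, -1.0]])

x0 = np.array([0.0, 0.0, 0.0])      # current pose estimate
lm = np.array([2.0, 1.0])           # landmark position
H = jacobian(x0, lm)

# Compare the linearized prediction with the true measurement at a nearby pose:
dx_small = np.array([0.1, 0.0, 0.0])
approx = h(x0, lm) + H @ dx_small
exact = h(x0 + dx_small, lm)
print(np.abs(approx - exact))       # small only for small deviations
```

For large deviations from the estimate the same comparison produces large errors, which is the failure mode under high posterior uncertainty described above.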
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.<ref name=Smith1986>{{cite journal
}}</ref><ref name=Smith1986b>{{cite conference
}}</ref> Other pioneering work in this field was conducted by the research group of [[Hugh F. Durrant-Whyte]] in the early 1990s.<ref name=Leonard1991>{{cite journal
|isbn=978-0-7803-0067-5}}</ref>
The self-driving STANLEY and JUNIOR cars, led by [[Sebastian Thrun]], won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners.<ref>{{Cite news|last=Knight|first=Will|url=https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/|title=With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots}}</ref>
== See also ==
* [[Neato Robotics]]
* [[Particle filter]]
* [[Recursive Bayesian estimation]]
* [[Robotic mapping]]
* ''[[Stanley (vehicle)|Stanley]]''
* [[Stereophotogrammetry]]
* [[Structure from motion]]
* [[Tango (platform)]]
* [[Visual odometry]]
{{Div col end}}
== References ==
{{Reflist}}
== External links ==
* [http://www.probabilistic-robotics.org/ Probabilistic Robotics] by [[Sebastian Thrun]], [[Wolfram Burgard]] and [[Dieter Fox]] with a clear overview of SLAM.
* [https://dspace.mit.edu/bitstream/handle/1721.1/36832/16-412JSpring2004/NR/rdonlyres/Aeronautics-and-Astronautics/16-412JSpring2004/A3C5517F-C092-4554-AA43-232DC74609B3/0/1Aslam_blas_report.pdf SLAM For Dummies (A Tutorial Approach to Simultaneous Localization and Mapping)].
* [http://www.doc.ic.ac.uk/%7Eajd/ Andrew Davison's research page] at Imperial College London.
* [https://openslam-org.github.io/ openslam.org] A good collection of open source code and explanations of SLAM.
* [http://eia.udg.es/~qsalvi/Slam.zip Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping] Vehicle moving in 1D, 2D and 3D.
* [https://web.archive.org/web/20120313064730/http://www.kn-s.dlr.de/indoornav/footslam_video.html FootSLAM research page] at [[German Aerospace Center]] (DLR).
* [https://www.youtube.com/watch?v=B2qzYCeT9oQ&list=PLpUPoM7Rgzi_7YWn14Va2FODh7LzADBSm SLAM lecture] Online SLAM lecture based on Python.