{{Short description|Computational navigational technique used by robots and autonomous vehicles}}
[[File:Stanley2.JPG|thumb|2005 DARPA Grand Challenge winner [[Stanley (vehicle)|Stanley]] performed SLAM as part of its autonomous driving system]]
[[File:RoboCup Rescue arena map generated by robot Hector from Darmstadt at 2010 German open.jpg|thumb|A map generated by a SLAM robot]]
 
'''Simultaneous localization and mapping''' ('''SLAM''') is the computational problem of constructing or updating a [[map]] of an unknown environment while simultaneously keeping track of an [[Intelligent agent|agent]]'s ___location within it. While this initially appears to be a [[chicken or the egg|chicken-and-egg problem]], there are several [[algorithm]]s known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the [[particle filter]], extended [[Kalman filter]], [[covariance intersection]], and GraphSLAM. SLAM algorithms are based on concepts in [[computational geometry]] and [[computer vision]], and are used in [[robot navigation]], [[robotic mapping]] and [[odometry]] for [[virtual reality]] or [[augmented reality]].
 
SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in [[self-driving car]]s, [[unmanned aerial vehicle]]s, [[autonomous underwater vehicle]]s, [[Rover (space exploration)|planetary rovers]], newer [[domestic robot]]s and even inside the human body.
Applying [[Bayes' rule]] gives a framework for sequentially updating the ___location posteriors, given a map and a transition function <math>P(x_t|x_{t-1})</math>,
 
:<math> P(x_t | o_{1:t},u_{1:t},m_t) = \sum_{m_{t-1} } P(o_{t}|x_t, m_t,u_{1:t}) \sum_{x_{t-1}} P(x_t|x_{t-1}) P(x_{t-1}|m_t, o_{1:t-1},u_{1:t}) /Z </math>
 
Similarly, the map can be updated sequentially by
 
:<math> P(m_t | x_t,o_{1:t},u_{1:t}) = \sum_{m_{t-1}} P(m_t | x_t, m_{t-1}, o_t,u_{1:t} ) P(m_{t-1} | x_t, o_{1:t-1},u_{1:t}) </math>
 
Like many inference problems, a locally optimal solution for inferring the two variables together can be found by alternating updates of the two beliefs, in a form of an [[expectation–maximization algorithm]].
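
The alternation can be made concrete with a toy example. The sketch below runs a discrete Bayes filter on a hypothetical one-dimensional corridor, alternating a localization update given the current map belief with a map update given the current ___location belief; the corridor, the motion model and all noise values are illustrative assumptions, not any published system.

<syntaxhighlight lang="python">
import numpy as np

N = 10
rng = np.random.default_rng(0)
true_map = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])  # hypothetical marked cells

bel_x = np.full(N, 1.0 / N)   # P(x_t): ___location belief, initially uniform
bel_m = np.full(N, 0.5)       # P(m[i] = 1): per-cell map belief

def motion_update(bel_x):
    """P(x_t | x_{t-1}): move right one cell with p=0.8, stay with p=0.2
    (wraps around at the ends; kept for brevity)."""
    return 0.8 * np.roll(bel_x, 1) + 0.2 * bel_x

def measurement_update(bel_x, bel_m, z, p_hit=0.9):
    """P(o_t | x_t): reweight the ___location belief by the observation
    likelihood, marginalizing over the current map belief."""
    like = np.where(z == 1,
                    bel_m * p_hit + (1 - bel_m) * (1 - p_hit),
                    bel_m * (1 - p_hit) + (1 - bel_m) * p_hit)
    post = bel_x * like
    return post / post.sum()   # the 1/Z normalization in the equation above

def map_update(bel_x, bel_m, z, p_hit=0.9):
    """Approximate per-cell map update, weighted by P(x_t = cell)."""
    odds = p_hit / (1 - p_hit) if z == 1 else (1 - p_hit) / p_hit
    post = bel_m * odds / (bel_m * odds + (1 - bel_m))
    return bel_x * post + (1 - bel_x) * bel_m

x = 0
for t in range(8):
    x = min(x + 1, N - 1)                              # true (hidden) motion
    z = true_map[x] if rng.random() < 0.9 else 1 - true_map[x]
    bel_x = motion_update(bel_x)
    bel_x = measurement_update(bel_x, bel_m, z)        # localize given map
    bel_m = map_update(bel_x, bel_m, z)                # map given ___location

print("___location belief:", np.round(bel_x, 2))
print("map belief:     ", np.round(bel_m, 2))
</syntaxhighlight>
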
== Algorithms ==
 
Statistical techniques used to approximate the above equations include [[Kalman filter]]s and [[particle filter]]s. They provide an estimation of the [[posterior probability distribution]] for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using [[covariance intersection]] are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications.<ref>{{cite conference | last1= Julier |first1=S. |last2=Uhlmann |first2=J. | title = Building a Million-Beacon Map. | conference = Proceedings of ISAM Conference on Intelligent Systems for Manufacturing | year = 2001|doi=10.1117/12.444158 }}</ref> Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty.<ref>{{cite conference | last1= Csorba |first1=M. |last2=Uhlmann |first2=J. | title = A Suboptimal Algorithm for Automatic Map Building. | conference = Proceedings of the 1997 American Control Conference | year = 1997|doi=10.1109/ACC.1997.611857 }}</ref>
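
The covariance intersection update itself is compact: the fused inverse covariance is a convex combination of the two input inverse covariances, which remains consistent even when the cross-correlation between the inputs is unknown. A minimal sketch, with arbitrary example numbers:

<syntaxhighlight lang="python">
import numpy as np

def covariance_intersection(a, A, b, B, omega=0.5):
    """Fuse estimates (a, A) and (b, B) with unknown cross-correlation:

        C^-1 = omega * A^-1 + (1 - omega) * B^-1
        c    = C (omega * A^-1 a + (1 - omega) * B^-1 b)

    A fixed omega is used here to keep the sketch short."""
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    C = np.linalg.inv(omega * Ai + (1 - omega) * Bi)
    c = C @ (omega * Ai @ a + (1 - omega) * Bi @ b)
    return c, C

# two estimates of a 2-D landmark position with possibly correlated errors
a, A = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
b, B = np.array([1.2, 1.8]), np.diag([1.0, 0.4])
c, C = covariance_intersection(a, A, b, B)
print(c, np.diag(C))
</syntaxhighlight>

In practice the weight omega is chosen by a small one-dimensional optimization, for example minimizing the trace or determinant of the fused covariance.
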
 
[[Set estimation|Set-membership techniques]] are mainly based on [[interval propagation|interval constraint propagation]].<ref>
url=http://www.ensta-bretagne.fr/jaulin/paper_dig_slam.pdf|doi=10.1109/TRO.2011.2147110|s2cid=52801599}}
</ref>
They provide a set which encloses the pose of the robot and a set approximation of the map. [[Bundle adjustment]], and more generally [[maximum a posteriori estimation]] (MAP), is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM systems such as Google's ARCore, which replaces its prior [[augmented reality]] computing platform [[Tango (platform)|Tango]], formerly ''Project Tango''. MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability.
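
The link between MAP estimation and optimization can be seen in a toy linear case: under Gaussian noise, the most likely poses and landmark positions are exactly the solution of a least-squares problem, of which bundle adjustment solves a large sparse nonlinear version. The one-dimensional measurements below are invented for illustration:

<syntaxhighlight lang="python">
import numpy as np

# Unknowns: robot positions x1, x2 and landmark position l, with x0
# fixed at 0 as the reference frame. Measurement values are invented.
u1, u2 = 1.1, 0.9             # odometry: x1 - x0, x2 - x1
z0, z1, z2 = 3.1, 2.0, 0.95   # ranges to landmark: l - x0, l - x1, l - x2

# Stack all measurement equations as rows of A @ [x1, x2, l] = b.
A = np.array([
    [ 1,  0, 0],   # x1 - x0 = u1
    [-1,  1, 0],   # x2 - x1 = u2
    [ 0,  0, 1],   # l  - x0 = z0
    [-1,  0, 1],   # l  - x1 = z1
    [ 0, -1, 1],   # l  - x2 = z2
], dtype=float)
b = np.array([u1, u2, z0, z1, z2])

# With Gaussian noise, the MAP estimate is the least-squares solution.
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
x1, x2, l = theta
print(f"x1={x1:.2f}  x2={x2:.2f}  landmark={l:.2f}")
</syntaxhighlight>
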
 
The development of new SLAM algorithms remains an active research area,<ref name=":0">{{Cite journal|last1=Cadena|first1=Cesar|last2=Carlone|first2=Luca|last3=Carrillo|first3=Henry|last4=Latif|first4=Yasir|last5=Scaramuzza|first5=Davide|last6=Neira|first6=Jose|last7=Reid|first7=Ian|last8=Leonard|first8=John J.|date=2016|title=Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age|journal=IEEE Transactions on Robotics|language=en-US|volume=32|issue=6|pages=1309–1332|arxiv=1606.05830|bibcode=2016arXiv160605830C|doi=10.1109/tro.2016.2624754|issn=1552-3098|hdl=2440/107554|s2cid=2596787}}</ref> and is often driven by differing requirements and assumptions about the types of maps, sensors and models, as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
[[Topological map]]s are a method of environment representation which capture the connectivity (i.e., [[topology]]) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.<ref name=cummins2008>
{{cite journal
|last1=Cummins |first1=Mark
|last2=Newman |first2=Paul
|title=FAB-MAP: Probabilistic localization and mapping in the space of appearance
|journal=The International Journal of Robotics Research
|access-date=23 July 2014}}</ref>
 
In contrast, [[grid map]]s use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically independent in order to simplify computation. Under such an assumption, <math>P(m_t | x_t, m_{t-1}, o_t ) </math> are set to 1 if the new map's cells are consistent with the observation <math>o_t</math> at ___location <math>x_t</math>, and 0 if inconsistent.
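
A common concrete realization of this per-cell independence is the log-odds occupancy grid, in which each observed cell is updated on its own; the increment values in this sketch are illustrative:

<syntaxhighlight lang="python">
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments per observation
grid = np.zeros((5, 5))      # log-odds 0 corresponds to probability 0.5

def update_cell(grid, i, j, hit):
    """Bayesian log-odds update for one observed cell, independent of
    all other cells (the independence assumption described above)."""
    grid[i, j] += L_OCC if hit else L_FREE

def probability(grid):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))

# a sensor ray passes through (2, 0) and (2, 1) and hits an obstacle at (2, 2)
for cell in [(2, 0), (2, 1)]:
    update_cell(grid, *cell, hit=False)
update_cell(grid, 2, 2, hit=True)

print(np.round(probability(grid), 2))
</syntaxhighlight>
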
 
Modern [[self-driving car]]s mostly simplify the mapping problem to almost nothing, by making extensive use of highly detailed map data collected in advance. This can include map annotations to the level of marking locations of individual white line segments and curbs on the road. Location-tagged visual data such as Google's [[Google Street View|Street View]] may also be used as part of maps. Essentially such systems simplify the SLAM problem to a simpler [[Robot localization|localization]]-only task, perhaps updating only moving objects such as cars and people in the map at runtime.
 
=== Sensing ===
 
[[File:Ouster OS1-64 lidar point cloud of intersection of Folsom and Dore St, San Francisco.png|thumb|Accumulated registered point cloud from [[lidar]] SLAM.]]
SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.<ref name="magnabosco13slam">{{cite journal| last1=Magnabosco |first1=M. |last2=Breckon |first2=T.P. |title=Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM) with Sensor Handover| journal=Robotics and Autonomous Systems|date=February 2013 |volume=63 |issue=2 |pages=195–208 |doi=10.1016/j.robot.2012.09.023 |url=http://www.durham.ac.uk/toby.breckon/publications/papers/magnabosco13slam.pdf |access-date=5 November 2013}}</ref> Statistical independence is the mandatory requirement to cope with metric bias and with noise in measurements. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to the sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via [[image registration]]. At the opposite extreme, [[tactile sensor]]s are extremely sparse as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
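
The registration step mentioned above can be computed in closed form once point correspondences are known, using the singular value decomposition (the Kabsch method); iterative closest point (ICP) scan matchers repeat such a step while re-estimating correspondences. The matched synthetic scan below is an illustrative assumption:

<syntaxhighlight lang="python">
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid alignment of matched 2-D point sets (columns),
    returning rotation R and translation t with Q ~ R @ P + t."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# a synthetic scan rotated by 30 degrees and shifted, with known matches
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.default_rng(1).random((2, 20))
Q = R_true @ P + np.array([[0.5], [1.0]])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), t.ravel())
</syntaxhighlight>
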
 
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose ___location can be estimated by a sensor, such as [[Wi-Fi]] access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the ___location.
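
The contrast can be sketched as two observation likelihoods; the <code>beacon</code> coordinates and <code>signal_map</code> values below are hypothetical stand-ins for an identified radio beacon and a pre-learned Wi-Fi signal-strength map:

<syntaxhighlight lang="python">
import numpy as np

beacon = np.array([4.0, 2.0])   # hypothetical known landmark position

def landmark_likelihood(o, x, sigma=0.3):
    """Landmark-based P(o_t | x_t): compare the measured range o against
    the range predicted from pose x to the identified beacon."""
    predicted = np.linalg.norm(x - beacon)
    return np.exp(-0.5 * ((o - predicted) / sigma) ** 2)

# hypothetical learned signal map: cell -> expected Wi-Fi RSSI (dBm)
signal_map = {(0, 0): -40.0, (1, 0): -55.0, (0, 1): -60.0}

def raw_data_likelihood(o, cell, sigma=6.0):
    """Raw-data P(o_t | x_t): look the expectation up directly from the
    learned map, with no landmark identities assumed."""
    return np.exp(-0.5 * ((o - signal_map[cell]) / sigma) ** 2)

print(landmark_likelihood(3.0, np.array([1.0, 1.0])))
print(raw_data_likelihood(-52.0, (1, 0)))
</syntaxhighlight>
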
 
Optical sensors may be one-dimensional (single beam) or 2D (sweeping) [[laser rangefinder]]s, 3D high-definition light detection and ranging ([[lidar]]), 3D flash lidar, 2D or 3D [[sonar]] sensors, and one or more 2D [[camera]]s.<ref name="magnabosco13slam" /> Since 2005, there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.<ref name=KarlssonEtAl2005>{{cite conference
| last1 = Karlsson | first1 = N.
| last2 = Di Bernardo | first2 = E. | last3 = Ostrowski | first3 = J. | last4 = Goncalves | first4 = L. | last5 = Pirjanian | first5 = P. | last6 = Munich | first6 = M.
| year = 2005
| title = The vSLAM Algorithm for Robust Localization and Mapping
| conference = Int. Conf. on Robotics and Automation (ICRA)
| doi = 10.1109/ROBOT.2005.1570091
}}</ref> Visual and [[lidar]] sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM<ref>{{cite conference | last1= Fox |first1=C. |last2=Evans |first2=M. |last3=Pearson |first3=M. |last4=Prescott |first4=T. | title = Tactile SLAM with a biomimetic whiskered robot. | conference = Proc. IEEE Int. Conf. on Robotics and Automation (ICRA) | year = 2012|url=http://eprints.uwe.ac.uk/18384/1/fox_icra12_submitted.pdf}}</ref> (sensing by local touch only), radar SLAM,<ref>{{cite conference | last1=Marck |first1=J.W. |last2=Mohamoud |first2=A. |last3=v.d. Houwen |first3=E. |last4=van Heijster |first4=R. | title = Indoor radar SLAM: A radar application for vision and GPS denied environments. | conference = Radar Conference (EuRAD), 2013 European | year = 2013|url=http://publications.tno.nl/publication/34607287/4nJ48k/marck-2013-indoor.pdf}}</ref> acoustic SLAM,<ref>Evers, Christine, Alastair H. Moore, and Patrick A. Naylor. "[https://spiral.imperial.ac.uk/bitstream/10044/1/38877/2/2016012291332_994036_4133_Final.pdf Acoustic simultaneous localization and mapping (a-SLAM) of a moving microphone array and its surrounding speakers]." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016.</ref> and Wi-Fi SLAM (sensing by strengths of nearby [[Wi-Fi]] access points).<ref>Ferris, Brian, Dieter Fox, and Neil D. Lawrence. "[https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-399.pdf WiFi-SLAM using Gaussian process latent variable models]." IJCAI. Vol. 7. No. 1. 2007.</ref> Recent approaches apply quasi-optical [[wireless]] ranging for [[Trilateration|multi-lateration]] ([[real-time locating system]], RTLS) or [[Triangulation|multi-angulation]] in conjunction with SLAM, to cope with erratic wireless measurements. A kind of SLAM for human pedestrians uses a shoe-mounted [[inertial measurement unit]] as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings by an [[indoor positioning system]].<ref name=RobertsonEtAl2009>{{cite conference
|last1 = Robertson
|first1 = P.
|last2 = Angermann
|first2 = M.
|last3 = Krach
|first3 = B.
|year = 2009
|title = Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounted Inertial Sensors
|conference = Ubicomp 2009
|___location = Orlando, Florida, USA
|publisher = ACM
|url = http://www.kn-s.dlr.de/indoornav/ubicomp2009_final_my_pub.pdf
|doi = 10.1145/1620545.1620560
|url-status = dead
|archive-url = https://web.archive.org/web/20100816040331/http://www.kn-s.dlr.de/indoornav/ubicomp2009_final_my_pub.pdf
|archive-date = 2010-08-16
}}</ref>
 
For some outdoor applications, the need for SLAM has been almost entirely removed due to high-precision differential [[GPS]] sensors. From a SLAM perspective, these may be viewed as ___location sensors whose likelihoods are so sharp that they completely dominate the inference. However, GPS sensors may occasionally be degraded or fail entirely, e.g. during times of military conflict, scenarios of particular interest to some robotics applications.
 
=== Kinematics modeling ===
 
The <math>P(x_t|x_{t-1})</math> term represents the kinematics of the model, which usually includes information about action commands given to a robot. As part of the model, the [[robot kinematics|kinematics of the robot]] are included, to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from the various sensors and partial error models, and finally comprises a sharp virtual depiction of the robot's ___location and heading as a cloud of probability. Mapping is the final depiction of such a model; the map is either this depiction or the abstract term for the model itself.
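
A standard concrete choice for <math>P(x_t|x_{t-1})</math> is the odometry motion model, sampled below for a cloud of pose hypotheses as a particle filter would; the noise parameters are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sample_motion(pose, u, alphas=(0.05, 0.05, 0.01)):
    """Sample from P(x_t | x_{t-1}, u_t) for pose = (x, y, heading) and
    odometry command u = (rot1, trans, rot2), with noise that grows
    with the magnitude of the commanded motion."""
    rot1, trans, rot2 = u
    rot1 += rng.normal(0, alphas[0] * abs(rot1) + alphas[2] * trans)
    trans += rng.normal(0, alphas[1] * trans)
    rot2 += rng.normal(0, alphas[0] * abs(rot2) + alphas[2] * trans)
    x, y, th = pose
    x += trans * np.cos(th + rot1)
    y += trans * np.sin(th + rot1)
    return np.array([x, y, th + rot1 + rot2])

# propagate a cloud of pose hypotheses through "turn, drive 1 m, turn"
particles = np.zeros((100, 3))
particles = np.array([sample_motion(p, (0.3, 1.0, -0.1)) for p in particles])
print(particles.mean(axis=0))   # mean pose after the motion
</syntaxhighlight>
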
 
 
=== Acoustic SLAM ===
An extension of the common SLAM problem has been applied to the acoustic ___domain, where environments are represented by the three-dimensional (3D) position of sound sources.<ref>{{Cite journal|last1=Evers|first1=Christine|last2=Naylor|first2=Patrick A.|date=September 2018|title=Acoustic SLAM|journal=IEEE/ACM Transactions on Audio, Speech, and Language Processing|volume=26|issue=9|pages=1484–1498|doi=10.1109/TASLP.2018.2828321|issn=2329-9290|url=https://eprints.soton.ac.uk/437941/1/08340823.pdf|doi-access=free}}</ref> Early implementations of this technique have used direction-of-arrival (DoA) estimates of the sound source ___location, and rely on principal techniques of [[sound localization]] to determine source locations. An observer, or robot, must be equipped with a [[microphone array]] to enable use of acoustic SLAM, so that DoA features are properly estimated. Acoustic SLAM has paved foundations for further studies in acoustic scene mapping, and can play an important role in human–robot interaction through speech. To map multiple, and occasionally intermittent, sound sources, an acoustic SLAM system uses foundations in random finite set theory to handle the varying presence of acoustic landmarks.<ref>{{Cite journal|last=Mahler|first=R.P.S.|date=October 2003|title=Multitarget bayes filtering via first-order multitarget moments|journal=IEEE Transactions on Aerospace and Electronic Systems|language=en|volume=39|issue=4|pages=1152–1178|doi=10.1109/TAES.2003.1261119|bibcode=2003ITAES..39.1152M|issn=0018-9251}}</ref> However, the nature of acoustically derived features leaves acoustic SLAM susceptible to problems of reverberation, inactivity, and noise within an environment.
 
=== Audiovisual SLAM ===
=== Moving objects ===
 
Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges.<ref>{{Cite journal|last1=Perera|first1=Samunda|last2=Pasqual|first2=Ajith|date=2011|editor-last=Bebis|editor-first=George|editor2-last=Boyle|editor2-first=Richard|editor3-last=Parvin|editor3-first=Bahram|editor4-last=Koracin|editor4-first=Darko|editor5-last=Wang|editor5-first=Song|editor6-last=Kyungnam|editor6-first=Kim|editor7-last=Benes|editor7-first=Bedrich|editor8-last=Moreland|editor8-first=Kenneth|editor9-last=Borst|editor9-first=Christoph|title=Towards Realtime Handheld MonoSLAM in Dynamic Environments|journal=Advances in Visual Computing|volume=6938|series=Lecture Notes in Computer Science|language=en|publisher=Springer Berlin Heidelberg|pages=313–324|doi=10.1007/978-3-642-24028-7_29|isbn=9783642240287}}</ref><ref name=":1">{{Citation|last1=Perera|first1=Samunda|last2=Barnes|first2=Nick|last3=Zelinsky|first3=Alexander|title=Exploration: Simultaneous Localization and Mapping (SLAM)|date=2014|work=Computer Vision: A Reference Guide|pages=268–275|editor-last=Ikeuchi|editor-first=Katsushi|publisher=Springer US|language=en|doi=10.1007/978-0-387-31439-6_280|isbn=9780387314396|s2cid=34686200}}</ref> SLAM with detection and tracking of moving objects (SLAM with DATMO) is a model which tracks moving objects in a similar way to the agent itself.<ref name=Wang2007>{{cite journal
| last1 = Wang
| first1 = Chieh-Chih
| last2 = Thorpe
| first2 = Charles
| last3 = Thrun
| first3 = Sebastian
| last4 = Hebert
| first4 = Martial
| last5 = Durrant-Whyte
| first5 = Hugh
| title = Simultaneous Localization, Mapping and Moving Object Tracking
| journal = Int. J. Robot. Res.
| volume = 26
| number = 9
| year = 2007
| pages = 889–916
| url = https://www.ri.cmu.edu/pub_files/pub4/wang_chieh_chih_2007_1/wang_chieh_chih_2007_1.pdf
| doi = 10.1177/0278364907081229
| s2cid = 14526806
}}</ref>
 
=== Loop closure ===
 
Loop closure is the problem of recognizing a previously visited ___location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the ___location. Typical loop closure methods apply a second algorithm to compute some type of sensor measure similarity, and reset the ___location priors when a match is detected. For example, this can be done by storing and comparing [[Bag-of-words model in computer vision|bag of words]] vectors of [[scale-invariant feature transform]] (SIFT) features from each previously visited ___location.
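
A minimal sketch of such appearance-based matching, assuming a hypothetical eight-word visual vocabulary and a hand-picked similarity threshold:

<syntaxhighlight lang="python">
import numpy as np

VOCAB_SIZE = 8
visited = {}   # ___location id -> bag-of-words vector

def bow_vector(word_ids):
    """Histogram of quantized feature descriptors ("visual words"),
    e.g. SIFT descriptors assigned to vocabulary entries."""
    return np.bincount(word_ids, minlength=VOCAB_SIZE).astype(float)

def detect_loop(query, threshold=0.8):
    """Return the best-matching stored ___location whose cosine similarity
    to the query exceeds the threshold, or None."""
    best, best_sim = None, threshold
    for loc, v in visited.items():
        sim = query @ v / (np.linalg.norm(query) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = loc, sim
    return best

visited[0] = bow_vector(np.array([0, 0, 3, 5, 5, 7]))
visited[1] = bow_vector(np.array([1, 2, 2, 4, 6, 6]))
query = bow_vector(np.array([0, 3, 5, 5, 7, 7]))   # resembles ___location 0
print(detect_loop(query))                           # -> 0, a loop closure
</syntaxhighlight>
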
 
=== Exploration ===
 
"''Active SLAM"'' studies the combined problem of SLAM with deciding where to move next in order to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the [[entropy]] of the map under hypothetical actions. "Multi agent SLAM" extends this problem to the case of multiple robots coordinating themselves to explore optimally.
 
=== Biological inspiration ===
 
In neuroscience, the [[hippocampus]] appears to be involved in SLAM-like computations,<ref name="Howard">{{cite journal|last1=Howard|first1=MW|last2=Fotedar|first2=MS|last3=Datey|first3=AV|last4=Hasselmo|first4=ME|title= The temporal context model in spatial navigation and relational learning: toward a common explanation of medial temporal lobe function across domains|journal=Psychological Review|volume=112|issue=1|pages=75–116|pmc=1421376|year=2005|pmid=15631589|doi=10.1037/0033-295X.112.1.75}}</ref><ref name="Fox & Prescott">{{cite book|last1=Fox|first1=C|title= The 2010 International Joint Conference on Neural Networks (IJCNN)|pages=1–8|last2=Prescott|first2=T|chapter= Hippocampus as unitary coherent particle filter|doi=10.1109/IJCNN.2010.5596681|year=2010|isbn=978-1-4244-6916-1|s2cid=10838879|url=http://eprints.whiterose.ac.uk/108622/1/Fox2010_HippocampusUnitaryCoherentParticleFilter.pdf}}</ref><ref name="RatSLAM">{{cite book|last1=Milford|first1=MJ|last2=Wyeth|first2=GF|last3=Prasser|first3=D |title=IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004|chapter=RatSLAM: A hippocampal model for simultaneous localization and mapping|year=2004|pages=403–408 Vol.1|doi=10.1109/ROBOT.2004.1307183 |isbn=0-7803-8232-3|s2cid=7139556|url=https://eprints.qut.edu.au/37593/1/c37593.pdf}}</ref> giving rise to [[place cells]], and has formed the basis for bio-inspired SLAM systems such as RatSLAM.
 
== Implementation methods ==
{{Further|List of SLAM methods}}
 
Various SLAM algorithms are implemented in the [[open-source software]] [[Robot Operating System]] (ROS) libraries, often used together with the [[Point Cloud Library]] for 3D maps or visual features from [[OpenCV]].
 
=== EKF SLAM ===
 
In [[robotics]], '''EKF SLAM''' is a class of algorithms which uses the [[extended Kalman filter]] (EKF) for SLAM. Typically, EKF SLAM algorithms are feature based, and use the maximum likelihood algorithm for data association. In the 1990s and 2000s, EKF SLAM was the de facto method for SLAM, until the introduction of [[FastSLAM]].<ref name=Montemerlo2002>{{cite conference
| last1 = Montemerlo | first1 = M. | last2 = Thrun | first2 = S. | last3 = Koller | first3 = D. | last4 = Wegbreit | first4 = B.
| year = 2002
| title = FastSLAM: A factored solution to the simultaneous localization and mapping problem
| book-title = Proceedings of the AAAI National Conference on Artificial Intelligence
| pages = 593–598
| url = https://www.cs.cmu.edu/~mmde/mmdeaaai2002.pdf
}}</ref>
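
A compact sketch of these equations for a single landmark follows; the unicycle motion model, the range-bearing sensor and all noise covariances are illustrative assumptions rather than a reference implementation:

<syntaxhighlight lang="python">
import numpy as np

# State stacks the robot pose with one landmark: [x, y, heading, lx, ly].
mu = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
Sigma = np.diag([0.01, 0.01, 0.01, 1.0, 1.0])
R = np.diag([0.02, 0.02, 0.005, 0.0, 0.0])   # motion noise (landmark static)
Q = np.diag([0.1, 0.05])                     # range and bearing noise

def predict(mu, Sigma, v, w, dt=1.0):
    """EKF prediction with a unicycle motion model, linearized at mu."""
    x, y, th = mu[:3]
    mu = mu.copy()
    mu[0] += v * dt * np.cos(th)
    mu[1] += v * dt * np.sin(th)
    mu[2] += w * dt
    F = np.eye(5)
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    return mu, F @ Sigma @ F.T + R

def update(mu, Sigma, z):
    """EKF update from a range-bearing observation of the landmark."""
    dx, dy = mu[3] - mu[0], mu[4] - mu[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([
        [-dx/np.sqrt(q), -dy/np.sqrt(q),  0, dx/np.sqrt(q), dy/np.sqrt(q)],
        [dy/q,           -dx/q,          -1, -dy/q,         dx/q],
    ])
    S = H @ Sigma @ H.T + Q
    K = Sigma @ H.T @ np.linalg.inv(S)           # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    return mu + K @ innov, (np.eye(5) - K @ H) @ Sigma

mu, Sigma = predict(mu, Sigma, v=1.0, w=0.1)
mu, Sigma = update(mu, Sigma, z=np.array([1.5, 0.7]))
print(np.round(mu, 2))
</syntaxhighlight>
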
 
Associated with the EKF is the Gaussian noise assumption, which significantly impairs EKF SLAM's ability to deal with uncertainty. With a greater amount of uncertainty in the posterior, the linearization in the EKF fails.<ref name=Trun2005>{{cite book
| last1 = Thrun | first1 = S. | last2 = Burgard | first2 = W. | last3 = Fox | first3 = D.
| title = Probabilistic Robotics
| publisher = The MIT Press
| ___location = Cambridge
| year = 2005
| isbn = 0-262-20162-3
}}</ref>
 
 
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.<ref name=Smith1986>{{cite journal
| last1 = Smith | first1 = R.C.
| last2 = Cheeseman | first2 = P.
| year = 1986
| title = On the Representation and Estimation of Spatial Uncertainty
| journal = The International Journal of Robotics Research
| volume = 5
| issue = 4
| pages = 56–68
| url = http://www.frc.ri.cmu.edu/~hpm/project.archive/reference.file/Smith&Cheeseman.pdf
| doi = 10.1177/027836498600500404
|s2cid=60110448
| access-date = 2008-04-08
}}</ref><ref name=Smith1986b>{{cite conference
|last1 = Smith |first1 = R.C.
|last2 = Self |first2 = M.
|last3 = Cheeseman |first3 = P.
|year = 1986
|title = Estimating Uncertain Spatial Relationships in Robotics
|conference = UAI '86
|___location = University of Pennsylvania, Philadelphia, PA, USA
|book-title = Proceedings of the Second Annual Conference on Uncertainty in Artificial Intelligence
|pages = 435–461
|publisher = Elsevier
|url = http://www-robotics.usc.edu/~maja/teaching/cs584/papers/smith90stochastic.pdf
|url-status = dead
|archive-url = https://web.archive.org/web/20100702155505/http://www-robotics.usc.edu/~maja/teaching/cs584/papers/smith90stochastic.pdf
|archive-date = 2010-07-02
}}</ref> Other pioneering work in this field was conducted by the research group of [[Hugh F. Durrant-Whyte]] in the early 1990s.<ref name=Leonard1991>{{cite journal
| last1 = Leonard | first1 = J.J.
| last2 = Durrant-Whyte | first2 = H.F.
| year = 1991
| title = Simultaneous map building and localization for an autonomous mobile robot
| journal = Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems
| pages = 1442–1447
| doi = 10.1109/IROS.1991.174711
|isbn=978-0-7803-0067-5
|s2cid=206935019
}}</ref> This work showed that solutions to SLAM exist in the infinite data limit, a finding that motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined in the paper "Localization of Autonomous Guided Vehicles", which first appeared at the International Symposium on Robotics Research in 1995.<ref>{{Cite journal|last1=Durrant-Whyte|first1=H.|last2=Bailey|first2=T.|date=June 2006|title=Simultaneous localization and mapping: part I|url=https://ieeexplore.ieee.org/document/1638022|journal=IEEE Robotics Automation Magazine|volume=13|issue=2|pages=99–110|doi=10.1109/MRA.2006.1638022|s2cid=8061430|issn=1558-223X|doi-access=free}}</ref>
 
The self-driving cars ''Stanley'' and ''Junior'', led by [[Sebastian Thrun]], won the DARPA Grand Challenge and came second in the DARPA Urban Challenge respectively in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners.<ref>{{Cite news|last=Knight|first=Will|url=https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/|title=With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots|work=MIT Technology Review|date=September 16, 2015|access-date=2018-04-25|language=en}}</ref>
 
== See also ==
* [[Neato Robotics]]
* [[Particle filter]]
* [[Recursive Bayesian estimation]]
* [[Robotic mapping]]
* ''[[Stanley (vehicle)|Stanley]]'', [[DARPA]] Grand Challenge vehicle
* [[Stereophotogrammetry]]
* [[Structure from motion]]
* [[Tango (platform)]]
* [[Visual odometry]]
{{Div col end}}
 
== References ==
{{Reflist}}
 
== External links ==
* [http://www.probabilistic-robotics.org/ Probabilistic Robotics] by [[Sebastian Thrun]], [[Wolfram Burgard]] and [[Dieter Fox]] with a clear overview of SLAM.
* [https://dspace.mit.edu/bitstream/handle/1721.1/36832/16-412JSpring2004/NR/rdonlyres/Aeronautics-and-Astronautics/16-412JSpring2004/A3C5517F-C092-4554-AA43-232DC74609B3/0/1Aslam_blas_report.pdf SLAM For Dummies (A Tutorial Approach to Simultaneous Localization and Mapping)].
* [http://www.doc.ic.ac.uk/%7Eajd/index.html Andrew Davison] research page at the [[Department of Computing, Imperial College London|Department of Computing]], [[Imperial College London]] about SLAM using vision.
* [https://openslam-org.github.io/ openslam.org] A good collection of open source code and explanations of SLAM.
* [http://eia.udg.es/~qsalvi/Slam.zip Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping] Vehicle moving in 1D, 2D and 3D.
* [https://web.archive.org/web/20120313064730/http://www.kn-s.dlr.de/indoornav/footslam_video.html FootSLAM research page] at the [[German Aerospace Center]] (DLR) including the related Wi-Fi SLAM and PlaceSLAM approaches.
* [https://www.youtube.com/watch?v=B2qzYCeT9oQ&list=PLpUPoM7Rgzi_7YWn14Va2FODh7LzADBSm SLAM lecture] Online SLAM lecture based on Python.