Simultaneous localization and mapping

{{Short description|Computational technique used by robots and autonomous vehicles}}
[[File:Stanley2.JPG|thumb|[[Stanley (vehicle)|2005 DARPA Grand Challenge winner Stanley]] performed SLAM as part of its autonomous driving system.]]
[[File:RoboCup Rescue arena map generated by robot Hector from Darmstadt at 2010 German open.jpg|thumb|A map generated by a SLAM Robot]]
 
'''Simultaneous localization and mapping''' ('''SLAM''') is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an [[Intelligent agent|agent]]'s ___location within it. While this initially appears to be a [[chicken or the egg]] problem, there are several [[algorithm]]s known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the [[particle filter]], extended [[Kalman filter]], [[covariance intersection]], and [[GraphSLAM]]. SLAM algorithms are based on concepts in [[computational geometry]] and [[computer vision]], and are used in [[robot navigation]], [[robotic mapping]] and [[odometry]] for [[virtual reality]] or [[augmented reality]].
 
SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in [[self-driving car]]s, [[unmanned aerial vehicle]]s, [[autonomous underwater vehicle]]s, [[Rover (space exploration)|planetary rovers]], newer [[domestic robot]]s and even inside the human body.
 
== Mathematical description of the problem ==
Given a series of controls <math>u_t</math> and sensor observations <math>o_t</math> over discrete time steps <math>t</math>, the SLAM problem is to compute an estimate of the agent's state <math>x_t</math> and a map of the environment <math>m_t</math>. All quantities are usually probabilistic, so the objective is to compute
:<math> P(m_{t+1},x_{t+1}|o_{1:t+1},u_{1:t}) </math>
 
Applying [[Bayes' rule]] gives a framework for sequentially updating the ___location posteriors, given a map and a transition function <math>P(x_t|x_{t-1})</math>,
 
:<math>P(x_t | o_{1:t},u_{1:t},m_t) = P(o_{t}|x_t, m_t) \sum_{x_{t-1}} P(x_t|x_{t-1}) P(x_{t-1}|m_t, o_{1:t-1},u_{1:t}) / Z</math>
 
Similarly the map can be updated sequentially by
 
:<math>P(m_t | x_t,o_{1:t},u_{1:t}) = \sum_{m_{t-1}} P(m_t | x_t, m_{t-1}, o_t,u_{1:t}) P(m_{t-1} | x_t, o_{1:t-1},u_{1:t})</math>
 
Like many inference problems, the solutions to inferring the two variables together can be found, to a local optimum solution, by alternating updates of the two beliefs in a form of an [[expectation–maximization algorithm]].
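
The alternation can be made concrete on a toy discrete model. The following sketch is an illustration only; the grid size, motion model and sensor error rates are assumptions chosen for the example. It maintains a belief over the robot's cell and a per-cell map belief on a one-dimensional cyclic grid, updating each in turn as in the equations above.

<syntaxhighlight lang="python">
# Minimal sketch of alternating ___location/map belief updates on a 1-D cyclic
# grid: the robot moves one cell right per step (with slip) and senses whether
# its current cell is "marked". Not a production SLAM system.
import numpy as np

N = 12
true_map = np.zeros(N); true_map[[2, 5, 8]] = 1.0  # ground truth, unknown to the robot
P_HIT = 0.9                                        # P(o=1 | cell marked)
P_FALSE = 0.1                                      # P(o=1 | cell clear)

loc = np.full(N, 1.0 / N)    # belief over ___location x_t
log_odds = np.zeros(N)       # per-cell map belief in log-odds form

rng = np.random.default_rng(0)
true_x = 0
for t in range(60):
    true_x = (true_x + (rng.random() < 0.8)) % N
    obs = int(rng.random() < (P_HIT if true_map[true_x] else P_FALSE))

    # Location update: motion model P(x_t | x_{t-1}), then Bayes' rule
    # against the current map belief.
    loc = 0.8 * np.roll(loc, 1) + 0.2 * loc
    marked = 1.0 / (1.0 + np.exp(-log_odds))
    lik = marked * (P_HIT if obs else 1 - P_HIT) + (1 - marked) * (P_FALSE if obs else 1 - P_FALSE)
    loc = loc * lik
    loc /= loc.sum()         # the normalizer Z in the equation above

    # Map update: inverse sensor model, weighted by where the robot believes it is.
    delta = np.log(P_HIT / P_FALSE) if obs else np.log((1 - P_HIT) / (1 - P_FALSE))
    log_odds += loc * delta

print("estimated marked cells:", np.round(1 / (1 + np.exp(-log_odds)), 2))
</syntaxhighlight>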
 
== Algorithms ==
Statistical techniques used to approximate the above equations include [[Kalman filter]]s and [[particle filter]]s (the algorithm behind Monte Carlo Localization). They provide an estimation of the [[posterior probability distribution]] for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using [[covariance intersection]] are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications.<ref>{{cite conference|last1=Julier|first1=S.|last2=Uhlmann|first2=J.|title=Building a Million-Beacon Map.|conference=Proceedings of ISAM Conference on Intelligent Systems for Manufacturing|year=2001|doi=10.1117/12.444158}}</ref> Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty.<ref>{{cite conference|last1=Csorba|first1=M.|last2=Uhlmann|first2=J.|title=A Suboptimal Algorithm for Automatic Map Building.|conference=Proceedings of the 1997 American Control Conference|year=1997|doi=10.1109/ACC.1997.611857}}</ref>
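
As a concrete illustration of the particle-filter idea, the following toy filter localizes a robot moving along a line from noisy range measurements; the motion and measurement models are assumptions made for the example, not a production implementation.

<syntaxhighlight lang="python">
# Toy 1-D particle filter: propagate pose hypotheses through a noisy motion
# model, weight them by the measurement likelihood, and resample.
import numpy as np

rng = np.random.default_rng(1)
N_PARTICLES = 500
particles = rng.uniform(0.0, 10.0, N_PARTICLES)  # initial pose hypotheses

true_x = 2.0
for step in range(20):
    true_x += 0.5                                 # the robot drives forward
    z = true_x + rng.normal(0.0, 0.3)             # noisy range measurement

    # Predict: sample from the motion model P(x_t | x_{t-1}).
    particles += 0.5 + rng.normal(0.0, 0.1, N_PARTICLES)

    # Weight: measurement likelihood P(z | x), Gaussian around each particle.
    weights = np.exp(-0.5 * ((z - particles) / 0.3) ** 2)
    weights /= weights.sum()

    # Resample: draw particles in proportion to their weights.
    particles = rng.choice(particles, size=N_PARTICLES, p=weights)

print(f"true pose {true_x:.2f}, estimate {particles.mean():.2f}")
</syntaxhighlight>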
 
[[Set estimation|Set-membership techniques]] are mainly based on [[interval propagation|interval constraint propagation]].<ref>
{{cite journal|last1=Jaulin|first1=L.|
title=A nonlinear set membership approach for the localization and map building of underwater robots|
journal=IEEE Transactions on Robotics|year=2009|volume=25|issue=1|pages=88–98|
url=http://www.ensta-bretagne.fr/jaulin/paper_reder_ieee_tro.pdf|doi=10.1109/TRO.2008.2010358|s2cid=15474613}}
</ref><ref>
{{cite journal|last1=Jaulin|first1=L.|
title=Range-only SLAM with occupancy maps; A set-membership approach|
journal=IEEE Transactions on Robotics|volume=27|issue=5|pages=1004–1010|
year=2011|
url=http://www.ensta-bretagne.fr/jaulin/paper_dig_slam.pdf|doi=10.1109/TRO.2011.2147110|s2cid=52801599}}
</ref>
They provide a set which encloses the pose of the robot and a set approximation of the map. [[Bundle adjustment]], and more generally [[maximum a posteriori estimation]] (MAP), is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity. It is used in commercialized SLAM systems such as Google's ARCore, which replaced its earlier augmented reality computing platform [[Tango (platform)|Tango]], formerly ''Project Tango''. MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability.
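
The structure of MAP estimation can be seen in a linear toy problem. The sketch below is a hypothetical one-dimensional example, not ARCore's implementation: three robot poses and one landmark position are estimated jointly from odometry and range measurements by least squares, which is the MAP solution under Gaussian noise. Bundle adjustment solves the nonlinear, many-camera, many-landmark analogue.

<syntaxhighlight lang="python">
# Joint MAP estimation of poses and a landmark as linear least squares.
import numpy as np

# State vector: [x0, x1, x2, l]; x0 is pinned to 0 to fix the gauge freedom.
rows, rhs = [], []
rows.append([1, 0, 0, 0]);  rhs.append(0.0)   # prior: x0 = 0
rows.append([-1, 1, 0, 0]); rhs.append(1.1)   # odometry: x1 - x0 ≈ 1.1
rows.append([0, -1, 1, 0]); rhs.append(0.9)   # odometry: x2 - x1 ≈ 0.9
rows.append([-1, 0, 0, 1]); rhs.append(3.2)   # range:    l - x0 ≈ 3.2
rows.append([0, -1, 0, 1]); rhs.append(2.0)   # range:    l - x1 ≈ 2.0
rows.append([0, 0, -1, 1]); rhs.append(1.1)   # range:    l - x2 ≈ 1.1

A, b = np.array(rows, float), np.array(rhs)
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print("poses:", estimate[:3], "landmark:", estimate[3])
</syntaxhighlight>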
 
New SLAM algorithms remain an active research area,<ref name=":0">{{Cite journal|last1=Cadena|first1=Cesar|last2=Carlone|first2=Luca|last3=Carrillo|first3=Henry|last4=Latif|first4=Yasir|last5=Scaramuzza|first5=Davide|last6=Neira|first6=Jose|last7=Reid|first7=Ian|last8=Leonard|first8=John J.|date=2016|title=Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age|journal=IEEE Transactions on Robotics|language=en-US|volume=32|issue=6|pages=1309–1332|arxiv=1606.05830|bibcode=2016arXiv160605830C|doi=10.1109/tro.2016.2624754|issn=1552-3098|hdl=2440/107554|s2cid=2596787}}</ref> and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
 
=== Mapping ===
Topological maps are a method of environment representation which capture the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.<ref name=cummins2008>
{{cite journal
|last1=Cummins|first1=Mark
|last2=Newman|first2=Paul
|title=FAB-MAP: Probabilistic localization and mapping in the space of appearance
|journal=The International Journal of Robotics Research
|date=June 2008
|volume=27|issue=6|pages=647–665
|doi=10.1177/0278364908090961
|s2cid=17969052
|url=http://www.robots.ox.ac.uk/~mjc/Papers/IJRR_2008_FabMap.pdf
|access-date=23 July 2014}}</ref>
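
A greatly simplified sketch of appearance-based place recognition, the mechanism such topological approaches use to detect loop closures, is shown below. The bag-of-words histograms and the similarity threshold are assumptions for the example; FAB-MAP itself uses a probabilistic model of visual-word co-occurrence rather than raw cosine similarity.

<syntaxhighlight lang="python">
# Toy place recognition over a topological graph: each place is summarized by
# a bag-of-visual-words histogram; a revisit is declared when the cosine
# similarity to a stored place exceeds a threshold.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

places = []   # histograms of previously visited places (topological nodes)
edges = []    # loop-closure edges (current index, matched index)

rng = np.random.default_rng(2)
for t in range(10):
    if t == 7:
        hist = places[2] + rng.poisson(0.2, 50)    # simulated revisit of place 2
    else:
        hist = rng.poisson(2.0, 50).astype(float)  # a new appearance

    matches = [i for i, p in enumerate(places) if cosine(hist, p) > 0.9]
    if matches:
        edges.append((t, matches[0]))              # enforce global consistency here
    places.append(hist)

print("loop closures:", edges)
</syntaxhighlight>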
 
In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically independent to simplify computation. Under such assumption, <math>P(m_t | x_t, m_{t-1}, o_t )</math> are set to 1 if the new map's cells are consistent with the observation <math>o_t</math> at ___location <math>x_t</math> and 0 if inconsistent.
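
Under the independence assumption, this consistency rule reduces to overwriting the observed cell, as in the minimal sketch below; the 5×5 grid and cell indices are arbitrary choices for illustration.

<syntaxhighlight lang="python">
# Binary-consistency grid update with statistically independent cells:
# only maps that agree with the observation at the sensed cell get
# probability mass, so the sensed cell is simply overwritten.
import numpy as np

grid = np.full((5, 5), 0.5)   # prior: each cell unknown, P(occupied) = 0.5

def integrate(grid, cell, occupied):
    """Keep only maps consistent with the observation at the sensed cell."""
    new = grid.copy()
    new[cell] = 1.0 if occupied else 0.0
    return new

grid = integrate(grid, (2, 3), occupied=True)
print(grid)
</syntaxhighlight>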
 
Modern self-driving cars mostly simplify the mapping problem to almost nothing by making extensive use of highly detailed map data collected in advance. This can include map annotations to the level of marking locations of individual white line segments and curbs on the road. Location-tagged visual data such as Google's StreetView may also be used as part of maps. Essentially such systems simplify the SLAM problem to a simpler localization-only task, perhaps allowing only moving objects, such as cars and people, to be updated in the map at runtime.
=== Sensing ===
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose ___location can be estimated by a sensor, such as [[Wi-Fi]] access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the ___location.
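
The two styles can be contrasted in a small sketch, assuming Gaussian noise in both cases; the beacon position and the signal-strength field <code>field</code> are hypothetical inputs, not part of any standard API.

<syntaxhighlight lang="python">
# Landmark-based vs. raw-data sensor likelihoods P(o_t | x_t).
import numpy as np

def landmark_likelihood(obs_range, x, beacon_pos, sigma=0.5):
    """Identified beacon: likelihood of a noisy range to a known point."""
    expected = np.linalg.norm(np.asarray(x) - np.asarray(beacon_pos))
    return np.exp(-0.5 * ((obs_range - expected) / sigma) ** 2)

def raw_data_likelihood(obs_rssi, x, field, sigma=2.0):
    """No landmark identity: score the observation against a learned
    signal-strength map field(x), assumed given here."""
    return np.exp(-0.5 * ((obs_rssi - field(x)) / sigma) ** 2)

print(landmark_likelihood(2.0, (1.0, 1.0), (3.0, 1.0)))        # exact match -> 1.0
print(raw_data_likelihood(-50.0, (1.0, 1.0), lambda x: -50.0)) # exact match -> 1.0
</syntaxhighlight>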
 
Optical sensors may be one-dimensional (single-beam) or two-dimensional (sweeping) [[laser rangefinder]]s, 3D high definition light detection and ranging ([[lidar]]), 3D flash lidar, 2D or 3D [[sonar]] sensors, and one or more 2D [[camera]]s.<ref name="magnabosco13slam"/> Since the invention of local features, such as [[scale-invariant feature transform|SIFT]], there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.<ref name=Se2001>
{{cite conference
|last1=Se|first1=Stephen
|last2=Lowe|first2=David
|last3=Little|first3=Jim
|title=Vision-based mobile robot localization and mapping using scale-invariant features
|conference=Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)
|year=2001
}}</ref>
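
A minimal sketch of the feature-extraction step that visual SLAM front ends build on is shown below, using OpenCV's ORB detector as a freely licensed stand-in for SIFT; the image file names are placeholders.

<syntaxhighlight lang="python">
# Detect and match local features between two camera frames, yielding
# correspondences from which relative camera motion can later be estimated.
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-checking to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} correspondences between the two frames")
</syntaxhighlight>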
== History ==
Pioneering work by the research group of [[Hugh F. Durrant-Whyte]] in the early 1990s<ref>{{cite conference|last1=Leonard|first1=J.J.|last2=Durrant-Whyte|first2=H.F.|title=Simultaneous map building and localization for an autonomous mobile robot|conference=Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems|year=1991|isbn=978-0-7803-0067-5|s2cid=206935019}}</ref> showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined within the paper, "Localization of Autonomous Guided Vehicles", which first appeared in [[International Symposium on Robotics|ISR]] in 1995.<ref>{{Cite journal|last1=Durrant-Whyte|first1=H.|last2=Bailey|first2=T.|date=June 2006|title=Simultaneous localization and mapping: part I|journal=IEEE Robotics & Automation Magazine|volume=13|issue=2|pages=99–110|doi=10.1109/MRA.2006.1638022|s2cid=8061430|issn=1558-223X|doi-access=free}}</ref>
 
The self-driving STANLEY and JUNIOR cars, led by [[Sebastian Thrun]], won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners<ref>{{Cite news|last=Knight|first=Will|url=https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/|title=With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots|work=MIT Technology Review|date=September 16, 2015|access-date=2018-04-25|language=en}}</ref> and [[virtual reality headset]]s such as the [[Meta Quest 2]] and [[PICO 4]] for markerless inside-out tracking.
 
== References ==
{{Reflist}}

== External links ==
* [https://openslam-org.github.io/ openslam.org] A good collection of open source code and explanations of SLAM.
* [http://eia.udg.es/~qsalvi/Slam.zip Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping] Vehicle moving in 1D, 2D and 3D.
* [https://web.archive.org/web/20120313064730/http://www.kn-s.dlr.de/indoornav/footslam_video.html FootSLAM research page] at [[German Aerospace Center]] (DLR) including the related Wi-Fi SLAM and PlaceSLAM approaches.
* [https://www.youtube.com/watch?v=B2qzYCeT9oQ&list=PLpUPoM7Rgzi_7YWn14Va2FODh7LzADBSm SLAM lecture] Online SLAM lecture based on Python.
 
{{Computer vision}}
{{Robotics}}
 
{{DEFAULTSORT:Simultaneous Localization And Mapping}}
[[Category:Computational geometry]]
[[Category:Robot navigation]]
[[Category:Applied machine learning]]
[[Category:Motion in computer vision]]
[[Category:Positioning]]