Simultaneous localization and mapping

== Mathematical description of the problem ==
Given a series of controls <math>u_t</math> and sensor observations <math>o_t</math> over discrete time steps <math>t</math>, the SLAM problem is to compute an estimate of the agent's state <math>x_t</math> and a map of the environment <math>m_t</math>. All quantities are usually probabilistic, so the objective is to compute<ref>{{cite book |last1=Thrun |first1=Sebastian |authorlink = Sebastian Thrun |last2=Burgard |first2=Wolfram |authorlink2 = Wolfram Burgard |last3=Fox |first3=Dieter |authorlink3 = Dieter Fox|date= |title=Probabilistic Robotics |publisher= The MIT Press |page= 309}}</ref>
:<math> P(m_{t+1},x_{t+1}|o_{1:t+1},u_{1:t}) </math>
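
To make the objective concrete, here is a minimal toy sketch (not from any cited source; the world size, motion model, and noise parameters are all invented) that maintains this joint posterior exactly by brute-force enumeration over every combination of agent position and map. Practical SLAM algorithms must approximate this computation, since the joint state space grows exponentially with map size.

<syntaxhighlight lang="python">
# Toy SLAM posterior update by exhaustive enumeration (illustrative only).
# The map m is a tuple of N binary cell labels; the state x is the agent's
# cell index; the control is always "move right"; all parameters are invented.
from itertools import product

N = 5            # number of map cells
P_MOVE = 0.9     # probability the "move right" control succeeds
P_CORRECT = 0.8  # probability the sensor reads the true label of the cell

states = [(x, m) for x in range(N) for m in product((0, 1), repeat=N)]
belief = {s: 1.0 / len(states) for s in states}  # uniform prior over (x, m)

def motion(x_new, x_old):
    """P(x_new | x_old, u='right'), wrapping around the world edge."""
    if x_new == (x_old + 1) % N:
        return P_MOVE
    return (1.0 - P_MOVE) if x_new == x_old else 0.0

def likelihood(o, x, m):
    """P(o | x, m): the sensor observes the current cell's label, noisily."""
    return P_CORRECT if o == m[x] else 1.0 - P_CORRECT

def slam_update(belief, o):
    """One Bayes-filter step on the joint posterior P(m, x | o, u)."""
    # Predict: push the belief through the motion model (the map is static).
    pred = {s: 0.0 for s in belief}
    for (x, m), p in belief.items():
        for x_new in range(N):
            pred[(x_new, m)] += motion(x_new, x) * p
    # Correct: weight by the observation likelihood, then renormalize.
    post = {(x, m): likelihood(o, x, m) * p for (x, m), p in pred.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

for o in [1, 0, 1, 1, 0]:  # a made-up observation sequence
    belief = slam_update(belief, o)

x_map, m_map = max(belief, key=belief.get)  # joint MAP estimate
print("most likely position:", x_map, "most likely map:", m_map)
</syntaxhighlight>
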
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose location can be estimated by a sensor, such as [[Wi-Fi]] access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the location.
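
As a hedged illustration of this distinction (the function names, noise models, and grid values below are invented for the sketch), a landmark-based model scores an observation against the predicted measurement of an identified landmark at a known position, while a raw-data model scores the observation directly against a location-indexed map:

<syntaxhighlight lang="python">
# Sketch (all parameters invented) of the two families of sensor model P(o_t | x_t).
import math

def landmark_likelihood(measured_range, pose, landmark, sigma=0.2):
    """Landmark-based: the observation is a range to a uniquely identified
    landmark at a known map position, with Gaussian measurement noise."""
    predicted = math.dist(pose, landmark)
    return math.exp(-0.5 * ((measured_range - predicted) / sigma) ** 2) / (
        sigma * math.sqrt(2.0 * math.pi))

def raw_data_likelihood(measured_rssi, pose, signal_map, sigma=4.0):
    """Raw-data: no landmark identity is assumed; P(o_t | x_t) is modelled
    directly as a function of location, here by comparing a Wi-Fi signal
    strength reading against a gridded map of expected strengths."""
    expected = signal_map[int(pose[0])][int(pose[1])]
    return math.exp(-0.5 * ((measured_rssi - expected) / sigma) ** 2)

# Example use with made-up numbers:
print(landmark_likelihood(5.1, (0.0, 0.0), (3.0, 4.0)))       # true range is 5
print(raw_data_likelihood(-62.0, (1.2, 0.7), [[-60.0, -75.0], [-58.0, -70.0]]))
</syntaxhighlight>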
 
Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) [[laser rangefinder]]s, 3D high-definition light detection and ranging ([[lidar]]), 3D flash lidar, 2D or 3D [[sonar]] sensors, or one or more 2D [[camera]]s.<ref name="magnabosco13slam"/> Since the invention of local features such as [[scale-invariant feature transform|SIFT]], there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, driven by the increasing ubiquity of cameras such as those in mobile devices.<ref name=Se2001>
{{cite conference
|last1=Se|first1=Stephen
|last2=Lowe|first2=David G.
|last3=Little|first3=Jim
|title=Vision-based mobile robot localization and mapping using scale-invariant features
|conference=Proc. IEEE International Conference on Robotics and Automation (ICRA)
|year=2001
|doi=10.1109/ROBOT.2001.932909
}}</ref>
Follow-up research includes the vSLAM algorithm of Karlsson et al.<ref name=KarlssonEtAl2005>{{cite conference
|last1=Karlsson|first1=N.
|last2=Di Bernardo|first2=E.|last3=Ostrowski|first3=J.|last4=Goncalves|first4=L.|last5=Pirjanian|first5=P.|last6=Munich|first6=M.
|title=The vSLAM Algorithm for Robust Localization and Mapping
|conference=Int. Conf. on Robotics and Automation (ICRA)
|year=2005
|doi=10.1109/ROBOT.2005.1570091
}}</ref> Both visual and [[lidar]] sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM<ref>{{cite conference|last1=Fox|first1=C.|last2=Evans|first2=M.|last3=Pearson|first3=M.|last4=Prescott|first4=T.|title=Tactile SLAM with a biomimetic whiskered robot|conference=Proc. IEEE Int. Conf. on Robotics and Automation (ICRA)|year=2012|url=http://eprints.uwe.ac.uk/18384/1/fox_icra12_submitted.pdf}}</ref> (sensing by local touch only), radar SLAM,<ref>{{cite conference|last1=Marck|first1=J.W.|last2=Mohamoud|first2=A.|last3=v.d. Houwen|first3=E.|last4=van Heijster|first4=R.|title=Indoor radar SLAM: A radar application for vision and GPS denied environments|conference=Radar Conference (EuRAD), 2013 European|year=2013|url=http://publications.tno.nl/publication/34607287/4nJ48k/marck-2013-indoor.pdf}}</ref> acoustic SLAM,<ref>Evers, Christine, Alastair H. Moore, and Patrick A. Naylor. "[https://spiral.imperial.ac.uk/bitstream/10044/1/38877/2/2016012291332_994036_4133_Final.pdf Acoustic simultaneous localization and mapping (a-SLAM) of a moving microphone array and its surrounding speakers]." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016.</ref> and Wi-Fi-SLAM (sensing by strengths of nearby Wi-Fi access points).<ref>Ferris, Brian, Dieter Fox, and Neil D. Lawrence. "[https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-399.pdf Wi-Fi-SLAM using Gaussian process latent variable models] {{Webarchive|url=https://web.archive.org/web/20221224110401/https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-399.pdf |date=2022-12-24 }}." IJCAI. Vol. 7. No. 1. 2007.</ref> Recent approaches apply quasi-optical [[wireless]] ranging for [[Trilateration|multi-lateration]] ([[real-time locating system]]s (RTLS)) or [[Triangulation|multi-angulation]] in conjunction with SLAM, to compensate for the erratic nature of wireless measurements; a least-squares multilateration fix of this kind is sketched after this paragraph. A kind of SLAM for human pedestrians uses a shoe-mounted [[inertial measurement unit]] as the main sensor and relies on the fact that pedestrians tend to avoid walls; this allows floor plans of buildings to be built automatically and used by an [[indoor positioning system]].<ref name=RobertsonEtAl2009>{{cite conference
|last1=Robertson
|first1=P.
|last2=Angermann
|first2=M.
|last3=Krach
|first3=B.
|title=Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounted Inertial Sensors
|conference=Ubicomp 2009
|year=2009
}}</ref>
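
As a hedged sketch of the wireless-ranging idea mentioned above (the beacon positions and measured ranges are invented for the example), a position fix can be obtained from noisy ranges to known beacons by linearizing the range equations and solving the resulting system in the least-squares sense; such a fix can then be fused with SLAM as an additional observation of <math>x_t</math>:

<syntaxhighlight lang="python">
# Least-squares multilateration from noisy ranges to fixed radio beacons
# (illustrative only; all values invented).
import numpy as np

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions
ranges = np.array([7.1, 7.2, 7.0])                          # measured distances

# Subtracting the first beacon's circle equation |p - b_i|^2 = r_i^2 from the
# others cancels the quadratic term, giving a linear system A p = b.
A = 2.0 * (beacons[1:] - beacons[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", p)  # close to (5, 5) for these ranges
</syntaxhighlight>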
 
== History ==
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.<ref name=Smith1986>{{cite journal
|last1=Smith|first1=R.C.
|last2=Cheeseman|first2=P.
|title=On the Representation and Estimation of Spatial Uncertainty
|journal=The International Journal of Robotics Research
|volume=5
|issue=4
|pages=56–68
|year=1986
}}</ref> Other pioneering work in this field was conducted by the research group of [[Hugh F. Durrant-Whyte]] in the early 1990s,<ref name=LeonardDurrantWhyte1991>{{cite book
|last1=Leonard|first1=J.J.
|last2=Durrant-Whyte|first2=H.F.
|title=Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems '91
|chapter=Simultaneous map building and localization for an autonomous mobile robot
|year=1991
|pages=1442–1447
|doi=10.1109/IROS.1991.174711
|isbn=978-0-7803-0067-5
|s2cid=206935019
}}</ref> which showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined in the paper "Localization of Autonomous Guided Vehicles", which first appeared at the International Symposium on Robotics Research in 1995.<ref>{{Cite journal|last1=Durrant-Whyte|first1=H.|last2=Bailey|first2=T.|date=June 2006|title=Simultaneous localization and mapping: part I|journal=IEEE Robotics & Automation Magazine|volume=13|issue=2|pages=99–110|doi=10.1109/MRA.2006.1638022|s2cid=8061430|issn=1558-223X|doi-access=free}}</ref>
 
The self-driving car STANLEY won the DARPA Grand Challenge in 2005, and its successor JUNIOR came second in the DARPA Urban Challenge in 2007; both were developed by teams led by [[Sebastian Thrun]] and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners<ref>{{Cite news|last=Knight|first=Will|url=https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/|title=With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots|work=MIT Technology Review|date=September 16, 2015|access-date=2018-04-25|language=en}}</ref> and [[virtual reality headset]]s such as the [[Meta Quest 2]] and [[PICO 4]] for markerless inside-out tracking.
 
== See also ==
{{Div col|small=yes}}
* [[Computational photography]]
* [[Kalman filter]]