== Mathematical description of the problem ==
Given a series of controls <math>u_t</math> and sensor observations <math>o_t</math> over discrete time steps <math>t</math>, the SLAM problem is to compute an estimate of the agent's state <math>x_t</math> and a map of the environment <math>m_t</math>. All quantities are usually probabilistic, so the objective is to compute<ref>{{cite book |last1=Thrun |first1=Sebastian |authorlink = Sebastian Thrun |last2=Burgard |first2=Wolfram |authorlink2 = Wolfram Burgard |last3=Fox |first3=Dieter |authorlink3 = Dieter Fox|date= |title=Probabilistic Robotics |publisher= The MIT Press |page= 309}}</ref>
:<math> P(m_{t+1},x_{t+1}|o_{1:t+1},u_{1:t}) </math>
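Applying Bayes' rule under the usual Markov assumptions (a motion model <math>P(x_{t+1}\mid x_t,u_t)</math> and an observation model <math>P(o_{t+1}\mid x_{t+1},m_{t+1})</math>, with the map carried over between steps) gives a recursive update for this posterior. The following is only a sketch; the exact factorization varies between SLAM algorithms:
:<math> P(m_{t+1},x_{t+1}\mid o_{1:t+1},u_{1:t}) \;\propto\; P(o_{t+1}\mid x_{t+1},m_{t+1}) \int P(x_{t+1}\mid x_t,u_t)\, P(m_t,x_t\mid o_{1:t},u_{1:t-1})\, dx_t </math>
The integral marginalizes out the previous pose; practical algorithms approximate it with, for example, an extended Kalman filter or a particle filter.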
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose ___location can be estimated by a sensor, such as [[Wi-Fi]] access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the ___location.
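As an illustration of a landmark-based sensor model, the following is a minimal Python sketch of a Gaussian range-bearing likelihood <math>P(o_t|x_t)</math> for a single known landmark. The function name and the noise parameters <code>sigma_r</code> and <code>sigma_b</code> are hypothetical choices for this sketch, not taken from any particular SLAM library:

```python
import math

def landmark_likelihood(pose, landmark, observed_range, observed_bearing,
                        sigma_r=0.5, sigma_b=0.05):
    """Gaussian likelihood P(o_t | x_t) of a range-bearing observation
    of a known landmark, given a robot pose (x, y, theta).

    sigma_r and sigma_b are assumed range and bearing noise std. devs.
    """
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    expected_range = math.hypot(dx, dy)
    expected_bearing = math.atan2(dy, dx) - theta
    # Wrap the bearing error into [-pi, pi) so angles compare correctly
    err_b = (observed_bearing - expected_bearing + math.pi) % (2 * math.pi) - math.pi
    err_r = observed_range - expected_range
    # Product of two independent 1-D Gaussians (range error, bearing error)
    norm = 1.0 / (2 * math.pi * sigma_r * sigma_b)
    return norm * math.exp(-0.5 * ((err_r / sigma_r) ** 2 + (err_b / sigma_b) ** 2))
```

A filter would evaluate this likelihood for each hypothesized pose (for example, each particle in a particle filter) and use it as the measurement weight; a raw-data approach would instead score the full sensor scan against the map.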
Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) [[laser rangefinder]]s, 3D high-definition light detection and ranging ([[lidar]]), 3D flash lidar, 2D or 3D [[sonar]] sensors, and one or more 2D [[camera]]s.<ref name="magnabosco13slam"/> Visual SLAM using scale-invariant image features was demonstrated at ICRA 2001.<ref>
{{cite conference |last1=Se|first1=Stephen
|collaboration=James J. Little;David Lowe
|title=Vision-based mobile robot localization and mapping using scale-invariant features
|conference=Int. Conf. on Robotics and Automation (ICRA)
|doi=10.1109/ROBOT.2001.932909
}}</ref>
Follow-up research includes:<ref name=KarlssonEtAl2005>{{cite conference
|last1=Karlsson|first1=N.
|collaboration=Di Bernardo, E.; Ostrowski, J; Goncalves, L.; Pirjanian, P.; Munich, M.
|conference=Int. Conf. on Robotics and Automation (ICRA)
|doi=10.1109/ROBOT.2005.1570091
}}</ref>
== History ==
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty (1986).
Other pioneering work in this field was conducted by the research group of [[Hugh F. Durrant-Whyte]] in the early 1990s,<ref>{{cite conference
|title=Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems '91
|chapter=Simultaneous map building and localization for an autonomous mobile robot
|year=1991
|pages=1442–1447
|doi=10.1109/IROS.1991.174711
|isbn=978-0-7803-0067-5
|s2cid=206935019
}}</ref> which showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms that are computationally tractable and approximate the solution. The acronym SLAM was coined in the paper "Localization of Autonomous Guided Vehicles", which first appeared at the International Symposium on Robotics (ISR) in 1995.<ref>{{Cite journal|last1=Durrant-Whyte|first1=H.|last2=Bailey|first2=T.|date=June 2006|title=Simultaneous localization and mapping: part I|journal=IEEE Robotics & Automation Magazine|volume=13|issue=2|pages=99–110|doi=10.1109/MRA.2006.1638022|s2cid=8061430|issn=1558-223X|doi-access=free}}</ref>
The self-driving cars STANLEY and JUNIOR, developed by teams led by [[Sebastian Thrun]], won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge respectively; both included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners<ref>{{Cite news|last=Knight|first=Will|url=https://www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/|title=With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots|work=MIT Technology Review|date=September 16, 2015|access-date=2018-04-25|language=en}}</ref> and [[virtual reality]] headsets.
== See also ==
{{Div col}}
* [[Computational photography]]
* [[Kalman filter]]
{{Div col end}}