=== Sensing ===
{{See also|3D scanner}}
[[File:Ouster OS1-64 lidar point cloud of intersection of Folsom and Dore St, San Francisco.png|thumb|Accumulated registered point cloud from [[lidar]] SLAM]]
SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.<ref name="magnabosco13slam">{{cite journal|last1=Magnabosco|first1=M.|last2=Breckon|first2=T.P.|title=Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM) with Sensor Handover|journal=Robotics and Autonomous Systems|date=February 2013|volume=63|issue=2|pages=195–208|doi=10.1016/j.robot.2012.09.023|url=http://www.durham.ac.uk/toby.breckon/publications/papers/magnabosco13slam.pdf|access-date=5 November 2013}}</ref> Statistical independence of measurements is a mandatory requirement for coping with metric bias and with measurement noise. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to those sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via [[image registration]]. At the opposite extreme, [[tactile sensor]]s are extremely sparse as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
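As an illustration of the alignment step mentioned above, the following is a minimal sketch of iterative closest point (ICP) registration, one common way to align successive laser-scan point clouds. The function name, parameters, and use of NumPy and SciPy are illustrative assumptions, not drawn from any particular SLAM system.

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Rigidly align a source point cloud (N x 3) to a target cloud (M x 3)
    by alternating nearest-neighbour matching with an SVD-based pose update.
    Illustrative sketch only; real systems add outlier rejection and robust losses."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Match each source point to its closest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form rigid transform (Kabsch algorithm) between the matched sets.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # Apply the incremental update and accumulate the total pose.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
</syntaxhighlight>

When consecutive scans are dense and overlap well, the estimated pose increments can simply be chained; it is in sparse or ambiguous settings that full probabilistic SLAM inference becomes necessary.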