Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose ___location can be estimated by a sensor, such as [[Wi-Fi]] access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model <math>P(o_t|x_t)</math> directly as a function of the ___location.
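As an illustrative sketch (not taken from any particular SLAM system), a landmark-based observation model evaluates <math>P(o_t|x_t)</math> by comparing the range a sensor actually measured to a known landmark against the range predicted from a candidate pose, here assuming simple Gaussian measurement noise; all names and noise parameters below are hypothetical:

```python
import math

def range_likelihood(pose, landmark, measured_range, sigma=0.5):
    """P(o_t | x_t) for one range measurement to a uniquely identified landmark.

    pose and landmark are (x, y) positions; measurement noise is modelled
    as zero-mean Gaussian with standard deviation sigma (an assumed value).
    """
    x, y = pose
    lx, ly = landmark
    expected = math.hypot(lx - x, ly - y)  # range predicted from this pose
    err = measured_range - expected
    # Gaussian density of the measurement error
    return math.exp(-err * err / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

# A pose whose predicted range matches the measurement scores higher than
# one whose prediction disagrees with it:
good = range_likelihood((0.0, 0.0), (3.0, 4.0), 5.0)  # predicted range is exactly 5
bad = range_likelihood((2.0, 0.0), (3.0, 4.0), 5.0)   # predicted range is sqrt(17)
```

A raw-data approach would instead score the full sensor scan against a map representation (for example, an occupancy grid) without first extracting identified landmarks.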
Optical sensors may be one-dimensional (single beam) or 2D (sweeping) [[laser rangefinder]]s, 3D high definition light detection and ranging ([[lidar]]), 3D flash lidar, 2D or 3D [[sonar]] sensors, and one or more 2D [[camera]]s.<ref name="magnabosco13slam"/> Reflective surfaces such as mirrors and glass can make indoor navigation difficult; machine learning-based specularity detection techniques have been proposed to improve the accuracy and reliability of localization and mapping in such environments.<ref>{{cite journal |last1=Kardan |first1=Ramtin |title=Machine Learning Based Specularity Detection Techniques To Enhance Indoor Navigation |journal=IEEE 17th International Conference on Semantic Computing (ICSC) |date=2023 |pages=143–148 |doi=10.1109/ICSC56153.2023.00030 |url=https://ieeexplore.ieee.org/abstract/document/10066705}}</ref> Since 2005, there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.<ref name=KarlssonEtAl2005>{{cite conference
|last1=Karlsson|first1=N.
|collaboration=Di Bernardo, E.; Ostrowski, J; Goncalves, L.; Pirjanian, P.; Munich, M.
|