TY - JOUR
T1 - Bayesian sensor fusion of monocular vision and laser structured light sensor for robust localization of a mobile robot
AU - Kim, Min Young
AU - Ahn, Sang Tae
AU - Cho, Hyungsuck
PY - 2010/4
Y1 - 2010/4
AB - This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities offers advantages in terms of complementarity and cooperation, yielding better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot extracts image features consisting of vertical edge lines from the input camera images and uses them as natural landmark points in the self-localization process. In the case of the laser structured light sensor, the robot uses geometric features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
KW - Environmental features
KW - Laser vision sensor
KW - Mobile robot localization
KW - Monocular vision
KW - Sensor fusion
UR - http://www.scopus.com/inward/record.url?scp=84860800391&partnerID=8YFLogxK
U2 - 10.5302/J.ICROS.2010.16.4.381
DO - 10.5302/J.ICROS.2010.16.4.381
M3 - Article
AN - SCOPUS:84860800391
SN - 1976-5622
VL - 16
SP - 381
EP - 390
JO - Journal of Institute of Control, Robotics and Systems
JF - Journal of Institute of Control, Robotics and Systems
IS - 4
ER -
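
Note: The abstract above describes fusing the two sensors' landmark observations through a Bayesian update weighted by each sensor's experimentally predefined reliability function. The paper's own formulation is not reproduced in this record; the following is a minimal illustrative sketch of that idea over a discrete set of candidate poses. The function name fuse_pose_belief, the scalar reliability weights, and the assumption that the two sensors are conditionally independent given the pose are all hypothetical, not taken from the paper.

    import numpy as np

    def fuse_pose_belief(prior, p_vision, p_laser, r_vision=0.7, r_laser=0.9):
        # prior: prior belief over N candidate poses (sums to 1).
        # p_vision, p_laser: per-pose measurement likelihoods from each
        # sensor's feature matching (vertical edges / corners and planes).
        # r_vision, r_laser: hypothetical scalar reliability weights standing
        # in for the paper's experimentally predefined reliability functions.
        uniform = 1.0 / len(prior)
        # Blend each likelihood with a uniform term so that a less reliable
        # sensor pulls the posterior less strongly toward its own estimate.
        l_vision = r_vision * p_vision + (1.0 - r_vision) * uniform
        l_laser = r_laser * p_laser + (1.0 - r_laser) * uniform
        # Bayes update: posterior proportional to prior times the joint
        # likelihood, with the sensors assumed conditionally independent.
        posterior = prior * l_vision * l_laser
        return posterior / posterior.sum()

    # Example: three candidate poses; both sensors favor pose 1, and the
    # more reliable laser sharpens the fused estimate further.
    prior = np.full(3, 1.0 / 3.0)
    p_vision = np.array([0.2, 0.6, 0.2])
    p_laser = np.array([0.1, 0.8, 0.1])
    print(fuse_pose_belief(prior, p_vision, p_laser))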