A NEW MULTIMODAL PERCEPTION-BASED SENSOR FUSION COST MAP

Jian Shi, Yiming Lu, and Wanghui Bu

Keywords

Mobile robot, sensor fusion, navigation

Abstract

Mobile robots are typically equipped with multiple sensors for environmental perception and modelling. This paper proposes a new multimodal perception-based sensor fusion cost map, which adopts a corresponding update model for each sensor type to retain and update information: the LiDAR is updated with a beam model, the upward-looking sensor with a cone model, and the downward-looking sensor with a polygon model. The map consists of a static layer and a fused obstacle layer. During each map update, the historical information marked in the static map is first cleared to recover the initial static map, and the obstacle information is inflated before being merged into the static map, rather than inflating the obstacle information together with the entire static map on every update, which reduces the computational complexity. The static layer pre-processes the environmental map to enhance computational efficiency, while the fused obstacle layer updates and resets only the necessary obstacle information, yielding higher computational efficiency. The proposed method has been validated on a commercial robot: the multimodal perception-based sensor fusion cost map runs in real time on a Cortex-A5 processor, with a CPU utilisation rate 20% lower than that of the conventional cost map. Specifically, the CPU usage of the algorithm remains below 70% and the computation time stays under 100 ms, ensuring excellent real-time performance. The results show that the proposed method can model suspended, low, and distant obstacles in real time according to the fields of view and models of the various sensors, ensuring both accurate information updates and real-time performance.
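The following is a minimal sketch, not the authors' implementation, of the layered update idea summarised above: the static layer is pre-processed once, and each cycle the fused obstacle layer resets only the cells it marked previously, inflates the newly observed obstacle cells, and merges them back into the working map. The class and member names (LayeredCostMap, Cell, kLethal, and so on) are hypothetical, and the obstacle cells are assumed to have already been produced by the beam, cone, or polygon sensor models, which are not shown.

```cpp
#include <cstdint>
#include <vector>

struct Cell { int x; int y; };

class LayeredCostMap {
public:
    LayeredCostMap(int width, int height, const std::vector<uint8_t>& static_costs)
        : w_(width), h_(height), static_(static_costs), fused_(static_costs) {}

    // One update cycle of the fused obstacle layer.
    void update(const std::vector<Cell>& obstacles, int inflate_radius) {
        // 1. Reset only the cells marked in the previous cycle back to the
        //    pre-processed static costs (cheaper than rebuilding the whole map).
        for (const Cell& c : marked_) fused_[index(c)] = static_[index(c)];
        marked_.clear();

        // 2. Inflate the new obstacle cells before merging, so inflation runs
        //    only around obstacle cells rather than over the full static map.
        for (const Cell& o : obstacles) {
            for (int dx = -inflate_radius; dx <= inflate_radius; ++dx) {
                for (int dy = -inflate_radius; dy <= inflate_radius; ++dy) {
                    Cell c{o.x + dx, o.y + dy};
                    if (c.x < 0 || c.y < 0 || c.x >= w_ || c.y >= h_) continue;
                    fused_[index(c)] = kLethal;   // 3. merge into the working map
                    marked_.push_back(c);         // remember for the next reset
                }
            }
        }
    }

    uint8_t cost(int x, int y) const { return fused_[index({x, y})]; }

private:
    static constexpr uint8_t kLethal = 254;       // assumed lethal-obstacle cost value
    int index(const Cell& c) const { return c.y * w_ + c.x; }

    int w_, h_;
    std::vector<uint8_t> static_;   // pre-processed static layer
    std::vector<uint8_t> fused_;    // static layer combined with the fused obstacle layer
    std::vector<Cell> marked_;      // cells written during the previous cycle
};
```

Tracking the previously marked cells is what keeps the per-cycle cost proportional to the number of observed obstacles rather than to the size of the map, which is consistent with the reduced CPU load reported above.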
