The Journal of China Universities of Posts and Telecommunications, 2022, Vol. 29, Issue (3): 92-104. DOI: 10.19682/j.cnki.1005-8885.2022.1012


Illumination robust image transformations for feature-based SLAM using photometric and feature matches loss


  1. School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2021-10-27  Revised: 2022-04-07  Online: 2022-06-30  Published: 2022-06-30
  • Contact: Yan Danfeng  E-mail: yandf@bupt.edu.cn

Abstract: Simultaneous localization and mapping (SLAM) technology is becoming increasingly important for robot localization. The purpose of this paper is to improve the robustness of visual features to lighting changes and to increase the recall rate of map re-localization under different lighting environments by optimizing the image transformation model. An image transformation method based on feature matches and photometric error (referred to as MPT) is proposed and seamlessly integrated into the pre-processing stage of a feature-based visual SLAM framework. Experimental results show that MPT yields better matching performance across different visual features. In addition, the image transformation module, encapsulated as a robot operating system (ROS) node, can be used with multiple visual SLAM systems and improves their re-localization performance under different lighting environments.

Key words: simultaneous localization and mapping (SLAM), image transformation, long-term visual localization
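
To illustrate the kind of integration the abstract describes, the following is a minimal sketch of an image-transformation pre-processing node for ROS 1 in Python. The topic names, node name, and the CLAHE-based stand-in for the learned MPT transformation are assumptions for illustration only; they are not the paper's implementation.

```python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class IlluminationTransformNode:
    """Subscribes to raw camera images, applies an illumination-robust
    transformation, and republishes the result for a downstream SLAM front end."""

    def __init__(self):
        self.bridge = CvBridge()
        # Assumed topic names; remap to match the actual camera driver and SLAM system.
        self.pub = rospy.Publisher("/camera/image_transformed", Image, queue_size=1)
        self.sub = rospy.Subscriber("/camera/image_raw", Image, self.callback, queue_size=1)

    def transform(self, img):
        # Placeholder illumination normalization (CLAHE). The learned MPT model
        # trained with photometric and feature-matching losses would replace this step.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.cvtColor(clahe.apply(gray), cv2.COLOR_GRAY2BGR)

    def callback(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        out = self.transform(img)
        out_msg = self.bridge.cv2_to_imgmsg(out, encoding="bgr8")
        out_msg.header = msg.header  # preserve timestamps so SLAM synchronization is unaffected
        self.pub.publish(out_msg)


if __name__ == "__main__":
    rospy.init_node("mpt_image_transform")
    IlluminationTransformNode()
    rospy.spin()
```

Because the node only remaps an image topic and preserves message headers, a feature-based SLAM system (for example, an ORB-SLAM ROS wrapper) could subscribe to the transformed topic instead of the raw one, leaving the rest of its pipeline unchanged.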