Browsing by Subject "Sensor Fusion"
Item: Towards a Fast, Robust and Accurate Visual-Inertial Simultaneous Localization and Mapping System (2022-05)
Mo, Jiawei

A Simultaneous Localization and Mapping (SLAM) system estimates a robot's instantaneous location from onboard sensory measurements, e.g., LiDAR (Light Detection and Ranging) sensors, cameras, and inertial measurement units (IMUs). SLAM is an essential capability precisely where it is most challenging: in indoor, urban, and underwater environments where GPS reception is weak. For robots operating outdoors, visual conditions can be quite poor, and such robots also have limited onboard computational resources. Our research investigated efficient and robust SLAM algorithms for robots with limited computational and energy capabilities operating in challenging scenarios. This dissertation is divided into three parts.

The first part develops a stereo visual SLAM system for mobile robots operating outdoors with limited computational capacity. Unlike state-of-the-art SLAM systems, the proposed method is independent of feature detection and matching, which makes it computationally efficient and robust in adverse visual conditions; this is thoroughly validated on public datasets.

In the second part, we extend the visual SLAM system to a visual-inertial system by integrating IMU data for improved accuracy and robustness. Unlike most existing visual-inertial systems, which are discrete-time, our system is continuous-time, built on a spline representation of the trajectory. This provides the versatility to handle SLAM-related challenges (e.g., rolling shutter distortion) and applications (e.g., smooth path planning). Extensive experiments validate its state-of-the-art accuracy and real-time computational efficiency.

In the third part, we turn our attention to rolling shutter distortion, with the goal of improving SLAM performance on such cameras. We propose a deep neural network for accurate rolling shutter correction from a single-view image and IMU data. This enables numerous vision algorithms (e.g., SLAM systems) to run on rolling shutter cameras and produce highly accurate results. We demonstrate its efficacy by evaluating the performance of a SLAM algorithm on rolling shutter imagery corrected by the proposed approach.

In summary, this dissertation is devoted to improving the efficiency and robustness of SLAM systems in challenging scenarios such as underwater environments. By advancing the state of the art, the proposed methodologies bring SLAM systems one step closer to practical use on mobile robots in challenging environments.
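The feature-free design of the first part is characteristic of direct methods, which operate on pixel intensities rather than matched keypoints. As a minimal sketch of that idea (the pinhole model, nearest-neighbour sampling, and NumPy implementation below are illustrative assumptions, not the dissertation's actual formulation), the photometric residual reprojects reference pixels with known depth into a second image under a candidate relative pose and measures the intensity differences:

```python
import numpy as np

def project(K, X):
    """Pinhole projection of 3-D camera-frame points X (N,3) to pixels (N,2)."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

def photometric_residuals(I_ref, I_tgt, K, pts, depths, R, t):
    """Direct-method residuals: intensity differences between reference pixels
    and their reprojections in the target image under candidate pose (R, t).

    I_ref, I_tgt : grayscale images as 2-D float arrays
    pts          : (N,2) pixel coordinates sampled in the reference image
    depths       : (N,) depths of those pixels (e.g., from stereo)
    R, t         : rotation (3,3) and translation (3,), reference -> target
    """
    # Back-project reference pixels to 3-D using their depths.
    ones = np.ones((pts.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts, ones]).T).T
    X_ref = rays * depths[:, None]

    # Transform into the target frame and project into the target image.
    X_tgt = X_ref @ R.T + t
    uv = project(K, X_tgt)

    # Sample target intensities (nearest neighbour, for brevity) and compare.
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, I_tgt.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, I_tgt.shape[0] - 1)
    c = np.clip(np.round(pts[:, 0]).astype(int), 0, I_ref.shape[1] - 1)
    r = np.clip(np.round(pts[:, 1]).astype(int), 0, I_ref.shape[0] - 1)
    return I_tgt[v, u] - I_ref[r, c]
```

A full system would minimize these residuals over the pose with, e.g., Gauss-Newton on an image pyramid and robust weighting; the point here is only that no features are detected or matched.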
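For the continuous-time representation in the second part, a common formulation (assumed here for illustration; the abstract does not specify the exact spline used) is the uniform cumulative cubic B-spline of Lovegrove et al.'s spline fusion, which defines the trajectory at every instant:

```python
import numpy as np

# Cumulative cubic B-spline basis (Lovegrove et al., "Spline Fusion"):
# B~(u) = C_TILDE @ [1, u, u^2, u^3]^T, with u in [0, 1) inside a knot interval.
C_TILDE = np.array([[6.0, 0.0,  0.0,  0.0],
                    [5.0, 3.0, -3.0,  1.0],
                    [1.0, 3.0,  3.0, -2.0],
                    [0.0, 0.0,  0.0,  1.0]]) / 6.0

def spline_position(ctrl_pts, t0, dt, t):
    """Evaluate a uniform cumulative cubic B-spline at an arbitrary time t.

    ctrl_pts : (M,3) position control points, one every dt seconds from t0
    Returns the interpolated position; the spline segment at t blends the four
    surrounding control points with cumulative weights.
    """
    s = (t - t0) / dt
    i = int(np.floor(s))               # first control point of the segment
    u = s - i                          # normalized time within the segment
    assert 0 <= i <= len(ctrl_pts) - 4, "t outside the spline's valid range"
    b = C_TILDE @ np.array([1.0, u, u * u, u ** 3])
    p = ctrl_pts[i].astype(float)
    for j in range(1, 4):
        p = p + b[j] * (ctrl_pts[i + j] - ctrl_pts[i + j - 1])
    return p
```

Because the trajectory is defined at any timestamp, each rolling shutter image row or each IMU sample can be evaluated at its own capture time, which is what makes this representation convenient for the distortion and planning use cases mentioned above. On SO(3), the same cumulative weights are applied to incremental rotations through the matrix log/exp maps.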
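The third part's contribution is a learned model, which cannot be sketched from the abstract alone, but the geometry underlying rolling shutter correction can be. Under the classical simplifying assumption of pure camera rotation (again an illustration, not the dissertation's method), every image row r is exposed at its own time t_r, and rectifying the image reduces to a per-row homography:

```python
import numpy as np

def row_homographies(K, R_of_t, t_first_row, line_delay, n_rows, ref_row=0):
    """Per-row rectifying homographies for a rolling shutter image, assuming
    pure camera rotation.

    K          : (3,3) camera intrinsics
    R_of_t     : function mapping a timestamp to a (3,3) world-to-camera
                 rotation, e.g., integrated gyroscope data or a spline query
    line_delay : readout time between consecutive rows (seconds)
    Returns a list of (3,3) homographies; applying H[r] to pixels of row r
    warps them to where they would appear had the whole image been exposed
    at the reference row's time (a global-shutter equivalent).
    """
    t_ref = t_first_row + ref_row * line_delay
    R_ref = R_of_t(t_ref)
    K_inv = np.linalg.inv(K)
    Hs = []
    for r in range(n_rows):
        R_r = R_of_t(t_first_row + r * line_delay)
        Hs.append(K @ R_ref @ R_r.T @ K_inv)  # undo row pose, apply reference pose
    return Hs
```

The pure-rotation model breaks down under translation and scene depth variation; a network that predicts the correction from a single image plus IMU data, as proposed in the dissertation, avoids committing to a hand-specified motion model.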