Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle

Bibliographic Details
Published in: Autonomous Robots, Vol. 39, No. 3, pp. 259-277
Main Authors: Heng, Lionel; Lee, Gim Hee; Pollefeys, Marc
Format: Journal Article
Language: English
Published: New York: Springer US, 01-10-2015
Springer Nature B.V.
Summary: The use of a multi-camera system enables a robot to obtain a surround view and thus maximize its perceptual awareness of its environment. If vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system, an accurate calibration of the multi-camera system is a necessary prerequisite. We propose a novel vSLAM-based self-calibration method for a multi-camera system that includes at least one calibrated stereo camera and an arbitrary number of monocular cameras. We assume that overlapping fields of view exist only within stereo cameras. Our self-calibration estimates the inter-camera transforms with metric scale, which is inferred from the calibrated stereo. On our MAV, we set up each camera pair in a stereo configuration, which facilitates the estimation of the MAV's pose with metric scale. Once calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses relative rotation angle measurements from a 3-axis gyroscope to recover the relative motion of the MAV with metric scale from 2D-2D feature correspondences. This relative motion estimation does not involve scene point triangulation. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results from simulation experiments and from real-world experiments in both indoor and outdoor environments.
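The gyroscope-aided relative motion step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): once the relative rotation is known from gyroscope measurements, the epipolar constraint becomes linear in the translation, so the translation can be recovered from a handful of 2D-2D correspondences without triangulating scene points. The sketch below is a simplified single-camera variant, so the translation is recovered only up to scale; in the paper, metric scale comes from the generalized (multi-camera) formulation with calibrated stereo baselines. The function name and synthetic data are illustrative.

```python
import numpy as np

def translation_from_known_rotation(R, x1, x2):
    """Recover the translation direction given a known relative rotation.

    R  : (3, 3) relative rotation, e.g. integrated from a 3-axis gyroscope
    x1 : (N, 3) unit bearing vectors in the first frame
    x2 : (N, 3) unit bearing vectors in the second frame
    Returns a unit translation vector (sign and scale remain ambiguous here).
    """
    # Epipolar constraint x2^T [t]_x R x1 = 0 rewritten as ((R x1) x x2) . t = 0,
    # i.e. one linear equation in t per correspondence.
    A = np.cross((R @ x1.T).T, x2)
    # t is the null-space direction of A, taken from the smallest singular vector.
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Ground-truth motion: a small yaw rotation plus a translation.
    yaw = 0.1
    R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                       [np.sin(yaw),  np.cos(yaw), 0.0],
                       [0.0,          0.0,         1.0]])
    t_true = np.array([0.3, 0.1, 0.05])
    # Random scene points in front of the camera, observed in both frames.
    X1 = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(20, 3))
    X2 = (R_true @ X1.T).T + t_true
    x1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    x2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    t_est = translation_from_known_rotation(R_true, x1, x2)
    t_dir = t_true / np.linalg.norm(t_true)
    # The recovered direction matches the true one up to sign.
    print("estimated direction:", t_est)
    print("true direction:     ", t_dir)
    print("|cosine similarity|:", abs(float(t_est @ t_dir)))
```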
ISSN: 0929-5593, 1573-7527
DOI: 10.1007/s10514-015-9466-8