Deep Learning Techniques for Visual SLAM : a Survey

Bibliographic Details
Published in: IEEE Access, Vol. 11, p. 1
Main Authors: Mokssit, Saad; Licea, Daniel Bonilla; Guermah, Bassma; Ghogho, Mounir
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-01-2023
Subjects:
Description
Summary: Visual Simultaneous Localization and Mapping (VSLAM) has attracted considerable attention in recent years. The task consists of using visual sensors to localize a robot while simultaneously constructing an internal representation of its environment. Traditional VSLAM methods rely on the laborious hand-crafted design of visual features and on complex geometric models; as a result, they are generally limited to simple environments with easily identifiable textures. Recent years, however, have witnessed the development of deep learning techniques for VSLAM, primarily because such techniques can model complex features of the environment in a completely data-driven manner. In this paper, we present a survey of relevant deep learning-based VSLAM methods and propose a new taxonomy for the subject. We also discuss current challenges and possible directions for this field of study.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3249661