Improving Image-Based Localization with Deep Learning: The Impact of the Loss Function

Bibliographic Details
Main Authors: Ward, Isaac Ronald; Jalwana, M. A. Asim K.; Bennamoun, Mohammed
Format: Journal Article
Language: English
Published: 28-04-2019
Description
Summary: This work investigates the impact of the loss function on the performance of neural networks in the context of a monocular, RGB-only image localization task. A common technique when regressing a camera's pose from an image is to formulate the loss as a linear combination of positional and rotational mean squared error, using tuned hyperparameters as coefficients. In this work we observe that changes to rotation and position mutually affect the captured image, and that, to improve performance, a pose regression network's loss function should include a term which combines the errors of these two coupled quantities. Based on task-specific observations and experimental tuning, we present such a loss term, and create a new model by appending it to the loss function of the pre-existing pose regression network PoseNet. We achieve improvements in the localization accuracy of the network for indoor scenes, with decreases of up to 26.7% and 24.0% in the median positional and rotational error respectively, when compared to the default PoseNet.
DOI: 10.48550/arxiv.1905.03692
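
For a concrete reference point, below is a minimal PyTorch sketch of the baseline loss described in the summary: a linear combination of positional and rotational mean squared error with a tuned hyperparameter as the coefficient. The function name `baseline_pose_loss` and the default value of `beta` are illustrative assumptions, not taken from the paper; the paper's additional coupled loss term is not reproduced here because the abstract does not define its form.

```python
import torch
import torch.nn.functional as F

def baseline_pose_loss(x_pred: torch.Tensor, x_true: torch.Tensor,
                       q_pred: torch.Tensor, q_true: torch.Tensor,
                       beta: float = 500.0) -> torch.Tensor:
    """Linear combination of positional and rotational mean squared error.

    `beta` plays the role of the tuned hyperparameter coefficient; its
    default value here is illustrative, not taken from the paper.
    """
    # x_*: (N, 3) camera translations; q_*: (N, 4) orientation quaternions.
    q_true = q_true / q_true.norm(dim=-1, keepdim=True)  # unit quaternions
    pos_mse = F.mse_loss(x_pred, x_true)   # positional error term
    rot_mse = F.mse_loss(q_pred, q_true)   # rotational error term
    return pos_mse + beta * rot_mse
```

The paper's contribution is a further term, appended to a loss of this form, that couples the positional and rotational error rather than treating them as independent quantities.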