Deep Multi-Task Learning for Joint Localization, Perception, and Prediction


Bibliographic Details
Published in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4677-4687
Main Authors: Phillips, John, Martinez, Julieta, Barsan, Ioan Andrei, Casas, Sergio, Sadat, Abbas, Urtasun, Raquel
Format: Conference Proceeding
Language: English
Published: IEEE, 01-06-2021
Description
Summary: Over the last few years, we have witnessed tremendous progress on many subtasks of autonomous driving including perception, motion forecasting, and motion planning. However, these systems often assume that the car is accurately localized against a high-definition map. In this paper we question this assumption, and investigate the issues that arise in state-of-the-art autonomy stacks under localization error. Based on our observations, we design a system that jointly performs perception, prediction, and localization. Our architecture is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently. We show experiments on a large-scale autonomy dataset, demonstrating the efficiency and accuracy of our proposed approach.
ISSN: 2575-7075
DOI: 10.1109/CVPR46437.2021.00465
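
Illustrative Sketch

A minimal sketch of the computation-reuse idea described in the summary: a single backbone is evaluated once over a bird's-eye-view (BEV) input and shared by three task heads (perception, prediction, localization). This is a hypothetical PyTorch illustration under assumed shapes and parameterizations (BEV raster input, dense per-cell outputs, a (dx, dy, dtheta) pose correction), not the authors' actual architecture.

import torch
import torch.nn as nn

class JointPerceptionPredictionLocalization(nn.Module):
    """Hypothetical multi-task model: one shared backbone, three heads."""

    def __init__(self, in_channels=32, feat_channels=128,
                 num_classes=3, horizon=10):
        super().__init__()
        # Shared BEV backbone; its features are computed once and
        # consumed by all three task heads (assumed layout).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Perception: per-cell class scores (dense detection).
        self.perception_head = nn.Conv2d(feat_channels, num_classes, 1)
        # Prediction: per-cell (x, y) waypoints over a future horizon.
        self.prediction_head = nn.Conv2d(feat_channels, 2 * horizon, 1)
        # Localization: a small pose correction (dx, dy, dtheta) of the
        # vehicle against the map, regressed from pooled shared features.
        self.localization_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_channels, 3),
        )

    def forward(self, bev):
        shared = self.backbone(bev)  # computed once, reused by all heads
        return {
            "detections": self.perception_head(shared),
            "forecasts": self.prediction_head(shared),
            "pose_correction": self.localization_head(shared),
        }

# Usage with a dummy batch of BEV rasters (batch, channels, height, width).
model = JointPerceptionPredictionLocalization()
outputs = model(torch.randn(2, 32, 256, 256))
print({name: tuple(t.shape) for name, t in outputs.items()})

Because the heads add only lightweight layers on top of the shared features, refining a pose estimate does not require recomputing the expensive backbone, which is one way the efficiency claim in the summary could be realized.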