Focal Distillation From High-Resolution Data to Low-Resolution Data for 3D Object Detection
Published in: IEEE Transactions on Intelligent Transportation Systems, Vol. 24, No. 12, pp. 14064-14075
Main Authors:
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-12-2023
Summary: LiDAR-based 3D object detection plays an essential role in autonomous driving. Although a detector trained on high-resolution data performs much better than the same detector trained on low-resolution data, high-resolution LiDAR cannot be widely deployed because of its high price. In this work, we propose a new distillation method, Focal Distillation, to bridge the gap between a high-resolution detector (the teacher model) and a low-resolution detector (the student model). It consists of three essential components: focal classification distillation (FCD), focal regression distillation (FRD), and focal feature distillation (FFD). Taking low-resolution data as input, the student model learns discriminative features and produces more accurate results with the assistance of the teacher model trained on high-resolution data. We conducted extensive experiments to validate the effectiveness of Focal Distillation. Evaluated on the KITTI validation set, a typical SECOND model trained with Focal Distillation outperformed its non-distilled counterpart by 3.37%, 7.52%, and 11.35% mAP on the Car, Pedestrian, and Cyclist categories at the moderate difficulty level, respectively. Moreover, the notable improvements observed across different models and datasets further demonstrate the generalization ability of the proposed method.
ISSN: 1524-9050
EISSN: 1558-0016
DOI: 10.1109/TITS.2023.3304837
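
The abstract names three distillation terms (FCD, FRD, FFD) applied between a teacher trained on high-resolution LiDAR and a student trained on low-resolution LiDAR. As an illustration only, the PyTorch sketch below shows one plausible way such a combined objective could be wired up; the KL / Smooth-L1 / MSE loss choices, the foreground ("focal") masking, and the loss weights are assumptions, not the paper's actual formulations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalDistillationLoss(nn.Module):
    """Hypothetical combined distillation objective with the three terms
    named in the abstract: focal classification distillation (FCD),
    focal regression distillation (FRD), and focal feature distillation
    (FFD). All tensors are assumed flattened over BEV locations: class
    logits (N, C), box regressions (N, 7), features (N, D), and a binary
    foreground mask (N,) marking the "focal" (object) locations. Teacher
    tensors are assumed to be detached from the autograd graph."""

    def __init__(self, w_cls: float = 1.0, w_reg: float = 1.0, w_feat: float = 1.0):
        super().__init__()
        self.w_cls, self.w_reg, self.w_feat = w_cls, w_reg, w_feat

    def forward(self, s_cls, t_cls, s_reg, t_reg, s_feat, t_feat, fg_mask):
        fg = fg_mask.float()
        denom = fg.sum().clamp(min=1.0)
        # FCD: pull student class logits toward the teacher's soft
        # predictions (KL divergence is one common choice, assumed here).
        fcd = F.kl_div(F.log_softmax(s_cls, dim=-1),
                       F.softmax(t_cls, dim=-1),
                       reduction="none").sum(-1)
        # FRD: match box regression outputs, restricted to foreground.
        frd = F.smooth_l1_loss(s_reg, t_reg, reduction="none").sum(-1)
        # FFD: match intermediate features, again weighted toward
        # foreground regions rather than the whole feature map.
        ffd = F.mse_loss(s_feat, t_feat, reduction="none").mean(-1)
        return (self.w_cls * (fcd * fg).sum()
                + self.w_reg * (frd * fg).sum()
                + self.w_feat * (ffd * fg).sum()) / denom
```

In training, a term like this would presumably be added to the student's ordinary detection loss on the low-resolution input, with the teacher's outputs computed from the corresponding high-resolution sweep.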