ST3D++: Denoised Self-Training for Unsupervised Domain Adaptation on 3D Object Detection
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 5, pp. 6354-6371
Main Authors: Jihan Yang, Shaoshuai Shi, Zhe Wang, Hongsheng Li, Xiaojuan Qi
Format: Journal Article
Language: English
Published: United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01-05-2023
Summary: In this paper, we present a self-training method, named ST3D++, with a holistic pseudo label denoising pipeline for unsupervised domain adaptation on 3D object detection. ST3D++ aims at reducing noise in pseudo label generation as well as alleviating the negative impacts of noisy pseudo labels on model training. First, ST3D++ pre-trains the 3D object detector on the labeled source domain with random object scaling (ROS) which is designed to reduce target domain pseudo label noise arising from object scale bias of the source domain. Then, the detector is progressively improved through alternating between generating pseudo labels and training the object detector with pseudo-labeled target domain data. Here, we equip the pseudo label generation process with a hybrid quality-aware triplet memory to improve the quality and stability of generated pseudo labels. Meanwhile, in the model training stage, we propose a source data assisted training strategy and a curriculum data augmentation policy to effectively rectify noisy gradient directions and avoid model over-fitting to noisy pseudo labeled data. These specific designs enable the detector to be trained on meticulously refined pseudo labeled target data with denoised training signals, and thus effectively facilitate adapting an object detector to a target domain without requiring annotations. Finally, our method is assessed on four 3D benchmark datasets (i.e., Waymo, KITTI, Lyft, and nuScenes) for three common categories (i.e., car, pedestrian and bicycle). ST3D++ achieves state-of-the-art performance on all evaluated settings, outperforming the corresponding baseline by a large margin (e.g., 9.6% ∼ 38.16% on Waymo → KITTI in terms of AP_3D), and even surpasses the fully supervised oracle results on the KITTI 3D object detection benchmark with target prior. Code is available at https://github.com/CVMI-Lab/ST3D .
ISSN: 0162-8828, 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2022.3216606
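
To make the pipeline in the summary easier to follow, below is a minimal, hedged Python sketch of the alternating self-training loop it describes. This is not the authors' implementation (see https://github.com/CVMI-Lab/ST3D for the real code); every name here (ros_augment, TripletMemory, st3dpp, the detector's pretrain/predict/train methods, and all thresholds and ranges) is an illustrative assumption, not the paper's actual API or hyper-parameters.

```python
import random


def ros_augment(box_size, scale_range=(0.8, 1.2)):
    """Random Object Scaling (ROS), sketched: jitter each object's size during
    source-domain pre-training so the detector is less biased toward
    source-domain object scales. The scale_range is an assumed value."""
    s = random.uniform(*scale_range)
    return [d * s for d in box_size]


class TripletMemory:
    """Simplified stand-in for the hybrid quality-aware triplet memory:
    pseudo boxes are cached per scene and split into positive / ignored /
    discarded sets by a quality score, so labels stay stable across rounds."""

    def __init__(self, pos_thr=0.6, ign_thr=0.25):
        self.pos_thr, self.ign_thr = pos_thr, ign_thr
        self.memory = {}  # scene_id -> list of (box, quality_score)

    def update(self, scene_id, new_boxes):
        # Merge old and new predictions, preferring higher-quality boxes
        # (the paper's matching/ensembling details are omitted in this sketch).
        merged = sorted(self.memory.get(scene_id, []) + list(new_boxes),
                        key=lambda b: b[1], reverse=True)
        self.memory[scene_id] = merged

    def labels(self, scene_id):
        boxes = self.memory.get(scene_id, [])
        positives = [b for b in boxes if b[1] >= self.pos_thr]
        ignored = [b for b in boxes if self.ign_thr <= b[1] < self.pos_thr]
        return positives, ignored


def st3dpp(detector, source_data, target_data, rounds=5):
    """Alternating self-training loop as summarized above:
    (1) pre-train on labeled source data with ROS, then repeat
    (2) pseudo-label generation refined by the triplet memory and
    (3) pseudo-label training with source data assistance and
        curriculum data augmentation of increasing strength."""
    memory = TripletMemory()
    detector.pretrain(source_data, augment=ros_augment)
    for r in range(rounds):
        for scene_id, scene in target_data:
            memory.update(scene_id, detector.predict(scene))
        aug_strength = (r + 1) / rounds  # curriculum: progressively stronger augmentation
        detector.train(target_data, memory.labels,
                       assist_with=source_data, aug=aug_strength)
    return detector
```

The structure mirrored here is that denoising happens in two places: the memory cleans the labels the detector generates, while source-assisted training and curriculum augmentation control how the detector consumes those labels during training.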