Y-Net: Learning Domain Robust Feature Representation for ground camera image and large-scale image-based point cloud registration


Bibliographic Details
Published in: Information Sciences, Vol. 581, pp. 655–677
Main Authors: Liu, Weiquan, Wang, Cheng, Chen, Shuting, Bian, Xuesheng, Lai, Baiqi, Shen, Xuelun, Cheng, Ming, Lai, Shang-Hong, Weng, Dongdong, Li, Jonathan
Format: Journal Article
Language:English
Published: Elsevier Inc., 01-12-2021
Description
Summary: Registering 2D images (2D space) with a 3D model of the environment (3D space) provides a promising solution for outdoor Augmented Reality (AR) virtual-real registration. In this work, we use the position and orientation of the ground camera image to synthesize a corresponding rendered image from the outdoor large-scale image-based 3D point cloud. To achieve virtual-real registration, we indirectly establish the spatial relationship between the 2D and 3D spaces by matching these two kinds of cross-domain images (from 2D and 3D space). However, matching cross-domain images is beyond the capability of handcrafted descriptors and existing deep neural networks. To address this issue, we propose an end-to-end network, Y-Net, that learns Domain Robust Feature Representations (DRFRs) for the cross-domain images. In addition, we introduce a cross-domain-constrained loss function that balances the loss in image content against the cross-domain consistency of the feature representations. Experimental results show that the DRFRs preserve the representation of image content while suppressing domain-specific influence. Furthermore, Y-Net outperforms existing algorithms at extracting feature representations and achieves state-of-the-art performance in cross-domain image retrieval. Finally, we validate the Y-Net-based registration approach in a campus environment to demonstrate its applicability.
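The summary describes a cross-domain-constrained loss that balances an image-content term against a cross-domain feature-consistency term. The paper does not give the formula here, so the following is only a minimal illustrative sketch of that general structure, assuming a mean-squared-error content term, an L2 consistency term, and a hypothetical weight `lam`; none of these names or choices are taken from the paper.

```python
import numpy as np

def cross_domain_constrained_loss(recon, target, feat_a, feat_b, lam=0.5):
    """Illustrative combined loss (not the paper's exact formulation):
    an image-content term plus a weighted cross-domain consistency term.

    recon, target : reconstructed and reference images, same shape.
    feat_a, feat_b: feature vectors from the two domains (camera / rendered).
    lam           : hypothetical weight balancing the two terms.
    """
    content = np.mean((recon - target) ** 2)        # image-content loss (MSE)
    consistency = np.mean((feat_a - feat_b) ** 2)   # cross-domain consistency (L2)
    return content + lam * consistency

# toy usage with random data standing in for images and features
rng = np.random.default_rng(0)
img, img_ref = rng.random((8, 8)), rng.random((8, 8))
f_cam, f_render = rng.random(16), rng.random(16)
loss = cross_domain_constrained_loss(img, img_ref, f_cam, f_render)
```

In such a formulation, `lam` trades off how faithfully the network reproduces image content against how strongly the two domains' feature representations are pulled together; the loss is zero only when both the images and the features already agree.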
ISSN:0020-0255
1872-6291
DOI:10.1016/j.ins.2021.10.022