Salient object detection for RGBD video via spatial interaction and depth-based boundary refinement
Published in: Complex & Intelligent Systems, Vol. 9, No. 6, pp. 6343–6358
Main Authors: , , ,
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 01-12-2023
Summary: Recently proposed state-of-the-art saliency detection models rely heavily on labeled datasets and pay little attention to effective RGBD feature fusion, which limits their generalization ability. In this paper, we propose a depth-based interaction and refinement network (DIR-Net) that fully leverages the depth information accompanying RGB images to generate and refine the corresponding saliency segmentation maps. Our framework comprises three modules. A depth-based refinement module (DRM) and an RGB module work in parallel, coordinating through interactive spatial guidance modules (ISGMs) that apply spatial and channel attention computed from both depth and RGB features. At each layer, the features in each module are refined and guided by the spatial information obtained from the other module through the ISGMs. In the RGB module, before the depth-guided feature map is sent to the decoder, a convolutional gated recurrent unit (ConvGRU)-based block handles temporal information; because RGB features carry clear motion cues, this block also guides the temporal information in the DRM. Merging the outputs of the DRM and the RGB module yields a segmentation map with distinct boundaries. Because popular public datasets lack depth images, we use a depth estimation network with manual postprocessing-based correction to generate depth images for the DAVIS and UVSD datasets. The state-of-the-art performance achieved on both the original and new datasets demonstrates the advantage of our RGBD feature fusion strategy, which runs in real time at 19 fps on a single GPU.
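Code sketch (not from the paper): the summary's interactive spatial guidance module (ISGM) gates one modality's features using channel and spatial attention computed from the other modality. Below is a minimal PyTorch sketch of that cross-modal attention idea, assuming a standard squeeze-and-excitation-style channel gate and a pooled-feature spatial gate; the class name, channel sizes, and reduction ratio are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the cross-modal guidance described in the summary:
# one stream (e.g., depth) produces channel and spatial attention maps that
# refine the other stream (e.g., RGB). Names and sizes are illustrative.
import torch
import torch.nn as nn

class InteractiveSpatialGuidance(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze the guiding features, excite target channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse pooled guide features to a 1-channel map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, target: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # Channel-wise gating computed from the guiding modality: (B, C, 1, 1).
        ca = self.channel_gate(guide)
        # Spatial gating from mean- and max-pooled guide features: (B, 1, H, W).
        pooled = torch.cat(
            [guide.mean(dim=1, keepdim=True), guide.max(dim=1, keepdim=True)[0]],
            dim=1,
        )
        sa = self.spatial_gate(pooled)
        # Residual refinement of the target stream.
        return target + target * ca * sa

# Usage: refine RGB features with depth guidance, and vice versa.
rgb = torch.randn(2, 64, 56, 56)
depth = torch.randn(2, 64, 56, 56)
isgm = InteractiveSpatialGuidance(channels=64)
rgb_refined = isgm(rgb, depth)    # depth guides RGB
depth_refined = isgm(depth, rgb)  # RGB guides depth
```

The residual form (`target + target * ca * sa`) keeps the original features intact when the attention maps saturate toward zero, a common design choice in cross-modal fusion blocks.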
ISSN: 2199-4536, 2198-6053
DOI: 10.1007/s40747-023-01072-w