Robust Stereo Matching Using Discriminative Multilevel Features and Multimodal Bifurcated Cost Volume Network

Bibliographic Details
Published in: IEEE Sensors Journal, Vol. 23, No. 7, pp. 7420-7429
Main Authors: Okae, James; Li, Bohan; Qin, Huabiao
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-04-2023
Description
Summary: The recent introduction of deep convolutional neural networks (DCNNs) into the task of stereo matching has led to remarkable progress, achieving superior performance over traditional methods. However, current stereo matching DCNN architectures still struggle to resolve matching ambiguities and local minima caused by violations of the feature distinctiveness constraint and/or the consistency constraint. Taking this underlying problem into account, we propose a robust stereo matching solution using a discriminative multilevel feature network (DML-Net) and a multimodal bifurcated cost volume network (MBCV-Net). The DML-Net first extracts multilevel (ML) features and then learns to discriminate among them to facilitate accurate dense matching. The MBCV-Net consists of two parallel matching pathways and a fusion module that leverages the geometric relationship between stereo images via feature concatenation matching, in addition to feature-appearance-based matching. Finally, we train the proposed stereo matching network from a unified optimization perspective to allow effective combination of the multimodal cost volumes. Experiments on standard stereo benchmarks show that the proposed method is effective and surpasses many state-of-the-art methods.
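
The abstract describes a "bifurcated" cost volume built from two pathways, one based on feature concatenation and one based on feature-appearance (correlation) matching, followed by a fusion module. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch; the module names, fusion scheme, and layer sizes are assumptions for illustration only and do not reproduce the paper's DML-Net or MBCV-Net.

```python
import torch
import torch.nn as nn


def concat_cost_volume(left, right, max_disp):
    """Concatenation pathway: stack left/right features per disparity, (B, 2C, D, H, W)."""
    b, c, h, w = left.shape
    volume = left.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d] = left
            volume[:, c:, d] = right
        else:
            volume[:, :c, d, :, d:] = left[:, :, :, d:]
            volume[:, c:, d, :, d:] = right[:, :, :, :-d]
    return volume


def correlation_cost_volume(left, right, max_disp):
    """Appearance pathway: per-disparity feature correlation, (B, D, H, W)."""
    b, c, h, w = left.shape
    volume = left.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, d] = (left * right).mean(dim=1)
        else:
            volume[:, d, :, d:] = (left[:, :, :, d:] * right[:, :, :, :-d]).mean(dim=1)
    return volume


class FusedCostVolume(nn.Module):
    """Toy fusion of the two pathways: project the concatenation volume,
    add the correlation volume as a bias, and aggregate with a 3-D conv.
    This fusion rule is a placeholder, not the paper's MBCV-Net design."""

    def __init__(self, channels, max_disp):
        super().__init__()
        self.max_disp = max_disp
        self.reduce = nn.Conv3d(2 * channels, 32, kernel_size=1)
        self.aggregate = nn.Conv3d(32, 1, kernel_size=3, padding=1)

    def forward(self, left, right):
        cat_vol = concat_cost_volume(left, right, self.max_disp)
        corr_vol = correlation_cost_volume(left, right, self.max_disp)
        fused = self.reduce(cat_vol) + corr_vol.unsqueeze(1)
        return self.aggregate(fused).squeeze(1)  # (B, D, H, W) matching cost


if __name__ == "__main__":
    left = torch.randn(1, 32, 64, 128)
    right = torch.randn(1, 32, 64, 128)
    cost = FusedCostVolume(channels=32, max_disp=24)(left, right)
    print(cost.shape)  # torch.Size([1, 24, 64, 128])
```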
ISSN: 1530-437X
1558-1748
DOI: 10.1109/JSEN.2023.3246960