FFAVOD: Feature fusion architecture for video object detection


Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 151, pp. 294–301
Main Authors: Hughes Perreault, Guillaume-Alexandre Bilodeau, Nicolas Saunier, Maguelonne Héritier
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V. (Elsevier Science Ltd), 01-11-2021
Description
Summary:

Highlights:
• We designed a novel architecture for video object detection that capitalizes on temporal information.
• We designed a novel fusion module to merge feature maps coming from several temporally close frames.
• We proposed an improvement to the SpotNet attention module.
• We trained and evaluated our architecture with three different base detectors on two traffic surveillance datasets.
• We demonstrated a consistent and significant improvement of our model over the three baselines.

A significant amount of redundancy exists between consecutive frames of a video. Object detectors typically produce detections for one image at a time, without any capabilities for taking advantage of this redundancy. Meanwhile, many applications for object detection work with videos, including intelligent transportation systems, advanced driver assistance systems and video surveillance. Our work aims at taking advantage of the similarity between video frames to produce better detections. We propose FFAVOD, standing for feature fusion architecture for video object detection. We first introduce a novel video object detection architecture that allows a network to share feature maps between nearby frames. Second, we propose a feature fusion module that learns to merge feature maps to enhance them. We show that using the proposed architecture and the fusion module can improve the performance of three base object detectors on two object detection benchmarks containing sequences of moving road users. Additionally, to further increase performance, we propose an improvement to the SpotNet attention module. Using our architecture on the improved SpotNet detector, we obtain the state-of-the-art performance on the UA-DETRAC public benchmark as well as on the UAVDT dataset. Code is available at https://github.com/hu64/FFAVOD.
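To give a rough intuition for what "merging feature maps from temporally close frames" can mean in practice, the sketch below implements one common realization: channel-concatenate the per-frame feature maps and apply a learned 1x1 convolution (a per-pixel linear map over channels) to produce a single fused map of the original depth. This is a minimal illustration, not the authors' actual fusion module; the function name `fuse_feature_maps` and the `mix` matrix standing in for learned 1x1 conv weights are assumptions for the example.

```python
# Minimal sketch of feature-map fusion across nearby frames (hypothetical,
# pure-Python stand-in for a learned 1x1 convolution over concatenated channels).

def fuse_feature_maps(feature_maps, mix):
    """Fuse n per-frame feature maps into one map of the original depth.

    feature_maps: list of n maps, each a nested list [C][H][W]
    mix: C x (n*C) mixing matrix, a stand-in for learned 1x1 conv weights
    Returns a fused map of shape [C][H][W].
    """
    n = len(feature_maps)
    C = len(feature_maps[0])
    H = len(feature_maps[0][0])
    W = len(feature_maps[0][0][0])
    # Channel-concatenate the n frames: stacked has n*C channels.
    stacked = [ch for fm in feature_maps for ch in fm]
    # A 1x1 convolution is a per-pixel linear combination of channels.
    fused = [[[sum(mix[c][k] * stacked[k][y][x] for k in range(n * C))
               for x in range(W)]
              for y in range(H)]
             for c in range(C)]
    return fused

# Toy example: 3 neighbouring frames, each with C=2 channels of 2x2 features,
# where frame f, channel c holds the constant value f + c.
frames = [[[[float(f + c)] * 2 for _ in range(2)] for c in range(2)]
          for f in range(3)]
# Mixing weights that simply average each channel across the 3 frames
# (stacked channel k belongs to frame k // 2, channel k % 2).
mix = [[1 / 3 if k % 2 == c else 0.0 for k in range(6)] for c in range(2)]
fused = fuse_feature_maps(frames, mix)
print(fused[0][0][0])  # channel 0: (0 + 1 + 2) / 3 = 1.0
print(fused[1][0][0])  # channel 1: (1 + 2 + 3) / 3 = 2.0
```

In the paper's actual module the mixing weights are learned end-to-end with the detector, so the network can do more than average; averaging is used here only to make the toy output easy to verify.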
ISSN: 0167-8655
ISSN: 1872-7344
DOI: 10.1016/j.patrec.2021.09.002