Cloud-based scalable object recognition from video streams using orientation fusion and convolutional neural networks
Published in: Pattern Recognition, Vol. 121, p. 108207
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-01-2022
Subjects:
Summary:
• This paper pioneers the use of empirical mode decomposition with CNNs to improve visual object recognition accuracy on challenging video datasets.
• We study the orientation, phase and amplitude components and show their performance in terms of visual recognition accuracy.
• We show that the orientation component is a good candidate to achieve high object recognition accuracy for illumination- and expression-variant video datasets.
• We propose a feature-fusion strategy of the orientation components to further improve the accuracy rates.
• We show that the orientation-fusion approach significantly improves visual recognition accuracy under challenging conditions.
Object recognition from live video streams comes with numerous challenges, such as variation in illumination conditions and poses. Convolutional neural networks (CNNs) have been widely used to perform intelligent visual object recognition. Yet, CNNs still suffer from severe accuracy degradation, particularly on illumination-variant datasets. To address this problem, we propose a new CNN method based on orientation fusion for visual object recognition. The proposed cloud-based video analytics system pioneers the use of bi-dimensional empirical mode decomposition to split a video frame into intrinsic mode functions (IMFs). These IMFs then undergo a Riesz transform to produce monogenic object components, which are in turn used for the training of CNNs. Past works have demonstrated that the object orientation component can be used to achieve accuracy levels as high as 93%. Herein we demonstrate how a feature-fusion strategy of the orientation components further improves visual recognition accuracy to 97%. We also assess the scalability of our method, looking at both the number and the size of the video streams under scrutiny. We carry out extensive experimentation on the publicly available Yale dataset, as well as a self-generated video dataset, finding significant improvements (both in accuracy and scale) in comparison to AlexNet, LeNet and SE-ResNeXt, which are three of the most commonly used deep learning models for visual object recognition and classification.
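The abstract describes a pipeline in which each frame is decomposed into IMFs by bi-dimensional empirical mode decomposition (BEMD), each IMF undergoes a Riesz transform to yield monogenic amplitude, phase and orientation components, and the orientation maps are fused into the CNN input. The following is a minimal sketch of the Riesz-transform and orientation-fusion steps only; it is not the authors' implementation. NumPy is assumed, the helper name `extract_imfs` is a hypothetical placeholder for a BEMD routine, and the function names are illustrative.

```python
import numpy as np

def monogenic_components(imf):
    """Riesz transform of a 2-D IMF, returning monogenic amplitude,
    phase and orientation maps (frequency-domain construction)."""
    rows, cols = imf.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)      # vertical spatial frequencies
    v = np.fft.fftfreq(cols).reshape(1, -1)      # horizontal spatial frequencies
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                           # avoid division by zero at DC
    F = np.fft.fft2(imf)
    r1 = np.real(np.fft.ifft2(-1j * u / radius * F))   # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / radius * F))   # second Riesz component
    amplitude = np.sqrt(imf ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.hypot(r1, r2), imf)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation

def fuse_orientations(imfs):
    """Stack the orientation maps of all IMFs into one multi-channel array,
    i.e. the fused orientation feature that would be fed to a CNN."""
    return np.stack([monogenic_components(imf)[2] for imf in imfs], axis=-1)

# Hypothetical usage: `extract_imfs` stands in for a BEMD routine that splits
# a grayscale frame into its intrinsic mode functions.
# imfs = extract_imfs(frame)             # list of 2-D arrays
# features = fuse_orientations(imfs)     # H x W x n_IMFs tensor for CNN training
```

The frequency-domain kernels -i*u/|w| and -i*v/|w| are the standard construction of the two Riesz components of a monogenic signal; how the fused orientation channels are arranged for the CNN input in the actual system is not specified in this record.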
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2021.108207