Adaptive weighted crowd receptive field network for crowd counting

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 24, No. 2, pp. 805-817
Main Authors: Peng, Sifan, Wang, Luyang, Yin, Baoqun, Li, Yun, Xia, Yinfeng, Hao, Xiaoliang
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01-05-2021
Summary: Crowd counting plays an important role in crowd analysis and monitoring. To this end, we propose a novel method called the Adaptive Weighted Crowd Receptive Field Network (AWRFN), which estimates both the number of people in an input crowd image and their spatial distribution. The proposed AWRFN is composed of four modules: a backbone, a crowd receptive field block (CRFB), a recurrent block (RB), and a channel attention block (CAB). The backbone uses the first ten layers of VGG16 to extract base features from the input image. The CRFB is a multi-branch architecture that simulates the human visual system to obtain refined, discriminative crowd features. The RB generates strong semantic and global information by recurrently applying stacked convolutional layers that share the same parameters. Guided by the output of the RB, the CAB produces per-channel weights that reweight the feature maps output by the CRFB. Unlike previous works that use Euclidean loss, we employ Smooth L1 loss to train our network in an end-to-end fashion. To demonstrate the effectiveness of the proposed method, we evaluate AWRFN on two representative datasets, ShanghaiTech and UCF_CC_50. The experimental results show that our method is both effective and robust compared with state-of-the-art approaches.
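
The module composition described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' implementation: the branch layout and dilation rates in CRFB, the channel widths, the number of recurrence steps in RB, the squeeze-and-excitation form of CAB, and the 1x1 density-map head are all assumptions, since the abstract specifies only each module's role.

```python
# Minimal sketch of the AWRFN composition described in the abstract.
# Branch design, channel widths, dilation rates, recurrence depth, and the
# density-map head are assumptions; the abstract names only the module roles.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class CRFB(nn.Module):
    """Crowd receptive field block: parallel branches with growing
    receptive fields (branch layout assumed)."""
    def __init__(self, in_ch=512, branch_ch=128):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True)),
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=3, dilation=3), nn.ReLU(inplace=True)),
        ])
        self.fuse = nn.Conv2d(4 * branch_ch, in_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class RB(nn.Module):
    """Recurrent block: one convolution applied repeatedly with shared
    parameters, matching 'recurrently stacking convolutional layers with
    the same parameters' in the abstract."""
    def __init__(self, ch=512, steps=3):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        for _ in range(self.steps):
            x = torch.relu(self.conv(x))
        return x


class CAB(nn.Module):
    """Channel attention block: RB features guide per-channel weights that
    rescale the CRFB features (squeeze-and-excitation form assumed)."""
    def __init__(self, ch=512, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, crfb_feat, rb_feat):
        w = self.fc(rb_feat.mean(dim=(2, 3)))   # global pooling of RB output
        return crfb_feat * w[:, :, None, None]  # reweight CRFB channels


class AWRFN(nn.Module):
    def __init__(self):
        super().__init__()
        # First ten convolutional layers of VGG16 (through conv4_3 + ReLU).
        self.backbone = nn.Sequential(*list(vgg16(weights="DEFAULT").features)[:23])
        self.crfb, self.rb, self.cab = CRFB(), RB(), CAB()
        self.head = nn.Conv2d(512, 1, 1)  # density-map regressor (assumption)

    def forward(self, x):
        base = self.backbone(x)
        return self.head(self.cab(self.crfb(base), self.rb(base)))


# Smooth L1 loss in place of Euclidean loss, as the abstract states:
# smooth_l1(d) = 0.5 * d**2 if |d| < 1, else |d| - 0.5 (PyTorch default beta=1).
criterion = nn.SmoothL1Loss()
```

Under these assumptions, a forward pass on an RGB crowd image yields a one-channel density map at 1/8 of the input resolution (after VGG16's three retained pooling stages); summing the map gives the count estimate, and ground-truth density maps would be downsampled to match before applying the loss.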
ISSN: 1433-7541
eISSN: 1433-755X
DOI: 10.1007/s10044-020-00934-0