Depth and edge auxiliary learning for still image crowd density estimation

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 24, No. 4, pp. 1777-1792
Main Authors: Peng, Sifan, Yin, Baoqun, Hao, Xiaoliang, Yang, Qianqian, Kumar, Aakash, Wang, Luyang
Format: Journal Article
Language:English
Published: London: Springer London (Springer Nature B.V.), 01-11-2021
Description
Summary: Crowd counting plays a significant role in crowd monitoring and management, but it suffers from various challenges, especially crowd-scale variations and background interference. We therefore propose a method named depth and edge auxiliary learning for still-image crowd density estimation to cope with both problems simultaneously. The proposed multi-task framework contains three sub-tasks: crowd head edge regression, crowd density map regression, and relative depth map regression. The crowd head edge regression task outputs distinctive crowd head edge features that distinguish the crowd from complex backgrounds. The relative depth map regression task perceives crowd-scale variations and outputs multi-scale crowd features. Moreover, we design an efficient fusion strategy that fuses this information so that the crowd density map regression generates high-quality crowd density maps. Experiments were conducted on four mainstream datasets to verify the effectiveness and portability of our method. The results indicate that our method achieves competitive performance compared with other superior approaches, and it improves the counting accuracy of the baseline network by 15.6%.
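
The abstract describes a three-branch multi-task design with a fusion step feeding the density branch. The following is a minimal sketch, assuming a PyTorch implementation, of how a shared backbone, an auxiliary head-edge branch, an auxiliary relative-depth branch, and a simple concatenation-based fusion into the density branch could be wired together. The layer sizes, the fusion scheme, and the loss weights are illustrative placeholders, not the architecture or hyperparameters from the paper.

# Minimal sketch (assumption): multi-task crowd counting with edge and depth
# auxiliary branches; NOT the authors' published architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DepthEdgeAuxNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor (placeholder for a pretrained front end).
        self.backbone = nn.Sequential(
            conv_block(3, 64), conv_block(64, 64), nn.MaxPool2d(2),
            conv_block(64, 128), conv_block(128, 128),
        )
        # Auxiliary branch 1: crowd head-edge map (1 channel).
        self.edge_head = nn.Sequential(conv_block(128, 64), nn.Conv2d(64, 1, 1))
        # Auxiliary branch 2: relative depth map (1 channel).
        self.depth_head = nn.Sequential(conv_block(128, 64), nn.Conv2d(64, 1, 1))
        # Fusion: concatenate backbone features with the two auxiliary maps,
        # then regress the crowd density map.
        self.density_head = nn.Sequential(
            conv_block(128 + 2, 64), conv_block(64, 32), nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        feats = self.backbone(x)
        edge = self.edge_head(feats)        # supervised with head-edge targets
        depth = self.depth_head(feats)      # supervised with relative-depth targets
        fused = torch.cat([feats, edge, depth], dim=1)
        density = self.density_head(fused)  # supervised with density-map targets
        return density, edge, depth

# Joint objective over the three sub-tasks; the branch weights are placeholders.
def multitask_loss(density, edge, depth, gt_density, gt_edge, gt_depth,
                   w_edge=0.1, w_depth=0.1):
    mse = nn.MSELoss()
    return (mse(density, gt_density)
            + w_edge * mse(edge, gt_edge)
            + w_depth * mse(depth, gt_depth))

In such a setup, the ground-truth edge and relative-depth maps would need to match the feature resolution (here half the input size), and at inference only the density output would be summed to obtain the crowd count.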
ISSN: 1433-7541; 1433-755X
DOI: 10.1007/s10044-021-01017-4