Deep Multi-View Correspondence for Identity-Aware Multi-Target Tracking

Bibliographic Details
Published in: 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 1-8
Main Authors: Hanif, Adnan; Bin Mansoor, Atif; Imran, Ali Shariq
Format: Conference Proceeding
Language: English
Published: IEEE, 01-11-2017
Description
Summary: A multi-view, multi-target correspondence framework employing deep learning on overlapping cameras for identity-aware tracking in the presence of occlusion is proposed. Our complete pipeline of detection, multi-view correspondence, fusion, and tracking, inspired by AI, greatly improves person correspondence across multiple wide-angle views over traditionally used feature sets and handcrafted descriptors. We transfer the learning of a deep convolutional neural network (CNN), trained to jointly learn pedestrian features and similarity measures, to establish identity correspondence of non-occluding targets across multiple overlapping cameras with varying illumination and human pose. Subsequently, the identity-aware foreground principal axes of the visible targets in each view are fused onto a top view without requiring camera calibration or precise principal-axis length information. The ground-point localisation of targets on the top view is then solved via linear programming, which optimally assigns projected-axis intersection points to targets using identity information from the individual views. Finally, our proposed scheme is evaluated with the MOTA and MOTP tracking performance measures on benchmark video sequences and demonstrates high accuracy compared to other well-known approaches.
DOI: 10.1109/DICTA.2017.8227423
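
The linear-programming assignment step described in the summary can be illustrated with a small sketch. The following is a minimal, hypothetical formulation of the point-to-identity assignment as an LP, assuming a square cost matrix derived from the per-view identity information; the function name, the toy cost values, and the SciPy-based solver are illustrative assumptions and not the authors' implementation. Because the assignment polytope's constraint matrix is totally unimodular, the LP relaxation already yields an integral one-to-one assignment.

```python
# Minimal sketch (not the authors' code) of assigning projected principal-axis
# intersection points on the top view to target identities via linear programming.
import numpy as np
from scipy.optimize import linprog

def assign_points_to_identities(cost):
    """Assign each intersection point to one identity by solving an LP.

    cost[i, j] = mismatch between intersection point i and identity j
    (in the paper this would come from the per-view identity information;
    here the values are hypothetical placeholders).
    """
    n_points, n_ids = cost.shape
    assert n_points == n_ids, "sketch assumes a square assignment"
    n = n_points

    # One equality constraint per point (each point takes exactly one identity)
    # and one per identity (each identity is taken by exactly one point).
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j x[i, j] = 1
        A_eq[n + i, i::n] = 1.0            # sum_i x[i, j] = 1 (column i)
    b_eq = np.ones(2 * n)

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    x = res.x.reshape(n, n)
    return x.argmax(axis=1)                # identity index for each point

# Toy example with a hypothetical 3x3 cost matrix.
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.6, 0.3]])
print(assign_points_to_identities(cost))   # -> [0 1 2]
```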