Fast Pixelwise Adaptive Visual Tracking of Non-Rigid Objects


Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 26, No. 5, pp. 2368-2380
Main Authors: Duffner, Stefan; Garcia, Christophe
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-05-2017
Description
Summary: In this paper, we present a new algorithm for real-time single-object tracking in videos in unconstrained environments. The algorithm comprises two components that are trained "in one shot" on the first video frame: a detector based on the generalized Hough transform with color and gradient descriptors, and a probabilistic segmentation method based on global models of the foreground and background color distributions. Both components operate at the pixel level and are combined for tracking, adapting each other in a co-training manner. Moreover, we propose an adaptive shape model as well as a new probabilistic method for updating the scale of the tracker. Through effective model adaptation and segmentation, the algorithm is able to track objects that undergo rigid and non-rigid deformations and considerable shape and appearance variations. The proposed tracking method has been thoroughly evaluated on challenging benchmarks and outperforms state-of-the-art tracking methods designed for the same task. Finally, a very efficient implementation of the proposed models allows for extremely fast tracking.
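
Illustration: the sketch below is not the authors' implementation, but a minimal NumPy example of the kind of pixelwise probabilistic segmentation the summary describes, under the assumption that the global foreground/background models are normalized histograms over quantized RGB colors; the helper names (color_histogram, foreground_probability, adapt_histogram), the fixed prior, and the update rate are hypothetical placeholders.

import numpy as np

# Illustrative sketch only: global color models assumed to be normalized
# 3-D histograms over quantized RGB values.

def color_histogram(pixels, bins=16):
    """pixels: (N, 3) uint8 RGB samples -> normalized (bins, bins, bins) histogram."""
    hist, _ = np.histogramdd(
        pixels.astype(np.float64),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist / max(hist.sum(), 1e-12)

def foreground_probability(frame, fg_hist, bg_hist, prior_fg=0.5, bins=16):
    """frame: (H, W, 3) uint8 image -> (H, W) map of p(foreground | color)."""
    idx = (frame.astype(np.int64) * bins) // 256              # quantize each channel
    p_c_fg = fg_hist[idx[..., 0], idx[..., 1], idx[..., 2]]   # p(color | foreground)
    p_c_bg = bg_hist[idx[..., 0], idx[..., 1], idx[..., 2]]   # p(color | background)
    num = p_c_fg * prior_fg
    return num / (num + p_c_bg * (1.0 - prior_fg) + 1e-12)    # Bayes' rule per pixel

def adapt_histogram(old_hist, new_hist, rate=0.1):
    """Move the global model slowly toward the newly observed distribution."""
    return (1.0 - rate) * old_hist + rate * new_hist

# Hypothetical usage: build the models from first-frame pixels inside/outside the
# object region (mask: boolean (H, W) object mask), then score later frames.
#   fg_hist  = color_histogram(frame0[mask])
#   bg_hist  = color_histogram(frame0[~mask])
#   prob_map = foreground_probability(frame1, fg_hist, bg_hist)

In the tracker described by the abstract, such a per-pixel probability map would be combined with the Hough-transform detector's output and both models would be updated from each other's results; the constants above are placeholder values, not those of the paper.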
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2017.2676346