Focalized contrastive view-invariant learning for self-supervised skeleton-based action recognition

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 537, pp. 198-209
Main Authors: Men, Qianhui; Ho, Edmond S.L.; Shum, Hubert P.H.; Leung, Howard
Format: Journal Article
Language:English
Published: Elsevier B.V., 07-06-2023
Description
Summary:
•Focalized contrastive learning for self-supervised skeletal action recognition.
•Learning view-invariance by maximizing mutual information between multi-view pairs.
•Focal loss to improve contrastive learning with hard sample mining.
•Demonstrating the effectiveness and compatibility with different data sizes.
•Robust for both supervised and unsupervised evaluation protocols.

Learning view-invariant representations is key to improving feature discrimination power for skeleton-based action recognition. Existing approaches cannot effectively remove the impact of viewpoint because their representations are implicitly view-dependent. In this work, we propose a self-supervised framework called Focalized Contrastive View-invariant Learning (FoCoViL), which significantly suppresses view-specific information in a representation space where the viewpoints are coarsely aligned. By maximizing mutual information with an effective contrastive loss between multi-view sample pairs, FoCoViL associates actions that share common view-invariant properties and simultaneously separates dissimilar ones. We further propose an adaptive focalization method based on pairwise similarity to enhance contrastive learning, yielding a clearer cluster boundary in the learned space. Unlike many existing self-supervised representation learning methods that rely heavily on supervised classifiers, FoCoViL performs well with both unsupervised and supervised classifiers, achieving superior recognition performance. Extensive experiments also show that the proposed contrastive-based focalization generates a more discriminative latent representation.
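The record gives only the high-level idea (a contrastive loss over multi-view pairs plus focal weighting for hard sample mining), not the paper's exact objective. As a rough illustration only, the following is a minimal PyTorch sketch of a focal-weighted InfoNCE loss between two views; the function name focalized_infonce, the temperature and gamma defaults, and the one-directional formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def focalized_infonce(z_view1: torch.Tensor,
                      z_view2: torch.Tensor,
                      temperature: float = 0.1,
                      gamma: float = 2.0) -> torch.Tensor:
    """Hypothetical focal-weighted InfoNCE between two views.

    z_view1, z_view2: (N, D) embeddings of the same N skeleton
    sequences observed from two viewpoints; row i of each tensor
    forms a positive pair, every other row acts as a negative.
    """
    z1 = F.normalize(z_view1, dim=1)
    z2 = F.normalize(z_view2, dim=1)
    # Pairwise cosine similarities, scaled by temperature.
    logits = z1 @ z2.t() / temperature                    # (N, N)
    targets = torch.arange(z1.size(0), device=z1.device)
    # Per-pair InfoNCE term: cross-entropy against the matching view.
    per_pair = F.cross_entropy(logits, targets, reduction='none')
    # Probability currently assigned to the correct pair; easy pairs
    # (p near 1) are down-weighted so hard pairs dominate the gradient,
    # mirroring the focal-loss idea of hard sample mining.
    p_pos = F.softmax(logits, dim=1).diagonal()
    focal_weight = (1.0 - p_pos) ** gamma
    return (focal_weight * per_pair).mean()


# Toy usage: 8 actions embedded in a 128-d space from two viewpoints.
loss = focalized_infonce(torch.randn(8, 128), torch.randn(8, 128))
```

Setting gamma to 0 recovers plain InfoNCE, so the focal exponent is the single knob that trades uniform weighting against hard-pair emphasis in this sketch.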
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2023.03.070