Rethinking deep clustering paradigms: Self-supervision is all you need


Bibliographic Details
Published in:Neural networks Vol. 181; p. 106773
Main Authors: Shaheen, Amal, Mrabah, Nairouz, Ksantini, Riadh, Alqaddoumi, Abdulla
Format: Journal Article
Language:English
Published:United States: Elsevier Ltd, 01-01-2025
Description
Summary:The recent advances in deep clustering have been made possible by significant progress in self-supervised and pseudo-supervised learning. However, the trade-off between self-supervision and pseudo-supervision can give rise to three primary issues: joint training causes Feature Randomness and Feature Drift, whereas independent training causes Feature Randomness and Feature Twist. In essence, using pseudo-labels generates random and unreliable features; the combination of pseudo-supervision and self-supervision drifts the reliable clustering-oriented features; and moving from self-supervision to pseudo-supervision can twist the curved latent manifolds. This paper addresses the limitations of existing deep clustering paradigms concerning Feature Randomness, Feature Drift, and Feature Twist. We propose a new paradigm with a strategy that replaces pseudo-supervision with a second round of self-supervision training. The new strategy makes the transition between instance-level self-supervision and neighborhood-level self-supervision smoother and less abrupt. Moreover, it prevents the drifting effect caused by the strong competition between instance-level self-supervision and clustering-level pseudo-supervision, and the absence of pseudo-supervision removes the risk of generating random features. With this approach, our paper introduces a Rethinking of the Deep Clustering Paradigms, denoted R-DC. Our model is specifically designed to address the three primary challenges encountered in deep clustering: Feature Randomness, Feature Drift, and Feature Twist. Experimental results on six datasets show that the two-level self-supervision training yields substantial improvements, as evidenced by the clustering results and the ablation study. Furthermore, experimental comparisons with nine state-of-the-art clustering models clearly show that our strategy leads to a significant enhancement in performance.
Highlights:
•Analyses the drawbacks of pseudo-supervision.
•Introduces a novel deep clustering paradigm that eliminates pseudo-supervision.
•Analyses the advantages of the new paradigm after eliminating pseudo-supervision.
•Introduces a deep clustering method that selects core points for proximity-level self-supervision.
•Shows significant performance gains over state-of-the-art deep clustering methods.
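To make the abstract's "neighborhood-level self-supervision" concrete, the sketch below shows one way such a training signal could be built: each latent point is given a target equal to the mean of its k nearest neighbors, so the loss pulls embeddings toward their local neighborhood without any pseudo-labels. This is a minimal, hypothetical illustration in NumPy, not the authors' exact R-DC objective; the function names and the choice of a mean-of-neighbors target are assumptions.

```python
import numpy as np

def knn_targets(z, k=3):
    """Neighborhood-level target for each embedding: the mean of its
    k nearest neighbors in latent space (self excluded).
    Hypothetical sketch; not the R-DC paper's exact formulation."""
    # Pairwise Euclidean distances between all embeddings.
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbor
    idx = np.argsort(d, axis=1)[:, :k]     # indices of the k nearest neighbors
    return z[idx].mean(axis=1)             # (n, d) array of targets

def neighborhood_loss(z, k=3):
    """Mean squared distance between each point and its neighborhood target."""
    return float(np.mean((z - knn_targets(z, k)) ** 2))
```

In a two-round scheme of the kind the abstract describes, a first round of instance-level self-supervision (e.g. reconstruction) would shape the embeddings, after which a loss like `neighborhood_loss` would replace pseudo-supervised clustering losses in the second round.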
ISSN:0893-6080
1879-2782
DOI:10.1016/j.neunet.2024.106773