Maximizing Discrimination Capability of Knowledge Distillation with Energy Function
Format: Journal Article
Language: English
Published: 14-02-2024
Online Access: Get full text

Summary: To apply the latest computer vision techniques, which carry a large
computational cost, in real industrial applications, knowledge distillation
methods (KDs) are essential. Existing logit-based KDs apply constant
temperature scaling to all samples in the dataset, limiting the use of the
knowledge inherent in each individual sample. In our approach, we divide the
dataset into two categories (i.e., low-energy and high-energy samples) based
on their energy scores. Through experiments, we confirm that low-energy
samples exhibit high confidence scores, indicating confident predictions,
while high-energy samples yield low confidence scores, indicating uncertain
predictions. To distill optimal knowledge by adjusting the non-target class
predictions, we apply a higher temperature to low-energy samples to create
smoother distributions and a lower temperature to high-energy samples to
obtain sharper distributions. Compared to previous logit-based and
feature-based methods, our energy-based KD (Energy KD) achieves better
performance on various datasets. In particular, Energy KD shows significant
improvements on the CIFAR-100-LT and ImageNet datasets, which contain many
challenging samples. Furthermore, we propose high-energy-based data
augmentation (HE-DA) to further improve performance. We demonstrate that a
meaningful performance improvement can be achieved by augmenting only 20-50%
of the dataset, suggesting that HE-DA can be employed on resource-limited
devices. To the best of our knowledge, this paper represents the first
attempt to make use of the energy function in knowledge distillation and data
augmentation, and we believe it will contribute greatly to future research.
DOI: 10.48550/arxiv.2311.14334
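
The summary describes two mechanisms that lend themselves to a short sketch: scoring each sample with an energy function, then using that score both to choose a per-sample distillation temperature (Energy KD) and to select the subset of samples to augment (HE-DA). Below is a minimal PyTorch illustration, assuming the standard logit-based energy score E(x) = -T · logsumexp(f(x)/T); the median split, the temperature values, the augmentation fraction, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def energy_score(logits: torch.Tensor, t: float = 1.0) -> torch.Tensor:
    """Logit-based energy score E(x) = -t * logsumexp(f(x) / t).
    Lower energy corresponds to more confident predictions."""
    return -t * torch.logsumexp(logits / t, dim=-1)

def per_sample_temperature(teacher_logits: torch.Tensor,
                           t_for_low_energy: float = 4.0,
                           t_for_high_energy: float = 2.0) -> torch.Tensor:
    """Map each sample's energy to a distillation temperature: a higher
    temperature (smoother targets) for low-energy samples, a lower one
    (sharper targets) for high-energy samples. The median split and the
    two temperature values are illustrative choices."""
    energy = energy_score(teacher_logits)
    threshold = energy.median()
    return torch.where(energy <= threshold,
                       torch.full_like(energy, t_for_low_energy),
                       torch.full_like(energy, t_for_high_energy))

def energy_kd_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL distillation loss with an energy-dependent temperature per sample,
    scaled by T^2 as in standard knowledge distillation."""
    temp = per_sample_temperature(teacher_logits).unsqueeze(-1)   # (N, 1)
    log_p_student = F.log_softmax(student_logits / temp, dim=-1)
    p_teacher = F.softmax(teacher_logits / temp, dim=-1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    return (kl * temp.squeeze(-1) ** 2).mean()

def high_energy_indices(teacher_logits: torch.Tensor,
                        fraction: float = 0.2) -> torch.Tensor:
    """HE-DA selection sketch: indices of the highest-energy (most uncertain)
    samples; only this subset (e.g. 20-50% of the dataset) is augmented."""
    energy = energy_score(teacher_logits)
    k = max(1, int(fraction * energy.numel()))
    return torch.topk(energy, k).indices
```

Note the direction of the mapping: low-energy (confident) samples receive the higher temperature to smooth their non-target predictions, while high-energy (uncertain) samples receive the lower temperature to sharpen them, matching the summary above.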