Improving Resnet-9 Generalization Trained on Small Datasets
Format: Journal Article
Language: English
Published: 07-09-2023
Summary: This paper presents our proposed approach that won the first prize at the ICLR competition on Hardware Aware Efficient Training. The challenge is to achieve the highest possible accuracy in an image classification task in less than 10 minutes. The training is done on a small dataset of 5000 images picked randomly from the CIFAR-10 dataset. The evaluation is performed by the competition organizers on a secret dataset of 1000 images of the same size. Our approach applies a series of techniques for improving the generalization of ResNet-9, including sharpness-aware optimization, label smoothing, gradient centralization, and input patch whitening, as well as meta-learning-based training. Our experiments show that ResNet-9 can achieve 88% accuracy when trained on only a 10% subset of the CIFAR-10 dataset in less than 10 minutes.
DOI: 10.48550/arxiv.2309.03965
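The summary names label smoothing and gradient centralization among the techniques applied. As a rough illustration only (not the authors' implementation, and with the smoothing factor of 0.1 and the `train_step` helper assumed for the example), a minimal PyTorch sketch of how these two pieces commonly fit into a training loop might look like this:

```python
import torch
import torch.nn as nn

# Label smoothing: supported directly by PyTorch's cross-entropy loss.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # 0.1 is an assumed value

def centralize_gradients(model: nn.Module) -> None:
    """Gradient centralization: subtract the per-filter mean from the
    gradients of multi-dimensional (conv/linear) weight tensors."""
    for p in model.parameters():
        if p.grad is not None and p.grad.dim() > 1:
            dims = tuple(range(1, p.grad.dim()))
            p.grad.sub_(p.grad.mean(dim=dims, keepdim=True))

def train_step(model, optimizer, images, labels):
    """One illustrative training step combining both techniques."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    centralize_gradients(model)  # applied just before the optimizer update
    optimizer.step()
    return loss.item()
```

The other listed techniques (sharpness-aware optimization, input patch whitening, meta-learning-based training) require more setup and are not sketched here.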