Incremental Meta-Learning via Indirect Discriminant Alignment
Main Authors: | |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 10-02-2020 |
Subjects: | |
Online Access: | Get full text |
Summary: | The majority of modern meta-learning methods for few-shot classification
operate in two phases: a meta-training phase, in which the meta-learner learns a
generic representation by solving many few-shot tasks sampled from a large
dataset, and a testing phase, in which the meta-learner leverages its learned
internal representation for a specific few-shot task involving classes not seen
during meta-training. To the best of our knowledge, all such meta-learning
methods sample tasks from a single base dataset during meta-training and do not
adapt the algorithm afterwards. This strategy may not scale to real-world use
cases in which the meta-learner does not have access to the full meta-training
dataset from the start and must instead be updated incrementally as additional
training data becomes available. Through our experimental setup, we develop a
notion of incremental learning during the meta-training phase of meta-learning
and propose a method that can be used with multiple existing metric-based
meta-learning algorithms. Experimental results on benchmark datasets show that
our approach performs favorably at test time compared to training a model on the
full meta-training set, while incurring a negligible amount of catastrophic
forgetting. |
---|---|
DOI: | 10.48550/arxiv.2002.04162 |
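
The episodic, two-phase protocol described in the summary can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration of metric-based meta-training in the style of prototypical networks; it is not the paper's method (the incremental-update and indirect discriminant alignment machinery is omitted), and the toy `encoder`, the Gaussian `sample_episode` stand-in, and all hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy encoder standing in for the few-shot backbone (illustrative only).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def sample_episode(n_way=5, k_shot=1, n_query=15, dim=32):
    """Stand-in for drawing an N-way K-shot task from a base dataset:
    each class is a Gaussian cluster around a random mean."""
    means = 3.0 * torch.randn(n_way, dim)
    sup_x = means.repeat_interleave(k_shot, dim=0) + torch.randn(n_way * k_shot, dim)
    sup_y = torch.arange(n_way).repeat_interleave(k_shot)
    qry_x = means.repeat_interleave(n_query, dim=0) + torch.randn(n_way * n_query, dim)
    qry_y = torch.arange(n_way).repeat_interleave(n_query)
    return sup_x, sup_y, qry_x, qry_y

# Meta-training phase: learn a generic representation by solving
# many sampled few-shot tasks.
for step in range(200):
    sup_x, sup_y, qry_x, qry_y = sample_episode()
    sup_emb = encoder(sup_x)
    # Class prototype = mean embedding of that class's support examples.
    protos = torch.stack([sup_emb[sup_y == c].mean(dim=0) for c in range(5)])
    # Nearest-prototype classifier: logits are negative distances.
    logits = -torch.cdist(encoder(qry_x), protos)
    loss = F.cross_entropy(logits, qry_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At test time the same nearest-prototype rule is applied to episodes built from unseen classes. The paper's contribution concerns updating such a meta-learner incrementally as new meta-training data arrives, which this sketch does not attempt.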