Automating Continual Learning
Main Authors:
Format: Journal Article
Language: English
Published: 30-11-2023
Summary: General-purpose learning systems should improve themselves in open-ended fashion in ever-changing environments. Conventional learning algorithms for neural networks, however, suffer from catastrophic forgetting (CF) -- previously acquired skills are forgotten when a new task is learned. Instead of hand-crafting new algorithms for avoiding CF, we propose Automated Continual Learning (ACL) to train self-referential neural networks to meta-learn their own in-context continual (meta-)learning algorithms. ACL encodes all desiderata -- good performance on both old and new tasks -- into its meta-learning objectives. Our experiments demonstrate that ACL effectively solves "in-context catastrophic forgetting"; our ACL-learned algorithms outperform hand-crafted ones, e.g., on the Split-MNIST benchmark in the replay-free setting, and enable continual learning of diverse tasks consisting of multiple few-shot and standard image classification datasets.
DOI: 10.48550/arxiv.2312.00276
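The summary describes ACL's objective only at a high level. As a rough illustration, the sketch below shows what a meta-learning loss that rewards performance on both the old and the new task after purely in-context adaptation might look like. This is a minimal sketch under stated assumptions, not the paper's implementation: a plain GRU stands in for the self-referential networks used in the paper, and the episode layout (`old_task`, `new_task` support/query tensors) is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InContextLearner(nn.Module):
    """Sequence model that adapts only through its forward pass ("in context").
    Assumption: a GRU stands in for the paper's self-referential networks."""
    def __init__(self, input_dim, num_classes, hidden_dim=128):
        super().__init__()
        self.num_classes = num_classes
        self.embed = nn.Linear(input_dim + num_classes, hidden_dim)
        self.core = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, xs, ys):
        # xs: (B, T, D); ys: (B, T), with -1 marking unlabeled query positions.
        onehot = F.one_hot(ys.clamp(min=0), self.num_classes).float()
        onehot = onehot * (ys >= 0).unsqueeze(-1).float()  # zero labels at queries
        h, _ = self.core(self.embed(torch.cat([xs, onehot], dim=-1)))
        return self.head(h)                                # (B, T, num_classes)

def acl_meta_loss(model, old_task, new_task):
    """Hypothetical ACL-style objective: after consuming old-task then new-task
    supports in context, queries from BOTH tasks must be answered correctly,
    so avoiding in-context forgetting is part of the meta-training signal."""
    xs_o, ys_o, xq_o, yq_o = old_task   # support/query tensors, old task
    xs_n, ys_n, xq_n, yq_n = new_task   # support/query tensors, new task
    unl_o = torch.full(yq_o.shape, -1, dtype=torch.long)
    unl_n = torch.full(yq_n.shape, -1, dtype=torch.long)
    xs = torch.cat([xs_o, xs_n, xq_o, xq_n], dim=1)        # one long episode
    ys = torch.cat([ys_o, ys_n, unl_o, unl_n], dim=1)
    logits = model(xs, ys)
    t = xs_o.shape[1] + xs_n.shape[1]                      # end of the supports
    q = xq_o.shape[1]
    loss_old = F.cross_entropy(logits[:, t:t + q].flatten(0, 1), yq_o.flatten())
    loss_new = F.cross_entropy(logits[:, t + q:].flatten(0, 1), yq_n.flatten())
    return loss_old + loss_new   # backprop through the whole in-context episode
```

Meta-training would sample many such multi-task episodes and update the model's weights on this summed loss, so that forgetting-free adaptation becomes something the network's forward pass itself learns to do.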