Neural networks that overcome classic challenges through practice
Main Authors:
Format: Journal Article
Language: English
Published: 14-10-2024
Online Access: Get full text
Summary: Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities. Here we review recent work that has used metalearning to help overcome some of these challenges. We characterize their successes as addressing an important developmental problem: they provide machines with an incentive to improve X (where X represents the desired capability) and opportunities to practice it, through explicit optimization for X; in contrast, conventional approaches hope to achieve X through generalization from related but different objectives. We review applications of this principle to four classic challenges: systematicity, catastrophic forgetting, few-shot learning and multi-step reasoning; we also discuss related aspects of human development in natural environments.
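The "incentive plus practice" principle in the summary can be illustrated with a toy metalearning loop. This is not code from the paper; it is a minimal Reptile-style sketch on an invented task family (predicting y = a·x for a random slope a), where the meta-learned quantity is a shared initialization that is explicitly optimized, episode after episode, for fast adaptation:

```python
import random

# Illustrative sketch only (names and task family are assumptions, not from the paper).
# Each episode is one "practice" opportunity: the learner adapts its single
# weight w to a freshly sampled task, and a Reptile-style meta-update nudges
# the shared initialization w0 toward the adapted weight -- explicit
# optimization for the capability (fast adaptation) rather than hoping it
# emerges from an unrelated objective.

random.seed(0)

def inner_adapt(w, a, steps=10, lr=0.1):
    """A few practice steps on one episode's loss (w - a)**2."""
    for _ in range(steps):
        grad = 2.0 * (w - a)          # d/dw of (w - a)**2
        w -= lr * grad
    return w

def meta_train(episodes=2000, meta_lr=0.05):
    w0 = 0.0                          # shared initialization being meta-learned
    for _ in range(episodes):
        a = random.uniform(1.0, 3.0)  # sample a fresh practice task
        w_adapted = inner_adapt(w0, a)
        w0 += meta_lr * (w_adapted - w0)  # Reptile meta-update
    return w0

w0 = meta_train()
```

After many episodes, w0 settles near the center of the task distribution, so a handful of inner steps suffices on any new task; the same episodic structure underlies the few-shot learning and systematicity applications the review surveys.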
DOI: 10.48550/arxiv.2410.10596