Learning to Balance Local Losses via Meta-Learning

Bibliographic Details
Published in: IEEE Access, Vol. 9, pp. 130834-130844
Main Authors: Yoa, Seungdong; Jeon, Minkyu; Oh, Youngjin; Kim, Hyunwoo J.
Format: Article
Language: English
Published: IEEE, 2021
Summary: The standard training of deep neural networks relies on a global, fixed loss function. For more effective training, dynamic loss functions have recently been proposed. However, a dynamic global loss function is not flexible enough to differentially train the layers of complex deep neural networks. In this paper, we propose a general framework that learns to adaptively train each layer of a deep neural network via meta-learning. Our framework leverages the local error signals from individual layers and identifies which layer needs to be trained more at every iteration. The proposed method also improves the local loss function with our minibatch-wise dropout and cross-validation loop to alleviate meta-overfitting. Experiments show that our method achieves performance competitive with state-of-the-art methods on popular benchmark datasets for image classification: CIFAR-10 and CIFAR-100. Surprisingly, our method enables training deep neural networks without skip connections using dynamically weighted local loss functions.
DOI: 10.1109/ACCESS.2021.3113934
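
The summary describes the core mechanism at a high level: every layer emits a local loss, a learnable per-layer weight balances those losses, and the weights are tuned on held-out data to decide which layer needs more training. The PyTorch snippet below is a minimal sketch of that idea only; the two-block MLP, the auxiliary classifier heads, the first-order weight update, and all names in it (LocalLossNet, log_w, train_step) are illustrative assumptions, not the authors' implementation, which further uses minibatch-wise dropout and a cross-validation loop to curb meta-overfitting.

```python
# Hypothetical sketch of dynamically weighted local losses.
# Not the paper's code: architecture, heads, and update rule are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLossNet(nn.Module):
    """Two-block MLP; each block has an auxiliary head emitting a local loss."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        # One auxiliary classifier per block supplies its local error signal.
        self.heads = nn.ModuleList([nn.Linear(hidden, num_classes) for _ in range(2)])

    def local_losses(self, x, y):
        losses = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            losses.append(F.cross_entropy(head(x), y))
        return torch.stack(losses)  # shape: (num_blocks,)

net = LocalLossNet()
log_w = torch.zeros(2, requires_grad=True)        # learnable per-layer balance
opt = torch.optim.SGD(net.parameters(), lr=0.1)   # inner (network) optimizer
meta_opt = torch.optim.SGD([log_w], lr=0.01)      # outer (weight) optimizer

def train_step(x, y, x_val, y_val):
    # Inner step: update the network with softmax-weighted local losses.
    w = torch.softmax(log_w, dim=0).detach()      # fixed during the inner step
    opt.zero_grad()
    (w * net.local_losses(x, y)).sum().backward()
    opt.step()
    # Outer step: adjust the balance on a held-out minibatch. A faithful
    # meta-gradient would differentiate through the inner update; this
    # first-order shortcut is used here purely to keep the sketch short.
    meta_opt.zero_grad()
    (torch.softmax(log_w, dim=0) * net.local_losses(x_val, y_val)).sum().backward()
    meta_opt.step()

# Toy usage with random data in place of CIFAR-10 images and labels.
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
x_val, y_val = torch.randn(16, 32), torch.randint(0, 10, (16,))
train_step(x, y, x_val, y_val)
print(torch.softmax(log_w, dim=0))  # current per-layer weighting
```

The key design point from the summary is that the balance is re-estimated at every iteration from held-out data; in the full method that signal comes from a proper meta-gradient (differentiating the validation loss through the inner update), which the shortcut above deliberately omits.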