Understanding Dropout as an Optimization Trick
Main Authors:
Format: Journal Article
Language: English
Published: 25-06-2018
Subjects:
Online Access: Get full text
Summary: As one of the standard approaches to training deep neural networks, dropout has been applied to regularize large models and avoid overfitting, and its performance improvement has been explained as the avoidance of co-adaptation between nodes. However, when correlations between nodes are compared after training networks with and without dropout, the question arises whether co-adaptation avoidance completely explains the dropout effect. In this paper, we propose an additional explanation of why dropout works and a new technique for designing better activation functions. First, we show that dropout can be explained as an optimization technique that pushes the input towards the saturation area of the nonlinear activation function by accelerating the flow of gradient information, even in the saturation area, during backpropagation. Based on this explanation, we propose a new technique for activation functions, *gradient acceleration in activation function (GAAF)*, which accelerates gradients so that they flow even in the saturation area. The input to the activation function can then climb onto the saturation area, which makes the network more robust because the model converges on a flat region. Experimental results support our explanation of dropout and confirm that the proposed GAAF technique improves image classification performance with the expected properties.
DOI: 10.48550/arxiv.1806.09783
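
To make the mechanism described in the summary concrete, here is a minimal JAX sketch of the gradient-acceleration idea: a term with negligible value but non-zero gradient is added to a saturating activation so that gradient information keeps flowing in the saturation area. The function name `gaaf_sigmoid`, the `accel` factor, and the stop-gradient construction are illustrative assumptions, not the paper's exact GAAF formulation.

```python
# Illustrative sketch only: add to a saturating activation f(x) a term g(x)
# whose *value* is negligible but whose *gradient* is non-zero, so gradients
# keep flowing even when the input sits in the saturation area.
import jax
import jax.numpy as jnp

def gaaf_sigmoid(x, accel=0.1):
    """Sigmoid with an added gradient-acceleration term (assumed form, not the paper's)."""
    f = jax.nn.sigmoid(x)
    g = x - jax.lax.stop_gradient(x)   # evaluates to 0 in the forward pass, gradient 1 in the backward pass
    return f + accel * g

# Compare gradients deep in the saturation area (x = 10):
plain_grad = jax.grad(lambda x: jax.nn.sigmoid(x))(10.0)
gaaf_grad = jax.grad(lambda x: gaaf_sigmoid(x))(10.0)
print(plain_grad)  # ~4.5e-5: the plain sigmoid gradient has almost vanished
print(gaaf_grad)   # ~0.1: the acceleration term keeps gradient information flowing
```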