Factors for the Generalisation of Identity Relations by Neural Networks
Main Authors: | , |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 12-06-2019 |
Subjects: | |
Summary: | ICML 2019 Workshop on Understanding and Improving Generalization
in Deep Learning, Long Beach, California, 2019. Many researchers implicitly
assume that neural networks learn relations and generalise them to new,
unseen data. It has been shown recently, however, that the generalisation of
feed-forward networks fails for identity relations. The proposed solution to
this problem is to create an inductive bias with Differential Rectifier (DR)
units. In this work we explore various factors in the neural network
architecture and learning process, and examine whether they make a difference
to the generalisation of equality detection by neural networks without and
with DR units, in early and mid fusion architectures. In experiments with
synthetic data we find effects of the number of hidden layers, the activation
function and the data representation. The size of the training set relative
to the total set of possible vectors also makes a difference. However,
without DR units the accuracy never exceeds 61%, against a chance level of
50%. DR units improve generalisation in all tasks and lead to almost perfect
test accuracy in the Mid Fusion setting. Thus, DR units seem to be a
promising approach for creating generalisation abilities that standard
networks lack. |
DOI: | 10.48550/arxiv.1906.05449 |
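
For orientation, the following is a minimal PyTorch sketch of how DR units and a mid fusion equality detector might be wired up. It assumes that a DR unit outputs the rectified componentwise differences |a_i - b_i| of the two input vectors and that mid fusion concatenates these features with hidden representations before the output layer; the class names, layer sizes, and exact fusion point are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn


class DRUnit(nn.Module):
    """Sketch of a Differential Rectifier (DR) unit: outputs the rectified
    differences |a_i - b_i| of the two input vectors, giving the network a
    fixed, non-trainable signal for equality detection."""

    def forward(self, a, b):
        # relu(a - b) + relu(b - a) equals the elementwise |a - b|
        return torch.relu(a - b) + torch.relu(b - a)


class MidFusionEqualityNet(nn.Module):
    """Illustrative mid fusion equality detector (hypothetical layout):
    each vector is embedded separately, the DR features are concatenated
    with the hidden representations, and a linear layer predicts whether
    the two vectors are identical."""

    def __init__(self, dim, hidden=10):
        super().__init__()
        self.embed = nn.Linear(dim, hidden)
        self.dr = DRUnit()
        self.out = nn.Linear(2 * hidden + dim, 1)

    def forward(self, a, b):
        ha = torch.relu(self.embed(a))
        hb = torch.relu(self.embed(b))
        fused = torch.cat([ha, hb, self.dr(a, b)], dim=-1)
        return torch.sigmoid(self.out(fused))


# Usage on random binary vectors, as in the paper's synthetic-data setting
if __name__ == "__main__":
    net = MidFusionEqualityNet(dim=8)
    a = torch.randint(0, 2, (4, 8)).float()
    b = a.clone()                  # identical pairs -> target label 1
    print(net(a, b).shape)         # torch.Size([4, 1])
```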