Prior knowledge and the creation of "virtual" examples for RBF networks
| Published in: | Proceedings of 1995 IEEE Workshop on Neural Networks for Signal Processing, pp. 201-210 |
|---|---|
| Main Authors: | |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 1995 |
| Subjects: | |
| Online Access: | Get full text |
| Summary: | The paper considers the problem of how to incorporate prior knowledge into supervised learning techniques. The authors set the problem in the framework of regularization theory and consider the case in which the function to be approximated is known to have radial symmetry. The problem can be solved in two alternative ways: 1) use the invariance as a constraint in the regularization theory framework to derive a rotation-invariant version of radial basis functions; 2) use the radial symmetry to create new, "virtual" examples from a given data set (a brief code sketch of this construction follows the record below). The authors show that these two apparently different methods of learning from "hints" (Abu-Mostafa, 1993) lead to exactly the same analytical solution. |
| ISBN: | 9780780327399, 078032739X |
| DOI: | 10.1109/NNSP.1995.514894 |
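
The second approach in the summary lends itself to a short illustration. The following is a minimal sketch and is not taken from the paper: the helper name `make_virtual_examples` and the use of randomly sampled orthogonal matrices are assumptions made here for illustration. The paper derives a closed-form equivalence with the rotation-invariant regularization solution, which in effect averages over all rotations rather than sampling a few.

```python
import numpy as np

def make_virtual_examples(X, y, n_rotations=4, seed=None):
    """Create "virtual" examples from a radially symmetric target.

    If f is known to satisfy f(Rx) = f(x) for every rotation R,
    each training pair (x_i, y_i) is also valid at the rotated
    input (R x_i, y_i); augmenting the data set this way injects
    the prior knowledge without changing the learning algorithm.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    X_parts, y_parts = [X], [y]
    for _ in range(n_rotations):
        # Random orthogonal matrix from the QR decomposition of a
        # Gaussian matrix; radial symmetry only needs ||Qx|| = ||x||.
        Q, R = np.linalg.qr(rng.standard_normal((d, d)))
        Q = Q * np.sign(np.diag(R))  # fix column signs for uniformity
        X_parts.append(X @ Q.T)      # rotate every input point
        y_parts.append(y)            # outputs unchanged by symmetry
    return np.vstack(X_parts), np.concatenate(y_parts)

# Example: noisy samples of the radial function f(x) = exp(-||x||^2).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
y = np.exp(-np.sum(X**2, axis=1)) + 0.05 * rng.standard_normal(50)
X_virt, y_virt = make_virtual_examples(X, y, n_rotations=4, seed=1)
print(X_virt.shape, y_virt.shape)  # (250, 2) (250,)
```

The augmented set can then be fed to any RBF fitting routine; the paper's result is that, in the limit, this yields the same analytical solution as imposing rotation invariance directly in the regularization framework.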