Distributed artificial neural network architectures

Bibliographic Details
Published in: 19th International Symposium on High Performance Computing Systems and Applications (HPCS'05), pp. 2-10
Main Authors: Calvert, D., Guan, J.
Format: Conference Proceeding
Language: English
Published: IEEE, 2005
Description
Summary: The computational cost of training artificial neural network (ANN) algorithms limits the use of large systems capable of processing complex problems. Implementing ANNs on a parallel or distributed platform to improve performance is therefore desirable. This work illustrates a method to predict and evaluate the performance of distributed ANN algorithms by analyzing the performance of the comparatively simple mathematical operations from which the ANN is constructed. The ANN algorithms are divided into simple components: matrix-vector multiplication, element-wise application of a function to a matrix, and competition within a matrix. These basic operations are examined individually, and it is demonstrated that the computation of distributed neural networks can be derived from the composition of these basic operations. Three popular network architectures are examined: multi-layer perceptrons with back-propagation learning, the self-organizing map, and radial basis function networks.
ISBN: 0769523439, 9780769523439
ISSN: 1550-5243, 2378-2099
DOI: 10.1109/HPCS.2005.24
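
Note: The abstract names three basic operations from which the networks are composed. The NumPy sketch below is a rough illustration of that decomposition under assumed shapes and names; the helpers matvec, apply_fn, and compete are hypothetical, not the authors' implementation, and the paper's contribution is the performance analysis of these operations on distributed platforms, which this sketch does not model.

import numpy as np

def matvec(W, x):
    # Basic operation 1: matrix-vector multiplication,
    # the dominant cost in a perceptron layer.
    return W @ x

def apply_fn(A, f):
    # Basic operation 2: a matrix processed element-wise
    # through a function (e.g. an activation function).
    return f(A)

def compete(A):
    # Basic operation 3: competition in a matrix; returns the
    # index of the winning (maximal) unit.
    return np.unravel_index(np.argmax(A), A.shape)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights of an assumed 3-input, 4-unit layer
x = rng.standard_normal(3)        # input vector

# Composition example: one forward layer of a multi-layer perceptron
# is matrix-vector multiplication followed by an element-wise function.
h = apply_fn(matvec(W, x), sigmoid)

# Composition example: the self-organizing map's winner search is a
# distance computation followed by competition (argmin, via negated distances).
codebook = rng.standard_normal((5, 5, 3))        # assumed 5x5 grid of weight vectors
dists = np.linalg.norm(codebook - x, axis=-1)    # distance from x to each unit
winner = compete(-dists)

print(h, winner)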