ISSN:
1573-773X
Keywords:
virtual input layer; neural network training; fast learning; SVD
Source:
Springer Online Journal Archives 1860-2000
Topics:
Computer Science
Notes:
Abstract: A new methodology for neural learning is presented. Only a single iteration is needed to train a feed-forward network with near-optimal results. This is achieved by introducing a key modification to the conventional multi-layer architecture. A virtual input layer is implemented, which is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating-direction singular value decompositions is then used to determine the inter-layer synaptic weights precisely. This computational paradigm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network. Examples show that the trained neural networks generalize well.
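The core idea in the abstract — separating the nonlinear activation stage from the linear inter-layer propagation, so that the linear weights can be solved in one shot by SVD — can be illustrated with a minimal sketch. This is an assumption-laden analogy, not the paper's exact algorithm: the random projection, the `tanh` transfer, and the toy sine-regression task are all illustrative choices; only the one-step SVD-based least-squares solve reflects the described paradigm.

```python
# Illustrative sketch (not the paper's exact method): fix a nonlinear
# feature stage, then determine the linear output weights in a single
# step via an SVD-based least-squares solve.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): learn y = sin(x).
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# Nonlinear stage: a fixed random projection followed by a nonlinear
# transfer function (stand-in for the paper's "virtual input layer").
W_in = rng.normal(size=(1, 50))
b = rng.normal(size=50)
H = np.tanh(X @ W_in + b)  # hidden activations, shape (200, 50)

# Linear stage: solve H @ W_out ≈ y in one step. np.linalg.pinv
# computes the Moore-Penrose pseudoinverse via SVD, so no iterative
# gradient-based training is needed for this stage.
W_out = np.linalg.pinv(H) @ y

y_hat = H @ W_out
max_err = np.max(np.abs(y_hat - y))
```

Because the nonlinear stage is fixed, the only unknowns are linear, and the SVD pseudoinverse gives the minimum-norm least-squares weights directly — the separability the abstract refers to.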
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1023/A:1009682730770