A Convergence Theorem for Sequential Learning in Two-Layer Perceptrons

Published under licence by IOP Publishing Ltd
Citation: M. Marchand et al 1990 EPL 11 487. DOI: 10.1209/0295-5075/11/6/001

Abstract

We consider a perceptron with N_i input units, one output and a yet unspecified number of hidden units. This perceptron must be able to learn a given but arbitrary set of input-output examples. By sequential learning we mean that groups of patterns, pertaining to the same class, are sequentially separated from the rest by successively adding hidden units until the remaining patterns all belong to the same class. We prove that the internal representations obtained by such procedures are linearly separable. Preliminary numerical tests of an algorithm implementing these ideas are presented and compare favourably with results of other growth algorithms.
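The procedure described in the abstract can be illustrated with a short sketch. The Python code below is not the authors' algorithm but a minimal toy version of the idea: each hidden unit is trained to cut off a group of patterns of a single class, the cut-off patterns are removed, and the process repeats until only one class remains; the final output unit is then trained on the internal representations, which the paper's theorem guarantees to be linearly separable. The helper names (train_perceptron, sequential_learning), the choice of which class to peel off next, and the crude bias shift used to keep each separated group pure are all illustrative assumptions, not the paper's exact construction.

import numpy as np

def train_perceptron(X, y, epochs=500):
    """Plain perceptron on +/-1 labels; returns weights w and bias b."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

def sequential_learning(X, y):
    """Grow hidden units until the remaining patterns all share one class.

    Each new hidden unit cuts off a group of patterns belonging to a single
    class; those patterns are removed and the process repeats.  The internal
    representations of the original patterns are then trained on by a single
    output perceptron (linearly separable by the convergence theorem).
    """
    hidden = []                      # list of (w, b) pairs, one per hidden unit
    Xr, yr = X.copy(), y.copy()
    while len(np.unique(yr)) > 1:
        target = yr[0]               # class to peel off next (illustrative choice)
        w, b = train_perceptron(Xr, np.where(yr == target, 1.0, -1.0))
        # Crudely enforce the "pure group" condition: lower the threshold until
        # no pattern of the other class remains on the positive side.
        other = (Xr @ w + b)[yr != target]
        if other.size and other.max() >= 0:
            b -= other.max() + 1e-9
        cut = (Xr @ w + b) > 0       # the group peeled off by this unit
        if not cut.any():
            break                    # no progress; give up in this toy sketch
        hidden.append((w, b))
        Xr, yr = Xr[~cut], yr[~cut]
    if not hidden:
        raise RuntimeError("no group could be separated by this toy procedure")
    # Internal representation of every original pattern: one sign per hidden unit.
    H = np.column_stack([np.sign(X @ w + b) for w, b in hidden])
    w_out, b_out = train_perceptron(H, y)
    return hidden, (w_out, b_out)

The point of the sketch is only the overall flow: add a unit, remove the patterns it isolates, repeat, then train the output unit on the resulting internal representations. A practical implementation would use a more careful single-unit learning rule for the peeling step, which is where the paper's numerical comparison with other growth algorithms comes in.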
