Abstract
In this paper, we present a framework in which a learning rule can be optimized within a parametric space of learning rules. We define what we call parametric learning rules and present a theoretical study of their generalization properties when estimated from one set of learning tasks and tested on another set of tasks. We corroborate the results of this study with practical experiments.
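The idea described in the abstract can be sketched concretely. Below is a minimal, hypothetical illustration (not the authors' actual formulation): a learning rule for a linear unit is parameterized by a small vector `theta` combining local quantities (input, error), the rule's parameters are optimized over a set of training tasks by crude random search, and the resulting rule is then evaluated on held-out tasks. All function names and the specific form of the rule are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """A synthetic linear regression task: y = w_true . x + noise."""
    w_true = rng.normal(size=3)
    X = rng.normal(size=(20, 3))
    y = X @ w_true + 0.01 * rng.normal(size=20)
    return X, y

def train_with_rule(theta, X, y, epochs=30):
    """Train a linear unit using a (hypothetical) parametric learning rule.

    Each weight is updated from purely local quantities:
        dw = theta[0]*x*err + theta[1]*x + theta[2]*err
    where err = target - prediction. theta parameterizes the rule itself.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            err = t - w @ x
            w = w + theta[0] * x * err + theta[1] * x + theta[2] * err
    return w

def rule_loss(theta, tasks):
    """Mean final squared error of the rule across a set of tasks."""
    total = 0.0
    for X, y in tasks:
        w = train_with_rule(theta, X, y)
        total += np.mean((X @ w - y) ** 2)
    return total / len(tasks)

train_tasks = [make_task() for _ in range(5)]
test_tasks = [make_task() for _ in range(5)]

# Meta-level optimization: random search over the rule parameters theta,
# scored on the training tasks only.
best_theta, best_loss = None, np.inf
for _ in range(200):
    theta = rng.uniform(-0.2, 0.2, size=3)
    loss = rule_loss(theta, train_tasks)
    if np.isfinite(loss) and loss < best_loss:
        best_theta, best_loss = theta, loss

# Generalization of the learned rule: apply it to unseen tasks.
test_loss = rule_loss(best_theta, test_tasks)
```

A good search should recover something close to the delta rule (`theta ≈ [lr, 0, 0]`), and the question studied in the paper is precisely how well a rule fitted on one set of tasks generalizes to new ones, as measured here by `test_loss`.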
Cite this article
Bengio, S., Bengio, Y. & Cloutier, J. On the search for new learning rules for ANNs. Neural Process Lett 2, 26–30 (1995). https://doi.org/10.1007/BF02279935