Learning in linear neural networks: The validity of the annealed approximation

Sara A. Solla*, Esther Levin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

A statistical-mechanics approach has recently been used to develop a theoretical framework describing the process of supervised learning and the emergence of generalization ability in layered neural networks. This theory has yielded a powerful tool: a recursion relation that predicts the generalization ability of trained networks as a function of the size of the training set. The recursion relation results from an annealed approximation to the correct quenched averaging over all possible training sets of fixed size. Here we investigate a simple learning problem for which both the quenched and annealed calculations are performed analytically. Comparison of the corresponding results establishes the range of validity of the annealed approximation.
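
The abstract's central distinction is between the correct quenched average of the log partition function, ⟨ln Z⟩ taken over random training sets, and the annealed approximation ln⟨Z⟩, which swaps the order of the logarithm and the average. The sketch below is a minimal numerical illustration of that distinction, not the paper's analytical calculation: the linear student-teacher setup, the Gaussian prior, and all names (w_star, partition_function, beta) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy setup (not the paper's exact model): a linear student
    # w in R^d learns a fixed teacher w_star from p random Gaussian examples.
    d = 5            # input dimension
    beta = 2.0       # inverse temperature of the Gibbs distribution
    n_w = 20000      # Monte Carlo samples of student weights from the prior
    n_sets = 200     # random training sets used for the quenched average
    w_star = rng.standard_normal(d) / np.sqrt(d)

    def partition_function(p):
        """Monte Carlo estimate of Z = <exp(-beta * training error)>_prior
        for one randomly drawn training set of size p."""
        X = rng.standard_normal((p, d)) / np.sqrt(d)  # random inputs
        y = X @ w_star                                # noiseless linear targets
        W = rng.standard_normal((n_w, d))             # prior samples of w
        err = ((W @ X.T - y) ** 2).sum(axis=1)        # training error of each sample
        return np.exp(-beta * err).mean()

    for p in (1, 5, 10, 20):
        Zs = np.array([partition_function(p) for _ in range(n_sets)])
        quenched = np.log(Zs).mean()   # <ln Z>: correct quenched average
        annealed = np.log(Zs.mean())   # ln<Z>: annealed approximation
        print(f"p={p:2d}  quenched <ln Z> = {quenched:8.3f}   "
              f"annealed ln<Z> = {annealed:8.3f}")

Since the logarithm is concave, Jensen's inequality gives ⟨ln Z⟩ ≤ ln⟨Z⟩, so the annealed approximation systematically overestimates the average log partition function; how the gap between the two printed columns grows with the training-set size p is a rough numerical analogue of the range-of-validity question the paper answers analytically.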

Original language: English (US)
Pages (from-to): 2124-2130
Number of pages: 7
Journal: Physical Review A
Volume: 46
Issue number: 4
DOIs
State: Published - 1992

ASJC Scopus subject areas

  • Atomic and Molecular Physics, and Optics
