Consistent inference of probabilities in layered networks: Predictions and generalization

Naftali Tishby, Esther Levin, Sara A. Solla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  • 53 Citations

Abstract

The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.
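
To make the formalism concrete, here is a minimal sketch in conventional notation; the symbols used below (w for the network weights, D_m for a training set of m examples, E for the training error, β for an inverse temperature, ρ for a prior measure over weights, Z for the partition function) are assumptions chosen for illustration and are not necessarily the paper's own. Requiring that minimizing E be equivalent to maximizing a likelihood leads to a Gibbs distribution on the canonical ensemble of networks sharing one architecture,

\[
P(w \mid D_m) \;=\; \frac{\rho(w)\, e^{-\beta E(w;\, D_m)}}{Z(D_m)},
\qquad
Z(D_m) \;=\; \int \rho(w)\, e^{-\beta E(w;\, D_m)}\, dw .
\]

The probability of correctly predicting an independent example (x, y) is then the ensemble average

\[
P(y \mid x, D_m) \;=\; \int P(w \mid D_m)\, P(y \mid x, w)\, dw ,
\]

and following its average over examples as a function of m traces out the learning curve from which a sufficient training set size can be estimated.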

Language: English (US)
Title of host publication: IJCNN Int Jt Conf Neural Network
Editors: Anon
Publisher: Publ by IEEE
Pages: 403-409
Number of pages: 7
State: Published - Dec 1 1989
Event: IJCNN International Joint Conference on Neural Networks - Washington, DC, USA
Duration: Jun 18 1989 - Jun 22 1989

Other

Other: IJCNN International Joint Conference on Neural Networks
City: Washington, DC, USA
Period: 6/18/89 - 6/22/89

Fingerprint

Neural networks

ASJC Scopus subject areas

  • Engineering(all)

Cite this

Tishby, N., Levin, E., & Solla, S. A. (1989). Consistent inference of probabilities in layered networks: Predictions and generalization. In Anon (Ed.), IJCNN Int Jt Conf Neural Network (pp. 403-409). Publ by IEEE.
Tishby, Naftali ; Levin, Esther ; Solla, Sara A. / Consistent inference of probabilities in layered networks: Predictions and generalization. IJCNN Int Jt Conf Neural Network. editor / Anon. Publ by IEEE, 1989. pp. 403-409
@inproceedings{4be6544ea7734c4b83749a21875c044b,
title = "Consistent inference of probabilities in layered networks: Predictions and generalization",
abstract = "The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.",
author = "Tishby, Naftali and Levin, Esther and Solla, {Sara A.}",
year = "1989",
month = "12",
day = "1",
language = "English (US)",
pages = "403--409",
editor = "Anon",
booktitle = "IJCNN Int Jt Conf Neural Network",
publisher = "Publ by IEEE",

}

Tishby, N, Levin, E & Solla, SA 1989, Consistent inference of probabilities in layered networks: Predictions and generalization. in Anon (ed.), IJCNN Int Jt Conf Neural Network. Publ by IEEE, pp. 403-409, IJCNN International Joint Conference on Neural Networks, Washington, DC, USA, 6/18/89.

Consistent inference of probabilities in layered networks: Predictions and generalization. / Tishby, Naftali; Levin, Esther; Solla, Sara A.

IJCNN Int Jt Conf Neural Network. ed. / Anon. Publ by IEEE, 1989. p. 403-409.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Consistent inference of probabilities in layered networks

T2 - Predictions and generalization

AU - Tishby, Naftali

AU - Levin, Esther

AU - Solla, Sara A.

PY - 1989/12/1

Y1 - 1989/12/1

N2 - The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.

AB - The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.

UR - http://www.scopus.com/inward/record.url?scp=0024940401&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024940401&partnerID=8YFLogxK

M3 - Conference contribution

SP - 403

EP - 409

BT - IJCNN Int Jt Conf Neural Network

A2 - Anon

PB - Publ by IEEE

ER -

Tishby N, Levin E, Solla SA. Consistent inference of probabilities in layered networks: Predictions and generalization. In Anon, editor, IJCNN Int Jt Conf Neural Network. Publ by IEEE. 1989. p. 403-409