### Abstract

The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that error minimization be equivalent to likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction on an independent example after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and, in a simple example, estimate the training-set size sufficient for correct generalization.
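The Gibbs-ensemble picture in the abstract can be sketched numerically. The snippet below is a toy illustration, not the authors' procedure: the sigmoid unit, the squared-error cost, the value of β, and the Monte Carlo sampling of candidate weights are all assumptions made for the sketch. It draws random weight vectors ("networks with the same architecture"), assigns each the Gibbs weight exp(−βE) based on its training error, and averages the ensemble's outputs to obtain a prediction probability for an independent example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: the target is a random linear rule (an assumption).
w_true = np.array([1.0, -1.0, 0.5])
X_train = rng.normal(size=(20, 3))
y_train = (X_train @ w_true > 0).astype(float)

def error(w, X, y):
    """Training error of a single sigmoid unit (squared-error cost, assumed)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.mean((p - y) ** 2)

# Canonical ensemble: sample candidate weight vectors and weight each
# by the Gibbs factor exp(-beta * E); beta = 50 is an arbitrary choice.
beta = 50.0
candidates = rng.normal(size=(500, 3))
E = np.array([error(w, X_train, y_train) for w in candidates])
gibbs = np.exp(-beta * (E - E.min()))
gibbs /= gibbs.sum()          # normalized Gibbs distribution over networks

# Prediction probability for an independent example: the Gibbs-averaged
# output of the whole ensemble, not the output of one trained network.
x_new = rng.normal(size=3)
p_new = gibbs @ (1.0 / (1.0 + np.exp(-(candidates @ x_new))))
print(f"ensemble prediction P(y=1 | x_new) = {p_new:.3f}")
```

Averaging over the ensemble, rather than committing to a single error-minimizing network, is what ties the prediction probability to generalization outside the training set.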

Original language | English (US)
---|---
Title of host publication | IJCNN Int Jt Conf Neural Network
Editors | Anon
Publisher | Publ by IEEE
Pages | 403-409
Number of pages | 7
State | Published - Dec 1 1989
Event | IJCNN International Joint Conference on Neural Networks - Washington, DC, USA. Duration: Jun 18 1989 → Jun 22 1989

### Other

Other | IJCNN International Joint Conference on Neural Networks
---|---
City | Washington, DC, USA
Period | 6/18/89 → 6/22/89

### ASJC Scopus subject areas

- Engineering (all)

### Cite this

Tishby, N., Levin, E., & Solla, S. A. (1989). Consistent inference of probabilities in layered networks: Predictions and generalization. In *IJCNN Int Jt Conf Neural Network* (pp. 403-409). Publ by IEEE.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Consistent inference of probabilities in layered networks

T2 - Predictions and generalization

AU - Tishby, Naftali

AU - Levin, Esther

AU - Solla, Sara A.

PY - 1989/12/1

Y1 - 1989/12/1

UR - http://www.scopus.com/inward/record.url?scp=0024940401&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024940401&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0024940401

SP - 403

EP - 409

BT - IJCNN Int Jt Conf Neural Network

A2 - Anon

PB - Publ by IEEE

ER -