## Abstract

The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the contiguity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.
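The Gibbs-ensemble construction in the abstract can be illustrated numerically. The sketch below is not the authors' code: it substitutes a toy hypothesis class of one-dimensional threshold classifiers for an ensemble of networks with a shared architecture, and the inverse temperature `beta`, the sample sizes, and the target threshold `w_star` are all illustrative choices. It weights each candidate by `exp(-beta * E)`, where `E` is its training error, and then computes the ensemble-averaged probability of a correct prediction on an independent example.

```python
import numpy as np

# Toy stand-in for a canonical ensemble of networks with the same
# architecture: threshold classifiers h_w(x) = sign(x - w) over a
# discrete grid of candidate thresholds w.
rng = np.random.default_rng(0)
thresholds = np.linspace(-1.0, 1.0, 21)

# Training set generated by a target threshold (illustrative choice).
w_star = 0.3
x_train = rng.uniform(-1.0, 1.0, size=30)
y_train = np.sign(x_train - w_star)

def training_error(w):
    """Number of training examples misclassified by threshold w."""
    return np.sum(np.sign(x_train - w) != y_train)

# Gibbs distribution over the ensemble: P(w) proportional to
# exp(-beta * E(w)), with E the training error and beta an
# inverse-temperature parameter.
beta = 2.0
energies = np.array([training_error(w) for w in thresholds])
gibbs = np.exp(-beta * energies)
gibbs /= gibbs.sum()

def prediction_prob(x, y):
    """Gibbs-averaged probability that the ensemble predicts label y at x."""
    correct = np.sign(x - thresholds) == y
    return np.sum(gibbs * correct)

# Prediction probability on an independent test example: this is the
# quantity the abstract correlates with generalization ability.
x_test = 0.5
print(prediction_prob(x_test, np.sign(x_test - w_star)))
```

Because low-training-error thresholds dominate the Gibbs weights, the prediction probability on a fresh example tracks how well the ensemble has localized the target rule, which is the sense in which minimizing prediction errors serves as a training and model-selection criterion.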

| Original language | English (US) |
|---|---|
| Title of host publication | IJCNN Int Jt Conf Neural Network |
| Editors | Anon |
| Publisher | Publ by IEEE |
| Pages | 403-409 |
| Number of pages | 7 |
| State | Published - Dec 1 1989 |
| Event | IJCNN International Joint Conference on Neural Networks - Washington, DC, USA. Duration: Jun 18 1989 → Jun 22 1989 |

### Other

| Other | IJCNN International Joint Conference on Neural Networks |
|---|---|
| City | Washington, DC, USA |
| Period | 6/18/89 → 6/22/89 |

## ASJC Scopus subject areas

- Engineering (all)