Abstract
We consider the problem of on-line gradient descent learning for general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.
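The abstract describes on-line (one example per update) gradient descent in a two-layer network, with the learning rate governing convergence. As a rough illustration only — not the paper's analytic solution — here is a minimal teacher–student simulation; the network sizes, the tanh hidden units, fixed unit output weights, and the specific learning rate are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 100, 3   # input dimension and number of hidden units (illustrative)
eta = 0.1       # learning rate, the quantity whose role the paper studies

# Teacher network generating the targets, and a student trained on-line
# (both two-layer with tanh hidden units and fixed unit output weights).
B = rng.standard_normal((K, N)) / np.sqrt(N)
J = rng.standard_normal((K, N)) / np.sqrt(N)

def forward(W, x):
    # Two-layer network output: sum of tanh hidden-unit activations.
    return np.tanh(W @ x / np.sqrt(N)).sum()

def online_step(J, x, eta):
    # One on-line gradient step on the single-example error 0.5*(s - y)^2.
    y, s = forward(B, x), forward(J, x)
    h = J @ x / np.sqrt(N)
    grad = (s - y) * (1 - np.tanh(h) ** 2)[:, None] * x[None, :] / np.sqrt(N)
    return J - eta * grad

# Track an estimate of the generalization error as learning proceeds.
errs = []
for t in range(20000):
    x = rng.standard_normal(N)
    J = online_step(J, x, eta)
    if t % 1000 == 0:
        xs = rng.standard_normal((500, N))
        e = np.mean([(forward(J, v) - forward(B, v)) ** 2 for v in xs]) / 2
        errs.append(e)
```

With a moderate learning rate the estimated generalization error in `errs` decays toward zero; too large an `eta` makes the updates diverge, which is the kind of learning-rate-controlled behaviour the paper analyzes exactly.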
| Original language | English (US) |
|---|---|
| Pages | 302-308 |
| Number of pages | 7 |
| State | Published - 1995 |
| Event | 8th International Conference on Neural Information Processing Systems, NIPS 1995 - Denver, United States |
| Duration | Nov 27 1995 → Dec 2 1995 |
Conference
| Conference | 8th International Conference on Neural Information Processing Systems, NIPS 1995 |
|---|---|
| Country/Territory | United States |
| City | Denver |
| Period | 11/27/95 → 12/2/95 |
ASJC Scopus subject areas
- Information Systems
- Computer Networks and Communications
- Signal Processing