Abstract
The mixtures-of-experts (ME) methodology provides a tool for classification in which experts, given by logistic regression models or Bernoulli models, are mixed according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the ME architecture is bounded below by the number of experts $m$ and above by $O(m^4 s^2)$, where $s$ is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound $m$ is attained, in which case we obtain the exact result that the VC dimension equals the number of experts.
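For context, a minimal sketch of the model class these bounds refer to. The softmax gating form, the parameter names $v_j$ and $\theta_j$, and the convention of thresholding the mixed probability at $1/2$ are standard ME conventions assumed here, not details given in this abstract.

```latex
% A minimal sketch (assumptions, not from the abstract): the standard ME
% classifier with m experts on input x in R^s, combined by softmax gating.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

The ME architecture mixes $m$ expert class-probability models
$P_j(y = 1 \mid x; \theta_j)$ (logistic regressions or constant Bernoulli
rates) through local softmax gating weights $g_j$:
\[
  P(y = 1 \mid x)
    = \sum_{j=1}^{m} g_j(x; v)\, P_j(y = 1 \mid x; \theta_j),
  \qquad
  g_j(x; v) = \frac{\exp(v_j^{\top} x)}{\sum_{k=1}^{m} \exp(v_k^{\top} x)}.
\]
% The VC bounds stated in the abstract, for the induced family of
% {0,1}-valued classifiers (here assumed to threshold the mixture at 1/2):
\[
  m \;\le\; \operatorname{VCdim} \;\le\; O(m^4 s^2),
\]
with the lower bound attained for Bernoulli experts when $s = 1$.

\end{document}
```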
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1293-1301 |
| Number of pages | 9 |
| Journal | Neural Computation |
| Volume | 12 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2000 |
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience