The VC dimension for mixtures of binary classifiers

Wenxin Jiang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The mixtures-of-experts (ME) methodology provides a tool for classification in which experts given by logistic regression models or Bernoulli models are mixed according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the ME architecture is bounded below by the number of experts m and bounded above by O(m^4 s^2), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained, so in that case the VC dimension is exactly equal to the number of experts.
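As a compact restatement of the bounds in the abstract (the class symbol \mathcal{F}_{m,s} for the ME architecture with m experts and input dimension s is notation introduced here, not taken from the paper):

    m \;\le\; \mathrm{VCdim}(\mathcal{F}_{m,s}) \;\le\; O(m^4 s^2),

and, for mixtures of m Bernoulli experts with a scalar input (s = 1), the lower bound is attained:

    \mathrm{VCdim}(\mathcal{F}_{m,1}) \;=\; m.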

Original language: English (US)
Pages (from-to): 1293-1301
Number of pages: 9
Journal: Neural Computation
Volume: 12
Issue number: 6
DOIs
State: Published - Jun 2000

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience

