On the identifiability of mixtures-of-experts

W. Jiang*, M. A. Tanner

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



In mixtures-of-experts (ME) models, 'experts' of generalized linear models are combined according to a set of local weights called the 'gating function'. The invariant transformations of the ME probability density functions include the permutations of the expert labels and the translations of the parameters in the gating functions. Under certain conditions, we show that the ME systems are identifiable if the experts are ordered and the gating parameters are initialized. The conditions are validated for Poisson, gamma, normal and binomial experts.
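The two invariant transformations named in the abstract can be checked numerically. The sketch below (an illustration, not the paper's construction) uses softmax gating with linear gating parameters and normal experts; all variable names are hypothetical. It verifies that (i) permuting the expert labels together with their gating rows and (ii) translating every gating parameter vector by a common vector both leave the ME density unchanged:

```python
import numpy as np

def gating(x, V):
    # softmax gating: g_j(x) proportional to exp(v_j . x)
    z = V @ x
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def normal_pdf(y, mu, sigma):
    # density of a normal expert with mean mu and scale sigma
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def me_density(y, x, V, B, sigma=1.0):
    # ME density: sum_j g_j(x) * f(y | b_j . x, sigma)
    g = gating(x, V)
    return sum(g[j] * normal_pdf(y, B[j] @ x, sigma) for j in range(len(B)))

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # covariate vector
V = rng.normal(size=(2, 3))     # gating parameters, one row per expert
B = rng.normal(size=(2, 3))     # expert (mean) parameters
y = 0.5

p = me_density(y, x, V, B)

# invariance 1: permute expert labels with their gating/expert parameters
perm = [1, 0]
p_perm = me_density(y, x, V[perm], B[perm])

# invariance 2: translate all gating parameter vectors by a common c
c = rng.normal(size=3)
p_shift = me_density(y, x, V + c, B)

assert np.isclose(p, p_perm)
assert np.isclose(p, p_shift)
```

These invariances are exactly why, without ordering the experts and anchoring (initializing) the gating parameters, distinct parameter values can yield the same density.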

Original language: English (US)
Pages (from-to): 1253-1258
Number of pages: 6
Journal: Neural Networks
Issue number: 9
State: Published - Nov 1999


Keywords

  • Generalized linear models
  • Identifiability
  • Invariant transformations
  • Mixtures-of-experts

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
