Mixtures-of-experts of autoregressive time series: Asymptotic normality and model specification

Alexandre X. Carvalho*, Martin A. Tanner

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Scopus citations


We consider a class of nonlinear models based on mixtures of local autoregressive time series. At any given time point, we have a certain number of linear models, denoted as experts, where the vector of covariates may include lags of the dependent variable. Additionally, we assume the existence of a latent multinomial variable, whose distribution depends on the same covariates as the experts, that determines which linear process is observed. This structure, denoted as mixture-of-experts (ME), is considerably flexible in modeling the conditional mean function, as shown by Jiang and Tanner. In this paper, we present a formal treatment of conditions to guarantee the asymptotic normality of the maximum likelihood estimator (MLE), under stationarity and nonstationarity, and under correct model specification and model misspecification. The performance of common model selection criteria in selecting the number of experts is explored via Monte Carlo simulations. Finally, we present applications to simulated and real data sets, to illustrate the ability of the proposed structure to model not only the conditional mean, but also the whole conditional density.
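As a minimal illustration of the data-generating process described above (not the authors' code), the sketch below simulates a two-expert mixture of AR(1) processes in which a logistic gating function, driven by the lagged dependent variable, selects which expert generates each observation. All parameter values are hypothetical.

```python
import numpy as np

def simulate_me_ar1(T=500, seed=0):
    """Simulate a two-expert mixture-of-experts AR(1) series.

    Each expert is a linear AR(1) model; a latent binary variable,
    whose probability depends on y[t-1] through a logistic gate,
    determines which expert is observed at time t.
    """
    rng = np.random.default_rng(seed)
    # Hypothetical expert parameters: (intercept, AR coefficient, noise std)
    experts = [(0.5, 0.8, 0.3), (-0.5, -0.4, 0.3)]
    # Gating: P(expert 1 | y[t-1]) = sigmoid(g0 + g1 * y[t-1])
    g0, g1 = 0.0, 2.0
    y = np.zeros(T)
    for t in range(1, T):
        p1 = 1.0 / (1.0 + np.exp(-(g0 + g1 * y[t - 1])))
        c, a, s = experts[0] if rng.random() < p1 else experts[1]
        y[t] = c + a * y[t - 1] + s * rng.normal()
    return y

y = simulate_me_ar1()
```

Because the gate depends on the same covariate (the lag) as the experts, the resulting series has a state-dependent conditional mean and a multimodal conditional density, which is the flexibility the ME structure exploits.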

Original language: English (US)
Pages (from-to): 39-56
Number of pages: 18
Journal: IEEE Transactions on Neural Networks
Issue number: 1
State: Published - Jan 2005


Keywords

  • Asymptotic properties
  • Maximum likelihood estimation
  • Mixture-of-experts (ME)
  • Nonlinear time series

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

