Methodology and convergence rates for functional linear regression

Peter Hall*, Joel L. Horowitz

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

263 Scopus citations


In functional linear regression, the slope "parameter" is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
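The two estimators described above can be illustrated numerically. The sketch below is not from the paper itself: the simulation design (four cosine modes, a slope function in their span), the truncation level `m`, and the ridge parameter `rho` are all illustrative choices. The PCA estimator inverts the empirical covariance operator on its leading eigenspace, while quadratic regularisation instead solves a ridge-perturbed operator equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 100                        # n sample curves, each observed on a grid of p points
t = np.linspace(0.0, 1.0, p)
dt = t[1] - t[0]

# Simulate X_i(t) from four cosine modes and take the true slope b in their span,
# so that Y_i = \int b(t) X_i(t) dt + noise (all choices here are illustrative).
basis = np.array([np.sqrt(2.0) * np.cos((j + 1) * np.pi * t) for j in range(4)])
scores = rng.normal(size=(n, 4)) * np.array([1.0, 0.7, 0.5, 0.35])
X = scores @ basis
b = basis[0] + 0.5 * basis[1]
Y = X @ b * dt + rng.normal(scale=0.1, size=n)

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean()

# Empirical covariance operator, discretised so K @ f approximates \int K(t,s) f(s) ds.
K = Xc.T @ Xc / n * dt

# --- Estimator 1: functional PCA (spectral truncation) ---------------------
theta, v = np.linalg.eigh(K)
theta, v = theta[::-1], v[:, ::-1]     # sort eigenpairs in decreasing order
phi = v / np.sqrt(dt)                  # eigenfunctions, normalised so \int phi_j^2 = 1
m = 3                                  # truncation level: the regularisation parameter
g = (Xc @ phi[:, :m] * dt).T @ Yc / n  # cross-covariance coefficients <g, phi_j>
b_pca = phi[:, :m] @ (g / theta[:m])   # b_hat = sum_{j<=m} (g_j / theta_j) phi_j

# --- Estimator 2: quadratic (ridge/Tikhonov) regularisation ----------------
# Solve (K + rho I) b = g instead of inverting K on a truncated eigenspace.
rho = 1e-3
g_fun = Xc.T @ Yc / n                  # g(t) = (1/n) sum_i Yc_i Xc_i(t)
b_ridge = np.linalg.solve(K + rho * np.eye(p), g_fun)

for name, est in [("PCA", b_pca), ("ridge", b_ridge)]:
    err = np.sqrt(np.mean((est - b) ** 2) * dt)
    print(f"{name} L2 error: {err:.3f}")
```

In both cases the ill-posedness shows up as division by small eigenvalues: PCA controls it by discarding eigendirections beyond `m`, while the quadratic penalty damps every direction by `rho` without a hard cutoff.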

Original language: English (US)
Pages (from-to): 70-91
Number of pages: 22
Journal: Annals of Statistics
Issue number: 1
State: Published - Feb 2007


Keywords

  • Deconvolution
  • Dimension reduction
  • Eigenfunction
  • Eigenvalue
  • Linear operator
  • Minimax optimality
  • Nonparametric
  • Principal components analysis
  • Quadratic regularisation
  • Smoothing

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty


