Statistical-computational tradeoffs in high-dimensional single index models

Lingxiao Wang*, Zhuoran Yang, Zhaoran Wang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


We study statistical-computational tradeoffs in a high-dimensional single index model Y = f(X⊤β*) + ε, where f is unknown, X is a Gaussian vector, and β* is s-sparse with unit norm. When Cov(Y, X⊤β*) ≠ 0, [43] shows that the direction and support of β* can be recovered using a generalized version of the Lasso. In this paper, we investigate the case when this critical assumption fails to hold, where the problem becomes considerably harder. Using the statistical query model to characterize the computational cost of an algorithm, we show that when Cov(Y, X⊤β*) = 0 and Cov(Y, (X⊤β*)²) > 0, no computationally tractable algorithm can achieve the information-theoretic limit of the minimax risk. This implies that one must pay an extra computational cost for the nonlinearity involved in the model.
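The regime the abstract describes can be illustrated with a small simulation (not taken from the paper): choosing f(u) = u² makes the linear covariance Cov(Y, X⊤β*) vanish while the second-moment covariance Cov(Y, (X⊤β*)²) stays positive, which is exactly the harder case where Lasso-style recovery breaks down. The dimensions, sparsity level, and noise scale below are arbitrary choices for the sketch.

```python
import numpy as np

# Illustrative simulation of a single index model Y = f(X^T beta*) + eps
# with f(u) = u^2, so the signal is invisible to first-moment (Lasso-style)
# methods but present in the second moment.
rng = np.random.default_rng(0)

n, d, s = 200_000, 50, 5           # samples, ambient dimension, sparsity
beta = np.zeros(d)
beta[:s] = 1.0 / np.sqrt(s)        # s-sparse direction with unit norm

X = rng.standard_normal((n, d))    # Gaussian design
index = X @ beta                   # the single index X^T beta*, ~ N(0, 1)
Y = index**2 + 0.1 * rng.standard_normal(n)

cov_linear = np.cov(Y, index)[0, 1]        # ~ 0: Cov(Y, X^T beta*) vanishes
cov_quadratic = np.cov(Y, index**2)[0, 1]  # > 0: signal in the 2nd moment
print(f"Cov(Y, X^T b)     ~ {cov_linear:.3f}")
print(f"Cov(Y, (X^T b)^2) ~ {cov_quadratic:.3f}")
```

Empirically the linear covariance concentrates near zero while the quadratic covariance concentrates near Var((X⊤β*)²) = 2, matching the assumption Cov(Y, X⊤β*) = 0, Cov(Y, (X⊤β*)²) > 0 under which the paper's lower bound applies.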

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: Dec 8 2019 – Dec 14 2019

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing


