AGNOSTIC LEARNING OF GENERAL RELU ACTIVATION USING GRADIENT DESCENT

Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

Research output: Contribution to conference › Paper › peer-review

Abstract

We provide a convergence analysis of gradient descent for the problem of agnostically learning a single ReLU function under Gaussian distributions. Unlike prior work that studies the setting of zero bias, we consider the more challenging scenario when the bias of the ReLU function is non-zero. Our main result establishes that starting from random initialization, in a polynomial number of iterations gradient descent outputs, with high probability, a ReLU function that achieves an error within a constant factor of the optimal, i.e., it is guaranteed to achieve an error of O(OPT), where OPT is the error of the best ReLU function. This is a significant improvement over existing guarantees for gradient descent, which only guarantee error of O(d · OPT) even in the zero-bias case (Frei et al., 2020). We also provide finite sample guarantees, and obtain similar guarantees for a broader class of marginal distributions beyond Gaussians.
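The record contains no code; as a rough illustration of the setup the abstract describes, here is a minimal NumPy sketch of plain gradient descent on the empirical squared loss for a single ReLU with non-zero bias under a Gaussian marginal, starting from random initialization. The target parameters (w_star, b_star), the noise level, the learning rate, and the iteration count are illustrative assumptions, not the paper's actual algorithmic constants or experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical target ReLU with non-zero bias (the setting studied in the paper).
d = 10
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
b_star = -0.5

# Gaussian marginal; small label noise stands in for the agnostic (adversarial) component.
n = 5000
X = rng.normal(size=(n, d))
y = relu(X @ w_star + b_star) + 0.1 * rng.normal(size=n)

# Random initialization, then plain (sub)gradient descent on the empirical squared loss.
w = rng.normal(size=d) / np.sqrt(d)
b = 0.0
lr = 0.05
for _ in range(2000):
    pre = X @ w + b
    residual = relu(pre) - y
    grad_out = residual * (pre > 0)          # subgradient of the squared loss through the ReLU
    w -= lr * (X.T @ grad_out) / n
    b -= lr * grad_out.mean()

print("empirical squared loss:", np.mean((relu(X @ w + b) - y) ** 2))
```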

Original language: English (US)
State: Published - 2023
Event: 11th International Conference on Learning Representations, ICLR 2023 - Kigali, Rwanda
Duration: May 1, 2023 – May 5, 2023

Conference

Conference: 11th International Conference on Learning Representations, ICLR 2023
Country/Territory: Rwanda
City: Kigali
Period: 5/1/23 – 5/5/23

Funding

The last two authors are supported by the National Science Foundation (NSF) under Grant Nos. CCF-1652491 and CCF-1934931. The last author was also funded by a Google Research Scholar award.

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language

