Learning sums of independent random variables with sparse collective support

Anindya De, Philip M. Long, Rocco A. Servedio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

We study the learnability of sums of independent integer random variables given a bound on the size of the union of their supports. For A ⊂ ℤ⁺, a sum of independent random variables with collective support A (called an A-sum in this paper) is a distribution S = X_1 + ⋯ + X_N where the X_i's are mutually independent (but not necessarily identically distributed) integer random variables with ∪_i supp(X_i) ⊆ A. We give two main algorithmic results for learning such distributions: 1) For the case |A| = 3, we give an algorithm for learning A-sums to accuracy ϵ that uses poly(1/ϵ) samples and runs in time poly(1/ϵ), independent of N and of the elements of A. 2) For an arbitrary constant k ≥ 4, if A = {a_1, ⋯, a_k} with 0 < a_1 < ⋯ < a_k, we give an algorithm that uses poly(1/ϵ) · log log a_k samples (independent of N) and runs in time poly(1/ϵ, log a_k). We prove an essentially matching lower bound: if |A| = 4, then any algorithm must use Ω(log log a_4) samples even for learning to constant accuracy. We also give similar-in-spirit (but quantitatively very different) algorithmic results, and essentially matching lower bounds, for the case in which A is not known to the learner. Our learning algorithms employ new limit theorems which may be of independent interest. Our lower bounds rely on equidistribution-type results from number theory. Our algorithms and lower bounds together settle the question of how the sample complexity of learning sums of independent integer random variables scales with the elements in the union of their supports, both in the known-support and unknown-support settings. Finally, all our algorithms easily extend to the "semi-agnostic" learning model, in which training data is generated from a distribution that is only cϵ-close to some A-sum for a constant c > 0.
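As a quick illustration (not code from the paper), the object being learned here, an A-sum with non-identical summands, can be simulated in a few lines of Python; the particular A, N, and probabilities below are arbitrary choices for demonstration.

```python
import random

def sample_A_sum(dists, rng=random):
    """Draw one sample from S = X_1 + ... + X_N. Each dist maps values
    (all lying in the collective support A) to probabilities; the X_i
    are independent but need not be identically distributed."""
    total = 0
    for dist in dists:
        vals = list(dist)
        probs = [dist[v] for v in vals]
        total += rng.choices(vals, weights=probs)[0]
    return total

# Example with A = {1, 4, 9} and N = 3 non-identical summands.
A = {1, 4, 9}
dists = [{1: 0.5, 4: 0.5}, {4: 1.0}, {1: 0.2, 9: 0.8}]
assert all(set(d) <= A for d in dists)  # collective support lies in A
sample = sample_A_sum(dists)
assert sample in {6, 9, 14, 17}  # the only sums achievable from these supports
```

The learning problem studied in the paper is the reverse direction: given only samples like `sample`, recover a distribution close to S in total variation distance.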

Original language: English (US)
Title of host publication: Proceedings - 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018
Editors: Mikkel Thorup
Publisher: IEEE Computer Society
Pages: 297-308
Number of pages: 12
ISBN (Electronic): 9781538642306
DOIs: https://doi.org/10.1109/FOCS.2018.00036
State: Published - Nov 30 2018
Event: 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018 - Paris, France
Duration: Oct 7 2018 – Oct 9 2018

Publication series

Name: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Volume: 2018-October
ISSN (Print): 0272-5428

Other

Other: 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018
Country: France
City: Paris
Period: 10/7/18 – 10/9/18

Keywords

  • Central limit theorems
  • Distribution learning
  • Sample complexity
  • Sums of independent random variables
  • Unsupervised learning

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

De, A., Long, P. M., & Servedio, R. A. (2018). Learning sums of independent random variables with sparse collective support. In M. Thorup (Ed.), Proceedings - 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018 (pp. 297-308). [8555114] (Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS; Vol. 2018-October). IEEE Computer Society. https://doi.org/10.1109/FOCS.2018.00036
@inproceedings{08ce4b34ea4840dc862d0560ccaec45e,
title = "Learning sums of independent random variables with sparse collective support",
abstract = "We study the learnability of sums of independent integer random variables given a bound on the size of the union of their supports. For A ⊂ ℤ⁺, a sum of independent random variables with collective support A (called an A-sum in this paper) is a distribution S = X_1 + ⋯ + X_N where the X_i's are mutually independent (but not necessarily identically distributed) integer random variables with ∪_i supp(X_i) ⊆ A. We give two main algorithmic results for learning such distributions: 1) For the case |A| = 3, we give an algorithm for learning A-sums to accuracy ϵ that uses poly(1/ϵ) samples and runs in time poly(1/ϵ), independent of N and of the elements of A. 2) For an arbitrary constant k ≥ 4, if A = {a_1, ⋯, a_k} with 0 < a_1 < ⋯ < a_k, we give an algorithm that uses poly(1/ϵ) · log log a_k samples (independent of N) and runs in time poly(1/ϵ, log a_k). We prove an essentially matching lower bound: if |A| = 4, then any algorithm must use Ω(log log a_4) samples even for learning to constant accuracy. We also give similar-in-spirit (but quantitatively very different) algorithmic results, and essentially matching lower bounds, for the case in which A is not known to the learner. Our learning algorithms employ new limit theorems which may be of independent interest. Our lower bounds rely on equidistribution-type results from number theory. Our algorithms and lower bounds together settle the question of how the sample complexity of learning sums of independent integer random variables scales with the elements in the union of their supports, both in the known-support and unknown-support settings. Finally, all our algorithms easily extend to the {"}semi-agnostic{"} learning model, in which training data is generated from a distribution that is only cϵ-close to some A-sum for a constant c > 0.",
keywords = "Central limit theorems, Distribution learning, Sample complexity, Sums of independent random variables, Unsupervised learning",
author = "Anindya De and Long, {Philip M.} and Servedio, {Rocco A.}",
year = "2018",
month = nov,
day = "30",
doi = "10.1109/FOCS.2018.00036",
language = "English (US)",
series = "Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS",
publisher = "IEEE Computer Society",
pages = "297--308",
editor = "Mikkel Thorup",
booktitle = "Proceedings - 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018",
address = "United States",

}
