TY - GEN
T1 - Towards learning sparsely used dictionaries with arbitrary supports
AU - Awasthi, Pranjal
AU - Vijayaraghavan, Aravindan
N1 - Funding Information:
The authors thank Sivaraman Balakrishnan, Aditya Bhaskara, Anindya De, Konstantin Makarychev and David Steurer for several helpful discussions. Aravindan Vijayaraghavan is supported by the National Science Foundation (NSF) under Grant No. CCF-1652491 and CCF-1637585.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/11/30
Y1 - 2018/11/30
N2 - Dictionary learning is a popular approach for inferring a hidden basis in which data has a sparse representation. There is a hidden dictionary or basis A which is an n × m matrix, with m > n typically (this is called the over-complete setting). Data generated from the dictionary is given by Y = AX where X is a matrix whose columns have supports chosen from a distribution over k-sparse vectors, and the non-zero values are chosen from a symmetric distribution. Given Y, the goal is to recover A and X in polynomial time (in m, n). Existing algorithms give polynomial time guarantees for recovering incoherent dictionaries, under strong distributional assumptions both on the supports of the columns of X, and on the values of the non-zero entries. In this work, we study the following question: can we design efficient algorithms for recovering dictionaries when the supports of the columns of X are arbitrary? To address this question while circumventing the issue of non-identifiability, we study a natural semirandom model for dictionary learning. In this model, there are a large number of samples y = Ax with arbitrary k-sparse supports for x, along with a few samples where the sparse supports are chosen uniformly at random. While the presence of a few samples with random supports ensures identifiability, the support distribution can look almost arbitrary in aggregate. Hence, existing algorithmic techniques seem to break down as they make strong assumptions on the supports. Our main contribution is a new polynomial time algorithm for learning incoherent over-complete dictionaries that provably works under the semirandom model. Additionally, the same algorithm provides polynomial time guarantees in new parameter regimes when the supports are fully random.
Finally, as a by-product of our techniques, we also identify a minimal set of conditions on the supports under which the dictionary can be (information theoretically) recovered from polynomially many samples for almost linear sparsity, i.e., k = Õ(n).
KW - Beyond worst-case analysis
KW - Dictionary learning
KW - Semi-random models
UR - http://www.scopus.com/inward/record.url?scp=85059823038&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85059823038&partnerID=8YFLogxK
U2 - 10.1109/FOCS.2018.00035
DO - 10.1109/FOCS.2018.00035
M3 - Conference contribution
AN - SCOPUS:85059823038
T3 - Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
SP - 283
EP - 296
BT - Proceedings - 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018
A2 - Thorup, Mikkel
PB - IEEE Computer Society
T2 - 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018
Y2 - 7 October 2018 through 9 October 2018
ER -