Uniqueness of tensor decompositions with applications to polynomial identifiability

Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan

Research output: Contribution to journal › Conference article

16 Citations (Scopus)

Abstract

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models, such as hidden Markov models and topic models. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings, an essential first step towards efficient learning algorithms. Our methods also apply to the "overcomplete" case, which has proved challenging in many applications. Given the importance of Kruskal's theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.
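To make the abstract's central object concrete, the following sketch builds a rank-R order-3 tensor from factor matrices and checks Kruskal's classical uniqueness condition, k_A + k_B + k_C ≥ 2R + 2, where k_M denotes the Kruskal rank (largest k such that every k columns are linearly independent). This is a minimal illustration of the condition the paper robustifies, not the paper's algorithm; the brute-force k-rank routine is only practical for small matrices.

```python
import itertools
import numpy as np

def kruskal_rank(M, tol=1e-10):
    """Largest k such that every set of k columns of M is linearly
    independent (brute force over column subsets; small matrices only)."""
    n_cols = M.shape[1]
    for k in range(n_cols, 0, -1):
        if all(np.linalg.matrix_rank(M[:, list(cols)], tol=tol) == k
               for cols in itertools.combinations(range(n_cols), k)):
            return k
    return 0

rng = np.random.default_rng(0)
R, d = 4, 5  # a rank-4 decomposition of a 5 x 5 x 5 tensor

A = rng.standard_normal((d, R))
B = rng.standard_normal((d, R))
C = rng.standard_normal((d, R))

# Build the order-3 tensor T = sum_r A[:, r] (x) B[:, r] (x) C[:, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

kA, kB, kC = kruskal_rank(A), kruskal_rank(B), kruskal_rank(C)

# Kruskal's condition: if k_A + k_B + k_C >= 2R + 2, the decomposition
# is unique up to permutation and scaling of the rank-1 terms.
unique = kA + kB + kC >= 2 * R + 2
print(kA, kB, kC, unique)
```

Generic (random) factor matrices have full Kruskal rank min(d, R) = 4 almost surely, so the condition 4 + 4 + 4 ≥ 10 holds here; the paper's contribution is a quantitative version of this statement that tolerates inverse-polynomial error in T.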

Original language: English (US)
Pages (from-to): 742-778
Number of pages: 37
Journal: Journal of Machine Learning Research
Volume: 35
State: Published - Jan 1 2014
Event: 27th Conference on Learning Theory, COLT 2014 - Barcelona, Spain
Duration: Jun 13 2014 - Jun 15 2014

Keywords

  • Kruskal uniqueness theorem
  • Latent variable models
  • Tensor decomposition

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence

Cite this

@article{690519c4174a4c1d94b6a0d6b9f479c9,
title = "Uniqueness of tensor decompositions with applications to polynomial identifiability",
abstract = "We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings - an essential first step towards efficient learning algorithms. Our methods also apply to the {"}overcomplete{"} case, which has proved challenging in many applications. Given the importance of Kruskal's theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.",
keywords = "Kruskal uniqueness theorem, Latent variable models, Tensor decomposition",
author = "Aditya Bhaskara and Moses Charikar and Aravindan Vijayaraghavan",
year = "2014",
month = "1",
day = "1",
language = "English (US)",
volume = "35",
pages = "742--778",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",

}
