## Abstract

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, we prove that it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models, such as hidden Markov models and topic models. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings, an essential first step towards efficient learning algorithms. Our methods also apply to the "overcomplete" case, which has proved challenging in many applications. Given the importance of Kruskal's theorem in the tensor literature, we expect that our robust version will have several applications beyond the settings we explore in this work.
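
For context, the abstract invokes Kruskal's rank condition without stating it. A standard textbook formulation of Kruskal's classical theorem (a paraphrase for the reader's convenience, not text quoted from the paper) is the following: for a third-order tensor

$$
T = \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i
$$

with factor matrices $A = [a_1 \,\cdots\, a_R]$, $B$, and $C$, let $k_M$ denote the Kruskal rank of a matrix $M$, i.e. the largest $k$ such that every set of $k$ columns of $M$ is linearly independent. If

$$
k_A + k_B + k_C \ge 2R + 2,
$$

then the rank-$R$ decomposition of $T$ is unique up to permuting the terms and rescaling the factors within each term. The paper's contribution is a robust analogue of this condition that tolerates an inverse-polynomial error in $T$.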

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 742-778 |
| Number of pages | 37 |
| Journal | Journal of Machine Learning Research |
| Volume | 35 |
| State | Published - Jan 1 2014 |
| Event | 27th Conference on Learning Theory, COLT 2014, Barcelona, Spain; Duration: Jun 13 2014 → Jun 15 2014 |

## Keywords

- Kruskal uniqueness theorem
- Latent variable models
- Tensor decomposition

## ASJC Scopus subject areas

- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence