Learning low-dimensional temporal representations with latent alignments

Bing Su*, Ying Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Scopus citations


Low-dimensional discriminative representations enhance machine learning methods in both performance and complexity. This has motivated supervised dimensionality reduction (DR), which transforms high-dimensional data into a discriminative subspace. Most DR methods require data to be i.i.d. However, in some domains, data naturally appear in sequences, where the observations are temporally correlated. We propose a DR method, namely, latent temporal linear discriminant analysis (LT-LDA), to learn low-dimensional temporal representations. We construct the separability among sequence classes by lifting the holistic temporal structures, which are established based on temporal alignments and may change in different subspaces. We jointly learn the subspace and the associated latent alignments by optimizing an objective that favors easily separable temporal structures. We show that this objective is connected to the inference of alignments and thus allows for an iterative solution. We provide both theoretical insight and empirical evaluations on several real-world sequence datasets to show the applicability of our method.
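The iterative scheme in the abstract — alternating between inferring alignments in the current subspace and re-solving a discriminant-analysis problem under those alignments — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: it uses plain DTW as a stand-in alignment model, picks one reference sequence per class as a heuristic, and treats each (class, reference-frame) pair as an LDA "class". All function names (`dtw_path`, `lt_lda_sketch`) are illustrative.

```python
import numpy as np

def dtw_path(X, Y):
    """Classic DTW between frame sequences X (n, d) and Y (m, d);
    returns the matched (i, j) frame-index pairs of the optimal path."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def lt_lda_sketch(seqs, labels, dim, n_iter=5):
    """seqs: list of (T_k, d) frame arrays; labels: class id per sequence.
    Alternates alignment inference and an LDA-style eigenproblem."""
    d = seqs[0].shape[1]
    W = np.eye(d)[:, :dim]                       # initial projection
    classes = sorted(set(labels))
    # Heuristic: first sequence of each class serves as its reference.
    refs = {c: next(s for s, l in zip(seqs, labels) if l == c)
            for c in classes}
    for _ in range(n_iter):
        # Step 1: infer latent alignments in the *current* subspace.
        groups = {}                              # (class, ref frame) -> frames
        for s, l in zip(seqs, labels):
            for i, j in dtw_path(s @ W, refs[l] @ W):
                groups.setdefault((l, j), []).append(s[i])
        # Step 2: scatter matrices over aligned frames, then solve the
        # generalized eigenproblem favoring separable temporal structures.
        means = {k: np.mean(v, axis=0) for k, v in groups.items()}
        mu = np.mean(np.concatenate([np.array(v) for v in groups.values()]),
                     axis=0)
        Sw = np.zeros((d, d))
        Sb = np.zeros((d, d))
        for k, frames in groups.items():
            diff = np.array(frames) - means[k]
            Sw += diff.T @ diff
            g = (means[k] - mu)[:, None]
            Sb += len(frames) * (g @ g.T)
        # Pseudo-inverse keeps the update defined when Sw is singular.
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(-evals.real)
        W = evecs.real[:, order[:dim]]
    return W
```

Because the alignments are recomputed inside the loop, the temporal structures can indeed change as the subspace changes — the property the objective above is designed to exploit.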

Original language: English (US)
Article number: 8723170
Pages (from-to): 2842-2857
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 11
State: Published - Nov 1 2020


Keywords

  • Dimensionality reduction
  • discriminant analysis
  • latent alignment
  • temporal sequences

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Applied Mathematics
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics


