Learning low-dimensional temporal representations

Bing Su*, Ying Wu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Low-dimensional discriminative representations enhance machine learning methods in both performance and complexity, motivating supervised dimensionality reduction (DR) that transforms high-dimensional data into a discriminative subspace. Most DR methods require the data to be i.i.d.; in many domains, however, data naturally come in sequences whose observations are temporally correlated. We propose a DR method called LT-LDA to learn low-dimensional temporal representations. We construct the separability among sequence classes by lifting the holistic temporal structures, which are established from temporal alignments and may change across subspaces. We jointly learn the subspace and the associated alignments by optimizing an objective that favors easily separable temporal structures, and show that this objective is connected to the inference of alignments, which allows an iterative solution. We provide both theoretical insight and empirical evaluation on real-world sequence datasets to demonstrate the effectiveness of our method.
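The abstract describes an alternating scheme: with the subspace fixed, infer temporal alignments; with the alignments fixed, re-solve for a discriminative subspace. The paper's actual objective and alignment model are not reproduced here; the sketch below is a hypothetical minimal variant that alternates DTW alignment against per-class templates with an LDA-style eigenproblem in which each (class, aligned-state) pair acts as a pseudo-class. The function names, the template construction, and the state count `n_states` are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def dtw_path(x, y):
    """Return the DTW alignment path between sequences x (T1, d) and y (T2, d)."""
    t1, t2 = len(x), len(y)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], t1, t2
    while i > 0 and j > 0:  # backtrack from the end of the cost matrix
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def fit_temporal_subspace(seqs, labels, dim, n_states=4, n_iters=3):
    """Alternate between (a) aligning each projected sequence to a per-class
    template and (b) re-solving an LDA-style eigenproblem on the aligned
    frames, treating each (class, aligned-state) pair as a pseudo-class."""
    d = seqs[0].shape[1]
    W = np.eye(d)[:, :dim]                      # initial projection
    classes = sorted(set(labels))
    for _ in range(n_iters):
        # Alignment step: crude class templates in the current subspace
        # (uniformly resample each projected sequence, then average).
        templates = {}
        for c in classes:
            cls = [s @ W for s, l in zip(seqs, labels) if l == c]
            resampled = [s[np.linspace(0, len(s) - 1, n_states).astype(int)]
                         for s in cls]
            templates[c] = np.mean(resampled, axis=0)
        feats, groups = [], []
        for s, l in zip(seqs, labels):
            for i, j in dtw_path(s @ W, templates[l]):
                feats.append(s[i])              # frame in the original space
                groups.append((l, j))           # (class, state) it aligns to
        X = np.asarray(feats)
        # Projection step: between/within scatter over the pseudo-classes.
        mu = X.mean(axis=0)
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for g in set(groups):
            Xg = X[[gr == g for gr in groups]]
            mg = Xg.mean(axis=0)
            Sb += len(Xg) * np.outer(mg - mu, mg - mu)
            Sw += (Xg - mg).T @ (Xg - mg)
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw + 1e-6 * np.eye(d)) @ Sb)
        order = np.argsort(-evals.real)
        W = evecs[:, order[:dim]].real          # top-`dim` discriminative directions
    return W
```

Because the alignments depend on the subspace and the subspace depends on the alignments, neither step can be solved once in isolation, which is why the abstract's objective admits an iterative solution of exactly this fix-one-optimize-the-other form.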

Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Andreas Krause, Jennifer Dy
Publisher: International Machine Learning Society (IMLS)
Pages: 7578-7587
Number of pages: 10
Volume: 11
ISBN (Electronic): 9781510867963
State: Published - Jan 1 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: Jul 10 2018 - Jul 15 2018

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

Cite this

Su, B., & Wu, Y. (2018). Learning low-dimensional temporal representations. In A. Krause, & J. Dy (Eds.), 35th International Conference on Machine Learning, ICML 2018 (Vol. 11, pp. 7578-7587). International Machine Learning Society (IMLS).