TY - JOUR
T1 - Discriminative dimensionality reduction for multi-dimensional sequences
AU - Su, Bing
AU - Ding, Xiaoqing
AU - Wang, Hao
AU - Wu, Ying
N1 - Funding Information:
National Natural Science Foundation of China under Grant No. 61603373, No. 61032008, No. 61471214
Publisher Copyright:
© 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
PY - 2018/1
Y1 - 2018/1
N2 - Since the observables at particular time instants in a temporal sequence exhibit dependencies, they are not independent samples. Thus, it is not appropriate to apply dimensionality reduction methods that rely on the i.i.d. assumption to sequence data. This paper presents a novel supervised dimensionality reduction approach for sequence data, called Linear Sequence Discriminant Analysis (LSDA). It learns a linear discriminative projection of the feature vectors in sequences to a lower-dimensional subspace by maximizing the separability of the sequence classes, so that entire sequences are holistically discriminated. The sequence class separability is constructed from sequence statistics, and the use of different statistics produces different LSDA methods. This paper presents and compares two novel LSDA methods, namely M-LSDA and D-LSDA. M-LSDA extracts model-based statistics by exploiting the dynamical structure of the sequence classes, and D-LSDA extracts distance-based statistics by computing the pairwise similarity of samples from the same sequence class. Extensive experiments on several different tasks have demonstrated the effectiveness and the general applicability of the proposed methods.
KW - Dimensionality reduction
KW - Discriminant analysis
KW - Sequence classification
KW - Character recognition
KW - Metric learning
UR - http://www.scopus.com/inward/record.url?scp=85021765726&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021765726&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2017.2665545
DO - 10.1109/TPAMI.2017.2665545
M3 - Article
C2 - 28186877
AN - SCOPUS:85021765726
SN - 0162-8828
VL - 40
SP - 77
EP - 91
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 1
ER -