Discriminative transformation for multi-dimensional temporal sequences

Bing Su*, Xiaoqing Ding, Changsong Liu, Hao Wang, Ying Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Scopus citations


Feature space transformation techniques have been widely studied for dimensionality reduction in vector-based feature spaces. However, these techniques are inapplicable to sequence data because the features within the same sequence are not independent. In this paper, we propose a method called max-min inter-sequence distance analysis (MMSDA) to transform features in sequences into a low-dimensional subspace such that different sequence classes are holistically separated. To exploit temporal dependencies, MMSDA first aligns the features in sequences from the same class to an adapted number of temporal states, and then constructs the sequence-class separability from the statistics of these ordered states. To learn the transformation, MMSDA formulates the objective of maximizing the minimal pairwise separability in the latent subspace as a semi-definite programming problem, and provides a new tractable and effective solution, with theoretical proofs, via constraint unfolding and pruning, convex relaxation, and within-class scatter compression. Extensive experiments on different tasks demonstrate the effectiveness of MMSDA.
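The max-min criterion described in the abstract can be illustrated with a small sketch. Assuming each class has already been aligned to the same number of ordered temporal states (the paper adapts this number per class; here it is fixed for simplicity), the separability of a class pair under a projection `W` can be taken as the summed distance between corresponding projected state means, and the objective is the minimum over all pairs. The function and data below are hypothetical illustrations, not the paper's actual formulation or solver:

```python
import numpy as np

def min_pairwise_separability(W, class_state_means):
    """Hedged sketch of the max-min inter-sequence criterion.

    class_state_means: list of (T, D) arrays, one per class, where row t
    is the mean feature of the t-th aligned temporal state.
    W: (D, d) projection into the low-dimensional subspace.
    Returns the minimal pairwise inter-class separability, the quantity
    MMSDA seeks to maximize over W (here via SDP in the actual paper).
    """
    proj = [M @ W for M in class_state_means]  # project each class's states
    dists = []
    for i in range(len(proj)):
        for j in range(i + 1, len(proj)):
            # holistic distance: sum squared differences over ordered states
            dists.append(float(np.sum((proj[i] - proj[j]) ** 2)))
    return min(dists)  # the worst-separated pair drives the objective
```

A learning procedure would then search for the `W` maximizing this minimum; the paper does so by semi-definite programming with convex relaxation, which this sketch does not attempt to reproduce.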

Original language: English (US)
Article number: 7929315
Pages (from-to): 3579-3593
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Issue number: 7
State: Published - Jul 2017

Keywords

  • Dimensionality reduction
  • Feature space transformation
  • Max-min inter-sequence distance analysis
  • Sequence classification

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


