TY - GEN
T1 - Towards self-exploring discriminating features
AU - Wu, Ying
AU - Huang, Thomas S.
N1 - Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2001
Y1 - 2001
N2 - Many visual learning tasks are usually confronted by some common difficulties. One of them is the lack of supervised information, due to the fact that labeling could be tedious, expensive or even impossible. Such a scenario makes it challenging to learn object concepts from images. This problem could be alleviated by taking a hybrid of labeled and unlabeled training data for learning. Since the unlabeled data characterize the joint probability across different features, they could be used to boost weak classifiers by exploring discriminating features in a self-supervised fashion. Discriminant-EM (D-EM) attacks such problems by integrating discriminant analysis with the EM framework. Both linear and nonlinear methods are investigated in this paper. Based on kernel multiple discriminant analysis (KMDA), the nonlinear D-EM provides better ability to simplify the probabilistic structures of data distributions in a discrimination space. We also propose a novel data-sampling scheme for efficient learning of kernel discriminants. Our experimental results show that D-EM outperforms a variety of supervised and semi-supervised learning algorithms for many visual learning tasks, such as content-based image retrieval and invariant object recognition.
AB - Many visual learning tasks are usually confronted by some common difficulties. One of them is the lack of supervised information, due to the fact that labeling could be tedious, expensive or even impossible. Such a scenario makes it challenging to learn object concepts from images. This problem could be alleviated by taking a hybrid of labeled and unlabeled training data for learning. Since the unlabeled data characterize the joint probability across different features, they could be used to boost weak classifiers by exploring discriminating features in a self-supervised fashion. Discriminant-EM (D-EM) attacks such problems by integrating discriminant analysis with the EM framework. Both linear and nonlinear methods are investigated in this paper. Based on kernel multiple discriminant analysis (KMDA), the nonlinear D-EM provides better ability to simplify the probabilistic structures of data distributions in a discrimination space. We also propose a novel data-sampling scheme for efficient learning of kernel discriminants. Our experimental results show that D-EM outperforms a variety of supervised and semi-supervised learning algorithms for many visual learning tasks, such as content-based image retrieval and invariant object recognition.
UR - http://www.scopus.com/inward/record.url?scp=84899430832&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84899430832&partnerID=8YFLogxK
U2 - 10.1007/3-540-44596-x_22
DO - 10.1007/3-540-44596-x_22
M3 - Conference contribution
AN - SCOPUS:84899430832
SN - 3540423591
SN - 9783540423591
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 263
EP - 277
BT - Machine Learning and Data Mining in Pattern Recognition - Second International Workshop, MLDM 2001, Proceedings
A2 - Perner, Petra
PB - Springer Verlag
T2 - 2nd International Workshop on Machine Learning and Data Mining in Pattern Recognition, MLDM 2001
Y2 - 25 July 2001 through 27 July 2001
ER -