Dual uncertainty minimization regularization and its applications on heterogeneous data

Yu Cheng, Alok Nidhi Choudhary, Jun Wang, Sharath Pankanti, Huan Liu

Research output: Contribution to journal › Conference article › peer-review

Abstract

In many practical machine learning systems, prediction/classification tasks involve heterogeneous data in semi-supervised settings, where the objective is to maximize the utility of multiple views (usually dual views) of the data. In this work, we propose a general framework, Dual Uncertainty Minimization Regularization (DUMR), that maximizes the use of heterogeneous data for dual-view semi-supervised classification/prediction. By extending a recent uncertainty regularizer to the heterogeneous setting, we optimize an objective that ensures minimum uncertainty of the predictions over both views extracted from the heterogeneous source. Specifically, for different problem settings we design two types of uncertainty regularizers, based on entropy and on squared-loss mutual information, respectively. The proposed framework is applied to three data mining/multimedia analysis tasks: social role identification, legislative prediction, and action recognition. Comparisons with peer methods corroborate the superior performance of the proposed approach.
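The abstract's entropy-based variant can be illustrated with a minimal sketch: penalize the per-sample Shannon entropy of each view's class-probability predictions on unlabeled data, so that minimizing the penalty pushes both views toward confident predictions. This is only an illustrative sketch of an entropy uncertainty regularizer, not the paper's actual objective; the function names and the form of the penalty are assumptions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy of each row of a (n_samples, n_classes)
    # probability matrix; eps guards against log(0).
    return -np.sum(p * np.log(p + eps), axis=1)

def dual_entropy_regularizer(p_view_a, p_view_b):
    # Mean per-sample prediction entropy summed over the two views'
    # class-probability outputs on the unlabeled data. Lower values
    # mean both views predict with low uncertainty.
    return float(np.mean(entropy(p_view_a) + entropy(p_view_b)))

# Confident predictions on both views incur a small penalty;
# near-uniform (uncertain) predictions incur a large one.
confident = np.array([[0.95, 0.05], [0.90, 0.10]])
uncertain = np.array([[0.50, 0.50], [0.55, 0.45]])
assert dual_entropy_regularizer(confident, confident) < \
       dual_entropy_regularizer(uncertain, uncertain)
```

In a full semi-supervised objective, a term like this would be weighted and added to the supervised losses of both view-specific classifiers; the squared-loss mutual information variant mentioned in the abstract would replace the entropy term with a different uncertainty measure.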

Original language: English (US)
Article number: 7022727
Pages (from-to): 1163-1170
Number of pages: 8
Journal: IEEE International Conference on Data Mining Workshops, ICDMW
Volume: 2015-January
Issue number: January
DOIs
State: Published - Jan 1 2015
Event: 14th IEEE International Conference on Data Mining Workshops, ICDMW 2014 - Shenzhen, China
Duration: Dec 14 2014 → …

Keywords

  • Dual Uncertainty Minimization
  • Heterogeneous Data
  • Multiple-Views Learning

ASJC Scopus subject areas

  • Computer Science Applications
  • Software

