Mining visual collocation patterns via self-supervised subspace learning

Junsong Yuan*, Ying Wu

*Corresponding author for this work

Research output: Contribution to journal › Article

11 Scopus citations

Abstract

Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. Discovering meaningful visual patterns from images is not a trivial task, because content variations and spatial dependencies in visual data challenge most existing data mining methods. This paper presents a novel approach to mining visual collocation patterns that copes with these difficulties. Specifically, this work makes two new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method that refines the visual codebook by feeding the discovered patterns back through subspace learning. The experimental results show that our method discovers semantically meaningful patterns efficiently and effectively.
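The abstract only outlines the method, but the core idea behind contribution 1 can be made concrete. As a rough sketch (not the authors' exact algorithm), each local feature's spatial neighborhood can be treated as a transaction of visual codebook words, and word sets that co-occur frequently across transactions become candidate collocation patterns. The `neighborhood_transactions` construction, the `radius` parameter, and the toy data below are all illustrative assumptions:

```python
from collections import Counter
from itertools import combinations

def neighborhood_transactions(features, radius):
    """Turn quantized local features (x, y, visual_word_id) into
    'transactions': for each feature, the set of visual words found
    within `radius` of its position. Hypothetical construction; the
    paper's exact neighborhood definition may differ."""
    transactions = []
    for i, (xi, yi, wi) in enumerate(features):
        words = {wi}
        for j, (xj, yj, wj) in enumerate(features):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                words.add(wj)
        transactions.append(frozenset(words))
    return transactions

def frequent_collocations(transactions, min_support, max_size=3):
    """Brute-force frequent itemset count up to `max_size` items.
    Adequate here because spatial neighborhoods keep transactions
    short; a real system would use Apriori or FP-growth pruning."""
    counts = Counter()
    for t in transactions:
        for k in range(2, min(max_size, len(t)) + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

# Toy example: (x, y, visual_word_id) triples from one image.
feats = [(0, 0, 5), (1, 0, 9), (0, 1, 5), (1, 1, 9), (10, 10, 2)]
txns = neighborhood_transactions(feats, radius=2.0)
print(frequent_collocations(txns, min_support=0.4))  # {(5, 9): 0.8}
```

In the paper's pipeline, such frequently co-located word sets are the discovered visual collocation patterns; contribution 2 then feeds them back to refine the visual codebook via subspace learning, a step this sketch does not cover.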

Original language: English (US)
Article number: 6095381
Pages (from-to): 334-346
Number of pages: 13
Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Volume: 42
Issue number: 2
DOIs
State: Published - Apr 1 2012

Keywords

  • Image data mining
  • visual collocation pattern
  • visual pattern discovery

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Information Systems
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering
