Abstract
Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. Discovering meaningful visual patterns from images is not a trivial task because content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach that copes with these difficulties to mine visual collocation patterns. Specifically, the novelty of this work lies in two new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method that refines the visual codebook by feeding the discovered patterns back into subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
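The paper casts collocation discovery as frequent itemset mining over visual words. As a rough illustration of that idea only (not the authors' exact algorithm), the sketch below mines co-occurring visual-word IDs from hypothetical per-neighborhood "transactions" with a plain Apriori-style pass; the function name, the toy data, and the neighborhood-as-transaction encoding are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def mine_visual_collocations(transactions, min_support=0.3, max_size=3):
    """Mine frequent visual-word itemsets (candidate collocation patterns).

    transactions: list of sets, each holding the visual-word IDs observed in
                  one local spatial neighborhood of an image (assumed input).
    min_support:  fraction of transactions an itemset must appear in.
    Returns a dict mapping frozenset(itemset) -> support.
    """
    n = len(transactions)
    min_count = min_support * n
    frequent = {}

    # Size-1 itemsets: count individual visual words across neighborhoods.
    counts = Counter(w for t in transactions for w in t)
    current = {frozenset([w]) for w, c in counts.items() if c >= min_count}
    frequent.update({s: counts[next(iter(s))] / n for s in current})

    # Apriori-style growth: only extend itemsets that are already frequent.
    size = 2
    while current and size <= max_size:
        candidates = {a | b for a in current for b in current if len(a | b) == size}
        counts = Counter()
        for t in transactions:
            for c in candidates:
                if c <= t:  # candidate itemset occurs in this neighborhood
                    counts[c] += 1
        current = {c for c in candidates if counts[c] >= min_count}
        frequent.update({c: counts[c] / n for c in current})
        size += 1
    return frequent

# Toy usage: hypothetical visual-word IDs per spatial neighborhood.
neighborhoods = [
    {1, 2, 7}, {1, 2, 9}, {1, 2, 7, 4}, {3, 5}, {1, 2},
]
patterns = mine_visual_collocations(neighborhoods, min_support=0.6)
print(patterns)  # e.g. frozenset({1, 2}) is frequent: it occurs in 4/5 neighborhoods
```

In this reading, a frequent itemset such as {1, 2} is a candidate visual collocation: a set of visual words that co-occur in many local neighborhoods more often than chance would suggest.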
| Original language | English (US) |
|---|---|
| Article number | 6095381 |
| Pages (from-to) | 334-346 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics |
| Volume | 42 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2012 |
Funding
Manuscript received August 9, 2010; revised May 11, 2011; accepted August 26, 2011. Date of publication December 5, 2011; date of current version March 16, 2012. This work was supported in part by the National Science Foundation under Grants IIS-0347877 and IIS-0916607 and in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant ARO W911NF-08-1-0504. The work of J. Yuan was supported by the Nanyang Assistant Professorship. This paper was recommended by Editor E. Santos, Jr.
Keywords
- Image data mining
- visual collocation pattern
- visual pattern discovery
ASJC Scopus subject areas
- Software
- Information Systems
- Human-Computer Interaction
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications