Abstract
Image representation is a fundamental task in computer vision. However, most existing approaches to image representation ignore the relations between images and consider each input image independently. Intuitively, relations between images can help a model understand the images and maintain consistency across related images, leading to better explainability. In this paper, we model image-level relations to generate more informative image representations and propose ImageGCN, an end-to-end graph convolutional network framework for inductive multi-relational image modeling. We apply ImageGCN to chest X-ray images, where rich relational information is available for disease identification. Unlike previous image representation models, ImageGCN learns the representation of an image from both its original pixel features and its relations with other images. Beyond learning informative representations, ImageGCN can also perform object detection in a weakly supervised manner. Experimental results on three open-source chest X-ray datasets, ChestX-ray14, CheXpert, and MIMIC-CXR, demonstrate that ImageGCN outperforms the respective baselines in both disease identification and localization tasks and achieves comparable, and often better, results than state-of-the-art methods.
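To make the core idea concrete, below is a minimal sketch of a multi-relational graph convolution over image nodes, in the spirit of the abstract's description: each image's representation is updated from its own pixel-derived features plus relation-specific aggregation over related images (an R-GCN-style layer). All names here (`MultiRelationalImageConv`, the feature dimensions, the placeholder adjacency matrices) are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: a relation-aware graph convolution over image nodes.
# Assumption: one learned weight matrix per relation type, plus a
# self-loop transform for the image's own (CNN-derived) pixel features.
import torch
import torch.nn as nn


class MultiRelationalImageConv(nn.Module):
    """Update each image representation from (a) its own features and
    (b) neighbors under each relation, with per-relation weights."""

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        self.self_loop = nn.Linear(in_dim, out_dim)  # own pixel features
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)
        )

    def forward(self, x: torch.Tensor, adjs: list) -> torch.Tensor:
        # x: (N, in_dim) image features, e.g. from a CNN backbone
        # adjs: one row-normalized (N, N) adjacency matrix per relation
        out = self.self_loop(x)
        for adj, lin in zip(adjs, self.rel_weights):
            out = out + adj @ lin(x)  # aggregate neighbors per relation
        return torch.relu(out)


# Usage: 4 images, 2 hypothetical relation types (e.g. "same patient",
# "same view position"), with an assumed 512-d backbone feature.
x = torch.randn(4, 512)
a1 = torch.eye(4)            # placeholder adjacency for relation 1
a2 = torch.ones(4, 4) / 4.0  # placeholder adjacency for relation 2
layer = MultiRelationalImageConv(512, 256, num_relations=2)
h = layer(x, [a1, a2])       # (4, 256) relation-aware representations
print(h.shape)
```

Stacking such layers, then attaching a classification head per disease label, would yield an end-to-end trainable pipeline of the kind the abstract describes; the exact relation types and normalization are design choices of the paper, not shown here.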
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1990-2003 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Medical Imaging |
| Volume | 41 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 1 2022 |
Keywords
- Chest X-ray
- graph convolutional network
- graph learning
- image representation
- relation modeling
ASJC Scopus subject areas
- Software
- Radiological and Ultrasound Technology
- Computer Science Applications
- Electrical and Electronic Engineering