Learning from crowds for automated histopathological image segmentation

Miguel López-Pérez*, Pablo Morales-Álvarez, Lee A.D. Cooper, Christopher Felicelli, Jeffery Goldstein, Brian Vadasz, Rafael Molina, Aggelos K. Katsaggelos

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Automated semantic segmentation of histopathological images is an essential task in Computational Pathology (CPATH). The main limitation of Deep Learning (DL) for this task is the scarcity of expert annotations. Crowdsourcing (CR) has emerged as a promising solution to reduce the individual (expert) annotation cost by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this scenario is challenging because the annotations are noisy. A common approach is to jointly learn the underlying (expert) segmentation and the annotators’ expertise. Unfortunately, this is frequently done by training a separate neural network for each annotator, which scales poorly as the number of annotators grows. For this reason, such strategies cannot be easily applied to real-world CPATH segmentation. This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach consists of two coupled networks: a segmentation network (which learns the expert segmentation) and an annotator network (which learns the annotators’ expertise). We estimate the annotators’ behavior with a single network that receives the annotator ID as input, achieving scalability in the number of annotators. The family comprises three different models for the annotator network. Within this family, we propose a modeling of the annotator network that is novel in the CR segmentation literature: it also considers the global features of the image. We validate our methods on a real-world dataset of Triple Negative Breast Cancer images labeled by several medical students. Our new CR model achieves a Dice coefficient of 0.7827, outperforming the well-known STAPLE method (0.7039) and remaining competitive with a supervised method trained on expert labels (0.7723).
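The abstract describes two coupled networks: a segmentation network that predicts the latent expert segmentation, and a single shared annotator network that maps an annotator ID (plus global image features) to that annotator's labeling behavior. Below is a minimal NumPy sketch of this idea, not the authors' implementation: the random "networks", the embedding size `D`, and the choice of a per-pixel confusion matrix as the annotator model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes (toy values for illustration only)
C = 2          # classes, e.g. tumor vs. background
A = 5          # number of annotators
H = W = 4      # tiny "image"
D = 8          # feature / embedding dimension

# Segmentation "network": per-pixel logits for the latent expert segmentation
seg_logits = rng.normal(size=(H, W, C))
p_true = softmax(seg_logits)                 # p(true label | image), per pixel

# Annotator "network": ONE shared model for all annotators. It receives the
# annotator ID (as a learned embedding) plus global image features, so adding
# annotators only adds embedding rows, not whole networks.
annot_embed = rng.normal(size=(A, D))        # per-annotator ID embeddings
global_feat = rng.normal(size=(D,))          # image-level (global) features
W_head = rng.normal(size=(2 * D, C * C)) * 0.1  # single shared output head

def confusion_matrix(a):
    """Row-stochastic CxC matrix: p(noisy label | true label) for annotator a."""
    h = np.concatenate([annot_embed[a], global_feat])
    logits = (h @ W_head).reshape(C, C)
    return softmax(logits, axis=1)

# Predicted distribution of annotator a's noisy labels at each pixel,
# marginalizing the latent true label: p(noisy) = p(true) @ M_a
M = confusion_matrix(3)
p_noisy = p_true @ M
```

In training, `p_noisy` would be matched against the observed crowd annotations (e.g. with cross-entropy), so the segmentation network and the shared annotator head are learned jointly from noisy labels alone.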

Original language: English (US)
Article number: 102327
Journal: Computerized Medical Imaging and Graphics
State: Published - Mar 2024


Keywords

  • Cancer
  • Crowdsourcing
  • Histopathology
  • Noisy labels
  • Segmentation

ASJC Scopus subject areas

  • Radiological and Ultrasound Technology
  • Health Informatics
  • Radiology Nuclear Medicine and imaging
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design


