TY - JOUR
T1 - Large-scale multi-center CT and MRI segmentation of pancreas with deep learning
AU - Zhang, Zheyuan
AU - Keles, Elif
AU - Durak, Gorkem
AU - Taktak, Yavuz
AU - Susladkar, Onkar
AU - Gorade, Vandan
AU - Jha, Debesh
AU - Ormeci, Asli C.
AU - Medetalibeyoglu, Alpay
AU - Yao, Lanhong
AU - Wang, Bin
AU - Isler, Ilkin Sevgi
AU - Peng, Linkai
AU - Pan, Hongyi
AU - Vendrami, Camila Lopes
AU - Bourhani, Amir
AU - Velichko, Yury
AU - Gong, Boqing
AU - Spampinato, Concetto
AU - Pyrros, Ayis
AU - Tiwari, Pallavi
AU - Klatte, Derk C.F.
AU - Engels, Megan
AU - Hoogenboom, Sanne
AU - Bolan, Candice W.
AU - Agarunov, Emil
AU - Harfouch, Nassier
AU - Huang, Chenchan
AU - Bruno, Marco J.
AU - Schoots, Ivo
AU - Keswani, Rajesh N.
AU - Miller, Frank H.
AU - Gonda, Tamas
AU - Yazici, Cemal
AU - Tirkes, Temel
AU - Turkbey, Baris
AU - Wallace, Michael B.
AU - Bagci, Ulas
N1 - Publisher Copyright:
© 2024
PY - 2025/1
Y1 - 2025/1
N2 - Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We introduced a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (a total of 2,117 scans) and cross-center settings with Dice and Hausdorff distance (HD95) evaluation metrics. We used Cohen's kappa statistics for intra- and inter-rater agreement evaluation and paired t-tests for volume and Dice comparisons. For segmentation accuracy, we achieved Dice coefficients of 88.3% (±7.2%, at case level) with CT, 85.0% (±7.9%) with T1W MRI, and 86.3% (±6.4%) with T2W MRI. There was a high correlation for pancreas volume prediction with R² of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer (0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement scores. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
KW - CT pancreas
KW - Generalized segmentation
KW - MRI pancreas
KW - Pancreas segmentation
KW - Transformer segmentation
UR - http://www.scopus.com/inward/record.url?scp=85208758378&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85208758378&partnerID=8YFLogxK
U2 - 10.1016/j.media.2024.103382
DO - 10.1016/j.media.2024.103382
M3 - Article
C2 - 39541706
AN - SCOPUS:85208758378
SN - 1361-8415
VL - 99
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 103382
ER -