TY - GEN
T1 - Vision-Language Contrastive Learning Approach to Robust Automatic Placenta Analysis Using Photographic Images
AU - Pan, Yimu
AU - Gernand, Alison D.
AU - Goldstein, Jeffery A.
AU - Mithal, Leena
AU - Mwinyelle, Delia
AU - Wang, James Z.
N1 - Funding Information:
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562.
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
AB - The standard placental examination helps identify adverse pregnancy outcomes but is not scalable because it requires hospital-level equipment and expert knowledge. Although current supervised learning approaches to automatic placenta analysis have improved scalability, they fall short on robustness and generalizability due to the scarcity of labeled training images. In this paper, we propose a vision-language contrastive learning (VLC) approach that addresses the data-scarcity problem by incorporating abundant pathology reports into the training data. Moreover, we address the feature-suppression problem in current VLC approaches to improve generalizability and robustness. These improvements enable us to use a shared image encoder across tasks to boost efficiency. Overall, our approach outperforms strong baselines on the fetal/maternal inflammatory response (FIR/MIR), chorioamnionitis, and sepsis risk classification tasks using images from a professional photography instrument at Northwestern Memorial Hospital; it also achieves the highest inference robustness to iPad images for the MIR and chorioamnionitis risk classification tasks. It is the first approach to show robustness to placenta images from a mobile platform that is accessible to low-resource communities.
KW - Placenta analysis
KW - Vision-language pre-training
KW - mHealth
UR - http://www.scopus.com/inward/record.url?scp=85139055377&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139055377&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16437-8_68
DO - 10.1007/978-3-031-16437-8_68
M3 - Conference contribution
AN - SCOPUS:85139055377
SN - 9783031164361
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 707
EP - 716
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings
A2 - Wang, Linwei
A2 - Dou, Qi
A2 - Fletcher, P. Thomas
A2 - Speidel, Stefanie
A2 - Li, Shuo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
Y2 - 18 September 2022 through 22 September 2022
ER -