Abstract
We examined whether automated visual evaluation (AVE), a deep learning computer application for cervical cancer screening, can be used on cervix images taken by a contemporary smartphone camera. A large number of cervix images acquired by the commercial MobileODT EVA system were filtered for acceptable visual quality, and 7587 of the filtered images from 3221 women were then annotated by a group of gynecologic oncologists (so the gold standard is an expert impression, not histopathology). We tested and analyzed multiple random splits of the images using two deep learning object-detection networks. For all the receiver operating characteristic curves, the area under the curve values for the discrimination of the most likely precancer cases from the least likely cases (most likely controls) were above 0.90. These results showed that AVE can classify cervix images with confidence scores that are strongly associated with expert evaluations of severity for the same images. The results on a small subset of images that have histopathologic diagnoses further supported the capability of AVE for predicting cervical precancer. We examined the associations of AVE severity score with gynecologic oncologist impression in all regions where we had a sufficient number of cases and controls, as well as the influence of a woman's age. The method was found generally resilient to regional variation in the appearance of the cervix. This work suggests that using AVE on smartphones could be a useful adjunct to health-worker visual assessment with acetic acid, a cervical cancer screening method commonly used in low- and middle-resource settings.
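The headline metric above, area under the ROC curve (AUC), has a direct probabilistic reading: it is the probability that a randomly chosen case receives a higher severity score than a randomly chosen control. As a minimal sketch (not the authors' code, and the scores below are hypothetical, not from the study), AUC can be computed from two score lists via this pairwise comparison, which is equivalent to the normalized Mann-Whitney U statistic:

```python
def roc_auc(scores_cases, scores_controls):
    """AUC as the fraction of (case, control) pairs where the case
    scores higher; tied pairs count as half a win."""
    wins = 0.0
    for s_case in scores_cases:
        for s_ctrl in scores_controls:
            if s_case > s_ctrl:
                wins += 1.0
            elif s_case == s_ctrl:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

# Illustrative AVE-style severity scores (hypothetical values):
cases = [0.92, 0.85, 0.77, 0.60]
controls = [0.40, 0.55, 0.30, 0.65]
print(roc_auc(cases, controls))  # well-separated scores give an AUC near 1.0
```

An AUC above 0.90, as reported here, means a randomly drawn case outscores a randomly drawn control more than 90% of the time.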
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2416-2423 |
| Number of pages | 8 |
| Journal | International Journal of Cancer |
| Volume | 147 |
| Issue number | 9 |
| DOIs | |
| State | Published - Nov 1 2020 |
Funding
U.S. National Institutes of Health (NIH); National Cancer Institute; National Library of Medicine (NLM); Intramural Research Program of the Lister Hill National Center for Biomedical Communications (LHNCBC)

Funding information: This work was supported by the Intramural Research Program of the Lister Hill National Center for Biomedical Communications (LHNCBC), the National Library of Medicine (NLM), the National Cancer Institute, and the U.S. National Institutes of Health (NIH). The authors are grateful to MobileODT for providing images used in our study. The images were provided under special agreement with the National Institutes of Health.

The authors have no disclosures with the exception of the following: Dr M. H. E. has advised or participated in educational speaking activities but does not receive an honorarium from any companies. In specific cases, his employers have received payment for his time spent on these activities from Merck, Hologic, Papivax, Cynvec and Altum Pharma. When travel is required for meetings with industry, the company pays for Dr Einstein's travel expenses. Rutgers has received grant funding for research-related costs of clinical trials for which Dr Einstein has been the overall or local PI within the past 12 months from Roche, Johnson and Johnson, Pfizer, AstraZeneca, Advaxis and Inovio. Dr A. P. N. receives honoraria from CSATS Inc. for expert review of surgical cases, unrelated to this publication. Dr J. Z. M.'s employer has received payment for her time spent on advisory activity with Tesaro; the company also paid for travel. Rutgers has received grant funding from Merck for research-related costs of clinical trials for which she was the local PI. NCI is conducting a study in Nigeria, for which MobileODT contributed EVA systems and software, and quality assurance of image acquisition, at no cost to NCI. MobileODT had no access to this data analysis or results, and no influence on the decision to publish.
Keywords
- automated visual evaluation
- cervical cancer screening
- deep learning
- smartphone camera
ASJC Scopus subject areas
- Oncology
- Cancer Research