Abstract
Patient motion during PET is inevitable. The long acquisition time not only increases motion and the associated artifacts but also patient discomfort, so accelerating PET is desirable. However, accelerating PET acquisition yields reconstructed images with low SNR, and image quality is still degraded by motion-induced artifacts. Most previous PET motion correction methods are specific to one motion type and require explicit motion modeling, so they may fail when multiple types of motion are present together. Moreover, these methods are tailored to standard long acquisitions and cannot be directly applied to accelerated PET. Modeling-free, universal motion-corrected reconstruction for accelerated PET therefore remains largely unexplored. In this work, we propose Fast-MC-PET, a novel deep learning-aided motion correction and reconstruction framework for accelerated PET. The framework consists of a universal motion correction (UMC) module and a short-to-long acquisition reconstruction (SL-Recon) module. UMC enables modeling-free motion correction by estimating quasi-continuous motion from ultra-short frame reconstructions and using this information for motion-compensated reconstruction. SL-Recon then converts the low-count accelerated UMC image into a high-quality, high-count image as the final reconstruction output. Experimental results on human studies show that Fast-MC-PET enables 7-fold acceleration, using only 2 min of acquisition to generate high-quality reconstructions that outperform or match previous motion correction reconstruction methods applied to standard 15 min acquisition data.
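To make the two-stage flow described above (UMC followed by SL-Recon) concrete, the sketch below illustrates it in PyTorch. All names and design choices here (`motion_compensate`, `SLReconNet`, the frame count, the residual 3D CNN) are hypothetical assumptions for illustration only; the paper's actual motion estimation and reconstruction procedures are not reproduced.

```python
# Minimal sketch of the two-stage Fast-MC-PET flow described in the abstract.
# Everything here (function names, frame counts, network design, tensor shapes)
# is an illustrative assumption, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def motion_compensate(frames: torch.Tensor, thetas: torch.Tensor) -> torch.Tensor:
    """Stand-in for the UMC stage: warp each ultra-short frame reconstruction
    back to a reference pose using its estimated (quasi-continuous) motion and
    pool the warped frames into one low-count, motion-corrected volume.

    frames: [T, 1, D, H, W] ultra-short frame reconstructions
    thetas: [T, 3, 4] per-frame rigid/affine motion estimates
    """
    grids = F.affine_grid(thetas, list(frames.shape), align_corners=False)
    warped = F.grid_sample(frames, grids, align_corners=False)
    return warped.mean(dim=0, keepdim=True)  # [1, 1, D, H, W]


class SLReconNet(nn.Module):
    """Hypothetical short-to-long (SL-Recon) network: a small residual 3D CNN
    mapping a low-count (accelerated) volume to a high-count-like volume."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, low_count: torch.Tensor) -> torch.Tensor:
        # Predict the high-count image as the input plus a learned correction.
        return low_count + self.body(low_count)


if __name__ == "__main__":
    T, D, H, W = 24, 32, 64, 64                 # e.g. 24 ultra-short frames (assumed)
    frames = torch.rand(T, 1, D, H, W)          # stand-in frame reconstructions
    thetas = torch.eye(3, 4).repeat(T, 1, 1)    # identity motion as a placeholder
    umc_image = motion_compensate(frames, thetas)
    with torch.no_grad():
        output = SLReconNet()(umc_image)        # high-count-like final reconstruction
    print(output.shape)                         # torch.Size([1, 1, 32, 64, 64])
```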
Original language | English (US) |
---|---|
Title of host publication | Information Processing in Medical Imaging - 28th International Conference, IPMI 2023, Proceedings |
Editors | Alejandro Frangi, Marleen de Bruijne, Demian Wassermann, Nassir Navab |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 523-535 |
Number of pages | 13 |
ISBN (Print) | 9783031340475 |
DOIs | |
State | Published - 2023 |
Event | 28th International Conference on Information Processing in Medical Imaging, IPMI 2023 - San Carlos de Bariloche, Argentina; Duration: Jun 18 2023 → Jun 23 2023 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13939 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 28th International Conference on Information Processing in Medical Imaging, IPMI 2023 |
---|---|
Country/Territory | Argentina |
City | San Carlos de Bariloche |
Period | 6/18/23 → 6/23/23 |
Keywords
- Accelerated PET
- Deep Reconstruction
- Universal Motion Correction
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science