Scientific article
Open access

Direct inference of Patlak parametric images in whole-body PET/CT imaging using convolutional neural networks

Published in: European journal of nuclear medicine and molecular imaging, vol. 49, no. 12, p. 4048-4063
Publication date: 2022-10
First online date: 2022-06-18

Purpose: This study proposed and investigated the feasibility of estimating the Patlak-derived influx rate constant (Ki) from standardized uptake value (SUV) and/or dynamic PET image series.
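For context, the Ki targeted here is the slope of the Patlak graphical plot, obtained by linearizing the irreversible two-compartment model: y(t) = C_tissue(t)/Cp(t) plotted against x(t) = ∫₀ᵗ Cp(τ)dτ / Cp(t) becomes a line with slope Ki after equilibrium. The abstract does not specify the fitting implementation used for the reference maps, so the following is only a minimal single-voxel numpy sketch of standard Patlak analysis, with an illustrative (not real) exponential input function:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t (same length as y)."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def patlak_ki(c_tissue, c_plasma, t):
    """Patlak graphical analysis for one voxel time-activity curve.

    x(t) = integral of Cp up to t, normalized by Cp(t)
    y(t) = C_tissue(t) / Cp(t)
    In the linear (equilibrium) regime, y = Ki * x + V0, so the fitted
    slope is Ki and the intercept is the distribution volume V0.
    """
    x = cumtrapz(c_plasma, t) / c_plasma
    y = c_tissue / c_plasma
    ki, v0 = np.polyfit(x, y, 1)  # degree-1 least-squares fit
    return ki, v0

# Synthetic check: build a tissue curve from a known Ki and V0 and recover them.
t = np.linspace(1.0, 60.0, 13)        # 13 frames/passes (minutes), as in the study
cp = 100.0 * np.exp(-0.05 * t)        # hypothetical plasma input function
true_ki, true_v0 = 0.03, 0.5
ct = true_ki * cumtrapz(cp, t) + true_v0 * cp   # irreversible Patlak model
ki, v0 = patlak_ki(ct, cp, t)
```

In practice the fit is restricted to late frames where the linearity assumption holds; the noiseless synthetic curve above recovers Ki and V0 exactly.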

Methods: Whole-body 18F-FDG dynamic PET images of 19 subjects, consisting of 13 frames or passes, were employed for training a residual deep learning model with SUV and/or dynamic series as input and Ki-Patlak (slope) images as output. Training and evaluation were performed using a nine-fold cross-validation scheme. Owing to the availability of SUV images acquired 60 min post-injection (20 min total acquisition time), the data sets used for training the models were split into two groups: “With SUV” and “Without SUV.” For the “With SUV” group, the model was first trained using only SUV images, and then the passes (starting from pass 13, the last pass, down to pass 9) were added to the training of the model, one pass at a time. For this group, 6 models were developed, with input data consisting of SUV, SUV plus pass 13, SUV plus passes 13 and 12, SUV plus passes 13 to 11, SUV plus passes 13 to 10, and SUV plus passes 13 to 9. For the “Without SUV” group, the same scheme was followed, but without using the SUV images (5 models were developed with input data of passes 13 to 9). For model performance evaluation, the mean absolute error (MAE), mean error (ME), mean relative absolute error (MRAE%), relative error (RE%), mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between the Ki-Patlak images predicted by the two groups and the reference Ki-Patlak images generated through Patlak analysis using the whole acquired data sets. For region-specific evaluation of the method, regions of interest (ROIs) were drawn on representative organs, including the lung, liver, brain, and heart, and around the identified malignant lesions.
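The evaluation metrics listed above have standard definitions. As a minimal numpy sketch (SSIM is omitted here; it is available as `skimage.metrics.structural_similarity`), with toy arrays standing in for predicted and reference Ki maps:

```python
import numpy as np

def image_metrics(pred, ref, eps=1e-8):
    """Voxel-wise agreement metrics between a predicted and a reference
    parametric image (numpy arrays of equal shape). eps guards division."""
    err = pred - ref
    mae = np.mean(np.abs(err))                                 # mean absolute error
    me = np.mean(err)                                          # mean (signed) error
    mrae = 100.0 * np.mean(np.abs(err) / (np.abs(ref) + eps))  # MRAE%
    re = 100.0 * np.mean(err / (ref + eps))                    # RE% (signed relative)
    mse = np.mean(err ** 2)                                    # mean squared error
    rmse = np.sqrt(mse)                                        # root mean squared error
    psnr = 10.0 * np.log10(ref.max() ** 2 / mse)               # PSNR in dB
    return {"MAE": mae, "ME": me, "MRAE%": mrae, "RE%": re,
            "MSE": mse, "RMSE": rmse, "PSNR": psnr}

# Toy example: a reference Ki map and a lightly perturbed "prediction".
rng = np.random.default_rng(0)
ref = rng.uniform(0.01, 0.05, size=(64, 64))       # hypothetical Ki values (min^-1)
pred = ref + rng.normal(0.0, 1e-3, size=ref.shape)
m = image_metrics(pred, ref)
```

The epsilon term and the choice of peak value for PSNR are implementation details that vary between papers; this sketch is illustrative, not the authors' exact evaluation code.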

Results: Across all patients, the MRAE%, RE%, PSNR, and SSIM were estimated as 7.45 ± 0.94%, 4.54 ± 2.93%, 46.89 ± 2.93, and 1.00 ± 6.7 × 10⁻⁷, respectively, for the model using SUV plus passes 13 to 9 as input. The model using passes 13 to 11 as input yielded results nearly identical to those of the model using SUV plus passes 13 to 9. The bias decreased continuously as passes were added up to pass 11, after which the magnitude of error reduction was negligible; hence, the model with SUV plus passes 13 to 9 had the lowest quantification bias. Lesions invisible in one or both of the SUV and Ki-Patlak images appeared similarly, on visual inspection, in the predicted images, with tolerable bias.

Conclusion: This study demonstrated the feasibility of a direct deep learning-based approach to estimating Ki-Patlak parametric maps without requiring the input function and with a smaller number of passes. This would enable shorter acquisition times for whole-body dynamic imaging with acceptable bias and comparable lesion detectability.

  • Dynamic PET imaging
  • Clinical oncology
  • Deep learning
  • Patlak analysis
  • Lesion detectability
  • Fluorodeoxyglucose F18
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Neural Networks, Computer
  • Positron Emission Tomography Computed Tomography / methods
  • Positron-Emission Tomography / methods
  • Whole Body Imaging / methods
Citation (ISO format)
ZAKER, Neda et al. Direct inference of Patlak parametric images in whole-body PET/CT imaging using convolutional neural networks. In: European journal of nuclear medicine and molecular imaging, 2022, vol. 49, n° 12, p. 4048–4063. doi: 10.1007/s00259-022-05867-w
ISSN of the journal: 1619-7070

Technical information

Creation: 06/19/2022 4:50:00 PM
First validation: 06/19/2022 4:50:00 PM
Update time: 03/16/2023 8:54:13 AM
Status update: 03/16/2023 8:54:11 AM
Last indexation: 02/01/2024 9:09:08 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva