Abstract
Medical image-to-image translation, using conditional Generative Adversarial Networks (cGANs), could be beneficial for clinical decisions when additional diagnostic scans are requested. The recently proposed pix2pix architecture provides an effective image-to-image translation method to study such medical use of cGANs. This study addresses the question to what extent pix2pix can translate a magnetic resonance imaging (MRI) scan of a patient into an estimate of a positron emission tomography (PET) scan of the same patient. We perform two image-to-image translation experiments using paired MRI and PET brain scans of Alzheimer's disease patients and healthy controls. In experiment 1, we train using data sliced in one dimension (the axial plane). In experiment 2, we train using augmented data sliced in all three dimensions (axial, sagittal and coronal). After training, the synthetically generated PET scans are compared to the actual ones. The results suggest that PET scans can be sufficiently and reliably estimated from MRI, with similar results using axial and augmented training. We conclude that image-to-image translation is a promising and potentially cost-saving method for making informed use of expensive diagnostic technology.
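The abstract does not give implementation details, but the slicing scheme it describes can be illustrated with a short Python sketch: paired, co-registered MRI/PET volumes are cut into 2D slices along one plane (axial, experiment 1) or along all three anatomical planes (axial, sagittal, coronal; experiment 2) so that each MRI/PET slice pair can be fed to a 2D pix2pix model. The file names, axis convention, and use of nibabel/numpy below are assumptions for illustration, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of the paired slicing described in the
# abstract. Assumes the MRI and PET volumes are already co-registered and that
# axis 0 = sagittal, 1 = coronal, 2 = axial (an assumed orientation convention).
import numpy as np
import nibabel as nib


def paired_slices(mri_path, pet_path, planes=("axial",)):
    """Yield (mri_slice, pet_slice) 2D pairs from co-registered 3D volumes."""
    mri = nib.load(mri_path).get_fdata()
    pet = nib.load(pet_path).get_fdata()
    assert mri.shape == pet.shape, "volumes must be co-registered and equally sized"

    axis_of = {"sagittal": 0, "coronal": 1, "axial": 2}
    for plane in planes:
        axis = axis_of[plane]
        for i in range(mri.shape[axis]):
            yield (np.take(mri, i, axis=axis),  # MRI slice: pix2pix input
                   np.take(pet, i, axis=axis))  # PET slice: pix2pix target


# Experiment 1: axial slices only (hypothetical file names).
# pairs_exp1 = list(paired_slices("sub01_mri.nii.gz", "sub01_pet.nii.gz"))
# Experiment 2: augmented training data from all three planes.
# pairs_exp2 = list(paired_slices("sub01_mri.nii.gz", "sub01_pet.nii.gz",
#                                 planes=("axial", "sagittal", "coronal")))
```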
Original language | English |
---|---|
Title of host publication | Inferring PET from MRI with pix2pix |
Publication status | Published - 2018 |
Event | Benelux Conference on Artificial Intelligence - Den Bosch, Netherlands; Duration: 8 Nov 2018 → 9 Nov 2018; Conference number: 30; https://bnaic2018.nl/ |
Conference
Conference | Benelux Conference on Artificial Intelligence |
---|---|
Abbreviated title | BNAIC2018 |
Country/Territory | Netherlands |
City | Den Bosch |
Period | 8/11/18 → 9/11/18 |
Internet address | https://bnaic2018.nl/ |