Journal List > Brain Tumor Res Treat > v.10(2) > 1516081733

Park: Artificial Intelligence in Neuro-Oncologic Imaging: A Brief Review for Clinical Use Cases and Future Perspectives

Abstract

Artificial intelligence (AI) techniques, both end-to-end deep learning approaches and radiomics combined with machine learning, have been developed for various imaging-based tasks in neuro-oncology. In this brief review, use cases of AI in neuro-oncologic imaging are summarized: image quality improvement, metastasis detection, radiogenomics, and treatment response monitoring. We then give a brief overview of generative adversarial networks and the potential utility of synthetic images, which may become a new data input for deep learning algorithms in imaging-based and image translation tasks. Lastly, we highlight the importance of prospective cohorts and clinical trials as true validation of the clinical utility of AI in neuro-oncologic imaging.

ARTIFICIAL INTELLIGENCE, MACHINE LEARNING, AND DEEP LEARNING

Artificial intelligence (AI) is a broad term that describes any task performed by a computer that normally requires human intelligence [1,2]. Machine learning (ML) is defined as a subset of AI that enables computers to learn from “data” without explicit programming, making predictions when new data are encountered [1]. Deep learning is part of ML and is currently gaining significant attention owing to its utilization of “big data” in medicine. Deep learning models are composed of artificial neural networks (ANNs) with many interconnected layers, enabling high-performance computation on large datasets. A convolutional neural network (CNN) is one example of an ANN that excels at pattern recognition and identifies complex patterns in imaging data better than previous learning methods [1]. The hierarchy of AI, ML, and deep learning is shown in Fig. 1.

CURRENT USE CASES OF AI IN NEURO-ONCOLOGIC IMAGING

To identify current research trends, a database search for original research papers on AI in the neuro-oncology field was conducted in the MEDLINE (National Center for Biotechnology Information, NCBI) database from inception until March 24, 2022. The search terms were “artificial intelligence” OR “deep learning” OR “machine learning” OR “radiogenomics” OR “radiomics” AND “neuro-oncology” OR “brain tumor” OR “glio*” OR “brain metastasis.” The search identified 2,890 full-text articles.
Applications of AI in the clinical workflow of neuro-oncologic patients are summarized in Fig. 2. The use cases of AI suggest that AI/ML can be applied to solve a particular problem in a clinical situation. The following review briefly summarizes current use cases of AI in neuro-oncology.

Machine learning and deep learning for radiogenomics

Image-based diagnosis of genetic mutations is of significance for glioma because radiogenomics can stratify low- and high-risk patients to further guide patient consultations and therapeutic plans. Radiomics [3-5] and deep learning studies [6-9] have focused on the prediction of the key genomic landscape observed in diffuse gliomas [10]: isocitrate dehydrogenase (IDH) mutations, 1p/19q codeletions, O6-methylguanine-DNA methyltransferase (MGMT)-promoter methylation status, and epidermal growth factor receptor (EGFR) amplification/mutation.
Radiomics involves the extraction of numerous quantitative features from a given region of interest (the tumor portion) in images to assess the spatial complexity and heterogeneity of the tumor. Morphological (volume/shape), histogram (first-order), texture (second-order), and transform-based features are the most commonly used radiomics features [11,12]. The extracted features are subsequently passed through a feature selection step to reduce the dimensionality of the data [13]. Radiogenomics of gliomas is applied to multiparametric imaging to predict not only single genetic mutations but also complex or multiple genomic alterations. Kickingereder et al. [14] demonstrated in 152 glioblastomas that radiomics from multiparametric MRI could predict DNA methylation status and hallmark copy number variations. Hu et al. [15] analyzed 48 image-guided biopsies from glioblastomas and associated these with radiomics from structural MRI to successfully predict EGFR (75%), PDGFRA (77.1%), CDKN2A (87.5%), and RB1 (87.5%) mutations. Recently, Park et al. [16] used multiparametric MRI radiomics with diffusion and perfusion imaging to predict core signaling pathways identified by next-generation sequencing of IDH-wild type glioblastomas. In brain metastasis, radiomics has been applied to predict the primary tumor type [17] or the EGFR mutation status of non-small cell lung cancer [18]. However, owing to the heterogeneous nature of patient cohorts and the small size of the lesions, the quality of radiogenomics research on brain metastasis is often unsatisfactory [19]. In deep learning, the CNN is the most commonly used algorithm. One study [7] using 256 MRIs from The Cancer Imaging Archive dataset showed prediction accuracies of 94% for IDH status, 92% for 1p/19q co-deletion status, and 83% for MGMT-promoter methylation status.
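The radiomics workflow described above (feature extraction from a region of interest, feature selection, then an ML classifier) can be illustrated with a minimal sketch. This is not a validated radiogenomics model: the first-order features, the synthetic 8×8×8 “tumor ROIs,” and the labels are all toy assumptions standing in for real segmented MRI data and mutation status.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def first_order_features(roi):
    # Histogram (first-order) radiomics features from voxel intensities in an ROI.
    v = roi.ravel()
    skew = ((v - v.mean()) ** 3).mean() / (v.std() ** 3 + 1e-9)
    return np.array([v.mean(), v.std(), np.percentile(v, 10),
                     np.percentile(v, 90), skew])

# Synthetic "tumor ROIs": two groups with slightly different intensity distributions.
X = np.array([first_order_features(rng.normal(loc=c, scale=1.0, size=(8, 8, 8)))
              for c in rng.choice([0.0, 0.5], size=60)])
y = (X[:, 0] > 0.25).astype(int)  # toy label standing in for mutation status

# Feature selection followed by a classifier, mirroring the radiomics workflow.
model = make_pipeline(SelectKBest(f_classif, k=3), LogisticRegression())
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```

In a real study, the features would come from a dedicated library applied to segmented multiparametric MRI, and performance would be reported on held-out or external validation data rather than the training set.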
However, deep learning requires a large amount of input data, and classification of genomic mutations often suffers from a shortage of training examples. This issue can be mitigated by data augmentation through image generation, which is discussed later in this review.

Deep learning image-to-image task (1): segmentation using U-net for treatment response monitoring

As mentioned above, supervised learning for classification requires a large amount of data. Fortunately, image-to-image translation and image segmentation (image to binary mask) tasks are less data-hungry and are thus well suited to neuro-oncologic imaging. The first use case is treatment response monitoring with deep learning-based tumor segmentation.
Deep learning-based segmentation can be used in both routine clinical practice and research, with the benefit that computers do not tire and can provide fast, reproducible segmentations. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) [20] is one effort to accelerate technical development: it is a public dataset of MR scans of low- and high-grade gliomas for a tumor segmentation challenge, more than 20 different tumor segmentation algorithms have been optimized on it, and the reference dataset is publicly available.
Recently, the importance and meaningful clinical use of deep learning-based brain tumor segmentation were demonstrated through automated, quantitative assessment of treatment response [21]. The Response Assessment in Neuro-Oncology (RANO) criteria are the standard method for assessing the treatment response of brain tumors and are based on manual measurement of the two-dimensional diameters of contrast-enhancing lesions of glioblastoma [22]. A recent study demonstrated that automated deep learning-based volumetric assessment provides highly accurate segmentation of contrast-enhancing tumor and non-enhancing T2/fluid-attenuated inversion recovery signal abnormalities, with independent multicenter validation, enabling quantitative volumetric tumor response assessment [21]. The study demonstrated higher agreement for the quantitative, volumetrically defined time to progression (TTP) than for the RANO assessment by a margin of 36%, and the automated volumetrically defined TTP was a better surrogate endpoint for overall survival than RANO [21]. These findings provide evidence that deep learning-based volumetric assessment of tumor response is both feasible and clinically important for providing high-quality imaging endpoints in neuro-oncology.
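As a toy illustration of how volumetric response assessment follows from a segmentation mask, the sketch below computes enhancing tumor volume from binary masks and applies hypothetical volumetric thresholds. The 40%/65% cutoffs are assumptions loosely analogous to bidimensional RANO criteria, not the thresholds used in the cited study.

```python
import numpy as np

def enhancing_volume_ml(mask, voxel_volume_mm3=1.0):
    # Tumor volume in mL from a binary segmentation mask (e.g., a U-Net output).
    return mask.sum() * voxel_volume_mm3 / 1000.0

def volumetric_response(baseline_ml, followup_ml,
                        progress_thresh=0.40, response_thresh=-0.65):
    # Hypothetical volumetric thresholds: a >=40% volume increase (roughly the
    # volumetric analogue of the 25% bidimensional criterion) counts as
    # progression, a >=65% decrease as partial response, otherwise stable disease.
    change = (followup_ml - baseline_ml) / baseline_ml
    if change >= progress_thresh:
        return "progressive disease"
    if change <= response_thresh:
        return "partial response"
    return "stable disease"

baseline = np.zeros((64, 64, 64), dtype=np.uint8)
baseline[20:40, 20:40, 20:40] = 1   # 20^3 voxels of enhancing tumor
followup = np.zeros_like(baseline)
followup[18:42, 18:42, 18:42] = 1   # 24^3 voxels: clear volume increase

b = enhancing_volume_ml(baseline)
f = enhancing_volume_ml(followup)
print(b, f, volumetric_response(b, f))  # 8.0 13.824 progressive disease
```

In practice, the masks would come from the deep learning segmentation model and the voxel volume from the image header, and TTP would be derived from such per-timepoint labels across the longitudinal series.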

Deep learning image-to-image task (2): image detection for brain metastasis detection

Another use case for segmentation using deep learning is the detection and segmentation of brain metastases. The detection of brain metastases is important but creates a considerable workload for many radiologists, especially given the rise in cancer incidence, survival rates, and use of thin-section contrast-enhanced MRI [23,24].
Several deep learning methods using CNNs have been proposed [23,25,26] that improve radiologists’ performance for detecting metastases <100 mm³ in size from 89.83% to 100% [26]. Currently, an important limitation of deep learning methods is the trade-off between false-positive rate and sensitivity: for example, vascular structures are detected when the threshold is low (high sensitivity and high false-positive rate), whereas small metastases <3 mm in size are missed when the threshold is high (low sensitivity). Recently, a consensus-recommended MRI protocol for metastasis was proposed [24], and subsequent studies [27,28] using black-blood imaging demonstrated high sensitivity with a low false-positive rate. Representative cases are shown in Fig. 3. Automated detection of brain metastases will ultimately reduce radiologists’ workload by triaging cases and will improve detection accuracy by serving as an assistant (first) reader before the radiologist.
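The threshold trade-off described above can be made concrete with a toy sketch: candidate lesion scores for true metastases and for vessel-like mimics are drawn from overlapping hypothetical distributions, and sensitivity and false-positive counts are tabulated at several operating thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical detector scores: true metastases score higher on average than
# vessel-like mimics, but the two distributions overlap.
met_scores = rng.normal(0.8, 0.15, 200)      # 200 true metastases
vessel_scores = rng.normal(0.4, 0.15, 1000)  # 1,000 vessel-like candidates

results = {}
for thr in (0.3, 0.5, 0.7):
    sensitivity = (met_scores >= thr).mean()
    false_positives = int((vessel_scores >= thr).sum())
    results[thr] = (sensitivity, false_positives)
    print(f"threshold={thr}: sensitivity={sensitivity:.2f}, "
          f"false positives={false_positives}")
```

Lowering the threshold raises sensitivity and the false-positive count together; techniques such as black-blood imaging effectively reduce the overlap between the two score distributions, which is why they achieve high sensitivity at a low false-positive rate.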

Deep learning image-to-image task (3): image translation for better image quality

The role of deep learning is not limited to the image-based tasks of detection, segmentation, and classification previously performed by humans. The acquisition and pre-processing of MRI images can also be empowered by deep learning. One example is deep learning-based reconstruction (DLR), which embeds a deep CNN-based algorithm into the MRI reconstruction pipeline on the scanner [29]. The algorithm takes raw k-space data as the input and generates high-fidelity images as the output. Compared with conventional image reconstruction, the deep learning algorithm provides higher spatial resolution with more sharply defined edges [29,30]. In brain tumor imaging, this enables the acquisition of high-resolution, thin-sliced images with less noise; this is particularly valuable for small tumors near small anatomic structures, such as pituitary adenomas, where the normal pituitary stalk and gland need to be delineated from tumor tissue. Representative cases obtained using DLR are shown in Fig. 4. A recent study by Kim et al. [31] demonstrated that 1 mm DLR MRI achieved higher diagnostic performance than 3 mm MRI (p=0.01 for reader 1, p=0.02 for reader 2) for identifying cavernous sinus invasion by a residual tumor. Lee et al. [32] demonstrated that 1 mm DLR MRI provides thin-slice images that increase the sensitivity for detecting pituitary microadenomas and small recurrent/residual tumors after initial surgery. The readers preferred 1 mm DLR MRI over 3 mm routine MRI for delineating the healthy pituitary stalk and gland, and inexperienced readers preferred 1 mm DLR MRI more than experienced readers did. Thus, thin-sliced DLR MRI has greater value than routine thick-sliced MRI: it has higher sensitivity for detecting pituitary adenoma and allows better delineation of the normal pituitary gland in pre- and postoperative adenoma, facilitating accurate guidance during surgery.
Furthermore, because the DLR algorithm is built into the MRI machine and the image processing time is relatively short, this technique offers significant potential for future studies in various clinical use cases.
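As a conceptual sketch of where DLR sits in the pipeline, the code below performs a conventional inverse-FFT reconstruction from noisy k-space and substitutes a naive mean filter at the point where a DLR pipeline would apply its trained CNN. The phantom, noise level, and filter are illustrative assumptions, not the vendor algorithm.

```python
import numpy as np

def reconstruct(kspace, denoise=None):
    # Conventional reconstruction: inverse FFT of raw k-space data.
    image = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
    # A DLR pipeline would replace/augment this step with a trained CNN mapping
    # noisy raw data to a high-fidelity image; here a simple smoothing filter
    # stands in for that learned component.
    if denoise is not None:
        image = denoise(image)
    return image

def box_denoise(img, k=3):
    # Naive stand-in for a learned denoiser: local mean over a k-by-k window.
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(2)
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0                       # simple square "lesion"
kspace = np.fft.fftshift(np.fft.fft2(phantom)) + rng.normal(0, 2.0, (32, 32))
noisy = reconstruct(kspace)
denoised = reconstruct(kspace, denoise=box_denoise)
```

The background of the denoised image fluctuates less than that of the plain inverse-FFT image, which is the qualitative effect (less noise at a given resolution) that makes thin-slice DLR acquisitions practical.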

FUTURE USE CASE: IMAGE GENERATION

The most commonly used AI method (for both machine and deep learning) in neuro-oncologic imaging is supervised learning; the main purpose is classification, whereby reference standards comprise different classes of diagnoses (e.g., IDH-mutant vs. IDH-wild type), prognoses (e.g., long vs. short survival), or treatment responses (responder vs. non-responder). When provided with sufficient examples of different classes, algorithms “learn” how to classify novel data [33]. However, in neuro-oncology, data are often insufficient because of disease rarity, limited data exchange between centers, and a lack of standardization between various MRI protocols, which ultimately hinders optimal learning. Thus, data augmentation is a key element of deep learning models designed to deal with unbalanced classes and improve the accuracy of predictions [34].
The generative adversarial network (GAN) enables the generation of new images from unlabeled original images [35] and is an attractive solution to overcome the limitation of small datasets [34,36]. A GAN learns the data distribution from training samples and generates realistic imaging data that are similar in distribution to, but different from, the original data [37-39]. In a clinical study, Park et al. [40] tested whether the morphological characteristics of GAN-produced images reflected actual tumors. The study showed that the morphological variations of GAN-based synthetic images of IDH-mutant glioblastomas were similar to those of actual images, including tumor location, absence of necrosis, enhancement category, and the margin and type of tissue surrounding the non-enhanced regions. Moreover, these morphological variations were predictive of IDH mutations in both real and synthetic datasets. The study suggested that GAN-based synthetic datasets are a useful training set, and a diagnostic model was created from the morphological characteristics of actual and synthetic data.
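The adversarial training scheme can be sketched in its simplest possible form: a two-parameter generator and a logistic discriminator trained on one-dimensional Gaussian “data.” Real GANs for MRI synthesis use deep convolutional networks; everything here (the distributions, learning rate, and network forms) is a toy assumption chosen only to show the alternating update structure.

```python
import numpy as np

rng = np.random.default_rng(3)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must imitate (a stand-in for real image features).
real_sampler = lambda n: rng.normal(3.0, 1.0, n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c):
# the simplest possible instantiation of the adversarial setup.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, batch = 0.05, 128
b_hist = []

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    fake, real = a * z + b, real_sampler(batch)

    # Discriminator ascent: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sig(w * real + c), sig(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator descent on the non-saturating loss -log D(fake).
    d_fake = sig(w * fake + c)
    grad = -(1 - d_fake) * w          # dLoss/dG(z), per sample
    a -= lr * np.mean(grad * z)
    b -= lr * np.mean(grad)
    b_hist.append(b)

print(f"generated mean over last 500 steps: {np.mean(b_hist[-500:]):.2f} (target 3.0)")
```

Driven only by the discriminator's feedback, the generator's output distribution drifts toward the real data distribution; in the imaging setting the same mechanism produces synthetic tumors whose morphology tracks the training cohort.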
Another utility of GANs is the image-to-image translation task, which is useful for generating synthetic data to fill in absent or insufficient data in a multicenter trial. Jayachandran Preetha et al. [41] explored the synthesis of post-contrast MRI sequences from pre-contrast MRI sequences alone, filling in absent imaging data without the use of a gadolinium-based contrast agent during MRI. The study incorporated MRI data from three phase 2 and 3 clinical trials with >2,000 patients and also employed deep learning-based segmentation. It demonstrated that quantitative volumetrically defined TTP is possible with synthetic post-contrast MRI images, with no significant difference (0.1 months) between synthetic and true post-contrast MRI sequences based on automatic volumetry. Thus, synthetic and true post-contrast MRI data were demonstrated to be equivalent prognostic surrogates for predicting overall survival.
Image generation using GANs will eventually become a data input itself and expand the use of deep learning algorithms. In supervised learning tasks, GANs will be used to augment datasets and improve classification performance. In image-to-image translation tasks, GANs will become the initial learning step before segmentation and detection, improving image quality or filling in datasets and thereby improving the performance of the image-to-image task. For example, by generating rare cases of genetic mutation, the classification algorithm for IDH mutation status can be improved. The abundance of input images will improve the performance of tumor detection and segmentation. Also, GAN-based image translation will fill in missing sequences of multiparametric imaging, which may reduce contrast agent use in CT or MRI and/or reduce the radiation dose.
This brief review summarized use cases of AI in neuro-oncology and recent improvements in techniques and study concepts. An important element for researchers conducting AI tasks [42] is robust validation of the clinical performance of AI algorithms through the definition of clinical use cases and prospective cohort studies for real-world validation. The field of neuro-oncology is limited by the amount of training data and by the design of validation cohorts; nevertheless, recent advances have enabled approaches that confront these challenges. In the future, researchers should be encouraged to improve study designs by prospectively registering studies and clinical trials.

Notes

Ethics Statement: Not applicable

Conflicts of Interest: The author has no potential conflicts of interest to disclose.

Funding Statement: This research was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (grant number: NRF-2020R1A2B5B01001707) and by the Ministry of Health and Welfare, South Korea (HI21C1161).

Availability of Data and Material

All data generated or analyzed during the study are included in this published article.

References

1. Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, et al. Deep learning in medical imaging: general overview. Korean J Radiol. 2017; 18:570–584. PMID: 28670152.
2. Rudie JD, Rauschecker AM, Bryan RN, Davatzikos C, Mohan S. Emerging applications of artificial intelligence in neuro-oncology. Radiology. 2019; 290:607–618. PMID: 30667332.
3. Gevaert O, Mitchell LA, Achrol AS, Xu J, Echegaray S, Steinberg GK, et al. Glioblastoma multiforme: exploratory radiogenomic analysis by using quantitative image features. Radiology. 2014; 273:168–174. PMID: 24827998.
4. Eichinger P, Alberts E, Delbridge C, Trebeschi S, Valentinitsch A, Bette S, et al. Diffusion tensor image features predict IDH genotype in newly diagnosed WHO grade II/III gliomas. Sci Rep. 2017; 7:13396. PMID: 29042619.
5. Zhou H, Vallières M, Bai HX, Su C, Tang H, Oldridge D, et al. MRI features predict survival and molecular markers in diffuse lower-grade gliomas. Neuro Oncol. 2017; 19:862–870. PMID: 28339588.
6. Chang K, Bai HX, Zhou H, Su C, Bi WL, Agbodza E, et al. Residual convolutional neural network for the determination of IDH status in low-and high-grade gliomas from MR imaging. Clin Cancer Res. 2018; 24:1073–1081. PMID: 29167275.
7. Chang P, Grinband J, Weinberg BD, Bardis M, Khy M, Cadena G, et al. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. AJNR Am J Neuroradiol. 2018; 39:1201–1207. PMID: 29748206.
8. Han L, Kamdar MR. MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks. Pac Symp Biocomput. 2018; 23:331–342. PMID: 29218894.
9. Liang S, Zhang R, Liang D, Song T, Ai T, Xia C, et al. Multimodal 3D DenseNet for IDH genotype prediction in gliomas. Genes (Basel). 2018; 9:382.
10. Louis DN, Perry A, Reifenberger G, von Deimling A, Figarella-Branger D, Cavenee WK, et al. The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta Neuropathol. 2016; 131:803–820. PMID: 27157931.
11. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data. Radiology. 2016; 278:563–577. PMID: 26579733.
12. Kumar V, Gu Y, Basu S, Berglund A, Eschrich SA, Schabath MB, et al. Radiomics: the process and the challenges. Magn Reson Imaging. 2012; 30:1234–1248. PMID: 22898692.
13. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017; 14:749–762. PMID: 28975929.
14. Kickingereder P, Bonekamp D, Nowosielski M, Kratz A, Sill M, Burth S, et al. Radiogenomics of glioblastoma: machine learning–based classification of molecular characteristics by using multiparametric and multiregional MR imaging features. Radiology. 2016; 281:907–918. PMID: 27636026.
15. Hu LS, Ning S, Eschbacher JM, Baxter LC, Gaw N, Ranjbar S, et al. Radiogenomics to characterize regional genetic heterogeneity in glioblastoma. Neuro Oncol. 2017; 19:128–137. PMID: 27502248.
16. Park JE, Kim HS, Park SY, Nam SJ, Chun SM, Jo Y, et al. Prediction of core signaling pathway by using diffusion-and perfusion-based MRI radiomics and next-generation sequencing in isocitrate dehydrogenase wild-type glioblastoma. Radiology. 2020; 294:388–397. PMID: 31845844.
17. Kniep HC, Madesta F, Schneider T, Hanning U, Schönfeld MH, Schön G, et al. Radiomics of brain MRI: utility in prediction of metastatic tumor type. Radiology. 2018; 290:479–487. PMID: 30526358.
18. Wang G, Wang B, Wang Z, Li W, Xiu J, Liu Z, et al. Radiomics signature of brain metastasis: prediction of EGFR mutation status. Eur Radiol. 2021; 31:4538–4547. PMID: 33439315.
19. Park CJ, Park YW, Ahn SS, Kim D, Kim EH, Kang SG, et al. Quality of radiomics research on brain metastasis: a roadmap to promote clinical translation. Korean J Radiol. 2022; 23:77–88. PMID: 34983096.
20. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging. 2015; 34:1993–2024. PMID: 25494501.
21. Kickingereder P, Isensee F, Tursunova I, Petersen J, Neuberger U, Bonekamp D, et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study. Lancet Oncol. 2019; 20:728–740. PMID: 30952559.
22. Wen PY, Macdonald DR, Reardon DA, Cloughesy TF, Sorensen AG, Galanis E, et al. Updated response assessment criteria for high-grade gliomas: response assessment in neuro-oncology working group. J Clin Oncol. 2010; 28:1963–1972. PMID: 20231676.
23. Charron O, Lallement A, Jarnet D, Noblet V, Clavier JB, Meyer P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med. 2018; 95:43–54. PMID: 29455079.
24. Kaufmann TJ, Smits M, Boxerman J, Huang R, Barboriak DP, Weller M, et al. Consensus recommendations for a standardized brain tumor imaging protocol for clinical trials in brain metastases. Neuro Oncol. 2020; 22:757–772. PMID: 32048719.
25. Liu Y, Stojadinovic S, Hrycushko B, Wardak Z, Lau S, Lu W, et al. A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PLoS One. 2017; 12:e0185844. PMID: 28985229.
26. Zhou Z, Sanders JW, Johnson JM, Gule-Monroe MK, Chen MM, Briere TM, et al. Computer-aided detection of brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors. Radiology. 2020; 295:407–415. PMID: 32181729.
27. Jun Y, Eo T, Kim T, Shin H, Hwang D, Bae SH, et al. Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors. Sci Rep. 2018; 8:9450. PMID: 29930257.
28. Park YW, Jun Y, Lee Y, Han K, An C, Ahn SS, et al. Robust performance of deep learning for automatic detection and segmentation of brain metastases using three-dimensional black-blood and three-dimensional gradient echo imaging. Eur Radiol. 2021; 31:6686–6695. PMID: 33738598.
29. Lebel RM. Performance characterization of a novel deep learning-based MR image reconstruction pipeline. arXiv [Preprint]. 2020; cited 2022 Jan 28. DOI: 10.48550/arXiv.2008.06559.
30. Peters RD, Harris H, Lawson S. The clinical benefits of AIR™ Recon DL for MR image reconstruction [Internet]. Chicago, IL: GE Healthcare;2020. Accessed Jan 28, 2022. https://www.gehealthcare.com/-/jssmedia/c943df5927a049bb9ac95a9f0349ad8c .
31. Kim M, Kim HS, Kim HJ, Park JE, Park SY, Kim YH, et al. Thin-slice pituitary MRI with deep learning-based reconstruction: diagnostic performance in a postoperative setting. Radiology. 2020; 298:114–122. PMID: 33141001.
32. Lee DH, Park JE, Nam YK, Lee J, Kim S, Kim YH, et al. Deep learning-based thin-section MRI reconstruction improves tumour detection and delineation in pre- and post-treatment pituitary adenoma. Sci Rep. 2021; 11:21302. PMID: 34716372.
33. Marcus G. Deep learning: a critical appraisal. arXiv [Preprint]. 2018; cited 2022 Jan 28. DOI: 10.48550/arXiv.1801.00631.
34. Moreno-Barea FJ, Jerez JM, Franco L. Improving classification accuracy using data augmentation on small data sets. Expert Syst Appl. 2020; 161:113696.
35. Engstrom L, Tran B, Tsipras D, Schmidt L, Madry A. A rotation and a translation suffice: fooling CNNs with simple transformations. In: Proceedings of the 2019 International Conference on Learning Representations; 2019 May 6-9; New Orleans, LA. OpenReview.net; Accessed Jan 28, 2022. Available at: https://openreview.net/forum?id=BJfvknCqFQ.
36. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. 2019; 6:60.
37. Dar SU, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans Med Imaging. 2019; 38:2375–2388. PMID: 30835216.
38. Yurt M, Dar SU, Erdem A, Erdem E, Oguz KK, Çukur T. mustGAN: multi-stream generative adversarial networks for MR image synthesis. Med Image Anal. 2021; 70:101944. PMID: 33690024.
39. Dar SUH, Yurt M, Shahdloo M, Ildız ME, Tınaz B, Çukur T. Prior-guided image reconstruction for accelerated multi-contrast MRI via generative adversarial networks. IEEE J Sel Top Signal Process. 2020; 14:1072–1087.
40. Park JE, Eun D, Kim HS, Lee DH, Jang RW, Kim N. Generative adversarial network for glioblastoma ensures morphologic variations and improves diagnostic model for isocitrate dehydrogenase mutant type. Sci Rep. 2021; 11:9912. PMID: 33972663.
41. Jayachandran Preetha C, Meredig H, Brugnara G, Mahmutoglu MA, Foltyn M, Isensee F, et al. Deep-learning-based synthesis of post-contrast T1-weighted MRI for tumour response assessment in neuro-oncology: a multicentre, retrospective cohort study. Lancet Digit Health. 2021; 3:e784–e794. PMID: 34688602.
42. Kim DW, Jang HY, Kim KW, Shin Y, Park SH. Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J Radiol. 2019; 20:405–410. PMID: 30799571.
Fig. 1

The hierarchy of artificial intelligence, machine learning, and deep learning.

Fig. 2

Diagram demonstrating artificial intelligence (AI), machine learning (ML), and deep learning in the clinical workflow of neuro-oncology patients. Following image acquisition, deep learning-based reconstruction can be applied to reduce noise and improve image quality. Then, AI-assisted image-based tasks are performed, which include deep learning-based detection and segmentation. After segmentation, the quantitative analysis of radiomics can be applied, and further analyses are performed using ML. AI-assisted image-based tasks help to provide quantitative and standardized reporting. Importantly, deep learning-based image generation can be applied during the data input stage and may improve prediction performance during every process of AI in neuro-oncologic imaging.

Fig. 3

Representative cases of deep learning-based detection of brain metastasis on black-blood and white-blood imaging, respectively. The red dots are AI predictions of brain metastases. The enhancing lesion is a vascular structure (yellow arrow, pseudo-lesion) that leads to a false-positive artificial intelligence (AI) prediction.

Fig. 4

Representative cases of deep learning reconstruction (DLR) for pituitary adenoma. The DLR image provides better image quality with improved lesion conspicuity. Note that the residual mass in the left cavernous sinus (yellow arrow) can be clearly visualized on the 1 mm DLR image.
