Abstract
Purpose
Automated analytical systems have begun to emerge as database systems that enable medical images to be scanned by computers and big data to be constructed. Deep-learning artificial intelligence (AI) architectures have been developed and applied to medical images, making high-precision diagnosis possible.
Materials and Methods
For diagnosis, medical images need to be labeled and standardized. After pre-processing the data and entering them into the deep-learning architecture, the final diagnosis results can be obtained quickly and accurately. To solve the problem of overfitting caused by an insufficient amount of labeled data, data augmentation is performed through rotation and left/right flips to artificially increase the amount of data. Because various deep-learning architectures have been developed and made public over the past few years, diagnosis results can be obtained simply by entering a medical image.
Results
Classification and regression are performed by supervised machine-learning methods, while clustering and generation are performed by unsupervised machine-learning methods. When the convolutional neural network (CNN) method is applied in the deep-learning layers, feature extraction can be used to classify diseases very efficiently and thus to diagnose various diseases.
It is well established that automated analytical systems have begun to emerge as database systems capable of scanning medical images with computers and constructing big data. With the development of back-propagation deep-learning artificial intelligence (AI) architectures, they have begun to be applied to medical imaging, and it has been reported that accurate diagnosis is possible. The most efficient model for image analysis is the convolutional neural network (CNN), a key building block in the configuration of deep networks. Various optimized CNN architectures have been developed, such as LeNet, AlexNet, ZF Net, GoogLeNet, VGGNet, and ResNet. The CNN is a powerful feature extractor whose deep layers can extract features from images.1,2) Deep-learning algorithms, especially convolutional networks, are rapidly emerging as a methodology for analyzing medical images. There has been remarkable growth in medical imaging analysis of the nerves, retina, lung, digital pathology, breast, heart, abdomen, and musculo-skeletal system using the AI methods that make up the deep-learning architecture.
Finding non-invasive, quantitative assessment techniques for the early detection of Alzheimer's disease (AD) is fundamentally important for early treatment. Tumor detection, classification, and quantitative assessment in positron emission tomography (PET) imaging are important for early diagnosis and treatment planning. A number of techniques have been proposed for segmenting medical image data through quantitative assessment. However, some quantitative methods of evaluating medical images are inaccurate and require considerable computation time to analyze large amounts of data. Analytical methods using AI algorithms can improve diagnostic accuracy and save time.
Deep-learning technology, known as the AI method, dramatically improves diagnostic performance by automatically extracting features of complex and precise medical images and comparing their differences.3,4) Image analysis by AI algorithms is superior to traditional image-analysis methods. Automatically classifying skin lesions from images is a challenging task because lesion shapes vary widely, yet skin cancer has been successfully categorized.5) When evaluating diabetic retinal fundus photographs, deep learning using the Inception-v3 architecture enabled diagnosis of diabetic retinopathy with high sensitivity and specificity.6) For the diagnosis of lung cancer, early detection of pulmonary nodules on chest computed tomography (CT) scans was performed using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) databases, and nodules were successfully extracted using a back-propagation network algorithm.7) A CNN-type deep-learning model trained on a large set of mammographic lesions outperforms existing CAD systems.8) An automated detection system has been successfully established by studying the feasibility of a deep-learning approach for detecting cartilage lesions in the knee joint on MR images.9) PET images with low spatial resolution overestimate volumes due to the partial-volume effect, but optimal volumes were extracted using an artificial neural network (ANN) algorithm.10) Through the development of AI algorithms, the diagnostic performance of various medical images has improved, and AI is expected to be continuously introduced into medical image diagnosis systems because it has a higher diagnostic performance index than any other quantitative analysis method so far.
For the diagnosis of medical imaging, there must be labeled and standardized big data, as shown in Fig. 1. AI is generally a concept that includes machine learning and deep learning, and analysis can proceed in various ways depending on the characteristics of the images stored in the database. The data are pre-processed in the order shown in Fig. 2, entered into the deep-learning architecture, and the final diagnosis results are obtained. The hyper-parameters are changed, and the deep-learning result is checked and modified to optimize the parameters. To solve the problem of over-fitting due to a lack of sufficient labeled data, data augmentation is performed through rotation, left/right flips, and generative adversarial networks (GAN).
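As an illustration of the rotation and flip augmentation just described, the following minimal Python sketch uses the Keras ImageDataGenerator; the rotation range, array shapes, and batch size are illustrative assumptions, not values from the text.

```python
# Minimal rotation/flip augmentation sketch with Keras.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,     # random rotations up to +/-15 degrees (assumed)
    horizontal_flip=True,  # random left/right flips
)

# Placeholder data: (n_samples, height, width, channels) and labels.
images = np.random.rand(32, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=32)

# Each call to next() yields a freshly augmented mini-batch.
batch_images, batch_labels = next(datagen.flow(images, labels, batch_size=8))
print(batch_images.shape)  # (8, 128, 128, 1)
```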
Although medical images are stored in PACS systems, they are of little direct use for AI analysis. To use medical images for AI, labeling, standardization, bounding boxes, and segmentation are required. Producing such high-quality images requires manual work, which can be time-consuming and can vary depending on the skill of the person. Standardized, anonymized, well-labeled databases are very important for developing and testing AI algorithms that require big data. There have been attempts to build large-scale database systems through these complex processes and to improve diagnostic performance through competition. These attempts have been quite successful and appear to be most effective in increasing diagnostic accuracy.
In recent years, anonymized medical image databases have been released, and researchers can freely access them and present research results. A database of F-18 FDG PET images from participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI) is being developed to study early detection of AD patients. F-18 FDG PET images from the ADNI database were used to distinguish AD patients with 88.64% specificity, 87.70% sensitivity, and 88.24% accuracy.11) Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing a knee injury, but the analysis is time-consuming and the likelihood of diagnostic error is high. MRNet, a CNN combined with logistic regression, was developed on a dataset of 1,370 knee MRI scans performed at the Stanford University Medical Center (https://stanfordmlgroup.github.io). On validation, its results did not differ significantly from the analyses of 9 clinical experts at the Stanford University Medical Center.12) The leader-board reports the average AUC (0.917) of the current (Jan 09, 2019) abnormality detection, ACL tear, and meniscal tear tasks (https://stanfordmlgroup.github.io/competitions/mrnet).
To screen for severe pneumothorax, a total of 13,292 frontal chest X-rays were visually labeled by a radiologist to create a database.13) Candidate images for analysis were identified and stored in a clinical PACS system in DICOM format. Such well-defined datasets can be used to train and evaluate different network architectures, but building them requires a great deal of time and effort. The National Institutes of Health (NIH) has released the ChestX-ray14 dataset, making it publicly available (https://nihcc.app.box.com/v/ChestXray-NIHCC). The NIH thereby allows students to learn using well-organized, labeled datasets and opens the database for anyone to take up the challenge and develop optimized algorithms. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) databases were created and released for the diagnosis of lung cancer. For the diagnosis of skin cancer, an open database (https://isic-archive.com) allows participation in competition.5) The MR knee database is also available, and anyone interested in this field can participate in AI analysis (https://stanfordmlgroup.github.io/competitions/mrnet). Most problems with large datasets and public challenge datasets are listed at http://www.grand-challenge.org.
To construct a more efficient, accurate, and useful database, the Department of Nuclear Medicine at Dong-A University Medical Center established a database system called SortDB with IRM (http://www.irm.kr/). A key feature of SortDB is software that allows researchers to download desired DICOM files in a specified file format (nii, jpg, gif, etc.) by entering a manageable list in an Excel file. To apply an AI algorithm, big data are needed. SortDB is an automated database-generation program that stores and manages the data held by small and medium-sized hospitals in the required format. A database is constructed with the system configuration shown in Fig. 3, shortening the pre-processing time for creating a standardized dataset.
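As a rough illustration of the kind of batch export such a system automates, the following hypothetical Python sketch reads a study list from an Excel file with pandas, loads each DICOM file with pydicom, rescales it to 8 bits, and saves a JPEG; the file name and column names ("list.xlsx", "dicom_path", "output_name") are invented for the example and do not describe SortDB's actual interface.

```python
# Hypothetical Excel-driven DICOM-to-JPEG batch export sketch.
import numpy as np
import pandas as pd
import pydicom
from PIL import Image

manifest = pd.read_excel("list.xlsx")  # assumed columns: dicom_path, output_name

for _, row in manifest.iterrows():
    ds = pydicom.dcmread(row["dicom_path"])
    pixels = ds.pixel_array.astype("float32")
    # Linear rescale of stored pixel values to the 0-255 range.
    lo, hi = pixels.min(), pixels.max()
    scaled = ((pixels - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)
    Image.fromarray(scaled).save(f"{row['output_name']}.jpg")
```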
For quantitative analysis, images were normalized using a standard affine model with 12 parameters in SPM5 software.14) To perform quantitative evaluation using machine learning, the pre-processing must necessarily include a normalization step for the image data. Since the accuracy of the normalized image depends on the standard template and on the details of the process, there is considerable variation among researchers. Applying an AI algorithm to normalized images generally increases accuracy. However, there is a lack of logical explanation and of studies on why normalized images increase accuracy. Theoretical studies are needed, as well as empirical studies on the optimal pre-processing method for image analysis using AI.
The quality of the image transformation performed during pre-processing has a considerable impact on the accuracy of quantitative analysis. In particular, the accuracy of AI analysis varies considerably depending on the quality of the medical images to which the algorithm is applied. The DICOM image is linearly transformed using the return value of the getOutputData function of a library.15) With a bit depth of 8, SUV values were linearly converted to a 0–255 scale. With this linear transformation, the image representing SUV values is reproduced as a jpg file with some distortion. It is necessary to study image-conversion methods so that medical images suffer only minimal deformation in the conversion process. It is also necessary to maintain the consistency of AI learning using image-aware CNNs by down-sampling commonly used images.13)
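The distortion introduced by this 8-bit quantization can be seen in a small numeric example; the SUV range of 0–20 is an assumed value chosen only for illustration.

```python
# Two distinct SUVs collapse onto one 8-bit grey level.
suv_min, suv_max = 0.0, 20.0            # assumed SUV display range
step = (suv_max - suv_min) / 255        # ~0.078 SUV per grey level

def suv_to_8bit(suv):
    """Linear mapping of an SUV onto the 0-255 scale."""
    return round((suv - suv_min) / (suv_max - suv_min) * 255)

print(suv_to_8bit(2.50), suv_to_8bit(2.53))  # 32 32 -> difference is lost
```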
Generally, PET studies do not yield a large number of images, making it difficult to construct a big-data database. In this case, data augmentation is performed through image inversion, enlargement/reduction, shearing, and rotation to randomly diversify the data for AI analysis. Optimizing the many hyper-parameters is one of the most important difficulties in achieving optimal accuracy. This can be done using the open-source Future Gadget Laboratory (https://github.com/Kaixhin/FGLab) framework.13) The validity of deep learning can be evaluated by calculating the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value (PPV) on the entire validation set, using scikit-learn (https://scikit-learn.org). Receiver operating characteristic (ROC) curves can be drawn using matplotlib (https://matplotlib.org).
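A minimal sketch of this validation step, using scikit-learn for the metrics and matplotlib for the ROC curve; the label and score arrays are placeholders standing in for a real validation set, and the 0.5 decision threshold is an illustrative choice.

```python
# AUC, sensitivity, specificity, PPV, and an ROC curve on placeholder data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                   # ground truth
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.9, 0.6, 0.2])  # model outputs

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
print(f"AUC={auc:.3f} Se={sensitivity:.3f} Sp={specificity:.3f} PPV={ppv:.3f}")

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr)
plt.xlabel("1 - specificity")
plt.ylabel("Sensitivity")
plt.title("ROC curve")
plt.show()
```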
Currently, CNN algorithms are implemented in various ways. The recent trend is to use the Keras (version 2.0.3, https://keras.io/) deep-learning library on TensorFlow (version 1.2.1, Google) to implement convolutional neural network models. Deep-learning architectures for AI analysis include the VGG16/19, Xception, Inception, and ResNet models, which are among the best-performing algorithms.16-19) The optimal algorithm for image classification may be one that can reflect the characteristic features of each medical image. Currently, the location of such features is estimated through the activation map. However, at present, repeated trial and error on each medical image is the only way to find the optimal model. The AlexNet algorithm, which won the ILSVRC competition in 2012,20) was used to classify chest X-ray images into 5 classes, with a successful accuracy of 92.10%.21) AlexNet consists of 5 convolution layers and 3 fully-connected (FC) layers, and the last FC layer uses a softmax activation function. The researchers21) down-sampled the DICOM files, converted them into 256×256-pixel PNG files, and used them as the input images for AlexNet. The mini-batch size was 128, the number of training iterations was 100, the adaptive learning rate was 1×10−3 for the Adam optimizer, and the momentum was 0.5. ReLU activation functions were used before the max-pooling layers, and L2 regularization was set to 1×10−4.
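A hedged Keras sketch of this AlexNet-style setup follows (5 convolution layers, 3 FC layers with a softmax output, ReLU before max-pooling, L2 regularization of 1×10−4, Adam at 1×10−3, 256×256 single-channel input, 5 classes). The filter counts and kernel sizes follow the original AlexNet and are assumptions here, and the reported momentum of 0.5 is omitted because its mapping onto Adam's parameters is not specified in the text.

```python
# AlexNet-style sketch under the hyper-parameters reported above.
from tensorflow.keras import layers, models, optimizers, regularizers

l2 = regularizers.l2(1e-4)  # L2 regularization of 1e-4, as reported

model = models.Sequential([
    # 5 convolution layers; ReLU precedes each max-pooling layer.
    layers.Conv2D(96, 11, strides=4, activation="relu",
                  kernel_regularizer=l2, input_shape=(256, 256, 1)),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu", kernel_regularizer=l2),
    layers.Conv2D(384, 3, padding="same", activation="relu", kernel_regularizer=l2),
    layers.Conv2D(256, 3, padding="same", activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(3, strides=2),
    # 3 fully-connected layers, the last with softmax over 5 classes.
    layers.Flatten(),
    layers.Dense(4096, activation="relu", kernel_regularizer=l2),
    layers.Dense(4096, activation="relu", kernel_regularizer=l2),
    layers.Dense(5, activation="softmax"),
])

model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use the reported mini-batch size of 128:
# model.fit(x_train, y_train, batch_size=128, epochs=100)
```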
One of the most successful methods is PCA, which is based on the eigen-image approach.22) After analyzing the images through PCA, an SVM was used as a classifier to find the boundary between normal subjects and demented patients and to improve accuracy in diagnosing and analyzing early dementia. PCA was applied to SPECT images, greatly improving classification, and the results were presented through 3-D scatter plots.23) Quantitative evaluation and classification is one of the most efficient and logical methods. The accuracy of AD classification by applying PCA-SVM to SPECT and PET images was 96.7% and 89.52%, respectively.24) Using Gaussian mixture models (GMMs), a method was presented for automatically selecting the region of interest (ROI) of a three-dimensional functional brain image containing information on high- and low-activation regions.25) Classification using eigenvector decomposition and SVM, including feature extraction by principal component analysis (PCA)/independent component analysis (ICA), indicates that CAD based on brain-image analysis and disease-pattern discrimination is a very useful direction for early AD diagnosis.11)
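A minimal scikit-learn sketch of such a PCA-SVM pipeline; the random arrays stand in for flattened SPECT/PET images and diagnostic labels, and the component count is an illustrative assumption.

```python
# Eigen-image (PCA) feature extraction followed by SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4096))   # 60 subjects, flattened image volumes
y = rng.integers(0, 2, size=60)   # 0 = normal, 1 = AD (placeholder labels)

# Project onto the leading eigen-images, then classify with a linear SVM.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```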
In brain research using AI, many studies have been conducted on Alzheimer's disease classification, anatomical segmentation of brain regions, and tumor detection. Alzheimer's disease (AD)/mild cognitive impairment (MCI)/HC classification was successfully performed by using a Gaussian restricted Boltzmann machine (RBM) to find feature representations in volume patches of MRI and PET images.26) The 3-D convolutional neural network is superior to other algorithmic classifiers in AD classification.27,28) CNNs have been used to automatically segment magnetic resonance (MR) images of the human brain.29) Segmentation of the striatum was performed using a deep CNN, and the results were compared with those of FreeSurfer.30) In the brain, manual segmentation is time-consuming and subject to individual differences, while automatic segmentation has significant difficulty with complex structures. A 25-deep-layer network called the 'voxelwise residual network' (VoxResNet) was developed and performed automatic segmentation successfully.31) To demonstrate end-to-end nonlinear mapping from MR images to CT images, a 3-D fully convolutional network (FCN) was employed and verified on a real pelvic CT/MRI dataset.32) Using two-volume CNNs for input and output improved performance, and excellent performance was observed by evaluating the input and output forms on MRI and PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.33)
By introducing the multiple-instance learning (MIL) framework, a de-convolutional neural network was constructed to generate heat maps of suspicious regions.34) A unique set of publicly available radiologic chest X-rays and their reports was used to find and report 17 unique patterns by applying CNN algorithms.35) It has been reported that interstitial patterns were found by applying a segmentation-based label-propagation method to an interstitial lung disease dataset,36) and that lung texture patterns were classified using a CNN.37) A method for classifying frontal and lateral chest X-ray images using deep learning and automating metadata annotation has been reported.38) A new method using a three-dimensional (3-D) CNN for false-positive reduction in automatic pulmonary nodule detection on volumetric computed tomography (CT) scans has been proposed. The 3-D CNN can take in more spatial information and extract more representative features through a hierarchical architecture trained with 3-D samples. The proposed algorithm achieved high CPM (Competition Performance Metric) scores, has been extensively tested in the LUNA16 Challenge, and can be applied to 3-D PET images.
Since most mammograms are 2-D and datasets are large, AI analysis using deep-learning methods developed for natural images can be applied successfully. Breast-cancer work comprises the detection and classification of tumor lesions, the detection and classification of micro-calcifications, and risk scoring, which can be analyzed effectively by CNN or RBM methods. For the measurement of breast density, CNNs were used for feature extraction,39) and a modified region-proposal CNN (R-CNN) has been used for localization. It has been reported that U-net was used to segment the breast and fibro-glandular tissue (FGT) in an MRI dataset, with accurate breast-density calculation results.40) A short-term risk-assessment model has been developed that achieved a predictive accuracy of 71.4% by computing a risk score from mammographic X-ray images with a multi-layer perceptron (MLP) classifier as the risk-prediction module.41)
Cardiac AI research fields include left-ventricle segmentation, slice classification, image-quality assessment, automated calcium scoring, coronary centerline tracking, and super-resolution. 2-D and 3-D CNN techniques are mainly used for classification, and deep-learning techniques such as the U-net segmentation algorithm are used for segmentation. A high-resolution 3-D volume has been reconstructed from a 2-D image stack using a novel image super-resolution (SR) approach.42) The CNN model is computationally efficient and its image quality is superior to conventional SR methods, and SR-CNN is also advantageous for image segmentation and motion tracking.43) Using a multi-stream CNN (3 views), it has been reported that coronary artery calcification candidates over 130 HU in the region of interest can be identified on low-dose chest CT with high accuracy by deep learning.44) Coronary calcium in gated cardiac CT angiography (CCTA) was detected using a 3-D CNN and a multi-stream 2-D CNN.45)
Musculo-skeletal images are analyzed by deep-learning algorithms for segmentation and identification of bone, joint, and associated soft-tissue abnormalities. A 3-D CNN architecture has been developed to automatically perform supervised segmentation of vertebral bodies (VBs) from 3-D magnetic resonance (MR) spine images, reaching a Dice similarity coefficient of 93.4%.23) Automatic spine recognition, including identification of spine position and naming of multiple images, requires large amounts of image data and is difficult because of the variety of spine shapes and postures. Using a deep-learning architecture called the Transformed Deep Convolution Network (TDCN), the posture of the spine was automatically corrected during image processing.46) Intensity-based 2-D/3-D registration has the limitations of long computation time and a small capture range; with a CNN regression approach, it is reported that highly accurate real-time 2-D/3-D registration is possible, even over a greatly enlarged capture range.47) Several deep-learning methods have been developed for automatically evaluating skeletal bone age from X-ray images, and their performance has been verified, showing an average discrepancy of about 0.8 years.48)
Diagnosis using AI is performed quickly, and its accuracy is very high. AI diagnosis is becoming an important technology for future diagnostic systems. However, AI diagnosis needs to be supplemented in several respects. AI learning using a deep-learning architecture requires big data, but most medical images are technical and produced by human effort, making it difficult to build big-data systems. It is also time-consuming to create databases of standardized and labeled medical images. Most groups build databases by manually pre-processing all medical images for AI application. Performing data augmentation through rotation, left/right flips, and up/down flips of a medical image has a positive effect on the accuracy of learning. Data augmentation using GAN is being applied in various areas of the medical field.21,49) In liver-lesion classification using CT images, it was reported that accuracy increased by 7.1% when the amount of data was increased using a GAN.21) In chest-pathology classification using X-ray images, accuracy was reported to increase by 21.23%.49) It has been reported that images synthesized using GAN can serve as a data-augmentation method in medical image analysis. However, more research is needed to establish whether synthetic images can be used for AI learning in clinical diagnostics that require rigorous accuracy. Increasing data through GAN is still controversial, but it is a necessary field of research. Medical images stored in PACS systems must be converted into a suitable preliminary form for AI analysis. Research is needed to automatically generate standard images so that medical images can be used directly in deep learning.
AI analysis of medical images requires labeled, standardized, and optimized images. There is a clear difference in the accuracy of the final classification between pre-processed and non-pre-processed medical images. However, since a pre-processed image carries noise different from that of the original image, the effect of the resulting image on accuracy needs to be studied. Deep-learning architectures can take various forms, and excellent new architectures for image analysis are released every year through competition. However, only experience shows which architecture yields the most accurate result for each medical image. It is therefore necessary to apply various deep-learning architectures to each medical image and then share experiences to find the optimal method. Even once a deep-learning model is chosen, it contains many hyper-parameters, and the composition of parameter combinations often depends on user experience. Optimization of hyper-parameters is usually done through time-consuming grid search and random search, but Bayesian optimization and genetic-algorithm optimization can be used efficiently.
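A minimal random-search sketch over two common hyper-parameters; build_and_evaluate is a hypothetical placeholder for training the chosen model and returning its validation accuracy.

```python
# Random search over learning rate and batch size (illustrative only).
import random

random.seed(0)

def build_and_evaluate(learning_rate, batch_size):
    # Placeholder: train the model here and return validation accuracy.
    return random.random()

best_params, best_acc = None, -1.0
for _ in range(20):                        # 20 random trials
    lr = 10 ** random.uniform(-5, -2)      # log-uniform in [1e-5, 1e-2]
    bs = random.choice([32, 64, 128, 256])
    acc = build_and_evaluate(lr, bs)
    if acc > best_acc:
        best_params, best_acc = (lr, bs), acc
print("best (lr, batch_size):", best_params, "val acc:", best_acc)
```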
Medical imaging diagnostics using deep-learning architectures have reached expert levels in the areas of the nerves, retina, lung, digital pathology, breast, heart, abdomen, and musculo-skeletal system. However, when other hospitals apply different medical-imaging protocols, accuracy is significantly reduced and new optimization parameters must be found. Even with modest changes in resolution and noise in medical imaging, AI diagnostics are very fragile. There is also a lack of logical explanation of the process by which a diagnosis is reached. Although there are efforts to maintain consistency through data harmonization, the overall quality of the images may deteriorate. The change in accuracy when using data of reduced resolution needs to be studied. It is necessary to develop algorithms that can recognize and generalize across medical images with different resolutions or noise, but this will take a considerable amount of time. Nevertheless, the diagnosis of medical images by the deep-learning architectures developed to date is almost at the expert level. In addition, much data that has not yet been analyzed can be discovered and studied with AI diagnosis, which can diagnose accurately and quickly and improve the quality of medical care dramatically.
Acknowledgements
This research was supported by the project at Institute of Convergence Bio-Health, Dong-A University funded by Busan Institute of S&T Evaluation and Planning. The authors would like to thank Dr. Adrian Ankiewicz of the Australian National University (Australia) for helpful comments on the manuscript.
References
1. Farooq A, Anwar S, Awais M, Alnowami M. Artificial intelligence based smart diagnosis of Alzheimer's disease and mild cognitive impairment. 2017. p. 1–4.
2. Vieira S, Pinaya WHL, Mechelli A. Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: Methods and applications. Neuroscience and Biobehavioral Reviews. 2017; 74:58–75.
4. Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. 2014. p. 818–833.
5. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017; 542:115.
6. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016; 316:2402–2410.
7. Golan R, Jacob C, Denzinger J. Lung nodule detection in CT images using deep convolutional neural networks. 2016. p. 243–250.
8. Kooi T, Litjens G, Van Ginneken B, et al. Large scale deep learning for computer aided detection of mammographic lesions. Med Image Anal. 2017; 35:303–312.
9. Liu F, Zhou Z, Samsonov A, et al. Deep learning approach for evaluating knee MR images: Achieving high diagnostic performance for cartilage lesion detection. Radiology. 2018; 289:160–169.
10. Sharif MS, Abbod M, Amira A, Zaidi H. Artificial neural network-based system for PET volume segmentation. International Journal of Biomedical Imaging. 2010; 2010:105610.
11. Illán I, Górriz JM, Ramírez J, et al. 18F-FDG PET imaging analysis for computer aided Alzheimer’s diagnosis. Inf Sci. 2011; 181:903–916.
12. Bien N, Rajpurkar P, Ball RL, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet. PLoS Medicine. 2018; 15:e1002699.
13. Taylor AG, Mielke C, Mongan J. Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: A retrospective study. PLoS Medicine. 2018; 15:e1002697.
14. Woods RP, Grafton ST, Holmes CJ, Cherry SR, Mazziotta JC. Automated image registration: I. general methods and intrasubject, intramodality validation. J Comput Assist Tomogr. 1998; 22:139–152.
15. DCMTK V3.6.4. https://support.dcmtk.org/docs/classDicomImage.html#ac1b5118cbae9e797aa55940fcd60258e. 2019.
16. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. ArXiv Preprint arXiv:1409.1556. 2014.
17. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. 2015. p. 1–9.
18. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016. p. 770–778.
19. Chollet F. Xception: Deep learning with depthwise separable convolutions. 2017. p. 1251–1258.
20. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. 2012. p. 1097–1105.
21. Salehinejad H, Valaee S, Dowdell T, Colak E, Barfett J. Generalization of deep neural networks for chest pathology classification in x-rays using generative adversarial networks. 2018. p. 990–994.
23. Korez R, Likar B, Pernuš F, Vrtovec T. Model-based segmentation of vertebral bodies from MR images with 3D CNNs. 2016. p. 433–441.
24. López M, Ramírez J, Górriz JM, et al. Principal component analysis-based techniques and supervised classification schemes for the early detection of alzheimer’s disease. Neurocomputing. 2011; 74:1260–1271.
25. Górriz JM, Lassl A, Ramírez J, Salas-Gonzalez D, Puntonet C, Lang E. Automatic selection of ROIs in functional imaging using gaussian mixture models. Neurosci Lett. 2009; 460:108–111.
26. Suk H, Lee S, Shen D; Alzheimer's Disease Neuroimaging Initiative. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage. 2014; 101:569–582.
27. Payan A, Montana G. Predicting Alzheimer's disease: A neuroimaging study with 3D convolutional neural networks. ArXiv Preprint arXiv:1502.02506. 2015.
28. Hosseini-Asl E, Gimel'farb G, El-Baz A. Alzheimer's disease diagnostics by a deeply supervised adaptable 3D convolutional network. ArXiv Preprint arXiv:1607.00556. 2016.
29. de Brebisson A, Montana G. Deep neural networks for anatomical brain segmentation. 2015. p. 20–28.
30. Choi H, Jin KH. Fast and robust segmentation of the striatum using deep convolutional neural networks. J Neurosci Methods. 2016; 274:146–153.
31. Chen H, Dou Q, Yu L, Qin J, Heng P. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage. 2018; 170:446–455.
32. Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT image from MRI data using 3D fully convolutional networks. Springer; 2016. p. 170–178.
33. Li R, Zhang W, Suk H, et al. Deep learning based imaging data completion for improved brain disease diagnosis. 2014. p. 305–312.
34. Kim H, Hwang S. Deconvolutional feature stacking for weakly-supervised semantic segmentation. ArXiv Preprint arXiv:1602.04984. 2016.
35. Shin H, Roberts K, Lu L, Demner-Fushman D, Yao J, Summers RM. Learning to read chest x-rays: Recurrent neural cascade model for automated image annotation. 2016. p. 2497–2506.
36. Gao M, Xu Z, Lu L, et al. Segmentation label propagation using deep convolutional neural networks and dense conditional random field. 2016. p. 1265–1268.
37. Gao M, Bagci U, Lu L, et al. Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 2018; 6:1–6.
38. Rajkomar A, Lingam S, Taylor AG, Blum M, Mongan J. High-throughput classification of radiographs using deep convolutional neural networks. J Digit Imaging. 2017; 30:95–101.
39. Fonseca P, Mendoza J, Wainer J, et al. Automatic breast density classification using a convolutional neural network architecture search procedure. 2015; 9414:941428.
40. Dalmış MU, Litjens G, Holland K, et al. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med Phys. 2017; 44:533–546.
41. Qiu Y, Wang Y, Yan S, et al. An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology. 2016; 9785:978521.
42. Avendi M, Kheradvar A, Jafarkhani H. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI. Med Image Anal. 2016; 30:108–119.
43. Oktay O, Bai W, Lee M, et al. Multi-input cardiac image super-resolution using convolutional neural networks. 2016. p. 246–254.
44. Lessmann N, Išgum I, Setio AA, et al. Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT. 2016; 9785:978511.
45. Wolterink JM, Leiner T, de Vos BD, van Hamersvelt RW, Viergever MA, Išgum I. Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks. Med Image Anal. 2016; 34:123–136.
46. Cai Y, Landis M, Laidley DT, Kornecki A, Lum A, Li S. Multi-modal vertebrae recognition using transformed deep convolution network. Comput Med Imaging Graph. 2016; 51:11–19.
47. Miao S, Wang ZJ, Liao R. A CNN regression approach for real-time 2D/3D registration. IEEE Trans Med Imaging. 2016; 35:1352–1363.