
Bang: Deep Learning in Upper Gastrointestinal Disorders: Status and Future Perspectives

Abstract

Artificial intelligence using deep learning has been applied to gastrointestinal disorders for the detection, classification, and delineation of various lesion images. With the accumulation of enormous volumes of medical records, the growth of computational power driven by graphics processing units, and the widespread use of open-source libraries for large-scale machine learning, medical artificial intelligence is overcoming its traditional limitations. This paper explains the basic concepts of establishing a deep learning model and summarizes previous studies on upper gastrointestinal disorders. The limitations of and perspectives on future development are also discussed.

References

1. Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019; 25:1666–1683.
2. Ebigbo A, Palm C, Probst A, et al. A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology. Endosc Int Open. 2019; 7:E1616–E1623.
3. Imler TD, Morea J, Kahi C, et al. Multi-center colonoscopy quality measurement utilizing natural language processing. Am J Gastroenterol. 2015; 110:543–552.
4. de Groof AJ, Struyvenberg MR, van der Putten J, et al. Deep-learning system detects neoplasia in patients with Barrett's esophagus with higher accuracy than endoscopists in a multistep training and validation study with benchmarking. Gastroenterology. 2019 Nov 21. [Epub ahead of print].
5. de Groof J, van der Sommen F, van der Putten J, et al. The Argos project: the development of a computer-aided detection system to improve detection of Barrett's neoplasia on white light endoscopy. United European Gastroenterol J. 2019; 7:538–547.
6. Ebigbo A, Mendel R, Probst A, et al. Real-time use of artificial intelligence in the evaluation of cancer in Barrett's oesophagus. Gut. 2019 Sep 20. [Epub ahead of print].
7. Sehgal V, Rosenfeld A, Graham DG, et al. Machine learning creates a simple endoscopic classification system that improves dysplasia detection in Barrett's oesophagus amongst non-expert endoscopists. Gastroenterol Res Pract. 2018; 2018:1872437.
8. van der Sommen F, Zinger S, Curvers WL, et al. Computer-aided detection of early neoplastic lesions in Barrett's esophagus. Endoscopy. 2016; 48:617–624.
9. Swager AF, van der Sommen F, Klomp SR, et al. Computer-aided detection of early Barrett's neoplasia using volumetric laser endomicroscopy. Gastrointest Endosc. 2017; 86:839–846.
10. Struyvenberg MR, van der Sommen F, Swager AF, et al. Improved Barrett's neoplasia detection using computer-assisted multiframe analysis of volumetric laser endomicroscopy. Dis Esophagus. 2019 Jul 31. [Epub ahead of print].
11. Hong J, Park BY, Park H. Convolutional neural network classifier for distinguishing Barrett's esophagus and neoplasia endomicroscopy images. Conf Proc IEEE Eng Med Biol Soc. 2017; 2017:2892–2895.
12. Guo L, Xiao X, Wu C, et al. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest Endosc. 2020; 91:41–51.
13. Cai SL, Li B, Tan WM, et al. Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video). Gastrointest Endosc. 2019; 90:745–753.e2.
14. Nakagawa K, Ishihara R, Aoyama K, et al. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists. Gastrointest Endosc. 2019; 90:407–414.
15. Tokai Y, Yoshio T, Aoyama K, et al. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus. 2020 Jan 24. [Epub ahead of print].
16. Ghatwary N, Zolgharni M, Ye X. Early esophageal adenocarcinoma detection using deep learning methods. Int J Comput Assist Radiol Surg. 2019; 14:611–621.
17. Horie Y, Yoshio T, Aoyama K, et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest Endosc. 2019; 89:25–32.
18. Zheng W, Zhang X, Kim JJ, et al. High accuracy of convolutional neural network for evaluation of Helicobacter pylori infection based on endoscopic images: preliminary experience. Clin Transl Gastroenterol. 2019; 10:e00109.
19. Shichijo S, Endo Y, Aoyama K, et al. Application of convolutional neural networks for evaluating Helicobacter pylori infection status on the basis of endoscopic images. Scand J Gastroenterol. 2019; 54:158–163.
20. Nakashima H, Kawahira H, Kawachi H, Sakaki N. Artificial intelligence diagnosis of Helicobacter pylori infection using blue laser imaging-bright and linked color imaging: a single-center prospective study. Ann Gastroenterol. 2018; 31:462–468.
21. Itoh T, Kawahira H, Nakashima H, Yata N. Deep learning analyzes Helicobacter pylori infection by upper gastrointestinal endoscopy images. Endosc Int Open. 2018; 6:E139–E144.
22. Shichijo S, Nomura S, Aoyama K, et al. Application of convolutional neural networks in the diagnosis of Helicobacter pylori infection based on endoscopic images. EBioMedicine. 2017; 25:106–111.
23. Huang CR, Sheu BS, Chung PC, Yang HB. Computerized diagnosis of Helicobacter pylori infection and associated gastric inflammation from endoscopic images by refined feature selection using a neural network. Endoscopy. 2004; 36:601–608.
24. Cho BJ, Bang CS, Park SW, et al. Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network. Endoscopy. 2019; 51:1121–1129.
25. Yoon HJ, Kim S, Kim JH, et al. A lesion-based convolutional neural network improves endoscopic detection and depth prediction of early gastric cancer. J Clin Med. 2019; 8:E1310.
26. Zhu Y, Wang QC, Xu MD, et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019; 89:806–815.e1.
27. Kubota K, Kuroda J, Yoshida M, Ohta K, Kitajima M. Medical image analysis: computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg Endosc. 2012; 26:1485–1489.
28. Hirasawa T, Aoyama K, Tanimoto T, et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018; 21:653–660.
29. Kanesaka T, Lee TC, Uedo N, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018; 87:1339–1344.
30. Lee JH, Kim YJ, Kim YW, et al. Spotting malignancies from gastric endoscopic images using deep learning. Surg Endosc. 2019; 33:3790–3797.
31. Wu L, Zhang J, Zhou W, et al. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut. 2019; 68:2161–2169.
32. Chen D, Wu L, Li Y, et al. Comparing blind spots of unsedated ultrafine, sedated, and unsedated conventional gastroscopy with and without artificial intelligence: a prospective, single-blind, 3-parallel-group, randomized, single-center trial. Gastrointest Endosc. 2020; 91:332–339.e3.
33. Shung DL, Au B, Taylor RA, et al. Validation of a machine learning model that outperforms clinical risk scoring systems for upper gastrointestinal bleeding. Gastroenterology. 2020; 158:160–167.
34. Rotondano G, Cipolletta L, Grossi E, et al. Artificial neural networks accurately predict mortality in patients with nonvariceal upper GI bleeding. Gastrointest Endosc. 2011; 73:218–226.e2.
35. Das A, Ben-Menachem T, Farooq FT, et al. Artificial neural network as a predictive instrument in patients with acute nonvariceal upper gastrointestinal hemorrhage. Gastroenterology. 2008; 134:65–74.
36. Grossi E, Marmo R, Intraligi M, Buscema M. Artificial neural networks for early prediction of mortality in patients with non variceal upper GI bleeding (UGIB). Biomed Inform Insights. 2008; 1:7–19.
37. Sallis BF, Erkert L, Moñino-Romero S, et al. An algorithm for the classification of mRNA patterns in eosinophilic esophagitis: Integration of machine learning. J Allergy Clin Immunol. 2018; 141:1354–1364.e9.
38. Luo H, Xu G, Li C, et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multi-centre, case-control, diagnostic study. Lancet Oncol. 2019; 20:1645–1654.
39. Cho BJ, Bang CS. Artificial intelligence for the determination of a management strategy for diminutive colorectal polyps: hype, hope, or help. Am J Gastroenterol. 2020; 115:70–72.
40. Price WN 2nd, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019; 322:1765–1766.
41. Poursabzi-Sangdeh F, Goldstein DG, Hofman JM, Vaughan JW, Wallach H. Manipulating and measuring model interpretability. arXiv preprint. 2018. arXiv:1802.07810.
42. Schlegl T, Seeböck P, Waldstein SM, Langs G, Schmidt-Erfurth U. f-AnoGAN: fast unsupervised anomaly detection with generative adversarial networks. Med Image Anal. 2019; 54:30–44.
44. Gidon A, Zolnik TA, Fidzinski P, et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science. 2020; 367:83–87.

Fig. 1.
Schematic view of a perceptron.
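
As a minimal illustration of the perceptron shown in Fig. 1 (not taken from the paper; the weights, bias, and inputs below are arbitrary values chosen for demonstration), the sketch computes a weighted sum of the inputs plus a bias and applies a step activation.

```python
# Minimal perceptron sketch for Fig. 1 (illustrative values only):
# the unit fires (outputs 1) when the weighted input sum plus bias exceeds zero.
import numpy as np

def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

x = np.array([0.7, 0.2])    # two input features
w = np.array([0.5, -0.3])   # connection weights
b = -0.1                    # bias term
print(perceptron(x, w, b))  # -> 1, since 0.7*0.5 - 0.2*0.3 - 0.1 = 0.19 > 0
```
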
Fig. 2.
Schematic view of a deep neural network.
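
The schematic in Fig. 2 can be made concrete with a forward pass through two hidden layers and an output layer; the sketch below uses random weights and illustrative layer sizes, not parameters from any cited study.

```python
# Minimal deep-network forward pass for Fig. 2 (random weights, illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0)   # non-linearity applied after each hidden layer

x = rng.random(8)                   # 8 input features
W1 = rng.random((16, 8))            # input -> hidden layer 1
W2 = rng.random((16, 16))           # hidden layer 1 -> hidden layer 2
W3 = rng.random((2, 16))            # hidden layer 2 -> 2 output scores

h1 = relu(W1 @ x)
h2 = relu(W2 @ h1)
output = W3 @ h2
print(output.shape)                 # (2,)
```
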
Fig. 3.
Schematic view of a convolutional neural network.
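
A minimal sketch of the kind of convolutional architecture depicted in Fig. 3, assuming PyTorch, 224x224 RGB endoscopic images, and a two-class output; the layer sizes are illustrative and do not reproduce any model from the studies summarized below.

```python
# Minimal CNN sketch for Fig. 3 (PyTorch assumed; sizes are illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution extracts local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling halves the spatial resolution
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),                  # fully connected layer -> 2 class scores
)

x = torch.randn(1, 3, 224, 224)                  # one dummy 224x224 RGB image
print(model(x).shape)                            # torch.Size([1, 2])
```
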
Fig. 4.
Mechanistic scheme of an artificial neural network.
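
The mechanism summarized in Fig. 4 (forward pass, error computation, and weight update) can be sketched for a single sigmoid neuron trained by gradient descent; the data, learning rate, and iteration count below are toy values for illustration only.

```python
# Minimal sketch of the learning mechanism in Fig. 4: forward pass, loss gradient,
# and gradient-descent weight update for one sigmoid neuron (toy example).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = np.array([0.5, 1.0]), 1.0     # one training example and its target label
w, b, lr = np.zeros(2), 0.0, 0.5     # weights, bias, learning rate

for _ in range(200):
    p = sigmoid(np.dot(w, x) + b)    # forward pass: predicted probability
    grad = p - y                     # gradient of the cross-entropy loss at the output
    w -= lr * grad * x               # update weights against the gradient
    b -= lr * grad                   # update bias against the gradient
print(round(float(p), 2))            # prediction moves toward the target 1.0
```
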
Fig. 5.
Limitation of a single perceptron.
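
The limitation shown in Fig. 5 is that a single perceptron draws only one linear decision boundary, so it cannot reproduce XOR; the brute-force check below (an illustrative sketch, not from the paper) searches a grid of weights and biases and finds no solution.

```python
# Illustration of Fig. 5: no single perceptron (one linear boundary) can compute XOR.
import itertools
import numpy as np

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def solves_xor(w1, w2, b):
    # does this weight/bias choice classify all four XOR cases correctly?
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y) for (x1, x2), y in xor.items())

grid = np.linspace(-2.0, 2.0, 21)
found = any(solves_xor(w1, w2, b) for w1, w2, b in itertools.product(grid, grid, grid))
print(found)  # False: XOR is not linearly separable, hence the need for hidden layers
```
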
Table 1.
Summary of Clinical Studies Using Machine Learning in Barrett's Esophagus
Study | Aim of study | Design of study | Number of subjects | Type of artificial intelligence | Modality | Outcomes
de Groof et al. (2019)4 | Endoscopic classification and segmentation of early neoplasia in patients with Barrett's esophagus | Retrospective | Pretraining: 494,364 images. Refinement training: 1,247 images. Refinement training and internal validation: 297 images. External validation: 160 images | CNN (hybrid ResNet/U-Net) | White-light endoscopy | Accuracy in the test dataset: 89%. Accuracy in the external validation: 88% (vs. 73% for the endoscopists). The algorithm identified the optimal site for biopsy of detected neoplasia in 92–97% of cases.
de Groof et al. (2019)5 | Endoscopic detection and localization of early neoplasia in patients with Barrett's esophagus | Prospective | 40 neoplastic Barrett's lesions and 20 non-dysplastic Barrett's esophagus | SVM | White-light endoscopy | Accuracy: 92%
Struyvenberg et al. (2019)10 | Automatic data extraction followed by computer-aided diagnosis using a multiframe approach for detection of Barrett's esophagus neoplasia | Prospective | 3,060 volumetric laser endomicroscopy frames | Eight predictive models (e.g., SVM, random forest, and naive Bayes) | Ex vivo volumetric laser endomicroscopy | Maximum AUC 0.94, median 0.91
Ebigbo et al. (2019)6 | Endoscopic classification and segmentation of early neoplasia in patients with Barrett's esophagus | Retrospective | Training: 129 images; validation: 36 images of early esophageal adenocarcinoma and 26 of normal Barrett's esophagus | CNN (ResNet with DeepLab V.3+ encoder-decoder neural network) | Capture system of images from the real-time endoscopic camera livestream | Accuracy in the validation dataset: 89.9%
Sehgal et al. (2018)7 | Whether a decision tree algorithm generated by expert endoscopists could improve dysplasia detection by non-expert endoscopists | Retrospective | Videos from 40 patients | Decision tree algorithm | Video recordings of high-definition endoscopy with i-Scan enhancement | Accuracy: 92% (vs. 88% average accuracy of experts for dysplasia prediction)
Hong et al. (2017)11 | Classification of intestinal metaplasia, gastric metaplasia, and neoplasia on endomicroscopy images | Retrospective | Training: 262 endomicroscopy images, testing: 26 images | CNN | Endomicroscopy images | Accuracy: 80.77%
Swager et al. (2017)9 | Identification of early Barrett's esophagus neoplasia on ex vivo volumetric laser endomicroscopy images | Retrospective | 60 volumetric laser endomicroscopy images (training and validation with leave-one-out cross-validation) | Ensemble method (SVM, discriminant analysis, AdaBoost, random forest, etc.) | Ex vivo volumetric laser endomicroscopy | AUC 0.95 (vs. 0.81 for the volumetric laser endomicroscopy experts)
van der Sommen et al. (2016)8 | Discrimination of early neoplastic lesions in Barrett's esophagus | Retrospective | 100 endoscopic images from 44 patients (leave-one-out cross-validation on a per-patient basis) | SVM | White-light endoscopy | Sensitivity: 83%, specificity: 83% (per-image analysis)

CNN, convolutional neural network; SVM, support vector machine; AUC, area under the curve.

Table 2.
Summary of Clinical Studies Using Machine Learning in the Diagnosis of Esophageal Cancers
Study | Aim of study | Design of study | Number of subjects | Type of artificial intelligence | Modality | Outcomes
Guo et al. (2020)12 | Diagnosis of precancerous lesions and early esophageal squamous cell carcinomas | Retrospective | Training: 6,473 narrow-band imaging images; validation with four datasets including images and video clips | CNN (SegNet) | Narrow-band imaging | AUC: 0.989
Tokai et al. (2020)15 | Detection and classification of invasion depth of esophageal squamous cell carcinoma | Retrospective | Training: 1,751 images, testing: 291 images | CNN (GoogLeNet) | White-light endoscopy | Accuracy: 80.9%
Cai et al. (2019)13 | Detection of esophageal squamous cell carcinoma | Retrospective | Training and testing: 2,428 images, validation: 187 images | CNN | White-light endoscopy | Average accuracy of endoscopists increased with the CNN from 81.7% to 91.1%
Nakagawa et al. (2019)14 | Classification of invasion depth of esophageal squamous cell carcinoma | Retrospective | Training: 8,660 non-ME and 5,678 ME images; validation: 405 non-ME and 509 ME images | CNN | Non-ME and ME images of white-light endoscopy | Accuracy: 91% (mucosa/SM1 vs. SM2/SM3)
Ghatwary et al. (2019)16 | Detection of esophageal adenocarcinoma | Retrospective | 100 images from 39 patients | CNNs | High-definition white-light endoscopy | F-measure: 0.94
Horie et al. (2019)17 | Classification and detection of esophageal cancers including squamous cell carcinoma and adenocarcinoma | Retrospective | Training: 8,428 images, testing: 1,118 images | CNN | White-light endoscopy and narrow-band imaging images | Accuracy: 98%

CNN, convolutional neural network; AUC, area under the curve; ME, magnifying endoscopy; SM, submucosa.

Table 3.
Summary of Clinical Studies Using Machine Learning in the Diagnosis of Helicobacter pylori (H. pylori) Infection in Endoscopic Images
Study | Aim of study | Design of study | Number of subjects | Type of artificial intelligence | Modality | Outcomes
Zheng et al. (2019)18 | Diagnosis of H. pylori infection | Retrospective pilot | Training: 11,729 images, testing: 3,755 images | CNN | White-light endoscopy | AUC: 0.93. Accuracy: 84.5% in single-image diagnosis
Shichijo et al. (2019)19 | Diagnosis of H. pylori infection | Retrospective | Training: 98,564 images, testing: 23,699 images | CNN | White-light endoscopy | Accuracy: 80% (465/582) for negative, 84% (147/174) for eradicated, and 48% (44/91) for positive diagnoses. The time needed to diagnose 23,699 images was 261 seconds
Nakashima et al. (2018)20 | Diagnosis of H. pylori infection | Prospective pilot | 222 patients (training: 162, testing: 60) | CNN | White-light endoscopy and image-enhanced endoscopy, such as blue laser imaging and linked color imaging | AUC: 0.96 (blue laser imaging), 0.95 (linked color imaging)
Itoh et al. (2018)21 | Diagnosis of H. pylori infection | Prospective | Training: 149 images (596 images after data augmentation), testing: 30 images | CNN | White-light endoscopy | AUC: 0.956
Shichijo et al. (2017)22 | Diagnosis of H. pylori infection | Retrospective | Training: 32,208 images, testing: 11,481 images | CNN | White-light endoscopy | Accuracy: 83.1%
Huang et al. (2004)23 | Diagnosis of H. pylori infection | Prospective | Training: 30 patients, testing: 74 patients | Refined feature selection with a neural network | White-light endoscopy | Accuracy over 80% in predicting the presence of gastric atrophy, intestinal metaplasia, and the severity of H. pylori-related gastric inflammation

CNN, convolutional neural network; AUC, area under the curve.

Table 4.
Summary of Clinical Studies Using Machine Learning in Gastric Neoplasms
Study | Aim of study | Design of study | Number of subjects | Type of artificial intelligence | Modality | Outcomes
Cho et al. (2019)24 | Diagnosis of gastric neoplasms | Retrospective model establishment and prospective validation | Training and testing: 5,017 images, validation: 200 images | CNN | White-light endoscopy | AUC for classifying gastric cancer: 0.877; gastric neoplasm: 0.927
Yoon et al. (2019)25 | Classification of endoscopic images as early gastric cancer (T1a or T1b) or non-cancer | Retrospective | 11,539 endoscopic images (896 T1a, 809 T1b, and 9,834 non-early gastric cancer) | CNN | White-light endoscopy | AUC for early gastric cancer detection: 0.981; depth prediction: 0.851
Zhu et al. (2019)26 | Diagnosis of depth of invasion in gastric cancer (mucosa/SM1 vs. deeper than SM1) | Retrospective | Training: 790 images, testing: 203 images | CNN | White-light endoscopy | Accuracy: 89.2%, AUC: 0.94
Hirasawa et al. (2018)28 | Detection of gastric cancers | Retrospective | Training: 13,584 images, testing: 2,296 images | CNN | White-light endoscopy, chromoendoscopy, narrow-band imaging | Accurate detection rate for lesions with a diameter of 6 mm or more: 98.6%
Kanesaka et al. (2018)29 | Diagnosis and delineation of early gastric cancer on magnifying narrow-band imaging images | Retrospective | Training: 126 images, testing: 81 images | SVM | Magnifying narrow-band imaging | Accuracy: 96.3%
Kubota et al. (2012)27 | Diagnosis of depth of invasion in gastric cancer | Retrospective | 902 images | ANN | White-light endoscopy | Accuracy: 77.2%, 49.1%, 51.0%, and 55.3% for T1-T4 staging, respectively
Lee et al. (2019)30 | Classification of normal, benign ulcer, and gastric cancer | Retrospective | 200 normal, 367 cancer, and 220 ulcer cases | CNN | White-light endoscopy | Accuracy: normal vs. ulcer and normal vs. cancer: above 90%; ulcer vs. cancer: 77.1%

CNN, convolutional neural network; AUC, area under the curve; SM, submucosa; SVM, support vector machine; ANN, artificial neural network.

Table 5.
Summary of Clinical Studies Using Machine Learning in Upper Gastrointestinal Hemorrhage
Study | Aim of study | Design of study | Number of subjects | Type of artificial intelligence | Outcomes
Shung et al. (2020)33 | Develop a model to calculate the risk of hospital-based intervention or death in patients with upper gastrointestinal hemorrhage | Prospective | Training and internal validation: 1,958 patients, external validation: 399 patients | Gradient boosting algorithm | AUC: 0.91 (internal validation), 0.90 (external validation)
Rotondano et al. (2011)34 | Develop a model to predict any death occurring within 30 days of the index bleeding episode | Prospective | Training and testing: 2,380 patients | ANN | Accuracy: 96.8%, AUC: 0.95
Das et al. (2008)35 | Develop a model to predict stigmata of recent hemorrhage and the need for endoscopic therapy | Prospective | Training: 194 patients, testing: 193 patients, external validation: 200 patients | ANN | Accuracy in external validation: 77% (stigmata of recent hemorrhage), 61% (need for endoscopic therapy)
Grossi et al. (2008)36 | Develop a model to predict the risk of death in patients with nonvariceal upper gastrointestinal bleeding | Prospective | Training and testing: 807 patients | ANN | Accuracy: 89%

AUC, area under the curve; ANN, artificial neural network.

ORCID iDs

Chang Seok Bang
https://orcid.org/0000-0003-4908-5431
