
Yoon, Hwang, Choi, and Lee: Classification of radiographic lung pattern based on texture analysis and machine learning

Abstract

This study evaluated the feasibility of using texture analysis and machine learning to distinguish radiographic lung patterns. A total of 1200 regions of interest (ROIs) including four specific lung patterns (normal, alveolar, bronchial, and unstructured interstitial) were obtained from 512 thoracic radiographs of 252 dogs and 65 cats. Forty-four texture parameters based on eight methods of texture analysis (first-order statistics, spatial gray-level-dependence matrices, gray-level-difference statistics, gray-level run length image statistics, neighborhood gray-tone difference matrices, fractal dimension texture analysis, Fourier power spectrum, and Laws' texture energy measures) were used to extract textural features from the ROIs. The texture parameters of each lung pattern were compared and used for training and testing of artificial neural networks. Classification performance was evaluated by calculating accuracy and the area under the receiver operating characteristic curve (AUC). Forty texture parameters showed significant differences between the lung patterns. The accuracy of lung pattern classification was 99.1% in the training dataset and 91.9% in the testing dataset. The AUCs were above 0.98 in the training set and above 0.92 in the testing dataset. Texture analysis and machine learning algorithms may potentially facilitate the evaluation of medical images.

INTRODUCTION

Lung disease pathologically alters lung tissue and usually changes the opacity of the lungs in radiographs. Depending on the tissue affected, a characteristic lung pattern is obtained. Accordingly, classification of the observed lung pattern is very important for the differentiation of lung diseases in thoracic radiography.
Texture, or pattern, is the intuitive quality of an image area. Computer-based texture analysis can be used to numerically quantify characteristic features of a texture, such as smoothness, heterogeneity, and coarseness [1]. Texture analyses are categorized into structural, model-based, transformational, and statistical methods. Structural methods, which represent texture using well-defined primitive features, provide a good symbolic description of the image [2]. Sophisticated mathematical models such as fractal or stochastic methods have also been used to analyze texture [3,4]. Fourier and wavelet transform procedures facilitate the analysis of texture in a different space [5,6]. Statistical approaches represent texture based on properties governing the distribution and relationships of intensities [2]. Texture analysis has been utilized in a variety of applications including automated inspection, document processing, and remote sensing [7,8,9,10]. It has also been applied in a series of studies of medical images [11,12,13,14,15].
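One of the simplest statistical descriptors of texture is the set of first-order (histogram-based) statistics of a region's gray levels. The study's code was written in MATLAB; the following is only an illustrative Python sketch, with a function name of our choosing:

```python
import numpy as np

def first_order_stats(roi):
    """Histogram-based features of an ROI's gray levels: mean, median,
    standard deviation, skewness, and (non-excess) kurtosis."""
    x = np.asarray(roi, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma                       # standardized intensities
    return {
        "mean": float(mu),
        "median": float(np.median(x)),
        "std": float(sigma),
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean()),    # 3.0 for a Gaussian texture
    }

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(30, 30))      # synthetic 30x30 "ROI"
feats = first_order_stats(roi)
```

Because these statistics ignore spatial arrangement, they capture overall brightness and contrast but not coarseness; the co-occurrence-based methods described below address that.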
Machine learning, a subfield of computer science and artificial intelligence, involves the study of algorithms that learn from data and make predictions [16]. Unlike statistical modeling, machine learning does not require prior assumptions about the underlying relationships between the variables [17]. Artificial neural networks, inspired by the biological neural networks that constitute the brain, are a set of well-established machine learning algorithms [18,19]. Each connection between neuron-like nodes transmits numbers from one layer to the next, and the network adjusts the connection strengths between the nodes during learning. Based on their capacity for classification and regression, artificial neural networks have been used in a wide range of disciplines including movement control [20], finance [21], pattern recognition [22], and medical diagnosis [23].
Computer-aided detection is an interdisciplinary technology combining machine learning and computer vision with medical imaging [3,24,25,26,27,28]. It facilitates disease diagnosis and provides a second opinion for clinicians. While conventional computer-aided detection uses numerical features as input, recent computer-aided detection uses the medical image itself as input [25,27,29]. Most studies of computer-aided detection include the following steps: cropping the region of interest (ROI) from medical images, extracting features from the ROIs, training an algorithm using the features, and predicting with the trained algorithm [18,24,30,31]. This approach suggests that lung patterns could be classified automatically: a lung pattern can be represented by an ROI, the ROI can be quantified by texture analysis, and the quantified data can be used for machine learning.
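The four steps named above can be sketched as a skeleton pipeline. This is a hypothetical Python illustration, not the study's MATLAB implementation: the function names are ours, the three-value feature vector stands in for the 44 texture parameters, and a toy nearest-centroid classifier stands in for the neural networks.

```python
import numpy as np

def crop_roi(image, y, x, h, w):
    """Step 1: crop a rectangular ROI from a radiograph."""
    return image[y:y + h, x:x + w]

def extract_features(roi):
    """Step 2: reduce an ROI to a numeric feature vector (a three-value
    stand-in for the study's 44 texture parameters)."""
    return np.array([roi.mean(), roi.std(), float(np.median(roi))])

class NearestCentroid:
    """Toy classifier standing in for the study's neural networks."""
    def fit(self, X, y):
        self.centroids_ = {c: X[np.array(y) == c].mean(axis=0) for c in set(y)}
        return self
    def predict(self, X):
        return [min(self.centroids_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

def train(classifier, rois, labels):
    """Step 3: fit a classifier on feature vectors from training ROIs."""
    return classifier.fit(np.stack([extract_features(r) for r in rois]), labels)

def predict(classifier, roi):
    """Step 4: classify an unseen ROI with the trained model."""
    return classifier.predict(extract_features(roi)[None, :])[0]

image = np.pad(np.full((20, 20), 205.0), 10)         # bright patch on black
rois = [np.full((5, 5), v) for v in (10.0, 12.0, 200.0, 210.0)]
clf = train(NearestCentroid(), rois, ["dark", "dark", "bright", "bright"])
label = predict(clf, crop_roi(image, 12, 12, 5, 5))  # ROI inside the patch
```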
The goals of this study were to identify texture parameters useful for discriminating four specific lung patterns and to develop a predictive model that distinguishes those patterns based on the selected parameters.

MATERIALS AND METHODS

Radiographs acquired between 2010 and 2016 were collected from the database of the Veterinary Medical Teaching Hospital of Gyeongsang National University because a large number of radiographs were available and clients had consented to the use of their radiographs in research. The REGIUS Model 190 (Konica Minolta, Japan) direct digitizer was used for computed radiography (50–70 kVp, 300 mA, and 0.02 sec). Thoracic radiographs with lateromedial and ventrodorsal projections were used. Follow-up images of the same patient (up to three) were included to increase the number of images. WIZPACS (version 1.027, Medien, Korea) was used for interpretation. Three veterinary radiologists (Y Yoon, T Hwang, and H Lee) made decisions regarding subject inclusion or exclusion. Each radiologist evaluated six lung regions (right cranial lobe, right middle lobe, right caudal lobe, cranial segment of left cranial lobe, caudal segment of left cranial lobe, and left caudal lobe) on the ventrodorsal projection and two lung fields (cranioventral and caudodorsal) on the lateromedial projection. They then classified each lung region into one of the following four patterns: normal lung pattern (P1), alveolar pattern (P2), bronchial pattern (P3), or unstructured interstitial pattern (P4). Lung lobes were excluded if they presented a mixed pattern or if the interpretations were inconsistent among the radiologists; this was done to minimize the false-negative rate.
The algorithms for ROI selection, texture analysis, and machine learning were coded in MATLAB (R2016b, MathWorks, USA). The toolboxes used in MATLAB included computer vision system, curve fitting, data acquisition, global optimization, image acquisition, image processing, neural network, optimization, parallel computing, and statistics and machine learning tools. A computer with Microsoft Windows 10 (64 bit), an Intel Core i7 4.9 gigahertz central processing unit, 32-gigabyte random-access memory, and an NVIDIA Quadro M4000 graphics card was used for this study.
ROI selection was performed by one of the evaluators (Y Yoon) based on the following criteria: 1) a maximum rectangular area that did not overlap with ribs, major vessels, diaphragm, or mediastinum; 2) at least 30 pixels in width and height; and 3) three or fewer ROIs selected from a single lobe. The number of ROIs used per pattern was set to the minimum obtained for any lung pattern; this was done to avoid problems arising from imbalanced data, such as a falsely trained model that highly favors the over-represented class [32]. A total of 1,200 ROIs (300 for each pattern) were obtained from 252 dogs and 65 cats (162 males and 155 females; mean age, 8.3 ± 4.0 years). The number of pixels in an ROI ranged from 1,225 to 10,710 (mean, 1,914.7; standard deviation, 1,129.0).
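The balancing step described above (keeping only as many ROIs per class as the smallest class provides) can be illustrated with a simple random-undersampling sketch. Python is used for illustration only, and the class counts here are hypothetical apart from the 300-ROI minimum:

```python
import random

def undersample(rois_by_pattern, seed=0):
    """Randomly keep, from every class, only as many samples as the smallest
    class contains, so a trained model cannot favor an over-represented class."""
    rng = random.Random(seed)
    n_min = min(len(v) for v in rois_by_pattern.values())
    return {c: rng.sample(v, n_min) for c, v in rois_by_pattern.items()}

# Hypothetical counts: the smallest pattern has 300 ROIs, so every class keeps 300.
balanced = undersample({"P1": list(range(300)), "P2": list(range(450)),
                        "P3": list(range(300)), "P4": list(range(380))})
```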
The texture analysis was based on 8-bit converted radiographs. A total of 44 texture parameters (Tables 1, 2, and 3) were selected from those previously used in eight methods of texture analysis [4,6,7,8,11,13,15]: method 1) first-order statistics: mean value (parameter 1), median value (parameter 2), standard deviation (parameter 3), skewness (parameter 4), and kurtosis (parameter 5) were derived; method 2) spatial gray-level-dependence matrices: angular second moment (parameter 6), contrast (parameter 7), correlation (parameter 8), sum of squares (parameter 9), inverse difference moment (parameter 10), sum average (parameter 11), sum variance (parameter 12), sum entropy (parameter 13), entropy (parameter 14), difference variance (parameter 15), and difference entropy (parameter 16) were computed; method 3) gray-level-difference statistics: contrast (parameter 17), angular second moment (parameter 18), entropy (parameter 19), and mean (parameter 20) were calculated; method 4) gray-level run length image statistics: short run emphasis (parameter 21), long run emphasis (parameter 22), gray-level nonuniformity (parameter 23), run percentage (parameter 24), run length nonuniformity (parameter 25), low gray-level run emphasis (parameter 26), and high gray-level run emphasis (parameter 27) were derived; method 5) neighborhood gray-tone difference matrices: coarseness (parameter 28), contrast (parameter 29), busyness (parameter 30), complexity (parameter 31), and strength (parameter 32) were calculated; method 6) fractal dimension texture analysis: the Hurst coefficients for dimensions 4 (parameter 33), 3 (parameter 34), 2 (parameter 35), and 1 (parameter 36) were computed; method 7) Fourier power spectrum: radial sum (parameter 37) and angular sum (parameter 38) were derived; method 8) Laws' texture energy measures: LL-texture energy from the LL kernel (parameter 39), EE-texture energy from the EE kernel (parameter 40), SS-texture energy from the SS kernel (parameter 41), LE-average texture energy from the LE and EL kernels (parameter 42), ES-average texture energy from the ES and SE kernels (parameter 43), and LS-average texture energy from the LS and SL kernels (parameter 44) were derived. These parameters were used to extract texture features from the ROIs using the algorithm coded in MATLAB. One-way analysis of variance with Tukey's post-hoc test was used to compare the parameters (SPSS version 19.0, SPSS Inc., USA). A p value of less than 0.05 was considered statistically significant.
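As an example of method 2, a spatial gray-level-dependence (co-occurrence) matrix and a few of its features can be computed as in the following Python sketch. The study used MATLAB on 8-bit images; here a single horizontal offset and 8 gray levels are assumed for brevity, and only three of the eleven features are derived (cf. parameters 6, 7, and 14):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Build a symmetric gray-level co-occurrence matrix for horizontally
    adjacent pixels and derive angular second moment, contrast, and entropy."""
    q = np.asarray(img, dtype=int) * levels // 256  # quantize to 0..levels-1
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # horizontal pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                      # count co-occurrences
    glcm += glcm.T                                  # symmetrize
    p = glcm / glcm.sum()                           # joint probabilities
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "asm": float((p ** 2).sum()),               # angular second moment
        "contrast": float((p * (i - j) ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }

# A checkerboard is maximally "busy": every horizontal pair differs.
feats = glcm_features((np.indices((8, 8)).sum(0) % 2) * 255)
```

A uniform region yields zero contrast and entropy with an angular second moment of 1, whereas the checkerboard maximizes contrast, which matches the intuition behind these parameters.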
Table 1

Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns

Parameters P1 P2 P3 P4
Five parameters of first-order statistics method
1* 46.82 ± 18.71 117.07 ± 36.18 94.09 ± 28.87 72.4 ± 19.63
2* 45.54 ± 18.81 117.4 ± 36.17 93.51 ± 29.29 71.6 ± 19.82
3* 6.16 ± 3.72 11.09 ± 4.8 11.73 ± 4.46 8.34 ± 2.27
4* 0.89 ± 0.78 -0.02 ± 0.38 0.21 ± 0.45 0.54 ± 0.53
5* 5.27 ± 6.68 2.63 ± 0.55 2.83 ± 0.83 3.68 ± 1.82
Eleven parameters of spatial gray-level-dependence matrix method
6† 0.096 ± 0.807 0.006 ± 0.010 0.004 ± 0.006 0.007 ± 0.011
7* 7.48 ± 4.33 15.83 ± 7.40 22.2 ± 9.35 15.94 ± 5.36
8 0.95 ± 1.20 0.89 ± 0.08 0.89 ± 0.05 0.84 ± 0.07
9* 50.05 ± 69.00 145.82 ± 136.78 156.48 ± 130.35 74.19 ± 43.13
10† 1.08 ± 7.67 0.30 ± 0.08 0.26 ± 0.06 0.30 ± 0.07
11* 95.23 ± 37.57 236.08 ± 72.34 190.11 ± 57.76 146.61 ± 39.25
12* 192.7 ± 273.6 567.5 ± 544.9 603.7 ± 515.3 280.8 ± 171.4
13* 3.43 ± 0.69 4.16 ± 0.51 4.24 ± 0.38 3.88 ± 0.35
14* 4.66 ± 1.09 5.82 ± 0.77 6.00 ± 0.61 5.53 ± 0.67
15* 3.38 ± 1.79 6.25 ± 2.80 8.85 ± 3.78 6.52 ± 2.17
16* 1.72 ± 0.77 2.05 ± 0.29 2.23 ± 0.24 2.04 ± 0.27
P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.
*p-value < 0.001; †p-value < 0.05.
Table 2

Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns

Parameters P1 P2 P3 P4
Four parameters of gray-level-difference statistics method
17* 7.47 ± 4.33 15.79 ± 7.39 22.14 ± 9.32 15.91 ± 5.34
18* 0.25 ± 0.18 0.15 ± 0.05 0.12 ± 0.04 0.15 ± 0.06
19* 1.68 ± 0.38 2.06 ± 0.29 2.24 ± 0.24 2.05 ± 0.27
20* 1.90 ± 0.68 2.96 ± 0.74 3.52 ± 0.72 2.96 ± 0.57
Seven parameters of gray-level run length image statistics method
21* 0.64 ± 0.17 0.66 ± 0.08 0.69 ± 0.05 0.68 ± 0.07
22* 10.80 ± 34.61 4.53 ± 2.18 3.62 ± 1.01 3.94 ± 1.78
23* 607.0 ± 289.0 424.9 ± 220.7 361.1 ± 176.7 556.9 ± 306.1
24 17.55 ± 188.72 2.30 ± 0.34 2.44 ± 0.24 2.40 ± 0.33
25* 1795.0 ± 937.2 1790.8 ± 897.0 1687.2 ± 816.6 2225.6 ± 1166.7
26* 44.93 ± 34.66 74.13 ± 17.97 68.66 ± 18.07 53.79 ± 19.41
27* 23.99 ± 5.60 20.10 ± 4.59 18.63 ± 3.75 22.96 ± 5.48
Five parameters of neighborhood gray-tone difference matrix method
28* 11.78 ± 5.14 17.84 ± 8.63 15.97 ± 5.41 12.65 ± 4.63
29† 0.18 ± 0.38 0.19 ± 0.18 0.25 ± 0.4 0.22 ± 0.35
30 2.2E+00 ± 2.7E+01 1.8E-05 ± 2.2E-05 2.3E-05 ± 3.5E-05 3.8E-05 ± 4.7E-05
31* 903.8 ± 1,158.9 2,624.0 ± 2,024.0 3,280.2 ± 3,087.1 1,784.7 ± 1,112.9
32* 10,946.1 ± 18,069.9 30,058.0 ± 41,868.7 28,759.5 ± 31,153.7 14,667.1 ± 13,414.7
P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.
*p-value < 0.001; †p-value < 0.05.
Table 3

Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns

Parameters P1 P2 P3 P4
Four parameters of fractal dimension texture analysis method
33* 0.21 ± 0.04 0.23 ± 0.07 0.30 ± 0.06 0.25 ± 0.05
34* 0.31 ± 0.05 0.36 ± 0.07 0.39 ± 0.05 0.32 ± 0.05
35* 0.33 ± 0.09 0.39 ± 0.07 0.31 ± 0.07 0.30 ± 0.07
36 11.56 ± 138.15 0.27 ± 0.13 0.15 ± 0.11 0.21 ± 0.12
Two parameters of the Fourier power spectrum method
37* 2,112.7 ± 931.7 4,893.7 ± 1,863.6 3,624.7 ± 1,418.0 3,156.7 ± 1,024.0
38† 296.9 ± 862.3 408.5 ± 276.4 336.4 ± 157.6 291.3 ± 133.7
Six parameters of Laws' texture energy measures method
39* 19,970.2 ± 13,453.3 39,054.6 ± 19,385.4 39,921.5 ± 16,661.1 27,429.1 ± 9,010.2
40* 257.6 ± 78.7 392.4 ± 110.6 553.8 ± 150.2 431.3 ± 82.2
41* 67.45 ± 130.02 81.49 ± 20.84 95.44 ± 19.50 85.00 ± 17.53
42* 1,405.6 ± 428.4 2,441.3 ± 784.6 3,776.8 ± 1,169.4 2,573.1 ± 452.7
43* 117.5 ± 51.2 165.1 ± 42.5 208.7 ± 48.6 177.7 ± 35.0
44* 482.4 ± 147.1 725.7 ± 205.6 1,027.4 ± 281.6 796.3 ± 149.3
P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.
*p-value < 0.001; †p-value < 0.05.
All of the parameters extracted from the ROIs were used as inputs to the artificial neural networks. The data were divided such that 70% of the samples were used to train the algorithms and 30% to test the models (261 training samples and 139 testing samples for each lung pattern). The detailed settings of the algorithm were determined at the highest performance level through trial and error in a pilot study (Fig. 1). Other configurations followed the default settings of MATLAB. The performance of the classifier was evaluated by determining accuracy ([correct predictions]/[number of samples]) and the area under the receiver operating characteristic curve (AUC).
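A stratified 70/30 split and the accuracy formula above can be sketched as follows. This is an illustrative Python version, not the study's MATLAB code; the per-class sample list and seed are arbitrary:

```python
import random

def stratified_split(items_by_class, train_frac=0.7, seed=0):
    """Split every class separately so the train and test sets stay balanced."""
    rng = random.Random(seed)
    train, test = [], []
    for label, items in items_by_class.items():
        shuffled = items[:]
        rng.shuffle(shuffled)
        k = round(train_frac * len(shuffled))
        train += [(x, label) for x in shuffled[:k]]
        test += [(x, label) for x in shuffled[k:]]
    return train, test

def accuracy(predictions, labels):
    """Accuracy = correct predictions / number of samples."""
    return sum(p == t for p, t in zip(predictions, labels)) / len(labels)

# With 300 ROIs per pattern, a 70/30 split gives 210 training and 90 testing per class.
train_set, test_set = stratified_split({p: list(range(300)) for p in ("P1", "P2", "P3", "P4")})
```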
Figure 1

Schematic diagram of the detailed configuration and processing of the artificial neural networks. The networks consisted of one input layer with 44 nodes, two hidden layers with 15 nodes each, and one output layer with 4 nodes. The cross-entropy loss was minimized iteratively, and Bayesian regularization was used when calculating the gradient needed to update the weights of the network.

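The layer sizes in Fig. 1 can be sketched as a NumPy forward pass. This illustrates only the 44-15-15-4 architecture and the cross-entropy loss; it does not reproduce the Bayesian-regularization training performed in MATLAB, and the tanh activation and random weights are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the figure: 44 texture parameters in, two hidden
# layers of 15 nodes each, 4 output nodes (one per lung pattern).
sizes = [44, 15, 15, 4]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """One forward pass: tanh hidden layers, softmax output probabilities."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

def cross_entropy(probs, true_class):
    """The loss that training iteratively minimizes (only evaluated here)."""
    return float(-np.log(probs[true_class]))

probs = forward(rng.normal(size=44))         # one 44-parameter feature vector
loss = cross_entropy(probs, 2)               # e.g., true pattern P3
```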

RESULTS

The one-way analysis of variance showed significant differences between lung patterns in 40 texture parameters. Parameters 6, 10, 29, and 38 showed p values less than 0.05. Parameters 1, 2, 3, 4, 5, 7, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 31, 32, 33, 34, 35, 37, 39, 40, 41, 42, 43, and 44 showed p values less than 0.001. In the post-hoc test, the parameters with significant differences between all pairs of lung patterns (P1-P2, P1-P3, P1-P4, P2-P3, P2-P4, and P3-P4) were 1, 3, 4, 11, 14, 26, 31, 33, 34, 37, 40, 43, and 44. The accuracy of the artificial neural networks was 99.1% in the training set and 91.9% in the testing set. In addition, the AUCs were above 0.98 in the training set and above 0.92 in the testing set (Table 4).
Table 4

Performance of the artificial neural networks for lung pattern classification

Dataset Accuracy (%) AUC (P1 / P2 / P3 / P4)
Training dataset 99.1 1.00 1.00 0.98 0.99
Testing dataset 91.9 0.99 0.93 0.92 0.94
AUC, Area under the receiver operating characteristic curve; P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.

DISCUSSION

Previous studies of computer-aided detection focused on disease detection by using a small-sized ROI [19,25,30,33]. Diagnosis based on a small-sized ROI may lead to misdiagnosis because it does not include a global impression or multiple findings. In addition, non-radiological information such as signalment or clinical signs, which may be important in detecting a disease, cannot be expressed by an ROI. Accordingly, using a single ROI may be more appropriate for detecting radiological findings than for disease differentiation. Therefore, we attempted to use ROIs to classify specific lung patterns.
Lung patterns are broadly divided into normal, alveolar, interstitial (including unstructured and nodular patterns), bronchial, and vascular patterns [34]. In this study, the nodular interstitial and vascular patterns were excluded because of the ROI selection criteria that were used. Ribs in an ROI can affect texture parameters; thus, ROIs had to be obtained in the intercostal space and had to be small. A small ROI would more likely capture a square region filled with soft tissue opacity than one containing a whole nodule, and it would also be limited for comparing arteries and veins or for expressing vascular branching. In addition, mixed patterns were excluded from this study. If all mixed patterns were classified into one class, it would be difficult to obtain consistent texture characteristics, and if mixed patterns were classified according to the prominent pattern, the criteria for selecting the prominent pattern would be ambiguous. Also, even if a lobe were classified as a mixed pattern, it is highly likely that only a single pattern would be included in a small ROI. Therefore, nodular, vascular, and mixed patterns were excluded.
A texture parameter commonly used in diagnostic imaging such as computed tomography is the average Hounsfield unit value for an ROI. However, other features such as fineness or coarseness, homogeneous or heterogeneous enhancement, and well-delineated or ill-defined margins depend on subjective evaluation. Even if differentiation of these textures is based on a consensus between radiologists, it is difficult to agree on their extent. In such cases, texture analysis can be used because these features can be quantified. Although texture parameter values may vary even in the same region of the same patient depending on the image acquisition equipment or post-processing kernel used, texture analysis can still be used to compare ROIs obtained from the same equipment. Therefore, this study applied texture analysis to ROIs acquired from the same equipment, and 40 of the 44 texture parameters showed significant differences between lung patterns. These results suggest that such parameters can be used for lung pattern classification. Furthermore, texture analysis may be used to evaluate clinically important patterns and to assess the quality of image acquisition equipment.
The number of parameters to be included is one of the important factors in constructing a predictive model. An excessive number of parameters may reduce the performance of the model by causing unnecessary fluctuation [35]. In this study, the 44 parameters were used to train the artificial neural networks, resulting in an accuracy of 91.9% in the testing dataset. In a pilot study, models trained using only the parameters from a single analysis method showed substantially lower performance on the testing set than the full model: first-order statistics (5 parameters), 63.8% accuracy; spatial gray-level-dependence matrix (11 parameters), 70.5%; gray-level-difference statistics (4 parameters), 54.4%; gray-level run length image statistics (7 parameters), 60.7%; neighborhood gray-tone difference matrix (5 parameters), 55.6%; fractal dimension analysis (4 parameters), 57.6%; Fourier power spectrum (2 parameters), 47.0%; and Laws' texture energy measures (6 parameters), 68.6%. These results indicated that all of the selected parameters can positively influence the performance of the lung pattern artificial neural networks. Parameters 11, 14, 31, 40, 43, and 44 produced p values less than 0.001 in the one-way analysis of variance and showed significant differences between all of the lung patterns in the post-hoc test. Thus, parameters 11, 14, 31, 40, 43, and 44 were the most important parameters for lung pattern classification.
The accuracy obtained from the training data is generally higher than or similar to that from the test data because, during machine learning, the model is fitted to the training data in order to obtain reliable predictions on general untrained data. If the model is overfitted to the training data, it yields poor performance because it overreacts to minor fluctuations in the training data. In the performance evaluation of the artificial neural networks developed in this study, the gaps between the training and testing datasets were small and accuracy was high. These results demonstrate that the model successfully learned generalized trends in the textures of the lung patterns studied.
We are aware of several limitations of this study. All of the radiographs were generated by a single machine, and it is difficult to judge to what extent the texture parameters may be altered in images acquired from other equipment. To objectively validate the results of this study, data from other computed or digital radiography systems should be assessed. In this study, texture analysis was applied to 8-bit converted radiographs for computation in MATLAB, whereas the actual computed radiographs were 12-bit images. Intensity information may be partially lost or merged during the conversion, and further research is needed to assess whether such conversion has a positive or negative effect on accuracy. In addition, lung lobes disputed by the evaluators were excluded in order to minimize the false-negative rate associated with the radiologists; therefore, the data in this study might be composed only of obvious patterns. An adequate number of unclear lung patterns should be included in further validation studies. Finally, a reproducibility evaluation of each radiologist was not undertaken, and ROI selection was performed by a single evaluator. Even if all the evaluators agreed on the pattern classification of the lobes, the artificial neural networks might have been influenced by the subjective judgment of the evaluator who selected the ROIs.
This study attempted to evaluate the utility of texture analysis and machine learning to discriminate four specific lung patterns. A number of texture parameters showed significant differences between the patterns. The developed artificial neural networks demonstrated high performance in discriminating the patterns. Texture analysis and machine learning algorithms may have potential for application in the evaluation of medical images.

Notes

Conflict of Interest The authors declare no conflicts of interest.

Author Contributions

  • Conceptualization: Yoon Y.

  • Data curation: Yoon Y, Lee H.

  • Formal analysis: Yoon Y, Hwang T, Choi H, Lee H.

  • Methodology: Yoon Y.

  • Project administration: Yoon Y, Lee H.

  • Resources: Lee H.

  • Software: Yoon Y, Hwang T.

  • Supervision: Choi H, Lee H.

  • Validation: Choi H, Lee H.

  • Visualization: Yoon Y.

  • Writing - original draft: Yoon Y.

  • Writing - review & editing: Yoon Y.

References

1. Akay MF. Support vector machines combined with feature selection for breast cancer diagnosis. Expert Syst Appl. 2009; 36:3240–3247.
2. Baxt WG. Use of an artificial neural network for the diagnosis of myocardial infarction. Ann Intern Med. 1991; 115:843–848.
3. Carbonell JG, Michalski RS, Mitchell TM. An Overview of Machine Learning. Heidelberg: Springer; 1983. p. 3–23.
4. Chen CC, Daponte JS, Fox MD. Fractal feature analysis and classification in medical imaging. IEEE Trans Med Imaging. 1989; 8:133–142.
5. Chen EL, Chung PC, Chen CL, Tsai HM, Chang CI. An automatic diagnostic system for CT liver image classification. IEEE Trans Biomed Eng. 1998; 45:783–794.
6. Chen HL, Huang CC, Yu XG, Xu X, Sun X, Wang G, Wang SJ. An efficient diagnosis system for detection of Parkinson's disease using fuzzy k-nearest neighbor approach. Expert Syst Appl. 2013; 40:263–271.
7. Christensen O. Functions, Spaces, and Expansions. 1st ed. Birkhäuser Verlag: Springer; 2010. p. 159–180.
8. Cooley JW, Lewis PA, Welch PD. The fast Fourier transform and its applications. IEEE Trans Educ. 1969; 12:27–34.
9. French J. The time traveller's CAPM. Invest Anal J. 2017; 46:81–96.
10. Ganesan N, Venkatesh K, Rama M, Palani AM. Application of neural networks in diagnosing cancer disease using demographic data. Int J Comput Appl. 2010; 1:76–85.
11. Gelhar LW, Axness CL. Three-dimensional stochastic analysis of macrodispersion in aquifers. Water Resour Res. 1983; 19:161–180.
12. Gilbert FJ, Astley SM, Gillan MG, Agbaje OF, Wallis MG, James J, Boggis CR, Duffy SW; CADET II Group. Single reading with computer-aided detection for screening mammography. N Engl J Med. 2008; 359:1675–1684.
13. Haralick RM. Statistical and structural approaches to texture. Proc IEEE. 1979; 67:786–804.
14. Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Trans Syst Man Cybern. 1973; 3:610–621.
15. Harms H, Gunzer U, Aus HM. Combined local color and texture analysis of stained cells. Comput Vis Graph Image Process. 1986; 33:364–376.
16. Insana MF, Wagner RF, Garra BS, Brown DG, Shawker TH. Analysis of ultrasound image texture via generalized Rician statistics. Opt Eng. 1986; 25:256743.
17. Ji Q, Engel J, Craine E. Texture analysis for classification of cervix lesions. IEEE Trans Med Imaging. 2000; 19:1144–1149.
18. Jiang M, Zhang S, Li H, Metaxas DN. Computer-aided diagnosis of mammographic masses using scalable image retrieval. IEEE Trans Biomed Eng. 2015; 62:783–792.
19. Kadah YM, Farag AA, Zurada JM, Badawi AM, Youssef AM. Classification algorithms for quantitative tissue characterization of diffuse liver disease from ultrasound images. IEEE Trans Med Imaging. 1996; 15:466–478.
20. Mahmoud-Ghoneim D, Toussaint G, Constans JM, de Certaines JD. Three dimensional texture analysis in MRI: a preliminary evaluation in gliomas. Magn Reson Imaging. 2003; 21:983–987.
21. Malon CD, Cosatto E. Classification of mitotic figures with convolutional neural networks and seeded blob features. J Pathol Inform. 2013; 4:9.
22. Mathias JM, Tofts PS, Losseff NA. Texture analysis of spinal cord pathology in multiple sclerosis. Magn Reson Med. 1999; 42:929–935.
23. Polat K, Güneş S. Breast cancer diagnosis using least square support vector machine. Digit Signal Process. 2007; 17:694–701.
24. Ravandi SH, Toriumi K. Fourier transform analysis of plain weave fabric appearance. Text Res J. 1995; 65:676–683.
25. Seiffert C, Khoshgoftaar TM, Van Hulse J, Napolitano A. A comparative study of data sampling and cost sensitive learning. In: IEEE International Conference on Data Mining Workshops; December 15-19, 2008; Pisa, Italy.
26. Sengupta N, Sahidullah M, Saha G. Lung sound classification using cepstral-based statistical features. Comput Biol Med. 2016; 75:118–129.
27. Shiraishi J, Li Q, Appelbaum D, Doi K. Computer-aided diagnosis and artificial intelligence in clinical imaging. Semin Nucl Med. 2011; 41:449–462.
28. Sujana H, Swarnamani S, Suresh S. Application of artificial neural networks for the classification of liver lesions by image texture parameters. Ultrasound Med Biol. 1996; 22:1177–1181.
29. Tobias S, Victoria J. BSAVA Manual of Canine and Feline Thoracic Imaging. Quedgeley: British Small Animal Veterinary Association; 2008. p. 250–260.
30. Tu JV. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. J Clin Epidemiol. 1996; 49:1225–1231.
31. Weszka JS, Dyer CR, Rosenfeld A. A comparative study of texture measures for terrain classification. IEEE Trans Syst Man Cybern. 1976; 6:269–285.
32. Zhang G, Patuwo BE, Hu MY. Forecasting with artificial neural networks: the state of the art. Int J Forecast. 1998; 14:35–62.
33. Zhu C, Yang X. Study of remote sensing image texture analysis and classification using wavelet. Int J Remote Sens. 1998; 19:3197–3203.
34. Zhu Y, Tan T, Wang Y. Font recognition based on global texture analysis. IEEE Trans Pattern Anal Mach Intell. 2001; 23:1192–1200.
35. Zissis D, Xidias EK, Lekkas D. A cloud based architecture capable of perceiving and predicting multiple vessel behaviour. Appl Soft Comput. 2015; 35:652–661.
ORCID iDs

Youngmin Yoon
https://orcid.org/0000-0003-0525-8724

Taesung Hwang
https://orcid.org/0000-0001-6730-6061

Hojung Choi
https://orcid.org/0000-0001-7167-0755

Heechun Lee
https://orcid.org/0000-0001-5936-9118
