<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "JATS-journalpublishing1.dtd">
<article xml:lang="EN" article-type="research-article">

<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">J Vet Sci</journal-id>
<journal-id journal-id-type="publisher-id">JVS</journal-id>
<journal-title-group>
<journal-title>Journal of Veterinary Science</journal-title>
</journal-title-group>
<issn pub-type="ppub">1229-845X</issn>
<issn pub-type="epub">1976-555X</issn>
<publisher>
<publisher-name>The Korean Society of Veterinary Science</publisher-name>
</publisher>
</journal-meta>

<article-meta>
<article-id pub-id-type="doi">10.4142/jvs.2019.20.e44</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Article</subject>
<subj-group subj-group-type="subheading">
<subject>Medical Imaging</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Classification of radiographic lung pattern based on texture analysis and machine learning</article-title>
</title-group>

<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0003-0525-8724</contrib-id>
<name>
<surname>Yoon</surname>
<given-names>Youngmin</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>

<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0001-6730-6061</contrib-id>
<name>
<surname>Hwang</surname>
<given-names>Taesung</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>

<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0001-7167-0755</contrib-id>
<name>
<surname>Choi</surname>
<given-names>Hojung</given-names>
</name>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>

<contrib contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid" authenticated="true">https://orcid.org/0000-0001-5936-9118</contrib-id>
<name>
<surname>Lee</surname>
<given-names>Heechun</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>
</contrib-group>

<aff id="A1"><label>1</label>Institute of Animal Medicine, College of Veterinary Medicine, Gyeongsang National University, Jinju 52828, <country>Korea</country>.</aff>
<aff id="A2"><label>2</label>College of Veterinary Medicine, Chungnam National University, Daejeon 34134, <country>Korea</country>.</aff>

<author-notes>
<corresp>Corresponding author: Heechun Lee. Institute of Animal Medicine, College of Veterinary Medicine, Gyeongsang National University, 501 Jinju-daero, Jinju 52829, Korea. <email>lhc@gnu.ac.kr</email>
</corresp>
</author-notes>

<pub-date pub-type="collection">
<month>07</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="epub">
<day>16</day>
<month>07</month>
<year>2019</year>
</pub-date>
<volume>20</volume>
<issue>4</issue>
<elocation-id>e44</elocation-id>

<history>
<date date-type="received">
<day>27</day>
<month>02</month>
<year>2019</year>
</date>
<date date-type="rev-recd">
<day>08</day>
<month>05</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>07</month>
<year>2019</year>
</date>
</history>

<permissions>
<copyright-statement>&#x00A9; 2019 The Korean Society of Veterinary Science</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>The Korean Society of Veterinary Science</copyright-holder>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc/4.0">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (<ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://creativecommons.org/licenses/by-nc/4.0">https://creativecommons.org/licenses/by-nc/4.0</ext-link>) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>

<abstract>
<p>This study evaluated the feasibility of using texture analysis and machine learning to distinguish radiographic lung patterns. A total of 1,200 regions of interest (ROIs) representing four specific lung patterns (normal, alveolar, bronchial, and unstructured interstitial) were obtained from 512 thoracic radiographs of 252 dogs and 65 cats. Forty-four texture parameters based on eight methods of texture analysis (first-order statistics, spatial gray-level-dependence matrices, gray-level-difference statistics, gray-level run length image statistics, neighborhood gray-tone difference matrices, fractal dimension texture analysis, Fourier power spectrum, and Laws' texture energy measures) were used to extract textural features from the ROIs. The texture parameters of each lung pattern were compared and used for training and testing of artificial neural networks. Classification performance was evaluated by calculating accuracy and the area under the receiver operating characteristic curve (AUC). Forty texture parameters showed significant differences between the lung patterns. The accuracy of lung pattern classification was 99.1% in the training dataset and 91.9% in the testing dataset. The AUCs were above 0.98 in the training dataset and above 0.92 in the testing dataset. Texture analysis and machine learning algorithms may facilitate the evaluation of medical images.</p>
</abstract>

<kwd-group kwd-group-type="author">
<kwd>Neural network model</kwd>
<kwd>Visual pattern recognition</kwd>
<kwd>Thoracic radiography</kwd>
</kwd-group>


</article-meta>
</front>

<body>
<sec sec-type="intro">
<title>INTRODUCTION</title>
<p>Lung disease pathologically alters lung tissue and usually changes the opacity of the lungs in radiographs. Depending on the tissue affected, a characteristic lung pattern is obtained. Accordingly, classification of the observed lung pattern is very important for the differentiation of lung diseases in thoracic radiography.</p>
<p>Texture, or pattern, is the intuitive quality of an image area. Computer-based texture analysis can be used to numerically quantify the characteristic features of a texture, such as smoothness, heterogeneity, and coarseness [<xref ref-type="bibr" rid="B1">1</xref>]. Texture analysis methods are categorized as structural, model-based, transform-based, and statistical. Structural methods, which represent texture using well-defined primitive features, provide a good symbolic description of the image [<xref ref-type="bibr" rid="B2">2</xref>]. Sophisticated mathematical models such as fractal or stochastic methods have also been used to analyze texture [<xref ref-type="bibr" rid="B3">3</xref><xref ref-type="bibr" rid="B4">4</xref>]. Fourier and wavelet transform procedures facilitate the analysis of texture in a different space [<xref ref-type="bibr" rid="B5">5</xref><xref ref-type="bibr" rid="B6">6</xref>]. Statistical approaches represent texture based on the properties governing the distribution of, and relationships between, pixel intensities [<xref ref-type="bibr" rid="B2">2</xref>]. Texture analysis has been utilized in a variety of applications, including automated inspection, document processing, and remote sensing [<xref ref-type="bibr" rid="B7">7</xref><xref ref-type="bibr" rid="B8">8</xref><xref ref-type="bibr" rid="B9">9</xref><xref ref-type="bibr" rid="B10">10</xref>]. It has also been applied in a series of studies of medical images [<xref ref-type="bibr" rid="B11">11</xref><xref ref-type="bibr" rid="B12">12</xref><xref ref-type="bibr" rid="B13">13</xref><xref ref-type="bibr" rid="B14">14</xref><xref ref-type="bibr" rid="B15">15</xref>].</p>
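As an illustration of the statistical approach, the contrast feature of a spatial gray-level-dependence (co-occurrence) matrix can be computed directly from counts of pixel pairs. The following is a minimal numpy sketch for illustration only, not the study's MATLAB implementation:

```python
import numpy as np

def glcm_contrast(img, levels=8, offset=(0, 1)):
    """Contrast from a gray-level co-occurrence matrix.

    img    : 2-D array of integer gray levels in [0, levels).
    offset : (row, col) displacement between pixel pairs.
    """
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                      # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return np.sum(glcm * (i - j) ** 2)      # weight pairs by squared level gap

# A checkerboard (rapid gray-level alternation) scores higher than a flat patch.
checker = np.indices((8, 8)).sum(axis=0) % 2
flat = np.zeros((8, 8), dtype=int)
print(glcm_contrast(checker) > glcm_contrast(flat))  # True
```

A coarse or busy texture thus maps to a single number that can be compared across ROIs.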
<p>Machine learning, a subfield of computer science and artificial intelligence, concerns the construction of algorithms that learn from data and make predictions [<xref ref-type="bibr" rid="B16">16</xref>]. Unlike statistical modeling, machine learning does not require prior assumptions about the underlying relationships between the variables [<xref ref-type="bibr" rid="B17">17</xref>]. Artificial neural networks, inspired by the biological neural networks that constitute the brain, are a set of well-established machine learning algorithms [<xref ref-type="bibr" rid="B18">18</xref><xref ref-type="bibr" rid="B19">19</xref>]. Each connection between neuron-like nodes transmits a signal from one node to another, and the network learns by adjusting the strengths (weights) of these connections. Owing to their ability to perform classification and regression, artificial neural networks have been used in a wide range of disciplines, including movement control [<xref ref-type="bibr" rid="B20">20</xref>], finance [<xref ref-type="bibr" rid="B21">21</xref>], pattern recognition [<xref ref-type="bibr" rid="B22">22</xref>], and medical diagnosis [<xref ref-type="bibr" rid="B23">23</xref>].</p>
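The weight-adjustment idea can be seen in a toy fully connected network trained by backpropagation. This numpy sketch uses a hypothetical two-input problem (XOR), not the study's network or data; it only shows signals flowing through weighted connections and the weights being updated from the prediction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic problem that a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; the weights are the "connection strengths" to be learned.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                    # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)              # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)            # forward pass: network output
    d_out = (out - y) * out * (1 - out)   # backpropagate the output error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The same mechanism, scaled up and fed texture features instead of raw bits, underlies the classifier used in this study.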
<p>Computer-aided detection is an interdisciplinary technology combining machine learning and computer vision with medical imaging [<xref ref-type="bibr" rid="B3">3</xref><xref ref-type="bibr" rid="B24">24</xref><xref ref-type="bibr" rid="B25">25</xref><xref ref-type="bibr" rid="B26">26</xref><xref ref-type="bibr" rid="B27">27</xref><xref ref-type="bibr" rid="B28">28</xref>]. It facilitates disease diagnosis and provides a second opinion for clinicians. While conventional computer-aided detection takes numerical features as input, recent computer-aided detection takes the medical image itself as input [<xref ref-type="bibr" rid="B25">25</xref><xref ref-type="bibr" rid="B27">27</xref><xref ref-type="bibr" rid="B29">29</xref>]. Most studies of computer-aided detection include the following steps: cropping the region of interest (ROI) from medical images, extracting features from the ROIs, training an algorithm using the features, and making predictions with the trained algorithm [<xref ref-type="bibr" rid="B18">18</xref><xref ref-type="bibr" rid="B24">24</xref><xref ref-type="bibr" rid="B30">30</xref><xref ref-type="bibr" rid="B31">31</xref>]. This approach suggests that lung patterns could be classified automatically: a lung pattern can be represented by an ROI, quantified by texture analysis, and the quantified data can then be used for machine learning.</p>
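The four steps above can be sketched end to end on synthetic data. Everything here is a simplified stand-in: the patch generator replaces real ROI cropping, two first-order features replace the 44 texture parameters, and a nearest-centroid rule replaces the neural network; it is meant only to make the pipeline concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(label):
    """Step 1 stand-in: synthesize a 32x32 'smooth' (0) or 'noisy' (1) ROI."""
    base = rng.normal(100, 2 if label == 0 else 25, size=(32, 32))
    return np.clip(base, 0, 255)

def extract_features(roi):
    """Step 2: reduce an ROI to a numeric texture vector (mean and SD here)."""
    return np.array([roi.mean(), roi.std()])

# Step 3: "train" by storing the per-class mean feature vector (centroid).
X = np.array([extract_features(make_patch(l)) for l in [0, 1] * 50])
y = np.array([0, 1] * 50)
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(roi):
    """Step 4: assign the class whose centroid is nearest in feature space."""
    f = extract_features(roi)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(predict(make_patch(0)), predict(make_patch(1)))  # smooth -> 0, noisy -> 1
```

Swapping in richer features and a trained classifier turns this skeleton into the approach taken in this study.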
<p>The goals of this study were to identify texture parameters useful for discriminating four specific lung patterns and to develop a predictive model that distinguishes the lung patterns based on the selected parameters.</p>
</sec>

<sec sec-type="materials|methods">
<title>MATERIALS AND METHODS</title>
<p>Radiographs acquired between 2010 and 2016 were collected from the database of the Veterinary Medical Teaching Hospital of Gyeongsang National University, which provided a large number of radiographs that clients had agreed could be used for research. The REGIUS Model 190 (Konica Minolta, Japan) direct digitizer was used for computed radiography (50&#x2013;70 kVp, 300 mA, and 0.02 sec). Thoracic radiographs with lateromedial and ventrodorsal projections were used. Follow-up images of the same patient were included to increase the number of images; up to three follow-up images per patient were used. WIZPACS (version 1.027, Medien, Korea) was used for interpretation. Three veterinary radiologists (Y Yoon, T Hwang, and H Lee) made decisions regarding subject inclusion or exclusion. Each radiologist evaluated six lung regions (right cranial lobe, right middle lobe, right caudal lobe, cranial segment of left cranial lobe, caudal segment of left cranial lobe, and left caudal lobe) on the ventrodorsal projection and two lung fields (cranioventral lung field and caudodorsal lung field) on the lateromedial projection. They then classified the lung regions into one of the following four patterns: normal lung pattern (P1), alveolar pattern (P2), bronchial pattern (P3), or unstructured interstitial pattern (P4). To minimize the false-negative rate, lung lobes were excluded if they presented a mixed pattern or if the classifications were inconsistent among the individual interpretations.</p>
<p>The algorithms for ROI selection, texture analysis, and machine learning were coded in MATLAB (R2016b, MathWorks, USA). The toolboxes used in MATLAB included computer vision system, curve fitting, data acquisition, global optimization, image acquisition, image processing, neural network, optimization, parallel computing, and statistics and machine learning tools. A computer with Microsoft Windows 10 (64 bit), an Intel Core i7 4.9 gigahertz central processing unit, 32-gigabyte random-access memory, and an NVIDIA Quadro M4000 graphics card was used for this study.</p>
<p>ROI selection was performed by one of the evaluators (Y Yoon) based on the following criteria: 1) a maximum rectangular area that did not overlap with ribs, major vessels, diaphragm, or mediastinum; 2) at least 30 pixels in width and height; and 3) three or fewer ROIs selected from a single lobe. The number of ROIs used per pattern was set to the minimum number obtained among the lung patterns; this was done to avoid problems arising from imbalanced data, such as a poorly trained model that strongly favors the over-represented class [<xref ref-type="bibr" rid="B32">32</xref>]. A total of 1,200 ROIs, 300 for each pattern, were obtained from 252 dogs and 65 cats. There were 162 males and 155 females. The mean age was 8.3 years, with a standard deviation of 4.0 years. The number of pixels in an ROI ranged from 1,225 to 10,710 (mean, 1,914.7; standard deviation, 1,129.0).</p>
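The balancing step amounts to randomly down-sampling every class to the size of the smallest one. In this sketch the per-pattern ROI counts are hypothetical, chosen only to illustrate the idea (the study's minimum happened to be 300 per pattern):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pattern ROI pools before balancing (indices stand in for ROIs).
rois = {"P1": list(range(420)), "P2": list(range(300)),
        "P3": list(range(515)), "P4": list(range(360))}

n_min = min(len(v) for v in rois.values())   # size of the smallest class
balanced = {k: rng.choice(v, size=n_min, replace=False)  # sample w/o replacement
            for k, v in rois.items()}
print({k: len(v) for k, v in balanced.items()})  # each class now has n_min ROIs
```

Down-sampling discards data, but it prevents the classifier from earning low error simply by always predicting the majority class.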
<p>The texture analysis was based on 8-bit converted radiographs. A total of 44 texture parameters (<xref ref-type="table" rid="T1">Tables 1</xref>, <xref ref-type="table" rid="T2">2</xref>, and <xref ref-type="table" rid="T3">3</xref>) were selected from those previously used in eight methods of texture analysis [<xref ref-type="bibr" rid="B4">4</xref><xref ref-type="bibr" rid="B6">6</xref><xref ref-type="bibr" rid="B7">7</xref><xref ref-type="bibr" rid="B8">8</xref><xref ref-type="bibr" rid="B11">11</xref><xref ref-type="bibr" rid="B13">13</xref><xref ref-type="bibr" rid="B15">15</xref>]: method 1) first-order statistics: mean value (parameter 1), median value (parameter 2), standard deviation (parameter 3), skewness (parameter 4), and kurtosis (parameter 5) were derived; method 2) spatial gray-level-dependence matrices: angular second moment (parameter 6), contrast (parameter 7), correlation (parameter 8), sum of squares (parameter 9), inverse difference moment (parameter 10), sum average (parameter 11), sum variance (parameter 12), sum entropy (parameter 13), entropy (parameter 14), difference variance (parameter 15), and difference entropy (parameter 16) were computed; method 3) gray-level-difference statistics: contrast (parameter 17), angular second moment (parameter 18), entropy (parameter 19), and mean (parameter 20) were calculated; method 4) gray-level run length image statistics: short run emphasis (parameter 21), long run emphasis (parameter 22), gray-level nonuniformity (parameter 23), run percentage (parameter 24), run length nonuniformity (parameter 25), low gray-level run emphasis (parameter 26), and high gray-level run emphasis (parameter 27) were derived; method 5) neighborhood gray-tone difference matrices: coarseness (parameter 28), contrast (parameter 29), busyness (parameter 30), complexity (parameter 31), and strength (parameter 32) were calculated; method 6) fractal dimension texture analysis: the Hurst coefficients for dimensions 4 (parameter 33), 3 (parameter 34), 2 (parameter 35), and 1 (parameter 36) were computed; method 7) Fourier power spectrum: radial sum (parameter 37) and angular sum (parameter 38) were derived; method 8) Laws' texture energy measures: LL-texture energy from the LL kernel (parameter 39), EE-texture energy from the EE kernel (parameter 40), SS-texture energy from the SS kernel (parameter 41), LE-average texture energy from the LE and EL kernels (parameter 42), ES-average texture energy from the ES and SE kernels (parameter 43), and LS-average texture energy from the LS and SL kernels (parameter 44) were derived. These parameters were used to extract texture features from the ROIs using the algorithm coded in MATLAB. One-way analysis of variance with Tukey's post-hoc test was used to compare the parameters (SPSS version 19.0, SPSS Inc., USA). A <italic>p</italic> value of less than 0.05 was considered statistically significant.</p>
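For instance, the five first-order parameters (method 1) depend only on an ROI's gray-level distribution and can be computed in a few lines. This numpy sketch uses non-excess kurtosis, under which a Gaussian scores about 3, consistent with the magnitudes in Table 1; it is an illustration, not the study's MATLAB code:

```python
import numpy as np

def first_order_stats(roi):
    """Parameters 1-5: mean, median, SD, skewness, kurtosis of pixel values."""
    x = np.asarray(roi, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma                     # standardized gray levels
    return {"mean": mu, "median": np.median(x), "std": sigma,
            "skewness": np.mean(z ** 3),     # asymmetry of the histogram
            "kurtosis": np.mean(z ** 4)}     # non-excess: ~3 for a Gaussian

stats = first_order_stats([[10, 20], [30, 40]])
print(stats["mean"], round(stats["kurtosis"], 2))  # 25.0 1.64
```

The remaining methods follow the same recipe: each reduces the ROI to a handful of numbers, so all 44 parameters together form one feature vector per ROI.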
<table-wrap id="T1" position="float">
<?Table Small?>
<label>Table 1</label>
<caption>
<title>Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns</title>
</caption>
<alternatives>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jvs-20-e44-i001"></graphic>
<table frame="hsides" rules="rows">
<col width="3.72%"/>
<col width="16.72%"/>
<col width="17.96%"/>
<col width="21.05%"/>
<col width="21.05%"/>
<col width="19.5%"/>
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="2" style="background-color:rgb(238,248,254)">Parameters</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P1</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P2</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P3</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P4</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6">Five parameters of first-order statistics method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">1<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">46.82 &#x00b1; 18.71</td>
<td valign="top" align="center" rowspan="1" colspan="1">117.07 &#x00b1; 36.18</td>
<td valign="top" align="center" rowspan="1" colspan="1">94.09 &#x00b1; 28.87</td>
<td valign="top" align="center" rowspan="1" colspan="1">72.4 &#x00b1; 19.63</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">2<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">45.54 &#x00b1; 18.81</td>
<td valign="top" align="center" rowspan="1" colspan="1">117.4 &#x00b1; 36.17</td>
<td valign="top" align="center" rowspan="1" colspan="1">93.51 &#x00b1; 29.29</td>
<td valign="top" align="center" rowspan="1" colspan="1">71.6 &#x00b1; 19.82</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">3<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">6.16 &#x00b1; 3.72</td>
<td valign="top" align="center" rowspan="1" colspan="1">11.09 &#x00b1; 4.8</td>
<td valign="top" align="center" rowspan="1" colspan="1">11.73 &#x00b1; 4.46</td>
<td valign="top" align="center" rowspan="1" colspan="1">8.34 &#x00b1; 2.27</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">4<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.89 &#x00b1; 0.78</td>
<td valign="top" align="center" rowspan="1" colspan="1">-0.02 &#x00b1; 0.38</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.21 &#x00b1; 0.45</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.54 &#x00b1; 0.53</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">5<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">5.27 &#x00b1; 6.68</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.63 &#x00b1; 0.55</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.83 &#x00b1; 0.83</td>
<td valign="top" align="center" rowspan="1" colspan="1">3.68 &#x00b1; 1.82</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6" style="background-color:rgb(238,248,254)">Eleven parameters of spatial gray-level-dependence matrix method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">6<sup>&#x2020;</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.096 &#x00b1; 0.807</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.006 &#x00b1; 0.010</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.004 &#x00b1; 0.006</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.007 &#x00b1; 0.011</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">7<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">7.48 &#x00b1; 4.33</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">15.83 &#x00b1; 7.40</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">22.2 &#x00b1; 9.35</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">15.94 &#x00b1; 5.36</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">8</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.95 &#x00b1; 1.20</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.89 &#x00b1; 0.08</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.89 &#x00b1; 0.05</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.84 &#x00b1; 0.07</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">9<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">50.05 &#x00b1; 69.00</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">145.82 &#x00b1; 136.78</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">156.48 &#x00b1; 130.35</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">74.19 &#x00b1; 43.13</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">10<sup>&#x2020;</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">1.08 &#x00b1; 7.67</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.30 &#x00b1; 0.08</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.26 &#x00b1; 0.06</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.30 &#x00b1; 0.07</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">11<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">95.23 &#x00b1; 37.57</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">236.08 &#x00b1; 72.34</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">190.11 &#x00b1; 57.76</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">146.61 &#x00b1; 39.25</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">12<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">192.7 &#x00b1; 273.6</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">567.5 &#x00b1; 544.9</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">603.7 &#x00b1; 515.3</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">280.8 &#x00b1; 171.4</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">13<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3.43 &#x00b1; 0.69</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">4.16 &#x00b1; 0.51</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">4.24 &#x00b1; 0.38</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3.88 &#x00b1; 0.35</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">14<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">4.66 &#x00b1; 1.09</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">5.82 &#x00b1; 0.77</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">6.00 &#x00b1; 0.61</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">5.53 &#x00b1; 0.67</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">15<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3.38 &#x00b1; 1.79</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">6.25 &#x00b1; 2.80</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">8.85 &#x00b1; 3.78</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">6.52 &#x00b1; 2.17</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">16<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">1.72 &#x00b1; 0.77</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.05 &#x00b1; 0.29</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.23 &#x00b1; 0.24</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.04 &#x00b1; 0.27</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<p>P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.</p>
<p><sup>*</sup><italic>p</italic>-value &lt; 0.001; <sup>&#x2020;</sup><italic>p</italic>-value &lt; 0.05.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap id="T2" position="float">
<?Table Small?>
<label>Table 2</label>
<caption>
<title>Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns</title>
</caption>
<alternatives>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jvs-20-e44-i002"></graphic>
<table frame="hsides" rules="rows">
<col width="3.72%"/>
<col width="16.72%"/>
<col width="17.96%"/>
<col width="21.05%"/>
<col width="21.05%"/>
<col width="19.5%"/>
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="2" style="background-color:rgb(238,248,254)">Parameters</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P1</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P2</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P3</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P4</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6">Four parameters of gray-level-difference statistics method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">17<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">7.47 &#x00b1; 4.33</td>
<td valign="top" align="center" rowspan="1" colspan="1">15.79 &#x00b1; 7.39</td>
<td valign="top" align="center" rowspan="1" colspan="1">22.14 &#x00b1; 9.32</td>
<td valign="top" align="center" rowspan="1" colspan="1">15.91 &#x00b1; 5.34</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">18<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.25 &#x00b1; 0.18</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.15 &#x00b1; 0.05</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.12 &#x00b1; 0.04</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.15 &#x00b1; 0.06</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">19<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">1.68 &#x00b1; 0.38</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.06 &#x00b1; 0.29</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.24 &#x00b1; 0.24</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.05 &#x00b1; 0.27</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">20<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">1.90 &#x00b1; 0.68</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.96 &#x00b1; 0.74</td>
<td valign="top" align="center" rowspan="1" colspan="1">3.52 &#x00b1; 0.72</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.96 &#x00b1; 0.57</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6" style="background-color:rgb(238,248,254)">Seven parameters of gray-level run length image statistics method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">21<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.64 &#x00b1; 0.17</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.66 &#x00b1; 0.08</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.69 &#x00b1; 0.05</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.68 &#x00b1; 0.07</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">22<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">10.80 &#x00b1; 34.61</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">4.53 &#x00b1; 2.18</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3.62 &#x00b1; 1.01</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3.94 &#x00b1; 1.78</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">23<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">607.0 &#x00b1; 289.0</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">424.9 &#x00b1; 220.7</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">361.1 &#x00b1; 176.7</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">556.9 &#x00b1; 306.1</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">24</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">17.55 &#x00b1; 188.72</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.30 &#x00b1; 0.34</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.44 &#x00b1; 0.24</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2.40 &#x00b1; 0.33</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">25<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">1,795.0 &#x00b1; 937.2</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">1,790.8 &#x00b1; 897.0</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">1,687.2 &#x00b1; 816.6</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2,225.6 &#x00b1; 1,166.7</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">26<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">44.93 &#x00b1; 34.66</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">74.13 &#x00b1; 17.97</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">68.66 &#x00b1; 18.07</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">53.79 &#x00b1; 19.41</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">27<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">23.99 &#x00b1; 5.60</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">20.10 &#x00b1; 4.59</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">18.63 &#x00b1; 3.75</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">22.96 &#x00b1; 5.48</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6">Five parameters of neighborhood gray-tone difference matrix method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">28<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">11.78 &#x00b1; 5.14</td>
<td valign="top" align="center" rowspan="1" colspan="1">17.84 &#x00b1; 8.63</td>
<td valign="top" align="center" rowspan="1" colspan="1">15.97 &#x00b1; 5.41</td>
<td valign="top" align="center" rowspan="1" colspan="1">12.65 &#x00b1; 4.63</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">29<sup>&#x2020;</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.18 &#x00b1; 0.38</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.19 &#x00b1; 0.18</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.25 &#x00b1; 0.40</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.22 &#x00b1; 0.35</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">30</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.2E+00 &#x00b1; 2.7E+01</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.8E-05 &#x00b1; 2.2E-05</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.3E-05 &#x00b1; 3.5E-05</td>
<td valign="top" align="center" rowspan="1" colspan="1">3.8E-05 &#x00b1; 4.7E-05</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">31<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">903.8 &#x00b1; 1,158.9</td>
<td valign="top" align="center" rowspan="1" colspan="1">2,624.0 &#x00b1; 2,024.0</td>
<td valign="top" align="center" rowspan="1" colspan="1">3,280.2 &#x00b1; 3,087.1</td>
<td valign="top" align="center" rowspan="1" colspan="1">1,784.7 &#x00b1; 1,112.9</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">32<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">10,946.1 &#x00b1; 18,069.9</td>
<td valign="top" align="center" rowspan="1" colspan="1">30,058.0 &#x00b1; 41,868.7</td>
<td valign="top" align="center" rowspan="1" colspan="1">28,759.5 &#x00b1; 31,153.7</td>
<td valign="top" align="center" rowspan="1" colspan="1">14,667.1 &#x00b1; 13,414.7</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<p>P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.</p>
<p><sup>*</sup><italic>p</italic>-value &lt; 0.001; <sup>&#x2020;</sup><italic>p</italic>-value &lt; 0.05.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap id="T3" position="float">
<?Table Small?>
<label>Table 3</label>
<caption>
<title>Mean and standard deviation values for texture analysis parameters used for distinguishing radiographic lung patterns</title>
</caption>
<alternatives>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jvs-20-e44-i003"></graphic>
<table frame="hsides" rules="rows">
<col width="3.72%"/>
<col width="16.72%"/>
<col width="17.96%"/>
<col width="21.05%"/>
<col width="21.05%"/>
<col width="19.5%"/>
<thead>
<tr>
<th valign="top" align="left" rowspan="1" colspan="2" style="background-color:rgb(238,248,254)">Parameters</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P1</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P2</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P3</th>
<th valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P4</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6">Four parameters of fractal dimension texture analysis method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">33<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.21 &#x00b1; 0.04</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.23 &#x00b1; 0.07</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.30 &#x00b1; 0.06</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.25 &#x00b1; 0.05</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">34<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.31 &#x00b1; 0.05</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.36 &#x00b1; 0.07</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.39 &#x00b1; 0.05</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.32 &#x00b1; 0.05</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">35<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">0.33 &#x00b1; 0.09</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.39 &#x00b1; 0.07</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.31 &#x00b1; 0.07</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.30 &#x00b1; 0.07</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">36</td>
<td valign="top" align="center" rowspan="1" colspan="1">11.56 &#x00b1; 138.15</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.27 &#x00b1; 0.13</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.15 &#x00b1; 0.11</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.21 &#x00b1; 0.12</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6" style="background-color:rgb(238,248,254)">Two parameters of the Fourier power spectrum method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">37<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">2,112.7 &#x00b1; 931.7</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">4,893.7 &#x00b1; 1,863.6</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3,624.7 &#x00b1; 1,418.0</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">3,156.7 &#x00b1; 1,024.0</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)"></td>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">38<sup>&#x2020;</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">296.9 &#x00b1; 862.3</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">408.5 &#x00b1; 276.4</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">336.4 &#x00b1; 157.6</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">291.3 &#x00b1; 133.7</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="6">Six parameters of Laws' texture energy measures method</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">39<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">19,970.2 &#x00b1; 13,453.3</td>
<td valign="top" align="center" rowspan="1" colspan="1">39,054.6 &#x00b1; 19,385.4</td>
<td valign="top" align="center" rowspan="1" colspan="1">39,921.5 &#x00b1; 16,661.1</td>
<td valign="top" align="center" rowspan="1" colspan="1">27,429.1 &#x00b1; 9,010.2</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">40<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">257.6 &#x00b1; 78.7</td>
<td valign="top" align="center" rowspan="1" colspan="1">392.4 &#x00b1; 110.6</td>
<td valign="top" align="center" rowspan="1" colspan="1">553.8 &#x00b1; 150.2</td>
<td valign="top" align="center" rowspan="1" colspan="1">431.3 &#x00b1; 82.2</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">41<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">67.45 &#x00b1; 130.02</td>
<td valign="top" align="center" rowspan="1" colspan="1">81.49 &#x00b1; 20.84</td>
<td valign="top" align="center" rowspan="1" colspan="1">95.44 &#x00b1; 19.50</td>
<td valign="top" align="center" rowspan="1" colspan="1">85.00 &#x00b1; 17.53</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">42<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">1,405.6 &#x00b1; 428.4</td>
<td valign="top" align="center" rowspan="1" colspan="1">2,441.3 &#x00b1; 784.6</td>
<td valign="top" align="center" rowspan="1" colspan="1">3,776.8 &#x00b1; 1,169.4</td>
<td valign="top" align="center" rowspan="1" colspan="1">2,573.1 &#x00b1; 452.7</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">43<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">117.5 &#x00b1; 51.2</td>
<td valign="top" align="center" rowspan="1" colspan="1">165.1 &#x00b1; 42.5</td>
<td valign="top" align="center" rowspan="1" colspan="1">208.7 &#x00b1; 48.6</td>
<td valign="top" align="center" rowspan="1" colspan="1">177.7 &#x00b1; 35.0</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">44<sup>*</sup></td>
<td valign="top" align="center" rowspan="1" colspan="1">482.4 &#x00b1; 147.1</td>
<td valign="top" align="center" rowspan="1" colspan="1">725.7 &#x00b1; 205.6</td>
<td valign="top" align="center" rowspan="1" colspan="1">1,027.4 &#x00b1; 281.6</td>
<td valign="top" align="center" rowspan="1" colspan="1">796.3 &#x00b1; 149.3</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<p>P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.</p>
<p><sup>*</sup><italic>p</italic>-value &lt; 0.001; <sup>&#x2020;</sup><italic>p</italic>-value &lt; 0.05.</p>
</table-wrap-foot>
</table-wrap>
<p>All of the parameters from the ROIs were used as inputs to the artificial neural networks. The selected data samples were divided such that 70% were used to train the algorithms and 30% were used to test the models (261 training samples and 139 testing samples for each lung pattern). The detailed settings of the algorithm were determined through trial and error in a pilot study, selecting the configuration with the highest performance (<xref ref-type="fig" rid="F1">Fig. 1</xref>). Other configurations followed the default settings of MATLAB. The performance of the classifier was evaluated by determining accuracy ([correct predictions]/[number of samples]) and the area under the receiver operating characteristic curve (AUC).</p>
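<p>The split and evaluation steps above can be sketched as follows. This is a minimal illustrative sketch in Python, not the authors' MATLAB pipeline; the function names and the pairwise formulation of AUC are assumptions for illustration.</p>

```python
import random

def split_70_30(samples, seed=0):
    # Shuffle, then hold out 70% for training and 30% for testing.
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = round(len(samples) * 0.7)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

def accuracy(predicted, actual):
    # Accuracy = correct predictions / number of samples.
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def auc(scores, labels):
    # One-vs-rest AUC: probability that a positive sample outscores a
    # negative one (ties count half) -- equivalent to the area under
    # the ROC curve for a binary "this pattern vs. the rest" task.
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

train, test = split_70_30(list(range(400)))
acc = accuracy([1, 2, 3, 4], [1, 2, 3, 3])  # 3 of 4 correct -> 0.75
```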
<fig id="F1" position="float" fig-type="figure">
<?Figure Large?>
<label>Figure 1</label>
<caption>
<title>Schematic diagram of the detailed configuration and processing of the artificial neural networks. The artificial neural networks consisted of one input layer with 44 nodes, two hidden layers with 15 nodes each, and one output layer with 4 nodes. Cross entropy was used as the loss function to be minimized iteratively, and Bayesian regularization was used when calculating the gradients needed to update the network weights.</title>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jvs-20-e44-g001"></graphic>
</fig>
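<p>A forward pass through the 44-15-15-4 architecture described in Fig. 1 can be sketched as below. This is illustrative only: the weights are random, and the tanh hidden activation is an assumption, since the paper specifies only the layer sizes, the cross-entropy loss, and Bayesian regularization.</p>

```python
import math
import random

rng = random.Random(42)

# Layer sizes from the figure: 44 texture inputs -> 15 -> 15 -> 4 pattern outputs.
SIZES = [44, 15, 15, 4]

def make_layer(n_in, n_out):
    # Random weight matrix (n_out x n_in) and zero biases, for illustration.
    weights = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

LAYERS = [make_layer(a, b) for a, b in zip(SIZES, SIZES[1:])]

def forward(x):
    # Forward pass: tanh hidden units (assumed), softmax over the 4 lung patterns.
    for i, (w, b) in enumerate(LAYERS):
        z = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]
        if i < len(LAYERS) - 1:
            x = [math.tanh(v) for v in z]
        else:
            m = max(z)                        # subtract max for numerical stability
            e = [math.exp(v - m) for v in z]
            s = sum(e)
            x = [v / s for v in e]
    return x

probs = forward([0.1] * 44)  # class probabilities for one 44-parameter ROI
```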
</sec>

<sec sec-type="results">
<title>RESULTS</title>
<p>The one-way analysis of variance showed significant differences between lung patterns for 40 of the 44 texture parameters. Parameters 6, 10, 29, and 38 showed <italic>p</italic> values less than 0.05, and parameters 1, 2, 3, 4, 5, 7, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 31, 32, 33, 34, 35, 37, 39, 40, 41, 42, 43, and 44 showed <italic>p</italic> values less than 0.001. In the post-hoc test, the parameters showing significant differences between all pairs of lung patterns (P1-P2, P1-P3, P1-P4, P2-P3, P2-P4, and P3-P4) were 1, 3, 4, 11, 14, 26, 31, 33, 34, 37, 40, 43, and 44. The accuracy of the artificial neural networks was 99.1% on the training set and 91.9% on the testing set. In addition, the AUCs were above 0.98 in the training set and above 0.92 in the testing set (<xref ref-type="table" rid="T4">Table 4</xref>).</p>
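<p>The one-way analysis of variance used to screen the parameters reduces to the F statistic below; this is a minimal sketch (significance would then be read from the F distribution with k-1 and n-k degrees of freedom, as a statistics package would do).</p>

```python
def one_way_anova_f(groups):
    # F = between-group mean square / within-group mean square.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three small illustrative groups: the third group's mean is far from the others,
# so the between-group variance dominates and F is large.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```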
<table-wrap id="T4" position="float">
<?Table Small?>
<label>Table 4</label>
<caption>
<title>Performance of the artificial neural networks for lung pattern classification</title>
</caption>
<alternatives>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jvs-20-e44-i004"></graphic>
<table frame="hsides" rules="rows">
<col width="23.8%"/>
<col width="17.47%"/>
<col width="12.65%"/>
<col width="16.27%"/>
<col width="15.36%"/>
<col width="14.46%"/>
<thead>
<tr>
<th valign="middle" align="left" rowspan="2" colspan="1" style="background-color:rgb(238,248,254)">Dataset</th>
<th valign="middle" align="center" rowspan="2" colspan="1" style="background-color:rgb(238,248,254)">Accuracy (%)</th>
<th valign="middle" align="center" rowspan="1" colspan="4" style="background-color:rgb(238,248,254)">AUC</th>
</tr>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P1</th>
<th valign="middle" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P2</th>
<th valign="middle" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P3</th>
<th valign="middle" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">P4</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Training dataset</td>
<td valign="top" align="center" rowspan="1" colspan="1">99.1</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.00</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.00</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.98</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.99</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">Testing dataset</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">91.9</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.99</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.93</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.92</td>
<td valign="top" align="center" rowspan="1" colspan="1" style="background-color:rgb(238,248,254)">0.94</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<p>AUC, area under the receiver operating characteristic curve; P1, normal lung; P2, alveolar pattern; P3, bronchial pattern; P4, unstructured interstitial pattern.</p>
</table-wrap-foot>
</table-wrap>
</sec>

<sec sec-type="discussion">
<title>DISCUSSION</title>
<p>Previous studies of computer-aided detection focused on disease detection by using a small-sized ROI [<xref ref-type="bibr" rid="B19">19</xref><xref ref-type="bibr" rid="B25">25</xref><xref ref-type="bibr" rid="B30">30</xref><xref ref-type="bibr" rid="B33">33</xref>]. Diagnosis based on a small-sized ROI may lead to misdiagnosis because it does not include a global impression or multiple findings. In addition, non-radiological information, such as signalment or clinical signs, which may be important in detecting a disease, cannot be expressed by an ROI. Accordingly, a single ROI may be more appropriate for detecting radiological findings than for differentiating diseases. Therefore, we attempted to use ROIs to classify specific lung patterns.</p>
<p>Lung patterns are broadly divided into normal, alveolar, interstitial (including unstructured and nodular patterns), bronchial, and vascular patterns [<xref ref-type="bibr" rid="B34">34</xref>]. In this study, the nodular interstitial and vascular patterns were excluded because of the ROI selection criteria that were used. Ribs in an ROI can affect texture parameters; thus, each ROI had to be obtained in the intercostal space and had to be small. A small ROI would capture a square region filled with soft tissue opacity rather than an ROI containing nodules, and it would also be limited in comparing arteries and veins or in expressing vascular branching. In addition, mixed patterns were excluded from this study. If all mixed patterns were classified into one class, it would be difficult to obtain consistent texture characteristics, and if mixed patterns were classified according to the prominent pattern, the criteria for selecting the prominent pattern would be ambiguous. Moreover, even if a lobe were classified as a mixed pattern, a small ROI would be highly likely to contain only a single pattern. Therefore, nodular, vascular, and mixed patterns were excluded.</p>
<p>A texture parameter commonly used in diagnostic imaging such as computed tomography is the average Hounsfield unit value of an ROI. However, other features, such as fineness or coarseness, homogeneous or heterogeneous enhancement, and well-delineated or ill-defined margins, depend on subjective evaluation. Even if differentiation of these textures is based on a consensus between radiologists, it is difficult to agree on their extent. Texture analysis is useful here because it quantifies such features. Although texture parameter values may vary, even in the same region of the same patient, depending on the image acquisition equipment or post-processing kernel used, texture analysis can still be used to compare ROIs obtained from the same equipment. Therefore, this study applied texture analysis to ROIs acquired from the same equipment, and 40 of the 44 texture parameters showed significant differences between lung patterns. These results suggest that such parameters can be used for lung pattern classification. Furthermore, texture analysis may be used to evaluate clinically important patterns and to assess the quality of image acquisition equipment.</p>
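<p>As a concrete example of how texture can be quantified, first-order statistics reduce an ROI's gray-level distribution to numbers such as mean, standard deviation, and entropy. A minimal sketch follows, using hypothetical pixel values; the exact parameter definitions used in the study may differ.</p>

```python
import math
from collections import Counter

def first_order_stats(roi):
    # Quantify an ROI's gray levels: mean, standard deviation (population),
    # and Shannon entropy of the gray-level histogram in bits.
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    counts = Counter(roi)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return mean, math.sqrt(var), entropy

# Hypothetical 8-pixel ROI with three distinct gray levels.
roi = [10, 10, 12, 12, 14, 14, 14, 14]
m, s, h = first_order_stats(roi)  # mean 12.5, entropy 1.5 bits
```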
<p>The number of parameters included is one of the important factors in constructing a predictive model. Too many parameters may reduce the performance of the model by causing unnecessary fluctuation [<xref ref-type="bibr" rid="B35">35</xref>]. In this study, all 44 parameters were used to train the artificial neural networks, resulting in an accuracy of 91.9% on the testing dataset. In a pilot study, models trained using the parameters from each analysis alone showed substantially lower performance on the testing set than the study model: first-order statistics (5 parameters), 63.8% accuracy; spatial gray-level-dependence matrix (11 parameters), 70.5%; gray-level-difference statistics (4 parameters), 54.4%; gray-level run length image statistics (7 parameters), 60.7%; neighborhood gray-tone difference matrix (5 parameters), 55.6%; fractal dimension analysis (4 parameters), 57.6%; Fourier power spectrum (2 parameters), 47.0%; and Laws' texture energy measures (6 parameters), 68.6%. These results indicate that all of the selected parameters can positively influence the performance of the lung pattern artificial neural networks. Parameters 11, 14, 31, 40, 43, and 44 produced <italic>p</italic> values less than 0.001 in the one-way analysis of variance and showed significant differences between all of the lung patterns in the post-hoc test. Thus, parameters 11, 14, 31, 40, 43, and 44 were the most important parameters for lung pattern classification.</p>
<p>The accuracy obtained on the training data is generally higher than, or similar to, that obtained on the test data because, during machine learning, the model is fitted to the training data in order to obtain reliable predictions on general, untrained data. If the model is overfitted to the training data, it yields poor performance because it overreacts to minor fluctuations in the training data. In the performance evaluation of the artificial neural networks developed in this study, only small gaps were present between the training and testing datasets, and accuracy remained high. These results demonstrate that the model successfully learned generalized trends in the textures of the lung patterns studied.</p>
<p>We are aware of several limitations of this study. All of the radiographs were generated by a single machine, and it is difficult to judge how much the texture parameters might be altered in images acquired from other equipment. To validate the results of this study objectively, data from other computed or digital radiography systems should be assessed. In this study, texture analysis was applied to 8-bit converted radiographs for computation in MATLAB, whereas the actual computed radiographs were 12-bit images. Intensity information may be partially lost or merged during the conversion, and further research is needed to assess whether such conversion has a positive or negative effect on accuracy. In this study, lung lobes disputed by the evaluators were excluded in order to minimize false negatives associated with the radiologists; therefore, the data in this study might be composed only of obvious patterns. An adequate number of unclear lung patterns should be included in further validation studies. Finally, a reproducibility evaluation of each radiologist was not undertaken, and ROI selection was performed by a single evaluator. Even if all the evaluators agreed on the pattern classification of the lobes, the artificial neural networks might have been influenced by the subjective judgment of the evaluator who selected the ROIs.</p>
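<p>The 12-bit to 8-bit conversion discussed above inherently merges nearby gray levels. A minimal sketch of such a rescaling follows; the exact conversion performed by MATLAB or the acquisition software may differ.</p>

```python
def to_8bit(pixels_12bit):
    # Rescale 12-bit values (0-4095) to 8-bit (0-255). Roughly 16 adjacent
    # 12-bit gray levels map to each 8-bit value, so fine intensity
    # differences are lost in the conversion.
    return [round(v * 255 / 4095) for v in pixels_12bit]

low, high = to_8bit([0, 4095])   # endpoints map to 0 and 255
merged = to_8bit([3, 5])         # two distinct 12-bit values collapse to one
```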
<p>This study attempted to evaluate the utility of texture analysis and machine learning to discriminate four specific lung patterns. A number of texture parameters showed significant differences between the patterns. The developed artificial neural networks demonstrated high performance in discriminating the patterns. Texture analysis and machine learning algorithms may have potential for application in the evaluation of medical images.</p>
</sec>
</body>

<back>
<fn-group>
<fn fn-type="conflict">
<label>Conflict of Interest</label>
<p>The authors declare no conflicts of interest.</p>
</fn>

<fn fn-type="con">
<label>Author Contributions</label>
  <p>
  <list list-type="simple">
    <list-item>
      <p><bold>Conceptualization:</bold> Yoon Y.</p>
    </list-item>
    <list-item>
      <p><bold>Data curation:</bold> Yoon Y, Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Formal analysis:</bold> Yoon Y, Hwang T, Choi H, Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Methodology:</bold> Yoon Y.</p>
    </list-item>
    <list-item>
      <p><bold>Project administration:</bold> Yoon Y, Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Resources:</bold> Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Software:</bold> Yoon Y, Hwang T.</p>
    </list-item>
    <list-item>
      <p><bold>Supervision:</bold> Choi H, Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Validation:</bold> Choi H, Lee H.</p>
    </list-item>
    <list-item>
      <p><bold>Visualization:</bold> Yoon Y.</p>
    </list-item>
    <list-item>
      <p><bold>Writing - original draft:</bold> Yoon Y.</p>
    </list-item>
    <list-item>
      <p><bold>Writing - review &#x0026; editing:</bold> Yoon Y.</p>
    </list-item>
  </list>
  </p>
</fn>
</fn-group>

<ref-list>
<ref id="B1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Akay</surname>
<given-names>MF</given-names>
</name>
</person-group>
<article-title>Support vector machines combined with feature selection for breast cancer diagnosis</article-title>
<source>Expert Syst Appl</source>
<year>2009</year>
<volume>36</volume>
<fpage>3240</fpage>
<lpage>3247</lpage>
</element-citation>
</ref>
<ref id="B2">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baxt</surname>
<given-names>WG</given-names>
</name>
</person-group>
<article-title>Use of an artificial neural network for the diagnosis of myocardial infarction</article-title>
<source>Ann Intern Med</source>
<year>1991</year>
<volume>115</volume>
<fpage>843</fpage>
<lpage>848</lpage>
</element-citation>
</ref>

<ref id="B3">
<label>3</label>
    <element-citation publication-type="book">
      <person-group person-group-type="author">
        <name>
          <surname>Carbonell</surname>
          <given-names>JG</given-names>
        </name>
        <name>
          <surname>Michalski</surname>
          <given-names>RS</given-names>
        </name>
        <name>
          <surname>Mitchell</surname>
          <given-names>TM</given-names>
        </name>
      </person-group>
      <source>An Overview of Machine Learning</source>
      <publisher-loc>Heidelberg</publisher-loc>
      <publisher-name>Springer</publisher-name>
      <year>1983</year>
      <fpage>3</fpage>
      <lpage>23</lpage>
    </element-citation>
</ref>
<ref id="B4">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>CC</given-names>
</name>
<name>
<surname>Daponte</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>MD</given-names>
</name>
</person-group>
<article-title>Fractal feature analysis and classification in medical imaging</article-title>
<source>IEEE Trans Med Imaging</source>
<year>1989</year>
<volume>8</volume>
<fpage>133</fpage>
<lpage>142</lpage>
</element-citation>
</ref>
<ref id="B5">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>EL</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>PC</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>CL</given-names>
</name>
<name>
<surname>Tsai</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>CI</given-names>
</name>
</person-group>
<article-title>An automatic diagnostic system for CT liver image classification</article-title>
<source>IEEE Trans Biomed Eng</source>
<year>1998</year>
<volume>45</volume>
<fpage>783</fpage>
<lpage>794</lpage>
</element-citation>
</ref>
<ref id="B6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>HL</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>CC</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>XG</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>SJ</given-names>
</name>
</person-group>
<article-title>An efficient diagnosis system for detection of Parkinson's disease using fuzzy k-nearest neighbor approach</article-title>
<source>Expert Syst Appl</source>
<year>2013</year>
<volume>40</volume>
<fpage>263</fpage>
<lpage>271</lpage>
</element-citation>
</ref>

<ref id="B7">
<label>7</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Christensen</surname>
<given-names>O</given-names>
</name>
</person-group>
<source>Functions, Spaces, and Expansions</source>
<edition>1st ed</edition>
<publisher-loc>Boston</publisher-loc>
<publisher-name>Birkh&#x00E4;user</publisher-name>
<year>2010</year>
<fpage>159</fpage>
<lpage>180</lpage>
</element-citation>
</ref>
<ref id="B8">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cooley</surname>
<given-names>JW</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>PD</given-names>
</name>
</person-group>
<article-title>The fast Fourier transform and its applications</article-title>
<source>IEEE Trans Educ</source>
<year>1969</year>
<volume>12</volume>
<fpage>27</fpage>
<lpage>34</lpage>
</element-citation>
</ref>
<ref id="B9">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>French</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>The time traveller&#x2019;s CAPM</article-title>
<source>Invest Anal J</source>
<year>2017</year>
<volume>46</volume>
<fpage>81</fpage>
<lpage>96</lpage>
</element-citation>
</ref>
<ref id="B10">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ganesan</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Venkatesh</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Rama</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Palani</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Application of neural networks in diagnosing cancer disease using demographic data</article-title>
<source>Int J Comput Appl</source>
<year>2010</year>
<volume>1</volume>
<fpage>76</fpage>
<lpage>85</lpage>
</element-citation>
</ref>
<ref id="B11">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gelhar</surname>
<given-names>LW</given-names>
</name>
<name>
<surname>Axness</surname>
<given-names>CL</given-names>
</name>
</person-group>
<article-title>Three&#x2010;dimensional stochastic analysis of macrodispersion in aquifers</article-title>
<source>Water Resour Res</source>
<year>1983</year>
<volume>19</volume>
<fpage>161</fpage>
<lpage>180</lpage>
</element-citation>
</ref>
<ref id="B12">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gilbert</surname>
<given-names>FJ</given-names>
</name>
<name>
<surname>Astley</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Gillan</surname>
<given-names>MG</given-names>
</name>
<name>
<surname>Agbaje</surname>
<given-names>OF</given-names>
</name>
<name>
<surname>Wallis</surname>
<given-names>MG</given-names>
</name>
<name>
<surname>James</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Boggis</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Duffy</surname>
<given-names>SW</given-names>
</name>
</person-group>
<collab>CADET II Group</collab>
<article-title>Single reading with computer-aided detection for screening mammography</article-title>
<source>N Engl J Med</source>
<year>2008</year>
<volume>359</volume>
<fpage>1675</fpage>
<lpage>1684</lpage>
</element-citation>
</ref>
<ref id="B13">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haralick</surname>
<given-names>RM</given-names>
</name>
</person-group>
<article-title>Statistical and structural approaches to texture</article-title>
<source>Proc IEEE</source>
<year>1979</year>
<volume>67</volume>
<fpage>786</fpage>
<lpage>804</lpage>
</element-citation>
</ref>
<ref id="B14">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haralick</surname>
<given-names>RM</given-names>
</name>
<name>
<surname>Shanmugam</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Dinstein</surname>
<given-names>IH</given-names>
</name>
</person-group>
<article-title>Textural features for image classification</article-title>
<source>IEEE Trans Syst Man Cybern</source>
<year>1973</year>
<volume>3</volume>
<fpage>610</fpage>
<lpage>621</lpage>
</element-citation>
</ref>
<ref id="B15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harms</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Gunzer</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Aus</surname>
<given-names>HM</given-names>
</name>
</person-group>
<article-title>Combined local color and texture analysis of stained cells</article-title>
<source>Comput Vis Graph Image Process</source>
<year>1986</year>
<volume>33</volume>
<fpage>364</fpage>
<lpage>376</lpage>
</element-citation>
</ref>
<ref id="B16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Insana</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Garra</surname>
<given-names>BS</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>DG</given-names>
</name>
<name>
<surname>Shawker</surname>
<given-names>TH</given-names>
</name>
</person-group>
<article-title>Analysis of ultrasound image texture via generalized Rician statistics</article-title>
<source>Opt Eng</source>
<year>1986</year>
<volume>25</volume>
<elocation-id>256743</elocation-id>
</element-citation>
</ref>
<ref id="B17">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ji</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Engel</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Craine</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Texture analysis for classification of cervix lesions</article-title>
<source>IEEE Trans Med Imaging</source>
<year>2000</year>
<volume>19</volume>
<fpage>1144</fpage>
<lpage>1149</lpage>
</element-citation>
</ref>
<ref id="B18">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Metaxas</surname>
<given-names>DN</given-names>
</name>
</person-group>
<article-title>Computer-aided diagnosis of mammographic masses using scalable image retrieval</article-title>
<source>IEEE Trans Biomed Eng</source>
<year>2015</year>
<volume>62</volume>
<fpage>783</fpage>
<lpage>792</lpage>
</element-citation>
</ref>
<ref id="B19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kadah</surname>
<given-names>YM</given-names>
</name>
<name>
<surname>Farag</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Zurada</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Badawi</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Youssef</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Classification algorithms for quantitative tissue characterization of diffuse liver disease from ultrasound images</article-title>
<source>IEEE Trans Med Imaging</source>
<year>1996</year>
<volume>15</volume>
<fpage>466</fpage>
<lpage>478</lpage>
</element-citation>
</ref>
<ref id="B20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mahmoud-Ghoneim</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Toussaint</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Constans</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>de Certaines</surname>
<given-names>JD</given-names>
</name>
</person-group>
<article-title>Three dimensional texture analysis in MRI: a preliminary evaluation in gliomas</article-title>
<source>Magn Reson Imaging</source>
<year>2003</year>
<volume>21</volume>
<fpage>983</fpage>
<lpage>987</lpage>
</element-citation>
</ref>
<ref id="B21">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Malon</surname>
<given-names>CD</given-names>
</name>
<name>
<surname>Cosatto</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Classification of mitotic figures with convolutional neural networks and seeded blob features</article-title>
<source>J Pathol Inform</source>
<year>2013</year>
<volume>4</volume>
<fpage>9</fpage>
</element-citation>
</ref>
<ref id="B22">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mathias</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Tofts</surname>
<given-names>PS</given-names>
</name>
<name>
<surname>Losseff</surname>
<given-names>NA</given-names>
</name>
</person-group>
<article-title>Texture analysis of spinal cord pathology in multiple sclerosis</article-title>
<source>Magn Reson Med</source>
<year>1999</year>
<volume>42</volume>
<fpage>929</fpage>
<lpage>935</lpage>
</element-citation>
</ref>
<ref id="B23">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polat</surname>
<given-names>K</given-names>
</name>
<name>
<surname>G&#x00FC;ne&#x015F;</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Breast cancer diagnosis using least square support vector machine</article-title>
<source>Digit Signal Process</source>
<year>2007</year>
<volume>17</volume>
<fpage>694</fpage>
<lpage>701</lpage>
</element-citation>
</ref>
<ref id="B24">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ravandi</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Toriumi</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Fourier transform analysis of plain weave fabric appearance</article-title>
<source>Text Res J</source>
<year>1995</year>
<volume>65</volume>
<fpage>676</fpage>
<lpage>683</lpage>
</element-citation>
</ref>
<ref id="B25">
<label>25</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Seiffert</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Khoshgoftaar</surname>
<given-names>TM</given-names>
</name>
<name>
<surname>Van Hulse</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Napolitano</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>A comparative study of data sampling and cost sensitive learning</article-title>
<conf-name>IEEE International Conference on Data Mining Workshops</conf-name>
<conf-date>December 15-19, 2008</conf-date>
<conf-loc>Pisa, Italy</conf-loc>
</element-citation>
</ref>
<ref id="B26">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sengupta</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Sahidullah</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Saha</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Lung sound classification using cepstral-based statistical features</article-title>
<source>Comput Biol Med</source>
<year>2016</year>
<volume>75</volume>
<fpage>118</fpage>
<lpage>129</lpage>
</element-citation>
</ref>
<ref id="B27">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shiraishi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Appelbaum</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Doi</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Computer-aided diagnosis and artificial intelligence in clinical imaging</article-title>
<source>Semin Nucl Med</source>
<year>2011</year>
<volume>41</volume>
<fpage>449</fpage>
<lpage>462</lpage>
</element-citation>
</ref>
<ref id="B28">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sujana</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Swarnamani</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Suresh</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Application of artificial neural networks for the classification of liver lesions by image texture parameters</article-title>
<source>Ultrasound Med Biol</source>
<year>1996</year>
<volume>22</volume>
<fpage>1177</fpage>
<lpage>1181</lpage>
</element-citation>
</ref>

<ref id="B29">
<label>29</label>
<element-citation publication-type="book">
<person-group person-group-type="editor">
<name>
<surname>Schwarz</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>V</given-names>
</name>
</person-group>
<source>BSAVA Manual of Canine and Feline Thoracic Imaging</source>
<publisher-loc>Quedgeley</publisher-loc>
<publisher-name>British Small Animal Veterinary Association</publisher-name>
<year>2008</year>
<fpage>250</fpage>
<lpage>260</lpage>
</element-citation>
</ref>
<ref id="B30">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tu</surname>
<given-names>JV</given-names>
</name>
</person-group>
<article-title>Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes</article-title>
<source>J Clin Epidemiol</source>
<year>1996</year>
<volume>49</volume>
<fpage>1225</fpage>
<lpage>1231</lpage>
</element-citation>
</ref>
<ref id="B31">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weszka</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Dyer</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Rosenfeld</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>A comparative study of texture measures for terrain classification</article-title>
<source>IEEE Trans Syst Man Cybern</source>
<year>1976</year>
<volume>6</volume>
<fpage>269</fpage>
<lpage>285</lpage>
</element-citation>
</ref>
<ref id="B32">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Patuwo</surname>
<given-names>BE</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>MY</given-names>
</name>
</person-group>
<article-title>Forecasting with artificial neural networks: the state of the art</article-title>
<source>Int J Forecast</source>
<year>1998</year>
<volume>14</volume>
<fpage>35</fpage>
<lpage>62</lpage>
</element-citation>
</ref>
<ref id="B33">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>X</given-names>
</name>
</person-group>
<article-title>Study of remote sensing image texture analysis and classification using wavelet</article-title>
<source>Int J Remote Sens</source>
<year>1998</year>
<volume>19</volume>
<fpage>3197</fpage>
<lpage>3203</lpage>
</element-citation>
</ref>
<ref id="B34">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
</person-group>
<article-title>Font recognition based on global texture analysis</article-title>
<source>IEEE Trans Pattern Anal Mach Intell</source>
<year>2001</year>
<volume>23</volume>
<fpage>1192</fpage>
<lpage>1200</lpage>
</element-citation>
</ref>
<ref id="B35">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zissis</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Xidias</surname>
<given-names>EK</given-names>
</name>
<name>
<surname>Lekkas</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>A cloud based architecture capable of perceiving and predicting multiple vessel behaviour</article-title>
<source>Appl Soft Comput</source>
<year>2015</year>
<volume>35</volume>
<fpage>652</fpage>
<lpage>661</lpage>
</element-citation>
</ref>
</ref-list>

</back>
</article>