Healthc Inform Res, v.25(3)

Sela, Pulungan, Widyaningrum, and Shantiningsih: Method for Automated Selection of the Trabecular Area in Digital Periapical Radiographic Images Using Morphological Operations



The aim of this study is to propose a method that automatically selects the trabecular bone area in digital periapical radiographic images using a sequence of morphological operations.


The study involved 50 digital periapical radiographic images of women aged from 36 to 58 years old. The proposed method consists of three stages: teeth detection, trabecular identification, and validation. A series of morphological operations—top-hat and bottom-hat filtering, automatic thresholding, closing, labeling, global thresholding, and image subtraction—are performed to automatically obtain the trabecular bone area in images. For validation, the results of the proposed method were compared with those of two dentists pixel by pixel. Three parameters were used in the validation: trabecular area, percentage of agreed area, and percentage of disagreed area.


The proposed method obtains the trabecular bone area in a polygon. The obtained trabecular bone area is usually larger than that of previous studies, but is usually smaller than the dentists'. On average over all images, the trabecular area produced by the proposed method is 5.83% smaller than that identified by dentists. Furthermore, the average percentage of agreed area and the average percentage of disagreed area of the proposed method against the dentists' results were 75.22% and 8.75%, respectively.


The shape of the trabecular bone area produced by the proposed method closely matches that identified by the dentists. The method, which consists of only simple morphological operations on digital periapical radiographic images, can be considered for automatic selection of the trabecular bone area.

I. Introduction

Radiographic images are the diagnostic tools most widely used by dentists to assist clinical examination [1] because they can be obtained easily and quickly. Radiographic images can provide data on the internal structure of teeth and other supporting parts to help diagnose dental and oral diseases, such as dental caries [2], periodontitis, tooth fracture, gingivitis, abscess, and interdental bone loss [3]. In addition, in the area of public health, dental periapical radiographic images are becoming an alternative means of assessing the relationship between the mandibular bone and bone mineral density [4-6] and of detecting tumors in the mouth [7]. In the forensic field, they can be used to determine human identity [8]. However, radiographic images have a weakness: they are usually of such low quality that objects (teeth, trabeculae, cortex, etc.) are difficult to identify visually [9-11].
To address this problem, dentists use their abilities and experience to diagnose diseases from patients' dental radiographic images. Each dentist has different abilities and experience, so the diagnoses provided to patients can vary. This inconsistency of diagnosis is detrimental to the patient because treatment decisions can be different and even contradictory [11]. To alleviate this problem, computer-aided systems have been developed to help dentists assess diseases when using radiographic images [12,13].
In general, the stages used to carry out diagnosis in previous studies are collecting dental images, selecting the region of interest (ROI), preprocessing the ROI, feature extraction, and identification [13].
One part of the ROI that is often used by dentists for identifying diseases is the trabecular bone, which is located under the root of the tooth (Figure 1). Previous studies have required image enhancement because of the poor quality of ROI images in the trabecular area [14-16]. Traditionally, the selection of ROIs was usually performed by visual observation and cropping using tools such as Photoshop, Corel Draw, etc. [17].
The selection of ROI areas has been carried out by semiautomatic [18] and non-automatic [19-21] methods. The main weakness of these ROI selection methods is that they still rely on an observer, and it is possible for the observer to select an incorrect ROI. In addition, the size of the selected ROI tends to be too small compared to the trabecular area. The issue of ROI selection may be one of the causes of inaccurate identification results.
Because of the visual limitations of researchers in distinguishing trabecular areas from non-trabecular areas, the obtained trabecular area becomes relatively subjective and does not represent the area of the object being observed. Until now, researchers have had problems in selecting the ROI of the trabecular area in dental radiograph images. These problems should be identified and studied to devise automatic ROI selection methods for the trabeculae.
This paper proposes a method for automated selection of the trabecular area in digital periapical radiographic images using only a series of simple morphological operations. To validate the proposed method, the ROIs selected by the proposed method were compared with the ROIs selected by two experienced dentists pixel by pixel. Computational morphology as the basis of image processing has various types of operations, such as dilation, erosion, closing, filtering, edge detection, region filling, and labeling. In this study, we attempted to develop an automatic ROI selection method for trabeculae that is based on the application of basic morphological operations to distinguish objects on dental radiographic images. The rest of this paper is organized as follows. Previous methods as well as the proposed method for automatic selection of the trabecular bone area are discussed in Section II. We assess the performance of the proposed method in Section III. Finally, Section IV discusses the results and concludes the paper.

II. Methods

In this study, we used 50 digital periapical radiographs obtained from the Dentomaxillofacial Imaging Centre of Universitas Gadjah Mada. The only inclusion criterion was that the dental radiographs belonged to postmenopausal women aged from 36 to 58 years old, who were kept anonymous. All images were assessed for quality assurance by a dentist. The images were taken by a radiographer using a Villa Sistemi Medicali Endos ACP CEI dental X-ray unit at 70 kVp, 8 mA, and 3.2 seconds. Photostimulable phosphor (PSP) plates were used as image receptors. All periapical images were processed using digital radiography software (DBSWin 4.5; Dürr Dental, Bietigheim-Bissingen, Germany). Thus, we obtained grayscale images measuring 1252 × 1645 pixels, saved in JPG format.
Previous studies have selected ROIs on dental radiographic images using various shapes, sizes, and methods. Figure 2 shows the ROI produced by the method of [22]. Figure 2A is an initial 1262 × 1645-pixel image, in which the trabecular area will be selected as the ROI. A starting point in the desired trabecular area is then selected manually by the user with a pointer (Figure 2B); in this example, it is located about 108 pixels (3 mm) below the right and left mandibular first teeth (31 and 41). Figure 2C shows an ROI selected by the method of [22], whose size is 250 × 250 pixels.
In the research reported in [19], a 3.7 mm × 5.8 mm ROI located on the right side of the lower jaw between the first and second premolars was selected. That study focused on the perceived coarseness of trabecular patterns on dental radiographs. In [20], the ROI was taken in the trabecular area between the premolar and molar teeth using Adobe Photoshop software for the assessment of osteoporosis. The ROI was a square of 100 × 100 pixels, obtained manually by cropping the trabecular area below the root of the front tooth with a rectangular shape. In [21], ROI selection was performed to evaluate trabecular patterns on panoramic radiographs to predict age-related osteoporosis in postmenopausal women. That research used 560 ROIs (each 51 × 51 pixels) at 6 sites on panoramic radiographs, obtained using ImageJ. Square ROIs were selected at the interdental 1/3 region across 6 sites, i.e., the anterior, premolar, and molar regions of the jaws. The methods of determining ROIs in previous studies are less efficient because they require additional software or manual involvement.
The proposed method consists of three main stages, namely, teeth detection, trabecular identification, and validation, as depicted in Figure 3, and elaborated on as follows.

1. Teeth Detection

This stage begins with cropping the initial image. Cropping is used to eliminate the photographic marking at the top or bottom of the image (red boxes). The images used in this study had different photographic markings: at the upper right corner (Figure 4A) or at the lower left corner (Figure 4B). The original image (1252 × 1651 pixels) was automatically cropped to 825 × 1550 pixels (Figure 5).
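The cropping step can be sketched as follows. The function `crop_initial` and its centered offsets are hypothetical, since the text only states the input (1252 × 1651) and output (825 × 1550) sizes, not where the crop window is placed:

```python
import numpy as np

def crop_initial(image, out_h=1550, out_w=825):
    """Crop the radiograph to remove the photographic marking near a corner.
    A centered crop window is assumed here for illustration; the paper
    specifies only the input and output sizes."""
    h, w = image.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return image[top:top + out_h, left:left + out_w]
```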
The next step was segmentation of the cropped image. For this purpose, we used the adaptive segmentation method of [22]. The segmentation process consists of filtering, thresholding, closing, and labeling. The filtering methods used in our previous study were top-hat and bottom-hat filtering: top-hat filtering emphasizes the maximum gray values, while bottom-hat filtering emphasizes the minimum gray values. This filtering produces an image that is brighter than the original. Thresholding is then carried out automatically on the filtered image using its sub-images. Each sub-image is 40 × 40 pixels, and there are as many sub-images as there are pixels to be binarized to 0 (zero) or 1 (one). Each pixel is binarized against the threshold of its sub-image, which is the average pixel intensity of that sub-image: the pixel at the center of the sub-image is changed to 1 (one) if its intensity is greater than or equal to the threshold; otherwise, it is changed to 0 (zero). The thresholding process results in a binary image (Figure 6A). The closing operation is then performed to connect pixels of the same object. This is followed by labeling the connected components with colors, so that objects near each other tend to have similar colors (Figure 6B).
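The filtering-thresholding-closing-labeling sequence can be sketched with SciPy as follows. The top-hat/bottom-hat filter size (`filt_size`) is an illustrative assumption not stated in the text; the 40 × 40 thresholding window follows the description above:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_adaptive(img, filt_size=15, win=40):
    """Top-hat/bottom-hat enhancement, adaptive (local-mean) thresholding
    over win x win sub-images, closing, and connected-component labeling.
    filt_size is an assumed value for illustration."""
    img = img.astype(float)
    # Enhance contrast: add bright details, subtract dark details.
    enhanced = (img
                + ndi.white_tophat(img, size=filt_size)
                - ndi.black_tophat(img, size=filt_size))
    # Per-pixel threshold: mean intensity of the surrounding win x win sub-image.
    local_mean = ndi.uniform_filter(enhanced, size=win)
    binary = (enhanced >= local_mean).astype(np.uint8)
    # Closing connects pixels of the same object; labeling identifies components.
    closed = ndi.binary_closing(binary, structure=np.ones((3, 3)))
    labels, n_objects = ndi.label(closed)
    return binary, labels, n_objects
```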
In parallel, we performed thresholding on the cropped image (Figure 5A) using Otsu's method with a threshold value of 0.74. After this process, the teeth were still not completely separated from other objects, so we applied watershed segmentation to isolate the teeth. Watershed segmentation is a region-based method that utilizes mathematical morphology [23]. In this method, an image is assumed to reflect a topographic landscape with ridges and valleys, whose elevation values are defined by the gray values of the respective pixels. Watershed segmentation has been used in image analysis for medical visualization [23-25]. The segmentation results are then converted to an RGB color image. Next, a composite-image operation is performed between the global thresholding image and the RGB color image, followed by removing all connected objects from the composited image. The resulting image is shown in Figure 6C. To obtain the teeth objects, we modified the method proposed in [18], which determines the direction of the pores in the trabecular area and calculates the porous area by counting black pixels in the respective pore objects. A pore object is identified by three directions (D), i.e., vertical (−30° ≤ D ≤ 30°), oblique (−60° < D < −30° or 30° < D < 60°), and horizontal (−90° ≤ D ≤ −60° or 60° ≤ D ≤ 90°). In addition to obtaining object directions, that study also classified objects as small (with an area of less than 72 pixels) or large (with an area of 72 pixels or more). In this study, only objects with a vertical direction and an area greater than 500 pixels were used for teeth detection. Objects that do not conform to this direction and area are colored white, and the others (teeth objects) are colored green (Figure 6D). The final step is subtracting the isolated-teeth image from the labeled image (Figure 6E).
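The direction/area rule for isolating teeth objects can be illustrated as follows. The moment-based orientation estimate is a sketch of one common way to compute an object's principal direction, not necessarily the computation used in [18]; here D is measured from the vertical axis, so "vertical" means within 30° of vertical:

```python
import numpy as np
from scipy import ndimage as ndi

def keep_vertical_objects(binary, min_area=500):
    """Keep connected components whose principal axis lies within 30 degrees
    of vertical and whose area exceeds min_area pixels, mirroring the
    direction/area rule described in the text (illustrative sketch)."""
    labels, n = ndi.label(binary)
    out = np.zeros_like(binary)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size <= min_area:          # area must be greater than min_area
            continue
        # Principal-axis angle from second-order central moments.
        yc, xc = ys.mean(), xs.mean()
        mu20 = ((xs - xc) ** 2).mean()
        mu02 = ((ys - yc) ** 2).mean()
        mu11 = ((xs - xc) * (ys - yc)).mean()
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # angle from x-axis
        angle_deg = np.degrees(theta)
        # Vertical: principal axis within 30 degrees of the y-axis.
        if abs(abs(angle_deg) - 90) <= 30:
            out[labels == i] = 1
    return out
```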

2. Trabecular Identification

This stage consists of two steps, namely, enhancement of the teeth area and isolation of the trabecular area. Enhancement of the teeth area is performed using a dilation operation, with the purpose of minimizing the cavities between teeth. Dilation with a disk structuring element is performed on the subtracted image, and the result is then cropped to isolate the ROI. The size of the final image is 825 × 900 pixels. In this final image, the trabecular area is black, while the non-trabecular area has a non-black color (Figure 6F).
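The dilation step can be sketched as follows; the disk radius is an illustrative assumption, since the structuring-element size is not given in the text:

```python
import numpy as np
from scipy import ndimage as ndi

def disk(radius):
    """Disk-shaped structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def enhance_teeth(teeth_mask, radius=5):
    """Dilate the detected teeth mask with a disk structuring element to
    minimize the cavities between teeth (radius is an assumed value)."""
    return ndi.binary_dilation(teeth_mask, structure=disk(radius))
```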

3. Validation

In the validation stage, two dentists manually selected the trabecular area of each image by marking points in the trabecular area according to their respective knowledge.
Let the ROIs produced by the proposed method and a dentist be img1 and img2, respectively. These two ROIs will be compared based on three parameters, i.e., trabecular area (A), percentage of agreed area (S), and percentage of disagreed area (D).
The trabecular areas produced by the proposed method (Aimg1) and the dentist (Aimg2) are calculated by simply counting the number of black pixels in the respective final images. The percentage of agreed area is then defined by Eq. (1):
where B is the number of pixels colored black by both the proposed method and the dentist. The percentage of disagreed area, on the other hand, is defined by Eq. (2):
where Cimg1 is the number of pixels colored black in img1 but colored otherwise in img2, and vice versa, Cimg2 is the number of pixels colored black in img2 but colored otherwise in img1.
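The pixel counts B, Cimg1, and Cimg2 defined above can be computed directly from the two binary masks. Since Eqs. (1) and (2) are not reproduced in this excerpt, the normalizing denominators of S and D in the sketch below are assumptions (the mean trabecular area for S, the total image size for D):

```python
import numpy as np

def validation_metrics(img1_black, img2_black):
    """Pixel-by-pixel comparison of two trabecular masks (True = black,
    i.e., trabecular). B, C_img1, and C_img2 follow the definitions in
    the text; the denominators of S and D are assumed for illustration."""
    a1 = int(img1_black.sum())                   # A_img1: trabecular area of img1
    a2 = int(img2_black.sum())                   # A_img2: trabecular area of img2
    b = int((img1_black & img2_black).sum())     # B: black in both images
    c1 = int((img1_black & ~img2_black).sum())   # C_img1: black only in img1
    c2 = int((img2_black & ~img1_black).sum())   # C_img2: black only in img2
    s = 100.0 * b / ((a1 + a2) / 2.0)            # percentage of agreed area (assumed denom.)
    d = 100.0 * (c1 + c2) / img1_black.size      # percentage of disagreed area (assumed denom.)
    return a1, a2, s, d
```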

III. Results

Some results of this study can be seen in Table 1. The ROI area is represented by the black pixels in the final image. For each final image, the trabecular area, the percentage of agreed area, and the percentage of disagreed area were calculated. Table 2 shows each of the three measures averaged over the 50 images used in this study. The average trabecular areas of the proposed method and the 1st and 2nd dentists were 732581.6, 786683.1, and 769142.1 pixels, respectively. From the results of the two dentists, their final average trabecular area was 777912.6 pixels; the proposed method's 732581.6 pixels is 5.83% smaller. The average percentages of agreed area between the proposed method and the 1st and 2nd dentists were 73.42% and 77.03%, respectively. The average percentages of disagreed area between the proposed method and the 1st and 2nd dentists were 9.19% and 8.31%, respectively. Overall, the final average percentages of agreed and disagreed area were 75.22% and 8.75%, respectively.

IV. Discussion

In this paper, a method for automatic selection of the trabecular area in periapical radiographic images using only morphological operations was presented. We have shown that the proposed method produces an ROI in the form of the trabecular area of each input image. Furthermore, the ROI area generated by the method was compared to the ROI areas obtained by two dentists. The dentists used a tool to select a set of points that were then connected to form the ROI area (Figure 7).
From Table 1, it can be seen that the produced ROI area is a polygon. In general, the ROI areas produced by the proposed method tend to be smaller than those of the dentists (Table 2). After validation with the two dentists, based on the average percentages of agreed and disagreed areas, the proposed method's ROI is more similar to the 2nd dentist's ROI.
In comparison with previous work, the ROI produced by our method is larger (Table 3). Compared with the dentists' results, the trabecular area produced by the proposed method was 94.17% of theirs. Furthermore, the average percentages of agreed and disagreed areas between the proposed method and the dentists were 75.22% and 8.75%, respectively. These performance measures suggest that the proposed image-processing-based method is adequate and can be considered for automatic selection of the trabecular area in digital periapical radiographic images. In the future, we plan to study and incorporate non-morphological operations into the current method to further improve the accuracy of ROI selection.

Figures and Tables

Figure 1

Periapical radiographic image.

Figure 2

Selection of region of interest (ROI) in [22]: (A) initial image, (B) the starting point on the trabecular area, and (C) the selected ROI.

Figure 3

Proposed method.

Figure 4

Examples of cropping to eliminate marks: (A) photo mark in the top right corner and (B) photo mark in the lower left corner.

Figure 5

Cropping images: (A) photo mark in the top right corner and the cropped image and (B) photo mark in the lower left corner and the cropped image.

Figure 6

(A) Binary image, (B) labelled image, (C) watershed segmentation, (D) teeth detection, (E) subtracted image, and (F) trabecular area.

Figure 7

Examples of region of interest (ROI) selected by the 1st dentist (A) and the 2nd dentist (B).

Table 1

Examples of ROIs obtained by the proposed method and two dentists


ROI: region of interest.

Table 2

Overall performance of the proposed method validated against the results of two dentists

Table 3

Comparison of the proposed method and several previous studies on several parameters


ROI: region of interest.


The authors would like to thank the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia for funding this research under the Penelitian Pasca Doktor (PPD) scheme in 2018. We also thank the Department of Dentomaxillofacial Radiology, Universitas Gadjah Mada, for providing the dental radiographic images.


Conflict of Interest No potential conflict of interest relevant to this article was reported.


1. Watanabe PC, Faria V, Camargo AJ. Multiple radiographic analysis (systemic disease): dental panoramic radiography. J Oral Health Dent Care. 2017; 1(1):007.
2. Maia AM, Karlsson L, Margulis W, Gomes AS. Evaluation of two imaging techniques: near-infrared transillumination and dental radiographs for the detection of early approximal enamel caries. Dentomaxillofac Radiol. 2011; 40(7):429–433.
3. Shivpuje BV, Sable GS. A review on digital dental radiographic images for disease identification and classification. Int J Eng Res Appl. 2016; 6(7):38–42.
4. Suprijanto, Juliastuti E, Diputra Y, Mayantasari M, Azhari. Dental panoramic image analysis on mandibular bone for osteoporosis early detection. In : Proceedings of 2013 3rd International Conference on Instrumentation Control and Automation (ICA); 2013 Aug 28–30; Ungasan, Indonesia. p. 138–143.
5. Vishnu T, Saranya K, Arunkumar R, Devi MG. Efficient and early detection of osteoporosis using trabecular region. In : Proceedings of 2015 Online International Conference on Green Engineering and Technologies (IC-GET); 2015 Nov 27; Coimbatore, India. p. 1–5.
6. Sela EI, Widyaningrum R. Osteoporosis detection using important shape-based features of the porous trabecular bone on the dental X-ray images. Int J Adv Comput Sci Appl. 2015; 6(9):247–250.
7. Jatti A, Joshi R. Image processing and parameter extraction of digital panoramic dental X-rays with ImageJ. In : Proceedings of 2016 International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS); 2016 Oct 6–8; Bangalore, India. p. 450–454.
8. Modi CK, Desai NP. A simple and novel algorithm for automatic selection of ROI for dental radiograph segmentation. In : Proceedings of 2011 24th Canadian Conference on Electrical and Computer Engineering (CCECE); 2011 May 8–11; Niagara Falls, Canada. p. 000504–000507.
9. Shah N, Bansal N, Logani A. Recent advances in imaging technologies in dentistry. World J Radiol. 2014; 6(10):794–807.
10. Raju J, Modi CK. A proposed feature extraction technique for dental X-ray images based on multiple features. In : Proceedings of 2011 International Conference on Communication Systems and Network Technologies; 2011 Jun 3–5; Katra, India. p. 545–549.
11. Tuan TM, Duc NT, Van Hai P. Dental diagnosis from X-ray images using fuzzy rule-based systems. Int J Fuzzy Syst Appl. 2017; 6(1):1–16.
12. Mani VR, Arivazhagan S. Survey of medical image registration. J Biomed Eng Technol. 2013; 1(2):8–25.
13. Dighe S, Shriram R. Preprocessing, segmentation and matching of dental radiographs used in dental biometrics. Int J Sci Appl Inf Technol. 2012; 1(2):52–56.
14. Sela EI, Sutarman. Extracting the potential features of digital panoramic radiograph images by combining radio morphometry index, texture analysis, and morphological features. J Comput Sci. 2018; 14(2):144–152.
15. Sulistyani LD, Priaminiarti M, Auerkari EI, Kusdhany LS, Latief BS. Mandibular cortex correlates to alveolar bone density in Indonesian women aged 40 to 75 years. J Int Dent Med Res. 2016; 9(3):215–220.
16. Majumder MI, Harun MA. Alveolar bone changes in post-menopausal osteopenic and osteoporosis women: an original research. Int J Dent Med Spec. 2015; 2(2):9–14.
17. Lira PH, Giraldi GA, Neves LA, Feijoo RA. Dental X-ray image segmentation using texture recognition. IEEE Lat Am Trans. 2014; 12(4):694–698.
18. Sela EI, Hartati S, Harjoko A, Wardoyo R, Mudjosemedi M. Feature selection of the combination of porous trabecular with anthropometric features for osteoporosis screening. Int J Electr Comput Eng. 2015; 5(1):78–83.
19. Geraets WG, Lindh C, Verheij H. Sparseness of the trabecular pattern on dental radiographs: visual assessment compared with semi-automated measurements. Br J Radiol. 2012; 85(1016):e455–e460.
20. Amer ME, Heo MS, Brooks SL, Benavides E. Anatomical variations of trabecular bone structure in intraoral radiographs using fractal and particles count analyses. Imaging Sci Dent. 2012; 42(1):5–12.
21. Koh KJ, Park HN, Kim KA. Prediction of age-related osteoporosis using fractal analysis on panoramic radiographs. Imaging Sci Dent. 2012; 42(4):231–235.
22. Sela EI, Hartati S, Harjoko A, Wardoyo R, Munakhir MS. Segmentation on the dental periapical X-ray images for osteoporosis screening. Int J Adv Comput Sci Appl. 2013; 4(7):147–151.
23. El Allaoui A, Nasri M. Medical image segmentation by marker-controlled watershed and mathematical morphology. Int J Multimed Its Appl. 2012; 4(3):1–9.
24. Preim B, Botha C. Image analysis for medical visualization. In : Preim B, Botha C, editors. Visual computing for medicine: theory, algorithms, and application. 2nd ed. Waltham (MA): Elsevier/Morgan Kaufmann;2014. p. 111–175.
25. Elsalamony HA. Detecting distorted and benign blood cells using the Hough transform based on neural networks and decision trees. In : Deligiannidis L, Arabnia H, editors. Emerging trends in image processing, computer vision and pattern recognition. Waltham (MA): Elsevier/Morgan Kaufmann;2015. p. 457–473.

Enny Itje Sela

Reza Pulungan

Rini Widyaningrum

Rurie Ratna Shantiningsih
