Abstract
Objectives
Since the easiest way to identify pills and obtain information about them is to distinguish them visually, many studies have applied image processing technology to pill images. However, no automatic system for generating pill image data has yet been developed. Therefore, we propose a system that automatically generates image data by taking pictures of pills from various angles. This system is referred to as the pill filming system in this paper.
Methods
We designed the pill filming system to have three components: the structure, the controller, and a graphical user interface (GUI). The system was manufactured from black polylactic acid using a 3D printer to keep it lightweight and easy to fabricate. The mainboard controls data storage, and the entire process is managed through the GUI. After each reciprocating movement of the seesaw, the web camera at the top photographs the target pill on the stage. The image is then saved in a specific directory on the mainboard.
Results
The pill filming system completes its workflow after generating 300 pill images. The total time to collect the data for one pill is 21 minutes and 25 seconds. The generated image size is 1280 × 960 pixels, the horizontal and vertical resolutions are both 96 DPI (dots per inch), and the file extension is .jpg.
Conclusions
This paper proposes a system that can automatically generate pill image data from various angles. Pill images captured from various angles cover many configurations of a pill's appearance. In addition, the data collected in the same controlled environment have a uniform background, making the images easy to process. Large quantities of high-quality data from the pill filming system can contribute to various studies using pill images.
Taking prescribed pills correctly accounts for a large proportion of healthcare at home. However, busy modern people sometimes forget information about prescribed pills and how to take them accurately. In addition, people sometimes forget to carry or take pills, and if this occurs with diabetes or blood pressure pills, a dangerous health situation may rapidly ensue [1–3]. Therefore, a pill-taking aid tool is vital in families without medical personnel. Interest in helping people who need protection, such as the elderly, children, and others, has also increased significantly in the setting of healthcare services [4]. Active research and development on tools to aid pill-taking are being conducted to meet this demand [5,6].
Since the easiest way to identify a pill and obtain information about it is to distinguish it visually, many studies on image processing technology have used images of pills [7–9]. Researchers have also recently used deep learning technology to improve the performance of pill-taking aid tools [10–12]. Deep learning models require large quantities of data in the learning stage; therefore, the demand for pill image data has also increased significantly.
In related works, Chang et al. [10] developed a wearable smart glasses-based drug pill recognition system using deep learning for visually impaired chronic disease patients. Their system consists of steps in which a patient puts a pill on his or her palm and takes a picture; the system then classifies the pill through deep learning and guides the patient on how to take it. Wang et al. [13] studied recognition using minimally labeled data. Their study used front-side and back-side images they obtained themselves, as well as a consumer data set provided by the NIH Pill Challenge. In another study, Zeng et al. [14] created a mobile deep learning system for recognizing unconstrained pill images. They performed data augmentation to address the lack of a large volume of training images.
Currently, no automatic system exists for generating pill image data; those who need pill images must photograph the pills manually. However, taking a picture of a pill on a flat floor only yields images of the front and back of the pill. Another problem is that a pill without a sharp angle rolls around, making it challenging to take pictures of the pill in a specific orientation. The image data generated in this way have a limited ability to capture information on pill appearance, and the low quality of the data may diminish the accuracy of experiments based on the pill images.
Therefore, we propose a system to automatically generate image data by taking pictures of pills from various angles. This system is referred to as the pill filming system in this paper.
We designed this system to have three components: the instrument structure, the control part, and a graphical user interface (GUI) to assist in visualizing system operation. The system focuses on capturing the appearance of pills from various angles, with the ultimate goal of producing images that are easy for researchers to use.
Figure 1 shows that the DC motor, motor cap, and columns involved in the device's power transmission are located at the bottom. The device converts the rotary motion of the DC motor into linear motion using a crank, which drives the seesaw. When the seesaw operates, the columns on each side of the seesaw fix its axis of motion. A motor cap fixes the DC motor to the baseplate and reduces vibrations generated during operation. A camera holder positions the web camera so that it looks vertically down into the seesaw from above. The camera guide adjusts the height of the camera holder above the baseplate along the groove on its side so that the web camera can locate and focus on the pill inside the seesaw.
Figure 2 shows the interior of the seesaw, which has two bumps (referred to as “hills”) of different sizes inside; these hills cause the pill to roll around and change direction and position while the seesaw operates. When the seesaw tilts so that the stage faces down, the pill comes to rest on the stage in a random direction and position. Because the stage floor is sloped, various configurations occur in which different sides of the pill lean against the wall of the seesaw. The system was manufactured from black polylactic acid using a 3D printer to keep it lightweight and easy to fabricate.
Figure 3 is a block diagram of the circuit configuration of the pill filming system, which consists of a power supply, mainboard [15], web camera, motor controller, and DC motor. The external power supply (5 V and 3 A) provides sufficient voltage and current for the operation of the mainboard and DC motor. The mainboard manages the pill filming system, controls the functions of the connected parts, and stores data.
We use the H-bridge motor driver L293D (STMicroelectronics, Geneva, Switzerland) as the motor controller; it converts the 5 V supplied from the mainboard into 12 V and delivers it to the DC motor. The DC motor's speed is 15,000 rpm and the gear reduction ratio is 150:1, so the output shaft rotates at 100 rpm. The motor speed is controlled by setting the PWM duty cycle to 13%, which yields approximately 14 images per minute.
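As a rough illustration of this speed setting, the sketch below drives the L293D enable pin from a Raspberry Pi at a 13% PWM duty cycle. The BCM pin numbers, PWM carrier frequency, and run duration are illustrative assumptions, not the exact values used in the system.

```python
# Minimal sketch: PWM speed control of the DC motor through the L293D enable pin.
# Pin numbers, PWM frequency, and run duration are assumed values for illustration.
import time
import RPi.GPIO as GPIO

ENABLE_PIN = 18   # PWM-capable pin wired to the L293D enable input (assumed)
IN1_PIN = 23      # L293D direction inputs (assumed)
IN2_PIN = 24

GPIO.setmode(GPIO.BCM)
GPIO.setup([ENABLE_PIN, IN1_PIN, IN2_PIN], GPIO.OUT)

# Fix the rotation direction; speed is set only by the PWM duty cycle.
GPIO.output(IN1_PIN, GPIO.HIGH)
GPIO.output(IN2_PIN, GPIO.LOW)

pwm = GPIO.PWM(ENABLE_PIN, 1000)  # 1 kHz carrier frequency (assumed)
pwm.start(13)                     # 13% duty cycle, as described in the text

try:
    # Roughly one image every 60/14 ≈ 4.3 s, so run for one cycle (assumed duration).
    time.sleep(4.3)
finally:
    pwm.stop()
    GPIO.cleanup()
```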
While the DC motor is running, the seesaw moves and the pill inside it takes a random direction and position on the stage. When the seesaw finishes one reciprocating motion, the web camera automatically focuses on the pill and captures the scene through its lens. The mainboard saves the generated images. After a total of 300 images have been generated, the system shuts down.
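The capture-and-save step can be sketched with OpenCV as follows. The camera index, storage path, and the run_one_reciprocation() placeholder are assumptions for illustration rather than the system's actual code.

```python
# Illustrative capture loop: one frame per seesaw reciprocation, 300 frames in total.
import os
import cv2

NUM_IMAGES = 300                      # system stops after 300 images
SAVE_DIR = "/home/pi/pill_images"     # assumed storage path on the mainboard

def run_one_reciprocation():
    """Placeholder for the motor routine sketched above (assumed)."""
    pass

cap = cv2.VideoCapture(0)             # USB web camera (assumed index 0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)

os.makedirs(SAVE_DIR, exist_ok=True)

for i in range(NUM_IMAGES):
    run_one_reciprocation()           # let the pill settle into a new pose
    ok, frame = cap.read()            # grab a frame of the stage area
    if ok:
        cv2.imwrite(os.path.join(SAVE_DIR, f"pill_{i:03d}.jpg"), frame)

cap.release()
```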
As shown in Figure 4, we designed the GUI to visualize system operation using Qt Designer (The Qt Company, Espoo, Finland). The text on each GUI button is the label printed on the pill surface, which we use as the identifier. When the user clicks a GUI button, 300 images of the selected pill are saved in a directory on the mainboard matching that label.
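A minimal PyQt5 sketch of such a labeled button is shown below; the widget layout, example labels, and the collect_images() helper are assumptions and do not reproduce the actual GUI built with Qt Designer.

```python
# Illustrative GUI: each button carries a pill label and triggers collection
# into a directory named after that label.
import os
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QVBoxLayout

PILL_LABELS = ["TYLENOL", "ASPIRIN"]   # example labels printed on pill surfaces (assumed)
BASE_DIR = "/home/pi/pill_images"      # assumed storage root on the mainboard

def collect_images(label):
    """Placeholder for the 300-image motor/camera loop shown earlier."""
    target_dir = os.path.join(BASE_DIR, label)
    os.makedirs(target_dir, exist_ok=True)
    # ... run the capture loop and save the images into target_dir ...

class PillFilmingGui(QWidget):
    def __init__(self):
        super().__init__()
        layout = QVBoxLayout(self)
        for label in PILL_LABELS:
            button = QPushButton(label, self)
            # Each button starts a collection run for its own label.
            button.clicked.connect(lambda _checked, lb=label: collect_images(lb))
            layout.addWidget(button)

app = QApplication(sys.argv)
gui = PillFilmingGui()
gui.show()
sys.exit(app.exec_())
```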
Figure 5 shows an actual view of the pill filming system. The pill filming system’s overall size is 195 mm × 189 mm × 206 mm, and it weighs 0.52 kg, not including a pill.
The pill filming system is used in the following order. The user assigns the label of the target pill to a GUI button and places the pill in the seesaw. Next, when the user clicks the button with the selected pill label, the DC motor of the pill filming system runs and the seesaw moves. The pill inside the seesaw rolls randomly. After the seesaw has completed one reciprocating motion, the pill rests on the stage in a random position and direction. The top web camera focuses on the pill and captures the stage area. The mainboard creates a directory named with the label in its internal storage and stores the generated images there. The pill filming system is initially set to finish after generating 300 images, giving the pill enough opportunities to change its position and orientation. Users can choose how much image data they need for a given study and, accordingly, how long collection takes.
Figure 6 presents four examples each of two types of generated data, in which the target pill rests on a background of white gauze covering the stage. The image size is 1280 × 960 pixels, the horizontal and vertical resolutions are both 300 DPI (dots per inch), and the file extension is .jpg.
The easiest way to identify a pill and obtain information about it is to observe it visually, which requires pill image data. In recent years, research using deep learning technology has been actively conducted and requires large quantities of data. However, there is currently no tool capable of automatically generating image data that captures the appearance of a pill. Researchers who need pill image data have taken pictures of pills manually, which is cumbersome and makes it difficult to photograph pills from a specific angle.
Therefore, this paper proposes a system that can automatically generate pill image data from various angles. Pill images captured from various angles in three dimensions cover many configurations of a pill's appearance. Furthermore, the data collected in the same controlled environment have a uniform background, making the images easy to process. Large amounts of high-quality data from the pill filming system can contribute to various studies using pill images. For example, if a neural network is trained with the data obtained from the pill filming system, it may be possible to develop a model that can determine the type of pill from a single picture. Abundant data facilitates high classification accuracy in neural networks, so we initially set the pill filming system to generate 300 images [16]. However, the user can set the amount of data collected and the execution time through the GUI.
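As one possible sketch of this use case, the labeled directories produced by the pill filming system could feed a standard image classifier through torchvision's ImageFolder. The dataset path, model choice, and hyperparameters below are illustrative assumptions, not a model evaluated in this study.

```python
# Sketch: training a pill classifier on the labeled image directories.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subdirectory name (the pill label) becomes one class.
dataset = datasets.ImageFolder("/home/pi/pill_images", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for images, labels in loader:          # a single pass, for illustration only
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```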
The pill filming system proposed in this study does not have a separate lighting system, so the original images obtained from it contain shadows. Therefore, in a follow-up study, we intend to refine the pill filming system by adding lighting devices such as the shadowless lamps used in hospital operating rooms.
Acknowledgments
This research was supported by the GRRC program of Gyeonggi Province (No. GRRC-Gachon2020(B01), AI-based Medical Image Analysis) and by Gachon University (No. GCU-202205980001).
References
1. Srinivasan S, Florez JC. Therapeutic challenges in diabetes prevention: we have not found the “exercise pill”. Clin Pharmacol Ther. 2015; 98(2):162–9. https://doi.org/10.1002/cpt.146
2. Volpe M, Gallo G, Tocci G. New approach to blood pressure control: triple combination pill. Trends Cardiovasc Med. 2020; 30(2):72–7. https://doi.org/10.1016/j.tcm.2019.03.002
3. Yi JY, Kim Y, Cho YM, Kim H. Self-management of chronic conditions using mHealth interventions in Korea: a systematic review. Healthc Inform Res. 2018; 24(3):187–97. https://doi.org/10.4258/hir.2018.24.3.187
4. Hovareshti P, Roeder S, Holt LS, Gao P, Xiao L, Zalkin C, et al. VestAid: a tablet-based technology for objective exercise monitoring in vestibular rehabilitation. Sensors (Basel). 2021; 21(24):8388. https://doi.org/10.3390/s21248388
5. Chen RC, Chan YK, Chen YH, Bau CT. An automatic drug image identification system based on multiple image features and dynamic weights. Int J Innov Comput Inf Control. 2012; 8(5):2995–3013.
6. Ahmad S, Hasan M, Shahabuddin M, Tabassum T, Allvi MW. IoT based pill reminder and monitoring system. Int J Comput Sci Netw Secur. 2020; 20(7):152–8.
7. Lee YB, Park U, Jain AK, Lee SW. Pill-ID: matching and retrieval of drug pill images. Pattern Recognit Lett. 2012; 33(7):904–10. https://doi.org/10.1016/j.patrec.2011.08.022
8. Cordeiro LS, Lima JS, Ribeiro AI, Bezerra FN, Reboucas Filho PP, Neto AR. Pill image classification using machine learning. In: Proceedings of the 8th Brazilian Conference on Intelligent Systems (BRACIS); 2019 Oct 15–18; Salvador, Brazil. p. 556–61. https://doi.org/10.1109/BRACIS.2019.00103
9. Yu J, Chen Z, Kamata SI. Pill recognition using imprint information by two-step sampling distance sets. In: Proceedings of the 22nd International Conference on Pattern Recognition; 2014 Aug 24–28; Stockholm, Sweden. p. 3156–61. https://doi.org/10.1109/ICPR.2014.544
10. Chang WJ, Chen LB, Hsu CH, Chen JH, Yang TC, Lin CP. MedGlasses: a wearable smart-glasses-based drug pill recognition system using deep learning for visually impaired chronic patients. IEEE Access. 2020; 8:17013–24. https://doi.org/10.1109/ACCESS.2020.2967400
11. Kwon HJ, Kim HG, Lee SH. Pill detection model for medicine inspection based on deep learning. Chemosensors. 2021; 10(1):4. https://doi.org/10.3390/chemosensors10010004
12. Yaniv Z, Faruque J, Howe S, Dunn K, Sharlip D, Bond A, et al. The National Library of Medicine pill image recognition challenge: an initial report. In: Proceedings of the 2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR); 2016 Oct 18–20; Washington, DC. p. 1–9. https://doi.org/10.1109/AIPR.2016.8010584
13. Wang Y, Ribera J, Liu C, Yarlagadda S, Zhu F. Pill recognition using minimal labeled data. In: Proceedings of the 2017 IEEE 3rd International Conference on Multimedia Big Data (BigMM); 2017 Apr 19–21; Laguna Hills, CA. p. 346–53. https://doi.org/10.1109/BigMM.2017.61
14. Zeng X, Cao K, Zhang M. MobileDeepPill: a small-footprint mobile deep learning system for recognizing unconstrained pill images. In: Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services; 2017 Jun 19–23; Niagara Falls, NY. p. 56–67. https://doi.org/10.1145/3081333.3081336
15. Raspberry Pi 4 Model B [Internet]. Cambridge, UK: Raspberry Pi Foundation; c2022 [cited 2023 Jan 20]. Available from: https://www.raspberrypi.com/products/raspberry-pi-4-model-b
16. Kim JW, Park SM, Choi SW. Real-time photoplethysmographic heart rate measurement using deep neural network filters. ETRI J. 2021; 43(5):881–90. https://doi.org/10.4218/etrij.2020-0394