
Laghari, Huong, Tay, and Chew: Dorsal Hand Vein Pattern Recognition: A Comparison between Manual and Automatic Segmentation Methods

Abstract

Objectives

Various techniques for dorsal hand vein (DHV) pattern extraction have been introduced using small datasets with poor and inconsistent segmentation. This work compared manual segmentation with our proposed hybrid automatic segmentation method (HHM) for this classification problem.

Methods

Manual segmentation involved selecting a region-of-interest (ROI) in images from the Bosphorus dataset to generate ground truth data. The HHM combined histogram equalization and morphological and thresholding-based algorithms to localize veins from hand images. The data were divided into training, validation, and testing sets with an 8:1:1 ratio before training AlexNet. We considered three image augmentation strategies to enlarge our training sets. The best training hyperparameters were found using the manually segmented dataset.

Results

We obtained good test accuracy (91.5%) using the model trained with manually segmented images. The HHM method showed lower performance (76.5%). Considerable improvement was observed in the test accuracy of the model trained with the inclusion of automatically segmented and augmented images (84%), with low false acceptance and false rejection rates (0.00035% and 0.095%, respectively). A comparison with past studies further demonstrated the competitiveness of our technique.

Conclusions

Our technique is a feasible option for extracting the ROI in DHV images. This strategy provides higher consistency and greater efficiency than the manual approach.

I. Introduction

The purpose of biometrics is to identify individuals based on their natural physical or behavioral traits. Physical characteristics include fingerprints [1], the iris, the face, palmprints, and dorsal hand veins (DHVs) [2], while behavioral features consist of voice, gait, keystrokes, and signatures [3]. Given the lack of evidence for and trust in the security of behavioral biometric information, most researchers have moved toward physiological characteristics. However, image quality and the surveillance angle are major drawbacks of facial biometric systems [4]. Similarly, the problems with iris recognition systems include the setting of light illumination and eyeball movement while images are being captured [5]. Thus, many scholars have shifted their focus to DHV recognition systems.
DHV systems have several advantages over traditional recognition systems. The DHVs constitute a tree-like network of blood vessels located on the back of the hand. The main benefit of using DHVs as a biometric modality is their high identification performance: the system detects only live hand veins, which have a low resemblance rate and a high acceptability ratio [6]. The vein pattern is unaffected by humidity and temperature, since the veins are located beneath the skin. Furthermore, although the DHV pattern changes from infancy until about 15 years of age [7], it thereafter remains unchanged unless a major accident occurs [8].
Studies have been conducted in the past on DHV recognition using images captured by devices such as near-infrared (NIR) cameras [7], monochrome complementary metal-oxide-semiconductor (CMOS) cameras [9], and digital single-lens reflex (DSLR) cameras [10]. As benchmarks, the Bosphorus [11], North China University of Technology (NCUT) [12], Badawi [11], Indian Institute of Technology Delhi (IITD), and 11k Hands [13] databases are freely available and publicly accessible. These public databases contain a large number of images for training image recognition models, which may prevent overfitting of the network. Other work [14] has also reported using self-collected data from the authors’ laboratory for demonstration.
It is very often necessary to obtain a region-of-interest (ROI) from an image for proper recognition. Hence, different image segmentation techniques have been implemented with varying degrees of success. These techniques include manual cropping with a combined matched filter and local binary fitting model to locate tiny boundaries (small veins) in images [15]. The image centroid technique has also been used for the segmentation of DHVs [16]. The major drawbacks of these manual segmentation techniques are the loss of significant information, the time and labor required, and the high variability of the results, because the process relies on subjective, intuitive judgments.
Meanwhile, automatic cropping techniques are highly effective and time-saving in extracting DHV patterns. For instance, a coordinate-determination method has been adopted to obtain the ROI of an image [17]. A morphological operation (top-hat transformation) is another useful technique that adjusts the intensity values to increase the visibility of inconsistent image background pixels [18]. A hybrid technique combining the grayscale morphology method and a local thresholding technique has been used for a similar task [19]. It has been emphasized that image enhancement methods are necessary to improve the contrast of an image against its background [20]. Histogram equalization (HE) is the most widely used traditional enhancement method, improving the intensity of an image globally rather than in the area of interest [21]. Among the variations of HE, contrast-limited adaptive histogram equalization (CLAHE) has been found to be an effective method for enhancing the targeted area of DHV images [16,17,22].
In addition to the machine learning (ML) techniques mentioned above, other related studies have adopted strategies such as artificial neural networks [22] and the Mahalanobis distance method [23]. Other research [24] has used a convolutional neural network (CNN)-based model for segmentation. Unlike conventional ML methods, which can be time-consuming because the features for DHV recognition must be determined manually [16], CNNs are an increasingly popular tool for decision-making. The CNN model was first introduced by LeCun et al. [25] to make recognition training simple and time-efficient through the automatic extraction of useful information. Many pre-trained CNN models are available for image processing applications, such as AlexNet, VGGNet, GoogLeNet, DenseNet, ResNet, and SqueezeNet. While most of these models are used for classification, some CNN models have been adopted for image segmentation problems. Nonetheless, traditional ML techniques have their advantages and can perform better than CNNs [24], especially when texture features are the primary information sources for decision-making. The classification accuracy of CNN models depends on the tuned hyperparameters and the nature of the dataset. Previous studies [2,11-13,26] used CNN techniques to recognize DHV patterns. On that note, one study [11] recommended using AlexNet, VGG16, and VGG19 due to their high training accuracy (i.e., 99%), but no effort was made to test the trained models against unseen data. Even though another study [13] included testing of an AlexNet model trained with augmented (randomly rotated) images, entire-hand images without ROI extraction were used as input. Thus, instead of vein patterns, the model may have been trained to recognize the hand contour or image shape. Those prior studies also did not consider the false acceptance rate (FAR) or the false rejection rate (FRR) in their evaluations. A robust and secure biometric system has both a low FAR and a low FRR [27]. In this study, we introduced a hybrid system combining HE, thresholding, and morphological techniques for enhanced, effective, and time-saving segmentation of DHV regions compared to the manual approach. We compared the performance of AlexNet transfer learning, chosen for its time-efficiency, using the manually segmented and hybrid automatically segmented data. We also analyzed the performance of the model trained with the enlarged dataset and compared it with previous research.

II. Methods

This section describes the dataset and methods used in this study. All the simulations were carried out using MATLAB R2020b.

1. Dorsal Hand Vein Dataset

The dorsal hand vein images used in this study are from the Bosphorus database (www.bosphorus.ee.boun.edu.tr). This is an open-access resource with a collection of 1,575 images from different experimental conditions. In our investigation, a total of 1,500 dorsal hand images from 100 subjects were selected from the original 1,575 images to balance the data size for each class. The considered images included 1,200 left-hand images of the recruited subjects acquired under different activities, namely normal (or at rest), after carrying a 3-kg bag for a minute, after squeezing (closing and opening) an elastic ball for a minute, and after placing a piece of ice on the back of the hand. The remaining 300 images were of their right hand in at-rest condition.

2. Manual Segmentation

In the first experiment, the cropping process was performed manually. The main areas of interest were the center regions of the dorsal side of the hand. The identified regions were outlined and segmented from the original images one at a time by using the imcrop function available in MATLAB, as shown in Figure 1. By doing this, the background and fingers were removed from the original image, leaving the region containing the hand veins. The selection was intuitive, and the process produced different image sizes. These images were resized to a dimension of 227 × 227 × 3 to match the input size of AlexNet prior to training.
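As an illustration, the following MATLAB sketch shows how this interactive cropping step could be scripted; the folder names, file format, and channel-replication step are assumptions rather than details reported in the study.

% Minimal sketch of the manual ROI cropping step (folder names and file format
% are illustrative assumptions).
files  = dir(fullfile('bosphorus_images', '*.png'));   % assumed input folder and format
outDir = 'manual_roi';                                 % assumed output folder
for k = 1:numel(files)
    img = imread(fullfile(files(k).folder, files(k).name));
    roi = imcrop(img);                                 % interactive selection of the dorsal ROI
    if size(roi, 3) == 1
        roi = repmat(roi, [1 1 3]);                    % ensure three channels for AlexNet
    end
    roi = imresize(roi, [227 227]);                    % match the 227 x 227 x 3 input size
    imwrite(roi, fullfile(outDir, files(k).name));
end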

3. Hybrid Automatic Segmentation

The second experiment involved ML techniques to extract the ROI of DHVs. Since most ML techniques work in the grayscale color space, the first step was to convert the color image to grayscale using the rgb2gray function, as shown in Figure 2. The edges of the hand images were then enhanced with the CLAHE method. Next, all images were converted to binary versions by invoking the imbinarize function prior to performing morphological operations. In our study, morphological structuring elements (disks) with radii of 5, 10, 15, and 20 were applied to the binary images to define the mask for the regions of interest. The morphological bottom-hat technique was also applied to the mask to filter the ROI of DHVs (the vein portion) for this same purpose. This was followed by color inversion of the image using the imcomplement function and the pixel-difference method to locate the vein regions. The generated mask was overlaid on the original image to remove the fingers. The resulting image was thresholded by setting pixels with a value greater than 0.7 as NaN (i.e., an invalid number); this value was chosen empirically for better smoothing quality. A mask frame was then introduced on the left and right sides of the original images to block the edges with lower pixel values (due to shadowing), which may have an intensity similar to that of the vein region. The processed image was again converted to binary before applying the mask to remove the remaining background. CLAHE was then applied to further enhance the appearance of the vein region. In the final step, a thresholding operation was applied to the enhanced image to obtain a clearer visualization, as shown in Figure 2. Finally, the produced images were resized to 227 × 227 × 3 before further processing. Since this method is mainly based on hybrid histogram analysis and morphological operations, we hereafter refer to this approach as the HHM method.
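For illustration, a greatly condensed MATLAB sketch of this pipeline is given below. The input file name, the disk radii, the frame width, and the use of zeros in place of NaN are assumptions made for brevity, and the bottom-hat, color-inversion, and pixel-difference steps are simplified or omitted.

% Condensed, illustrative sketch of the HHM pipeline; several intermediate
% steps (bottom-hat filtering, color inversion, pixel difference) are simplified.
img  = imread('hand.png');                    % assumed input file name
gray = im2double(rgb2gray(img));              % grayscale image scaled to [0, 1]
enh  = adapthisteq(gray);                     % CLAHE contrast enhancement
bw   = imbinarize(enh);                       % binary version of the hand image
mask = imerode(bw, strel('disk', 20));        % disk-based mask of the dorsal region
roi  = enh;
roi(~mask)     = 0;                           % overlay mask: remove background and fingers
roi(roi > 0.7) = 0;                           % suppress bright pixels (0.7 threshold)
roi(:, [1:20, end-19:end]) = 0;               % frame blocking the shadowed left/right edges
roi  = adapthisteq(mat2gray(roi));            % re-enhance the remaining vein region
out  = imbinarize(roi);                       % final thresholding for visualization
out  = imresize(repmat(im2uint8(out), [1 1 3]), [227 227]);   % 227 x 227 x 3 output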

4. Data Augmentation

After segmentation, the dataset was split into training, validation, and testing sets at an 8:1:1 ratio, and the training set was then enlarged through augmentation; the resulting distribution is shown in Table 1. We used a constant random seed of 10 in the dataset division process to ensure consistency in our comparisons. Augmentation was performed with the randomAffine2d and flip functions to improve model performance, by randomly rotating the images within two angle intervals, (−30°, 30°) and (−50°, 50°), and by random horizontal flipping, as shown in Figure 3.
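A possible MATLAB implementation of this split-and-augment step is sketched below; the folder layout (one subfolder per subject), the output file naming, and the way the fixed seed is applied are assumptions made for illustration.

% Sketch of the 8:1:1 split and the rotation/flip augmentation (folder layout
% and file naming are assumptions).
rng(10);                                             % fixed seed for a reproducible split
imds = imageDatastore('segmented_roi', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainDS, valDS, testDS] = splitEachLabel(imds, 0.8, 0.1, 0.1, 'randomized');

augDir = 'augmented_train';                          % assumed output folder
for k = 1:numel(trainDS.Files)
    img = readimage(trainDS, k);
    [~, name, ext] = fileparts(trainDS.Files{k});
    for lim = [30 50]                                % rotation ranges (-30, 30) and (-50, 50)
        tform = randomAffine2d('Rotation', [-lim lim]);
        rview = affineOutputView(size(img), tform);
        rot   = imwarp(img, tform, 'OutputView', rview);
        imwrite(rot, fullfile(augDir, sprintf('%s_rot%d%s', name, lim, ext)));
    end
    imwrite(flip(img, 2), fullfile(augDir, sprintf('%s_flip%s', name, ext)));  % horizontal flip
end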

5. Transfer-Learning AlexNet

Despite its lightweight and simple architecture, AlexNet has produced classification accuracy comparable to that of deeper counterparts, such as VGG16, ResNet-50, and SqueezeNet [26], suggesting its efficiency in extracting important information. Thus, AlexNet was used for our experiments. This model was trained with the manually and automatically segmented datasets, as shown in Figure 4. In these experiments, we froze the entire network backbone and replaced the last three layers with a fully connected layer of 100 nodes (corresponding to the 100 users), a softmax layer, and a classification output layer. Stochastic gradient descent with momentum (SGDM) was employed as the solver due to its shorter computational time and high accuracy. The model trained with the manually segmented dataset was used as the gold standard for identifying the optimal training hyperparameters using a grid search approach. For this purpose, 50 trials were attempted. We adjusted the epoch number, initial learning rate, and mini-batch size while keeping the rest of the parameters fixed. The epoch number was varied from 10 to 90 in steps of 10; the mini-batch size was adjusted from 2^1 to 2^11; and the learning rate ranged from 0.00009 to 0.1 at a resolution of 0.00001. Our results showed that an epoch number of 50, an initial learning rate of 0.0008, and a mini-batch size of 128 yielded the best training and validation accuracies. This combination was chosen for the remaining experiments using the other strategies, as shown in Table 2. Since our intention was to demonstrate the efficiency of the proposed HHM method, the best model trained with the HHM was chosen based on the highest test accuracy. The quality of this biometric verification system was then evaluated in terms of the FAR and FRR. We considered threshold values of 0.3 and 0.5 following a previous recommendation [28]; a decision threshold of 0.5 has been considered optimal in many studies in the field.
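The following sketch illustrates this transfer-learning setup with the reported hyperparameters; it assumes the Deep Learning Toolbox and the AlexNet support package are installed, reuses the trainDS, valDS, and testDS datastores from the earlier split sketch (with the augmented images added to trainDS where applicable), and the shuffling option is an assumption.

% Transfer-learning sketch with the reported settings (epoch 50, mini-batch 128,
% initial learning rate 0.0008); datastore names follow the earlier sketch.
net      = alexnet;                                  % pre-trained AlexNet
backbone = net.Layers(1:end-3);                      % keep all but the last three layers
for i = 1:numel(backbone)                            % freeze the backbone weights
    if isprop(backbone(i), 'WeightLearnRateFactor')
        backbone(i).WeightLearnRateFactor = 0;
        backbone(i).BiasLearnRateFactor   = 0;
    end
end
layers = [backbone
          fullyConnectedLayer(100, 'Name', 'fc100')  % 100 nodes for the 100 subjects
          softmaxLayer('Name', 'softmax')
          classificationLayer('Name', 'output')];

opts = trainingOptions('sgdm', ...                   % SGDM solver
    'MaxEpochs', 50, ...
    'MiniBatchSize', 128, ...
    'InitialLearnRate', 8e-4, ...
    'ValidationData', valDS, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', false);

trainedNet = trainNetwork(trainDS, layers, opts);
[pred, scores] = classify(trainedNet, testDS);       % evaluate on the unseen test set
testAcc = mean(pred == testDS.Labels);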

III. Results

This section presents the results obtained from training AlexNet, as well as a comparison of our results with the state-of-the-art.

1. Model Training

The training accuracy (Tacc) and validation accuracy (Vacc) of the models trained using the different strategies in Table 2 were obtained with the best combination of hyperparameters, which was identified using the manual segmentation approach on the non-augmented data. The improvement in the classification results with the inclusion of augmented data is consistent with the observations of many previous studies [7,13]. The proposed HHM technique produced 100% Tacc with both the original and augmented datasets, compared with 98.44% and 99.22%, respectively, for the manual method. The computing time of the model trained with the inclusion of augmented data was approximately three times longer than that of the model trained with the original data.

2. System Performance and Comparison with Existing Methods

The test accuracy of the AlexNet models trained using the datasets segmented via the different strategies in Table 2 is shown in Figure 5. The models trained with the augmented dataset produced higher accuracy than those without augmentation. The manually segmented dataset achieved higher test accuracy than the automatically segmented dataset, although this difference was comparatively small with the augmented dataset. To further our research, we compared our results with the findings of published papers that used AlexNet for the same problem [11,13,26]. Some of those studies used a different dataset and employed different strategies from ours to improve the test accuracy. Since the best model trained with the HHM method was the model that included augmentation, as shown in Figure 5, this model was used to test the efficiency of the biometric system. Our results showed mean FAR and FRR values of 0.00065% and 0.07% at a score threshold of 0.3, and 0.00035% and 0.095% at a threshold of 0.5.
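As an illustration of how FAR and FRR could be derived from the softmax scores at a given decision threshold, a hedged MATLAB sketch follows; it assumes a verification protocol in which every test image is scored against every enrolled identity, which may differ from the exact protocol used in this study, and the function name is hypothetical.

% Illustrative FAR/FRR computation from softmax scores at a decision threshold;
% the exact matching protocol used in the study may differ from this sketch.
function [far, frr] = farFrr(scores, trueLabels, classNames, thr)
% scores     : N-by-C softmax matrix returned by classify()
% trueLabels : N-by-1 categorical true identities of the test images
% classNames : 1-by-C categorical class names matching the score columns
% thr        : decision threshold (e.g., 0.3 or 0.5)
genuineMask = false(size(scores));
for i = 1:numel(trueLabels)
    genuineMask(i, :) = (classNames == trueLabels(i));   % mark the genuine identity column
end
frr = mean(scores(genuineMask)  <  thr);                 % genuine claims wrongly rejected
far = mean(scores(~genuineMask) >= thr);                 % impostor claims wrongly accepted
end

% Example call, reusing the outputs of the training sketch above:
% [far, frr] = farFrr(scores, testDS.Labels, trainedNet.Layers(end).Classes', 0.5);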

IV. Discussion

In this study, we demonstrated the performance of AlexNet models trained using manually and automatically segmented datasets. We tested our models with an independent (unseen) dataset and did not consider a cross-validation (CV) scheme in the performance validation, because concerns have been raised about grossly over-optimistic results from CV due to its lack of independent and external validation [29]. The manual method took nearly 20 working days on our CPU (Intel Core M-5Y71, 1.40 GHz processor, 8 GB of RAM) to segment the 1,500 images; this was three times longer than the time required by the HHM method. Based on the pre-experiment results, we found that tuning the training hyperparameters (namely, the mini-batch size, learning rate, and epoch number) was sufficient to enhance classification accuracy. We identified an epoch number of 50 as optimal for the employed model to learn important features from the segmented image dataset and minimize underfitting. Similarly, the mini-batch size was tuned to 128: a large mini-batch size caused the training to take longer to reach convergence, while setting the value too low led to unstable learning that affected the overall classification performance. The best initial learning rate of 0.0008 was identified after running the network multiple times with different values; a small value caused the training procedure to take a long time, while a large value caused an unstable training process. During these tuning processes, we observed considerable changes in the training and validation accuracies, ranging between 80% and 98.44% and between 38% and 93%, respectively, using the manually segmented dataset as the benchmark set. This combination was found to work acceptably well for the other strategies. Even though the test accuracy from the HHM method (non-augmented case) shown in Table 2 was lower than that achieved with the manual method, the performance of the model improved considerably to 84% with the inclusion of augmented data. Interestingly, for the manual method, the existing dataset appeared to be sufficient for the network to learn all the important features; hence, the inclusion of augmentation did not substantially improve the classification performance.
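For concreteness, the hyperparameter search described above could be scripted as in the sketch below. A full nested grid is looped for simplicity, whereas this study reports 50 trials, so presumably only a subset of these combinations was evaluated; the layers, trainDS, and valDS variables follow the earlier sketches, and the candidate learning rates listed are only a sample of the searched range.

% Illustrative hyperparameter grid search; only a sample of the searched
% learning-rate values is shown, and a full grid is looped for simplicity.
epochGrid = 10:10:90;                          % epoch numbers 10 to 90 in steps of 10
batchGrid = 2.^(1:11);                         % mini-batch sizes 2^1 to 2^11
lrGrid    = [9e-5 8e-4 1e-2 1e-1];             % sample learning rates from the range
best = struct('vacc', 0, 'epochs', NaN, 'batch', NaN, 'lr', NaN);
for e = epochGrid
    for b = batchGrid
        for lr = lrGrid
            opts = trainingOptions('sgdm', 'MaxEpochs', e, ...
                'MiniBatchSize', b, 'InitialLearnRate', lr, ...
                'ValidationData', valDS, 'Verbose', false);
            net  = trainNetwork(trainDS, layers, opts);
            vacc = mean(classify(net, valDS) == valDS.Labels);
            if vacc > best.vacc                % keep the settings with the best validation accuracy
                best = struct('vacc', vacc, 'epochs', e, 'batch', b, 'lr', lr);
            end
        end
    end
end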
It must be mentioned that there was evidence of model overfitting (Tacc = 100%) with the HHM method, which we attribute to the improper segmentation of certain images. The main cause was the close similarity of pixel values between the vein regions and undesired regions (i.e., the background and fingers), resulting in over-segmented or under-segmented results for certain images. The automatically and manually segmented datasets combined with the augmentation method produced generally better test accuracy values of 88% and 91.5%, respectively, compared with those without augmentation (76.5% and 87.5%, respectively), as shown in Figure 5. This suggests that the model may learn deeper representations when there is greater variation in the training data. We do not rule out the possibility that the results would improve with a deeper and wider network capable of extracting more complex and richer features. The test accuracy in Figure 5 is consistent with the validation accuracies in Table 2, wherein the inference accuracy showed a notable improvement of more than 10% for the model trained with HHM-segmented data combined with an augmentation strategy.
A comparison with earlier works [13,26], as presented in Table 3, showed consistency with our results regarding the efficacy of augmentation. One study [11] used the original (hand) images without segmentation; thus, the model may have been trained to recognize the hand shape, instead of the vein pattern, which may be inappropriate for authentication tasks.
We found that the manually segmented dataset produced generally higher classification accuracy at the price of a more laborious and time-consuming procedure, with substantial inconsistency in the judgment process. In contrast, our automatic strategy is time-saving, requires less effort during segmentation, and produces repeatable results. This method may be suitable for practical purposes due to its relatively low FAR and FRR, which were 0.00035% and 0.095% at a cutoff threshold of 0.5. These values are close to those obtained using a commercial biometric system [30], with a reported FAR and FRR of 0.0001% and 0.01%, respectively.
Nonetheless, there is still a need for a robust segmentation method to overcome the over-segmentation or under-segmentation problems that occur in 20% of images. This may be achieved with the use of hybrid ML and CNN, which may be explored in the future.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Acknowledgment

Communication of this research is made possible through monetary assistance by Universiti Tun Hussein Onn Malaysia (UTHM) Publisher’s Office via Publication Fund E15216.

References

1. Chowdhury AM, Imtiaz MH. Contactless fingerprint recognition using deep learning: a systematic review. J Cybersecur Priv. 2022; 2(3):714–30. https://doi.org/10.3390/jcp2030036.
2. Laghari WA, Tay KG, Huong A, Choy YY, Chew CC. Dorsal hand vein identification using transfer learning from AlexNet. Int J Integr Eng. 2022; 14(3):111–9. https://doi.org/10.30880/ijie.2022.14.03.012.
3. Dhieb T, Boubaker H, Njah S, Ben Ayed M, Alimi AM. A novel biometric system for signature verification based on score level fusion approach. Multimed Tools Appl. 2022; 81(6):7817–45. https://doi.org/10.1007/s11042-022-12140-7.
4. Zulfiqar M, Syed F, Khan MJ, Khurshid K. Deep face recognition for biometric authentication. In: Proceedings of 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE); 2019 Jul 24–25; Swat, Pakistan. p. 1–6. https://doi.org/10.1109/ICECCE47252.2019.8940725.
5. Nithya AA, Lakshmi C. Iris recognition techniques: a literature survey. Int J Appl Eng Res. 2015; 10(12):32525–46.
6. Jia W, Xia W, Zhang B, Zhao Y, Fei L, Kang W, et al. A survey on dorsal hand vein biometrics. Pattern Recognit. 2021; 120:108122. https://doi.org/10.1016/j.patcog.2021.108122.
7. Kumar R, Singh RC, Kant S. Dorsal hand vein-biometric recognition using convolution neural network. In: Gupta D, Khanna A, Bhattacharyya S, Hassanien AE, Anand S, Jaiswal A, editors. International Conference on Innovative Computing and Communications. Singapore: Springer; 2021. p. 1087–107. https://doi.org/10.1007/978-981-15-5113-0_92.
8. Rajalakshmi M, Ganapathy V, Rengaraj R. Palm-dorsal vein pattern authentication using convoluted neural network (CNN). Int J Pure Appl Math. 2017; 116(23):525–32.
9. Raghavendra R, Surbiryala J, Busch C. Hand dorsal vein recognition: sensor, algorithms and evaluation. In: Proceedings of 2015 IEEE International Conference on Imaging Systems and Techniques (IST); 2015 Sep 16–18; Macau, China. p. 1–6. https://doi.org/10.1109/IST.2015.7294557.
10. Khan MH, Khan NA. Investigating linear discriminant analysis (LDA) on dorsal hand vein images. In: Proceedings of the 3rd International Conference on Innovative Computing Technology (INTECH); 2013 Aug 29–31; London, UK. p. 54–9. https://doi.org/10.1109/INTECH.2013.6653626.
11. Al-johania NA, Elrefaei LA. Dorsal hand vein recognition by convolutional neural networks: feature learning and transfer learning approaches. Int J Intell Eng Syst. 2019; 12(3):178–91. https://doi.org/10.22266/IJIES2019.0630.19.
12. Li X, Huang D, Wang Y. Comparative study of deep learning methods on dorsal hand vein recognition. In: Biometric Recognition. Cham, Switzerland: Springer; 2016. p. 296–306. https://doi.org/10.1007/978-3-319-46654-5_33.
13. Mohaghegh M, Payne A. Automated biometric identification using dorsal hand images and convolutional neural networks. J Phys Conf Ser. 2021; 1880:012014. https://doi.org/10.1088/1742-6596/1880/1/012014.
14. Wang J, Wang G. Quality-specific hand vein recognition system. IEEE Trans Inf Forensic Secur. 2017; 12(11):2599–610. https://doi.org/10.1109/TIFS.2017.2713340.
15. Guo Z, Ma Y, Min X, Li H, Liu Q, Han C, et al. A novel algorithm of dorsal hand vein image segmentation by integrating matched filter and local binary fitting level set model. In: Proceedings of 2020 7th International Conference on Information Science and Control Engineering (ICISCE); 2020 Dec 18–20; Changsha, China. p. 81–5. https://doi.org/10.1109/ICISCE50968.2020.00027.
16. Chanthamongkol S, Purahong B, Lasakul A. Dorsal hand vein image enhancement for improve recognition rate based on SIFT keypoint matching. In: Proceedings of the 2nd International Symposium on Computer, Communication, Control and Automation; 2013 Dec 1–2; Singapore. p. 174–7. https://doi.org/10.2991/3ca-13.2013.44.
17. Sontakke BM, Humbe VT, Yannawar PL. Automatic ROI extraction and vein pattern imaging of dorsal hand vein images. Int J Sci Adv Res Technol. 2018; 4(3):1678–83.
18. Manju RA, Koshy G, Simon P. Improved method for enhancing dark images based on CLAHE and morphological reconstruction. Procedia Comput Sci. 2019; 165:391–8. https://doi.org/10.1016/j.procs.2020.01.033.
19. Chen L, Zheng H, Li L, Xie P, Liu S. Near-infrared dorsal hand vein image segmentation by local thresholding using grayscale morphology. In: Proceedings of 2007 1st International Conference on Bioinformatics and Biomedical Engineering; 2007 Jul 6–8; Wuhan, China. p. 868–71. https://doi.org/10.1109/ICBBE.2007.226.
20. Sheet SS, Tan TS, As’ari MA, Hitam WH, Sia JS. Retinal disease identification using upgraded CLAHE filter and transfer convolution neural network. ICT Express. 2022; 8(1):142–50. https://doi.org/10.1016/j.icte.2021.05.002.
21. Hitam MS, Awalludin EA, Yussof WN, Bachok Z. Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In: Proceedings of 2013 International Conference on Computer Applications Technology (ICCAT); 2013 Jan 20–22; Sousse, Tunisia. p. 1–5. https://doi.org/10.1109/ICCAT.2013.6522017.
22. Chin SW, Tay KG, Chew CC, Huong A, Rahim RA. Dorsal hand vein authentication system using artificial neural network. Indones J Electr Eng Comput Sci. 2021; 21(3):1837–46. http://doi.org/10.11591/ijeecs.v21.i3.pp1837-1846.
23. Akram MU, Awan HM, Khan AA. Dorsal hand veins based person identification. In: Proceedings of 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA); 2014 Oct 14–17; Paris, France. p. 1–6. https://doi.org/10.1109/IPTA.2014.7001975.
24. Hofbauer H, Jalilian E, Uhl A. Exploiting superior CNN-based iris segmentation for better recognition accuracy. Pattern Recognit Lett. 2019; 120:17–23. https://doi.org/10.1016/j.patrec.2018.12.021.
25. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998; 86(11):2278–324. https://doi.org/10.1109/5.726791.
26. Lefkovits S, Lefkovits L, Szilagyi L. CNN approaches for dorsal hand vein based identification. In: Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG); Plzen, Czech Republic. p. 51–60. https://doi.org/10.24132/CSRN.2019.2902.2.7.
27. Malik J, Girdhar D, Dahiya R, Sainarayanan G. Reference threshold calculation for biometric authentication. Int J Image Graph Signal Process. 2014; 2:46–53. https://doi.org/10.5815/ijigsp.2014.02.06.
28. Wijewickrama R, Maiti A, Jadliwala M. Write to know: on the feasibility of wrist motion based user-authentication from handwriting. In: Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks; 2021 Jun 28–Jul 2; Abu Dhabi, United Arab Emirates. p. 335–46. https://doi.org/10.1145/3448300.3468290.
29. Picart-Armada S, Barrett SJ, Wille DR, Perera-Lluna A, Gutteridge A, Dessailly BH. Benchmarking network propagation methods for disease gene identification. PLoS Comput Biol. 2019; 15(9):e1007276. https://doi.org/10.1371/journal.pcbi.1007276.
30. Junyulong Technology. Junyulong Technology product center live biometrics [Internet]. Shenzhen, China: Junyulong Technology; c2021 [cited 2023 Mar 30]. Available from: http://www.junyulong.com.cn/product/html/284.html.

Figure 1
Manual cropping process: (A) defined boundary box of an image and (B) segmented output.
Figure 2
Flowchart of hybrid automatic segmentation (i.e., the HHM method). CLAHE: contrast-limited adaptive histogram equalization, HHM: hybrid automatic segmentation method.
Figure 3
Example of data augmentation operations on a segmented image.
Figure 4
Schematic diagram of the dorsal hand vein processing and training workflow. HHM: hybrid automatic segmentation method.
Figure 5
Test accuracy of the model trained with datasets segmented using the manual and HHM methods, and with and without an augmentation strategy. HHM: hybrid automatic segmentation method.
Table 1
Distribution of images for training, validation, and testing of the model
Dataset Without augmentation With augmentation
Training 1,200 3,600
Validation 100 100
Testing 200 200
Total images 1,500 3,900
Table 2
Training (Tacc) and validation accuracy (Vacc) of the model trained using the data processing strategies and training parameters adopted in this study
Strategy Tacc (%) Vacc (%)
Manual method 98.44 93.00
HHM method 100.00 69.00
Manual method with augmentation 99.22 95.00
HHM method with augmentation 100.00 84.00

Training options (all strategies): epoch = 50, mini-batch size = 128, learning rate = 0.0008

HHM: hybrid automatic segmentation method.

Table 3
Comparison of classification accuracy between our method and the state-of-the-art
Study Dataset Segmentation/quality enhancement strategy Classification accuracy (%)
Al-johania and Elrefaei [11] Bosphorus - 95.51
Badawi 100

Lefkovits et al. [26] NCUT - 96.50
CLAHE 96.08
Coded 95.10
Coarse vein 91.67

Mohaghegh and Payne [13] 11k Hands Augmentation 93.70
IITD 80.10

Proposed Bosphorus Manual method 87.50
HHM method 76.50
Manual method^a 91.50
HHM method^a 88.00

NCUT: North China University of Technology, CLAHE: contrast-limited adaptive histogram equalization, HHM: hybrid automatic segmentation method.

^a With augmentation.
