

2023KISS Fingerprint Recognition utilizing learning-based method


Abstract

We have developed a fingerprint recognition system that uses a Siamese Neural Network (SNN) to compare an input fingerprint with entries in a fingerprint database, generating similarity scores and producing recognition results. To enhance the system's robustness, we address damaged regions in fingerprints by using a Residual Neural Network with a U-Net structure to repair the fingerprint images. The restored images are then fed into the SNN module, improving the system's robustness and accuracy.

In this work, we propose a fingerprint reconstruction model based on the U-net architecture. When tested on the SOCOFing database, which contains damaged fingerprints of varying severity, our model achieved significantly better restoration than the traditional U-net; the MSE between the repaired images produced by our model and the undamaged prints is 0.007. After attaching the U-Res-Net to the front of the SNN, the accuracy reaches 96.35% with only 10 epochs of training.

Keywords: fingerprint recognition, U-net, Siamese network, residual network, SOCOFing

I. Introduction

Fingerprints, as an important biometric feature, can be used to verify the identity of users, so fingerprint recognition plays a significant role in various electronic terminals. Fingerprint locks are an essential application, used not only for unlocking smartphones but also widely deployed in access control systems. Traditional fingerprint recognition involves complex image processing, which limits both the speed and the accuracy of recognition. In this study, we focus on comparing and matching prints that may or may not share the same label; the comparison outputs a score indicating whether the two images are similar. Compared to traditional methods, neural networks offer greater accuracy and speed, making them the key technique in this research.

Figure 1-1. Example of the structure of Siamese neural network
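A minimal sketch of such a Siamese network is shown below (PyTorch is assumed; the layer sizes and image dimensions are illustrative, not the configuration used in this work). Both inputs pass through one shared-weight embedding branch, and their Euclidean distance serves as the similarity score.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional embedding branch (hypothetical channel sizes).
        self.embed = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128),
        )

    def forward(self, x1, x2):
        e1, e2 = self.embed(x1), self.embed(x2)
        # Smaller distance -> more similar fingerprints.
        return F.pairwise_distance(e1, e2)

# Usage: two batches of 1-channel fingerprint crops of the same size.
net = SiameseNet()
a, b = torch.randn(8, 1, 96, 96), torch.randn(8, 1, 96, 96)
print(net(a, b).shape)  # torch.Size([8])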

Furthermore, fingerprint images are inevitably subject to some degree of damage in certain regions. Damaged fingerprints can appear either in the fingerprint database or among the test fingerprints. When a damaged print is compared with a normal one, differences in the damaged region lower the image similarity score and therefore the accuracy. Performance degrades further when the SNN is applied to severely altered (altered-hard) cases, so restoring the damaged regions before feeding the prints into the SNN can alleviate this issue. An SNN built with ResNet blocks is considerably more accurate than one using basic convolutional layers, reaching 98.7% for easy cases, 98.6% for medium cases, and 98% for difficult cases. However, prints with different scales of damage still yield different accuracies.

U-net is a network widely used for medical image segmentation. It adopts a fully convolutional approach, using convolution and pooling operations for feature extraction, and reconstructs the image through up-sampling and convolution. During up-sampling, the feature maps extracted in the contracting path are concatenated with the up-sampled maps, which better preserves features and enables image segmentation.
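The up-sampling step with its skip connection can be sketched as follows (PyTorch assumed; channel counts and sizes are illustrative): the up-sampled decoder feature map is concatenated with the encoder feature map of the same resolution before further convolution.

import torch
import torch.nn as nn

up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # illustrative sizes
conv = nn.Conv2d(64 + 64, 64, kernel_size=3, padding=1)

encoder_feat = torch.randn(1, 64, 64, 64)    # saved during the contracting path
decoder_feat = torch.randn(1, 128, 32, 32)   # coming up the expanding path

x = up(decoder_feat)                         # 1 x 64 x 64 x 64
x = torch.cat([encoder_feat, x], dim=1)      # skip connection: 1 x 128 x 64 x 64
x = conv(x)                                  # fused features: 1 x 64 x 64 x 64
print(x.shape)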

When training U-net for image segmentation, pre-annotated segmentation masks are used as training targets. For this project, however, we aim to use U-net for image restoration, which requires replacing the segmentation masks with the corresponding healthy fingerprint images during training.

Figure 1-2. Example of the structure of U-net

Figure 1-3. Image segmentation via U-net

For the dataset, we chose SOCOFing. The Sokoto Coventry Fingerprint Dataset (SOCOFing) is a biometric fingerprint database designed specifically for academic research. It consists of 6,000 fingerprint images collected from 600 African subjects and includes attributes such as gender labels, hand and finger names, as well as synthetically altered versions of the fingerprints. The alterations comprise three different levels of obliteration (Obl), central rotation (CR), and z-cut (Zcut). We can therefore first use U-net to restore the altered images and then feed them into the SNN to compare their similarity and obtain the recognition result.


Figure 1-4. SOCOFing images with the three damage types and levels


One of the most successful schemes for optimizing U-net is the Residual Neural Network (ResNet). In traditional neural networks, increasing the depth often improves performance, but as the network becomes very deep, gradients can vanish or explode during backpropagation, making training difficult or unstable. These problems limit the depth of neural networks and prevent the potential of deeper architectures from being fully exploited. ResNet tackles them by introducing "residual blocks." Each residual block contains a skip connection that lets gradients flow directly across layers, mitigating the vanishing- and exploding-gradient issues. The skip connection bypasses one or more convolutional layers and adds the input directly to the output of the subsequent layers.

Figure 1-5. Basic blocks of the Residual Neural Network (Basic Block, Bottleneck Block)
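A minimal sketch of the basic residual block in Figure 1-5 (PyTorch assumed; the channel count is illustrative): the skip connection adds the block's input directly to its output so gradients can flow through the shortcut.

import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                        # skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)    # shortcut added before the final activation

print(BasicBlock(64)(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])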

II. Related Work

In 2015, Olaf Ronneberger et al. proposed the U-net architecture and applied it to image segmentation. In the same year, Gregory Koch et al. introduced the Siamese neural network model under one-shot learning conditions, laying the structural foundation for our system. For the optimization of U-net, the residual network structure proposed by Kaiming He et al. in 2015 was effectively utilized; their work played a significant role in deepening neural networks and reducing degradation, thereby enhancing the performance of the U-net part. The Transformer model presented by Ashish Vaswani et al. in 2017 improved the feature extraction capability of the SNN system, leading to favorable results in the improvement of the SNN component of our system. Additionally, the U-net++ model proposed by Zongwei Zhou et al. in 2018, along with its ideas on optimizing and storing network parameters, was considered during the design of our system. Furthermore, the diffusion-based inpainting model built on a U-net structure introduced by Andreas Lugmayr et al. in 2022 offers an alternative direction for further optimization of our system.

III. Method

Firstly, I reproduced the network structure in Figure 1-2 with some minor modifications. For the convolutions, I set the padding to 1 so that the convolution operation does not change the size of the image. Secondly, to address the loss of features in the max-pooling layers, I replaced them with convolution operations that keep the same number of channels, thereby preserving the original features. During training, I initially used both the BCE (Binary Cross Entropy) and Cross Entropy loss functions, as these are widely applied in image segmentation. After testing, however, I found that neither loss decreased effectively during backpropagation, so I opted for the simpler MSE (Mean Squared Error) loss instead. The MSE loss computes the mean squared pixel-wise difference between two images, which reflects fine differences between them well. With this simpler loss function, I trained the first U-net model for the task.
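The following sketch illustrates these modifications and the loss choice (PyTorch assumed; channel counts and image sizes are illustrative): padding-1 convolutions keep the spatial size, a stride-2 convolution with an unchanged channel count replaces max-pooling, and MSE is used as the restoration loss.

import torch
import torch.nn as nn

def down_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),   # size-preserving conv
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1),   # learned down-sampling
    )

criterion = nn.MSELoss()   # mean squared pixel-wise difference

block = down_block(1, 32)
damaged = torch.randn(4, 1, 96, 96)
print(block(damaged).shape)          # torch.Size([4, 32, 48, 48])
restored, target = torch.rand(4, 1, 96, 96), torch.rand(4, 1, 96, 96)
print(criterion(restored, target))   # scalar loss used for backpropagation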

Figure 3-1. Restored result for altered-easy image *(Input, Output, Target)

The results indicate that the current U-net model can already repair simple types of damage, so I then trained the network specifically on the moderate and challenging images. Unlike the results for the simple group, only the "Obl" type of damage is handled well here: the network recognizes its features and performs effective repairs, whereas for the "CR" and "Zcut" types the repair performance is unsatisfactory.

Figure 3-2. Restored result for altered-hard image *(Input, Output, Target)

Although the initial repairs already helped improve the recognition accuracy of the SNN, exploring a more powerful image repair algorithm remained a good option.

Firstly, in order to enhance the system's versatility and stability, it is beneficial to mix the three levels of damaged data into one comprehensive dataset for training, as this aligns better with real-world scenarios. Therefore, I rebuilt the dataset-importing module.
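A possible shape of the rebuilt importing module is sketched below (a PyTorch Dataset is assumed; the "Altered-Easy/Medium/Hard" folder names follow the public SOCOFing layout, while the root path and everything else are illustrative).

import os
from torch.utils.data import Dataset
from PIL import Image

class MixedAlteredDataset(Dataset):
    """Mixes all three damage levels into one training pool."""
    LEVELS = ["Altered-Easy", "Altered-Medium", "Altered-Hard"]

    def __init__(self, root, transform=None):
        self.samples = []
        for level in self.LEVELS:
            folder = os.path.join(root, "Altered", level)
            for name in sorted(os.listdir(folder)):
                self.samples.append(os.path.join(folder, name))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img = Image.open(self.samples[idx]).convert("L")  # grayscale fingerprint
        return self.transform(img) if self.transform else img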

Secondly, to address the blurriness of the repair results, adjusting the loss function seemed a good option. I speculated that the blurriness might indicate that training was stuck in a local optimum rather than the global optimum, making it difficult for the process to escape this "trap". I therefore tried various loss functions, including an edge loss (which also trains on the image's gradient information), an SSIM (Structural Similarity Index) loss (which yields more structured, systematic outputs), and weighted combinations of several losses. However, these new loss functions produced even blurrier images, confirming that the MSE loss was the best choice after all.
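For reference, a sketch of the kind of combined loss that was tried (PyTorch assumed; the edge term and the weights are illustrative, and an SSIM term from a library such as pytorch_msssim could be added in the same weighted fashion).

import torch
import torch.nn.functional as F

def edge_loss(pred, target):
    # L1 distance between horizontal and vertical image gradients.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()

def combined_loss(pred, target, w_mse=1.0, w_edge=0.5):
    # Weighted mixture of pixel loss and gradient loss (hypothetical weights).
    return w_mse * F.mse_loss(pred, target) + w_edge * edge_loss(pred, target)

pred, target = torch.rand(2, 1, 96, 96), torch.rand(2, 1, 96, 96)
print(combined_loss(pred, target))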

To escape local optima, a scheduler that dynamically adjusts the learning rate was added, leading to better learning outcomes. I also changed the learning rate and continued training. These operations slightly improved the system's performance, but a considerable gap remained before the repair issue was fully resolved.
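A sketch of the scheduler setup (PyTorch assumed; the optimizer, learning rate, and scheduler parameters are illustrative rather than the values used in the experiments).

import torch

model = torch.nn.Conv2d(1, 1, 3, padding=1)          # placeholder for the U-net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3)

for epoch in range(10):
    train_loss = torch.rand(1).item()                 # placeholder for the epoch loss
    optimizer.step()
    scheduler.step(train_loss)                        # lower the LR when the loss plateaus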

Furthermore, regarding the blurry repair results, I hypothesized that these images might resemble the "Obl" type in the original data. I therefore tried chaining two to three U-net models with the same weights during testing, hoping for better repair outcomes, but the results did not improve significantly.


Figure 3-3. Restored result via U-net (easy version) *(Input, Output_Unet1&2&3, Target)

It is worth noting that because the U-net repair process inherently introduces noise-like values, passing an image through U-net multiple times acts as a signal amplifier that keeps amplifying the noise, producing larger deviations in the output. Verification showed that the magnitude of the introduced noise depends on the version of U-net applied: for the U-net trained on simple damage the noise is relatively small, while for the U-net trained on challenging damage the noise is more significant.

Replacing the convolutional modules in U-net with basic residual blocks effectively enhances the network's feature extraction capability. For different network depths, residual networks apply different residual blocks to obtain better outputs. In U-net models, applying residual blocks on the encoding side is the mainstream optimization for image segmentation; however, using residual blocks on the decoding side also improves performance.

Therefore, my final approach uses basic residual blocks on both the encoding and decoding sides, controlling the convolution stride to form a continuous feature extraction layer that replaces the original convolution layers. This model effectively overcomes the "blurry damage region" issue caused by the local optima encountered previously and shows excellent repair performance for images with varying degrees of mixed damage. Under similar training conditions, the original U-net achieved an MSE of 0.3, the preliminarily optimized U-net reached an MSE of 0.02, and the final Res-U-net achieved an MSE of around 0.005.

Table 3-1. MSE of different versions of U-net on the test set
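A sketch of the residual feature-extraction layer that replaces the original convolutions (PyTorch assumed; channel counts are illustrative). A strided residual block both extracts features and halves the resolution, so one stage plays the role of convolution plus down-sampling.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Project the shortcut when the shape changes (different channels or stride).
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

# One encoder stage: feature extraction followed by strided down-sampling.
stage = nn.Sequential(ResBlock(32, 64), ResBlock(64, 64, stride=2))
print(stage(torch.randn(1, 32, 96, 96)).shape)   # torch.Size([1, 64, 48, 48])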

Finally, I applied this network to the training and testing process of the SNN, forming a complete fingerprint recognition system. I also wrote several transformation functions to perform batch U-net repairs on the dataset images and embedded the repair step within the SNN training process.

IV. Result

(1) Res-U-Net:

Figure 4-1. Final result via Res-U-net *(Input, Output, Target)
Table 4-1. MSE of different versions of Res-U-net on the test set

(2) SNN:

The Siamese network was trained for 10 epochs with a batch size of 16, using an 80:20 split between the training and test sets. During testing, the Euclidean distance was computed, and a threshold of 0.62 was selected based on the model's performance. This threshold serves as the decision boundary, classifying image pairs as "match" (label 0) or "no match" (label 1) depending on their similarity scores. The Siamese network achieved an accuracy of 97.83%.
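The decision rule can be sketched as follows (the distances are assumed to come from the trained Siamese network; only the 0.62 threshold is taken from the experiment, the example values are made up).

import torch

THRESHOLD = 0.62

def classify(distances: torch.Tensor) -> torch.Tensor:
    # Distances below the threshold mean the pair matches (label 0).
    return (distances >= THRESHOLD).long()    # 0 = match, 1 = no match

def accuracy(distances: torch.Tensor, labels: torch.Tensor) -> float:
    return (classify(distances) == labels).float().mean().item()

# Toy example with hypothetical distances and ground-truth labels.
d = torch.tensor([0.10, 0.55, 0.80, 1.30])
y = torch.tensor([0, 0, 1, 1])
print(accuracy(d, y))   # 1.0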

(3) U-Net-Siamese pipeline:

The U-Net-Siamese pipeline was trained for 10 epochs with a batch size of 4 because of the high memory usage of the connected model. The pipeline achieved a final accuracy of 96.35% with a threshold of 0.85. We surmise that training the U-Net and the Siamese network jointly could further increase the pipeline's accuracy.
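A minimal sketch of how the two stages are connected (PyTorch assumed; the placeholder modules stand in for the trained Res-U-net and the Siamese embedding branch, which are not specified here): each fingerprint is first restored by the U-net, and the restored pair is then scored by the Siamese network.

import torch
import torch.nn as nn
import torch.nn.functional as F

unet = nn.Identity()                     # placeholder for the trained Res-U-net
snn_embed = nn.Sequential(nn.Flatten(), nn.Linear(96 * 96, 128))  # placeholder branch

def pipeline_distance(img1, img2):
    r1, r2 = unet(img1), unet(img2)      # restore damaged regions first
    e1, e2 = snn_embed(r1), snn_embed(r2)
    return F.pairwise_distance(e1, e2)   # then score the restored pair

a, b = torch.rand(4, 1, 96, 96), torch.rand(4, 1, 96, 96)
print(pipeline_distance(a, b).shape)     # torch.Size([4])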

V. Conclusion

We constructed a ResNet based on the U-net structure, with residual blocks in both the encoding and decoding parts. This network has powerful feature extraction capabilities: compared to the basic U-net, the MSE is improved by approximately 43 times. Such a network can be applied at the front end of SNN fingerprint recognition to repair damaged fingerprints. Although the U-Net-Siamese pipeline did not increase the accuracy of fingerprint recognition, this may be resolved by jointly training the components of the pipeline.

VI. Subsequent work

Subsequent work can focus on several directions. Firstly, the complex structural parameters of the network can be optimized and simplified; the U-net++ structure can be referenced to implement deep supervision and network pruning at prediction time. Exploring the optimal channel settings is also recommended.

Secondly, in the current network structure no feature processing is performed in the long (skip) connections. Referring to the stable diffusion model for feature extraction and propagation could improve network efficiency.

Thirdly, the generalizability of the network to different fingerprint databases should be explored. To improve training and prediction efficiency, the current system converts images to grayscale and resizes the input. These operations discard some feature-carrying information; for example, in RGB fingerprint databases the colors can reflect the relative height of the ridges. Subsequent work can therefore verify the model's ability to handle different fingerprint applications and restoration tasks.
