
International Journal of Scientific Research in Multidisciplinary Studies                Research Paper
Vol.6, Issue.3, pp.20-27, March (2020)                                E-ISSN: 2454-9312, P-ISSN: 2454-6143

New Full Iris Recognition System and Iris Segmentation Technique Using Image Processing and Deep Convolutional Neural Network

Omar Medhat Moslhi

Arab Academy for Science, Technology and Maritime Transport, Giza, 32817, Egypt

Available online at: www.isroset.org

Received: 28/Jan/2020, Accepted: 14/Feb/2020, Online: 30/Mar/2020


Abstract- Iris recognition is a technology used in many security systems. Irises differ among all people: every person has a unique iris shape, and no two irises share the same pattern. In this paper, a new iris recognition model is introduced to make this technology easy for anyone to use; in particular, any image can be fed to the model, and the model filters the input itself, keeping only the images that pass its checks. This paper presents an iris recognition system that covers the whole pipeline, from eye detection to recognizing the iris images. It also presents a new iris recognition process that blends image processing techniques with deep learning, and a new iris segmentation technique that extracts iris images efficiently with high accuracy. The iris recognition model begins with an eye detection process, followed by an iris detection process that locates the iris inside the detected eyes; the iris segmentation process then produces iris images that are saved and used in the last process, which is responsible for iris classification using a convolutional neural network. The iris recognition system was tested on well-known datasets: Casia Iris-Thousand, Casia Iris-Interval, Ubiris Version 1 (v1) and Ubiris Version 2 (v2).

Keywords: Iris Recognition, Iris Segmentation, Computer Vision, Convolutional Neural Network, Image Processing

I. INTRODUCTION

In recent years iris recognition has gained an important place, especially in the field of biometric pattern recognition [18]. Iris recognition plays an important role in many applications; it helps in the identification of different persons with high accuracy, as each person has unique iris features and the probability that two similar irises exist is negligibly small [9][23]. The iris has random morphogenesis, which gives each person a unique pattern [10].

Iris recognition gives higher accuracy than other human characteristics used in user authentication, such as fingerprints and handwriting [8]. Many governments and institutions use biometric technology in their security systems because of this high accuracy [36].

This paper introduces a new and complete system for iris recognition which begins with eye detection followed by iris detection; if the image successfully passes these steps it goes through the iris segmentation step, and the final step is iris classification using convolutional neural networks. The paper also introduces a new iris segmentation method to extract features from the image.

Section 2 discusses related work, Section 3 gives an overview of the proposed model, Section 4 gives a detailed explanation of each step of the model, Section 5 contains the results, and Section 6 the conclusion.

II. RELATED WORK

Hugo and Luis et al. [10] studied the relation between error rates and the segmentation process and showed that error rates increase when the iris is inaccurately segmented. In [19] a CNN was used for iris recognition, and it was observed that many weakly correlated CNN matching scores can be obtained which together provide a strong model; sparse linear regression techniques are used in that paper to handle problems such as regularization.

Hugo and Luis et al. [23] studied the relation between the sampling rate in the iris normalization stage and the overall accuracy of iris recognition. Hugo and Luis et al. [17] proposed an iris classification model which divides the segmented and normalized iris into six regions and uses a fusion rule to perform the classification.

In [15] the authors discussed iris image preprocessing for iris recognition in unconstrained environments using deep representations. Their approach begins with segmentation and normalization, then data augmentation is used to increase the training samples; feature extraction is done using two CNN models, and cosine distance is used for classification.

Ahmed Sarhan et al. [18] proposed an algorithm that uses the discrete cosine transform (DCT) to extract distinctive features from the iris image; the extracted feature vector is then fed to an ANN for classification.


In [32] a new segmentation algorithm was used to detect the pupil; it depends on a threshold that detects the black rectangular area in the pupil, where the grayscale values are very small. That paper uses a neural network to recognize the iris patterns; the network has two hidden layers, the first containing 120 neurons and the second containing 81 neurons.

III. MODEL OVERVIEW

The iris recognition model begins with an eye detection process, which tries to find eyes in the images collected by the camera. The second process is iris detection; in this phase the iris inside the detected eye images is located, to ensure that the eyes have a visible iris that can be segmented in the next steps. The third process is iris segmentation, which extracts the features that are used in the last process by the convolutional neural network (CNN) model to train and test on iris images.

Fig 1 shows the architecture of the iris recognition system

IV. IRIS RECOGNITION MODEL

4.1. Eye detection

Eye detection has a lot of different applications, and iris recognition is one of them [7]. The model uses Haar cascade classifiers to detect eyes, as these classifiers are fast, do not need a lot of computational time and give high accuracy [6].

Images that come from the camera pass through the Haar cascade classifiers, which detect eyes in these images. This stage ensures that the images contain an eye; an image passes to the next step if and only if the classifier detects eyes. Figure 2 shows the output of this process.

Figure 2 shows the eye detection process output
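The eye detection stage can be illustrated with a short OpenCV sketch. This is a minimal, hypothetical rendering of the filtering behaviour described above, assuming OpenCV's stock haarcascade_eye.xml model and typical detectMultiScale parameters; the paper does not state which cascade file or detection settings were used.

```python
import cv2

# Haar cascade eye detector; the cascade file and parameters are assumptions.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(image_bgr):
    """Return cropped eye regions; an empty list means the image is rejected."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in eyes]

# Usage: only images for which detect_eyes() returns at least one crop
# are passed on to the iris detection stage.
```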

4.2. Iris detection

Iris detection is a very important step in the model, as training deep neural models without iris images would be worthless. We can define the iris as the region between the pupil and the rest of the eye [4]. The Hough transform has many applications; it has been used to detect different patterns, for example lines and circles [5]. We can define the Hough transform algorithm mathematically as follows: for each pixel (x, y), the algorithm uses an accumulator to detect r.

Fig 3 shows Hough transform parameters

where:
r: distance from the origin to the closest point on the straight line
θ: the angle between the x axis and r

For θ from 0° to 360°:
    r = x·cos(θ) + y·sin(θ)                              (1)
    Accumulator(r, θ) = Accumulator(r, θ) + 1            (2)

The model takes the images detected in the last phase and applies the Hough transform on them. This phase ensures that there is an iris inside the eyes, because it is possible that the eyes are closed or that something else prevents the iris from appearing in the image; this phase therefore detects the presence of the iris inside the eyes, and an image passes to the next step only if it successfully passes this check. Figure 4 shows the output of this process.

Figure 4 shows iris detection process output
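Equations (1) and (2) describe the accumulator voting that underlies the Hough transform. One common way to implement the circular variant that checks for an iris boundary is OpenCV's cv2.HoughCircles, sketched below; the radius limits, blur size and accumulator thresholds are illustrative assumptions, not values given in the paper.

```python
import cv2

def has_visible_iris(eye_bgr):
    """Apply the circular Hough transform to an eye crop and report whether
    at least one iris-like circle is found (thresholds are hypothetical)."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)           # smooth noise before voting
    h = gray.shape[0]
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=h // 2,
        param1=100, param2=30,               # Canny and accumulator thresholds
        minRadius=h // 8, maxRadius=h // 2)  # plausible iris radii (assumed)
    return circles is not None

# Usage: eye crops for which has_visible_iris() is False (closed eyes,
# occlusions, etc.) are discarded before segmentation.
```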

4.3. Iris Segmentation

Iris segmentation plays the most important role in iris recognition, as the features extracted by the segmentation process are used in the classification process, so the accuracy of classification depends on the quality of the segmented images.

If an image passes the first and second steps successfully it reaches this step. In this paper, a new segmentation algorithm is introduced which contains three steps: Choosing Threshold, Morphological Process, and Contour Detection.

Fig 5 shows iris segmentation architecture

Morphology in image processing provides structure and analysis of images and has many applications in areas such as medical imaging and cellular biology [33][34]. Operations performed in morphology are interactions between an object and a structuring element. In this paper, opening and closing operations are used, which we can define mathematically as follows.

If A and B represent the grayscale image and the structuring element respectively, and E is the Euclidean space where A exists, then dilation is defined as:

    A ⊕ B = {z ∈ E | Bz ∩ A ≠ ∅}           (3)

and erosion as:

    A ⊖ B = {z ∈ E | Bz ⊆ A}               (4)

From equations (3) and (4) we can define the opening and closing operations as:

    Opening(A, B) = (A ⊖ B) ⊕ B            (5)

    Closing(A, B) = (A ⊕ B) ⊖ B            (6)

Figure 6: a- example of opening operation, b- example of closing operation

Iris images captured by a camera or any other sensor differ in many ways: the environment surrounding the iris has many variables, the shooting distance can be far or near, and the lighting changes, among many other factors. The algorithm therefore has to adapt to all of these variables so that it captures the correct information and is not affected by other factors, and this is the algorithm presented in this paper.

The algorithm uses morphological techniques to extract iris information, but in a way that makes it robust to the changes mentioned before.

The morphological process begins by defining a threshold for the image, then applies opening and closing morphological operations, and finally applies a bitwise OR operator between the opened and closed images.

For each image, the algorithm begins by defining a golden reference, which is the sum of all pixels in the image after it passes through the morphological process with threshold = zero. The threshold is then increased to produce a working reference, which is compared with the golden reference; this increase of the threshold continues until the working reference starts to differ from the golden reference by a certain amount, at which point the increase stops and the working reference is passed to the next step. The output image from this step is then processed with the contour (border following) algorithm proposed in [35]. The final results are shown in Figure 7.

Figure 7 shows output from the segmentation process on different datasets: a- Ubiris v2, b- Casia-Iris-Thousand, c- Casia-Iris-Interval, d- Ubiris v1
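The threshold search and the morphological step described above can be sketched in a few lines of OpenCV/NumPy. This is a hedged reconstruction, not the author's code: the stopping tolerance, structuring element size and exact comparison rule are not given in the paper and are chosen here only for illustration. The contour step uses cv2.findContours, which implements the border-following algorithm of [35].

```python
import cv2
import numpy as np

KERNEL = np.ones((5, 5), np.uint8)          # structuring element size is assumed

def morphological_response(gray, threshold):
    """Threshold, then OR the opened and closed binary images, as in Section 4.3."""
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, KERNEL)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, KERNEL)
    return cv2.bitwise_or(opened, closed)

def segment_iris(gray, tolerance=0.05):
    """Golden-reference threshold search followed by contour detection.
    `tolerance` (the relative change that stops the search) is an assumed value."""
    golden = morphological_response(gray, 0).sum()      # golden reference at t = 0
    t = 0
    while t < 255:
        working = morphological_response(gray, t + 1).sum()
        if abs(golden - working) > tolerance * max(golden, 1):
            break                                       # working reference diverged enough
        t += 1
    mask = morphological_response(gray, t)
    # Border following (Suzuki & Abe [35]) extracts the iris contour(s).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours
```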


4.4. Iris classification

Deep neural network models have become a very strong tool in many applications, and image classification is one of the applications of deep learning [1][2]. The model uses a convolutional neural network (CNN) for iris recognition, as a CNN can learn the unique features in images [3] and differentiate between the different classes.

In this process, a pre-trained convolutional neural network, DenseNet-201, is used for iris classification [39]. Table 1 shows the architecture of DenseNet-201, which contains four dense blocks and three transition layers. A flattening layer and a dense layer followed by a softmax layer are added on top of the DenseNet-201 bottleneck output features.

In the training process the Adam optimizer was used with beta_1 = 0.9, beta_2 = 0.999, learning rate = 0.001, batch size = 32 and number of epochs = 30. The softmax activation function is used in the last layer. A data augmentation technique, which changes the illumination of the images, was applied to the datasets before the training process.

Table 1 shows the DenseNet-201 architecture.
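A minimal Keras sketch of the classification head and training setup described above is given below. It follows the stated hyper-parameters (Adam with beta_1 = 0.9, beta_2 = 0.999, learning rate 0.001, batch size 32, 30 epochs, softmax output) and uses brightness shifts as a stand-in for the illumination-based augmentation; the input size, brightness range, validation split and data directory are assumptions, not values taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 241        # e.g. Ubiris v1/v2; 42 or 1000 for the CASIA subsets
INPUT_SHAPE = (200, 200, 3)

# Pre-trained DenseNet-201 backbone without its ImageNet classifier head.
backbone = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)

# Flatten the bottleneck features and add a dense softmax classification head.
model = models.Sequential([
    backbone,
    layers.Flatten(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss="categorical_crossentropy",
    metrics=["accuracy"])

# Illumination-style augmentation via brightness shifts (range is assumed).
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    brightness_range=(0.7, 1.3), validation_split=0.2, rescale=1.0 / 255)
train_flow = datagen.flow_from_directory(
    "iris_dataset/", target_size=INPUT_SHAPE[:2],   # hypothetical directory
    batch_size=32, class_mode="categorical", subset="training")

model.fit(train_flow, epochs=30)
```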
V. RESULTS AND DISCUSSION

The proposed iris model was tested on four datasets, which can briefly be described as follows:

1- Ubiris Version 1 (v1): Ubiris v1 contains 1877 images from 241 subjects and consists of two sessions. Only session 1, which contains 1214 images, was used in this paper, because session 2 contains more images and using both would make the dataset unbalanced. The images were collected using a Nikon E5700 camera with focal length 71 mm and image resolution 800×600 pixels [38].

2- Ubiris Version 2 (v2): Ubiris v2 is the extension of Ubiris v1. It contains 1877 images from 241 subjects and consists of two sessions. For the same reasons as in Ubiris v1, only session 1 was used. The images were collected using a Canon EOS 5D camera with focal length 400 mm and image resolution 200×150 pixels [37].

3- Casia Iris-Thousand: Casia Iris-Thousand is part of Casia version 4, which contains six subsets. Casia Iris-Thousand contains 20000 iris images from 1000 subjects, collected using an IKEMP-100 camera with resolution 640×480 pixels.

4- Casia Iris-Interval: Casia Iris-Interval is part of Casia version 4, which contains six subsets. The number of subjects used in this paper is 42, and each subject has 18 images. The images were collected using a Casia close-up iris camera with resolution 320×280 pixels.

Table 2 shows information on the images of each dataset before they pass to the iris classification process.

Table 2:
Dataset                 | Number of classes | Number of samples | Image size before training (pixels) | Output size from DenseNet | Test images per class
Casia Iris-Interval     | 42                | 1344              | 200×200                              | 2×2×1664                  | 3 to 4
Casia V4 Iris-Thousand  | 1000              | 40000             | 70×70                                | 6×6×1664                  | 2
Ubiris V1               | 241               | 2428              | 200×200                              | 6×6×1664                  | 1 to 2
Ubiris V2               | 241               | 2428              | 200×200                              | 6×6×1664                  | 1 to 2
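Table 2 implies a simple per-dataset preparation step: resize every segmented iris image to the listed input size and hold out a few images per class for testing. The sketch below shows one way such a split could be built; the directory layout, file handling and random seed are assumptions for illustration only.

```python
import os
import random
import cv2

def prepare_dataset(root, image_size=(200, 200), test_per_class=2, seed=0):
    """Resize segmented iris images and hold out `test_per_class` images per
    subject, mirroring the per-dataset settings listed in Table 2."""
    rng = random.Random(seed)
    train, test = [], []
    for subject in sorted(os.listdir(root)):                 # one folder per class
        files = sorted(os.listdir(os.path.join(root, subject)))
        rng.shuffle(files)
        for i, name in enumerate(files):
            img = cv2.imread(os.path.join(root, subject, name))
            img = cv2.resize(img, image_size)
            (test if i < test_per_class else train).append((img, subject))
    return train, test

# Example: Casia Iris-Thousand would use image_size=(70, 70), test_per_class=2.
```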


Figure 8 shows examples from the different datasets: a- Ubiris v1, b- Casia Iris-Thousand, c- Casia Iris-Interval, d- Ubiris v2

The parameter used to evaluate the level of the model in iris recognition is accuracy:

    Accuracy = (number of correctly classified test images / total number of test images) × 100

Fig 9 shows the model accuracy and model loss with the number of epochs for Casia Iris-Thousand

Fig 10 shows the model accuracy and model loss with the number of epochs for Casia Iris Interval

Fig 11 shows the model accuracy and model loss with the number of epochs for Ubiris Version 1


Fig 12 shows the model accuracy and model loss with the number of epochs for Ubiris Version 2

Figures 9, 10, 11 and 12 show the train and test accuracies and losses against the number of epochs on Casia Iris-Thousand, Casia Iris Interval, Ubiris Version 1 and Ubiris Version 2 respectively. The accuracies achieved on the test sets are 99%, 100%, 99.32% and 98.29% on Casia Iris-Thousand, Casia Iris Interval, Ubiris Version 1 and Ubiris Version 2 respectively.

The test accuracies are used for comparison with other iris recognition systems. Table 3 shows a comparison between different iris recognition methods on each dataset used in this paper. The proposed iris recognition system performs better than the other methods on each dataset.

The results show that the accuracy on all datasets ranges from 98% to 100%, which indicates that the proposed model is robust, as it was tested on different datasets and environments.

Table 3: comparison between different iris recognition systems

Casia Iris-Thousand
Reference | Method                                                                | Recognition Accuracy
[25]      | DenseNet                                                              | 98.80%
[14]      | VGG net                                                               | 90%
[27]      | Capsule                                                               | 83.1%
[26]      | M-EGM                                                                 | 98.80%
[29]      | Alex-Net                                                              | 98%
[31]      | MiCoRe-Net                                                            | 88.70%
Proposed  |                                                                       | 99%

Casia Iris-Interval
Reference | Method                                                                | Recognition Accuracy
[13]      | uncertainty theory method                                             | 99.60%
[12]      | KL Tracking                                                           | 99.75%
[24]      | Krawtchouk moments with Manhattan distance                            | 99.80%
[16]      | k-nearest subspace, sector-based and cumulative sparse concentration  | 99.43%
Proposed  |                                                                       | 100%

Ubiris v1
Reference | Method                                                                | Recognition Accuracy
[30]      | HSV color space                                                       | 97.43%
[28]      | shape analysis                                                        | 95.08%
[21]      | Gabor filter                                                          | 93.90%
[11]      | Sum-Rule Interpolation                                                | 98.00%
[22]      | curvelet transform                                                    | 97.50%
Proposed  |                                                                       | 99.32%

Ubiris v2
Reference | Method                                                                | Recognition Accuracy
[24]      | Dual-Hahn moments                                                     | 97.5%
[24]      | Krawtchouk moments                                                    | 94.5%
[40]      | fuzzy matching                                                        | 97.11%
[31]      | MiCoRe-Net                                                            | 96.12%
[20]      | k-NN                                                                  | 94.8%
Proposed  |                                                                       | 98.29%

VI. CONCLUSION

This paper proposed a new iris recognition system which achieves high accuracy on different public datasets. The paper also proposes a new iris segmentation method which contributes to the final accuracy on each dataset. The performance of the proposed iris recognition model is better than that of the other methods, and the newly proposed iris segmentation method achieves high accuracy, which leads to significant results in the classification step. Existing methods for iris recognition focus on iris classification and iris segmentation, but no methods focus on the steps before that; in this paper the proposed method covers the full journey from identifying the eyes, to detecting the iris, to extracting iris features, to classifying the iris, so the proposed iris recognition system is a full method which can be tested on any type of image. Table 3 lists different methods and approaches for iris recognition, and the proposed method achieves the highest accuracy among all of them: its accuracy on Casia Iris-Thousand, Casia Iris-Interval, Ubiris Version 1 and Ubiris Version 2 is higher than that of the other models in Table 3.


Conflict of Interest: The author declares that he has no conflict of interest.

REFERENCES

[1] J. Yosinski, T. Fuchs, H. Lipson, A. Nguyen, "Understanding Neural Networks Through Deep Visualization," in Deep Learning Workshop of Int. Conf. on Machine Learning, 2015.
[2] Li Y., Yuan Y., "Convergence analysis of two-layer neural networks with ReLU activation," in Advances in Neural Information Processing Systems, USA, pp.597-607, 2017.
[3] Matthew D. Zeiler, Rob Fergus, "Stochastic Pooling for Regularization of Deep Convolutional Neural Networks," in Proceedings of the International Conference on Learning Representations, Vol.1, 2013.
[4] Tobji Rachida, DI Wu, Ayoub Naeem, Haouassi Samia, "Efficient Iris Pattern Recognition Method by using Adaptive Hamming Distance and 1D Log-Gabor Filter," International Journal of Advanced Computer Science and Applications, Vol.9, Issue.11, pp.662-669, 2018.
[5] Srihari Sargur N., Govindaraju Venugopal, "Analysis of Textual Images Using the Hough Transform," Machine Vision and Applications, Vol.2, pp.141-153, 1989.
[6] Kasiński Andrzej, Schmidt Adam, "The Architecture of the Face and Eyes Detection System Based on Cascade Classifiers," Computer Recognition Systems, Springer, Berlin Heidelberg, pp.124-131, 2007.
[7] Lin Yu-Tzu, Lin Ruei-Yan, Lin Yu-Chih, Lee Greg C., "Real-time eye-gaze estimation using a low-resolution webcam," Multimedia Tools and Applications, Vol.65, pp.543-568, 2013.
[8] Albadarneh Aalaa, Albadarneh Israa, Alqatawna Ja'far, "Iris Recognition System for Secure Authentication Based on Texture and Shape Features," IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Dead Sea, Jordan, 2015.
[9] Arora Shefali, Bhatia M.P.S., "A Computer Vision System for Iris Recognition Based on Deep Learning," IEEE 8th International Advance Computing Conference (IACC), India, 2018.
[10] Proença H., Alexandre L., "Iris recognition: Analysis of the error rates regarding the accuracy of the segmentation stage," Image and Vision Computing, Vol.28, Issue.1, pp.202-206, 2010.
[11] Sanchez-Gonzalez Y., Chacon-Cabrera Y., Garea-Llano E., "A Comparison of Fused Segmentation Algorithms for Iris Verification," in: Salinesi C., Norrie M.C., Pastor Ó. (eds), Advanced Information Systems Engineering, Springer Berlin Heidelberg, Vol.7908, pp.112-119, 2014.
[12] Nigam A., Gupta P., "Iris Recognition Using Consistent Corner Optical Flow," in: Lee K.M., Matsushita Y., Rehg J.M., Hu Z. (eds), Computer Vision - ACCV 2012, Springer Berlin Heidelberg, Vol.7724, pp.358-369, 2013.
[13] Bellaaj M., Elleuch J.F., Sellami D., Kallel I.K., "An Improved Iris Recognition System Based on Possibilistic Modeling," in Proceedings of the 13th International Conference on Advances in Mobile Computing and Multimedia (MoMM), ACM Press, Brussels, Belgium, pp.26-32, 2015.
[14] Minaee S., Abdolrashidi A., Wang Y., "An Experimental Study of Deep Convolutional Features For Iris Recognition," in IEEE Signal Processing in Medicine and Biology Symposium, USA, 2017.
[15] Zanlorensi L.A., Luz E., Laroca R., Britto Jr. A.S., Oliveira L.S., Menotti D., "The Impact of Preprocessing on Deep Representations for Iris Recognition on Unconstrained Environments," Conference on Graphics, Patterns and Images (SIBGRAPI), Brazil, pp.289-296, 2018.
[16] Bhateja A., Sharma S., Chaudhury S., Agrawal N., "Iris recognition based on sparse representation and k-nearest subspace with genetic algorithm," Pattern Recognition Letters, Vol.73, pp.13-18, 2016.
[17] Proenca H., Alexandre L., "Toward Noncooperative Iris Recognition: A Classification Approach Using Multiple Signatures," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.29, Issue.4, pp.607-612, 2007.
[18] Sarhan A.M., "Iris Recognition Using Discrete Cosine Transform and Artificial Neural Networks," Journal of Computer Science, Vol.5, Issue.5, pp.369-373, 2009.
[19] Proenca Hugo, Neves Joao, "A Reminiscence of "Mastermind": Iris/Periocular Biometrics by "In-Set" CNN Iterative Analysis," IEEE Transactions on Information Forensics and Security, Vol.14, pp.1702-1712, 2019.
[20] Kaur B., Singh S., Kumar J., "Iris Recognition Using Zernike Moments and Polar Harmonic Transforms," Arabian Journal for Science and Engineering, Vol.43, Issue.12, pp.7209-7218, 2018.
[21] Elsherief S., Allam M., Fakhr M., "Biometric Personal Identification Based on Iris Recognition," in IEEE International Conference on Computer Engineering and Systems, Cairo, pp.208-213, 2006.
[22] Ahamed A., Bhuiyan M.I.H., "Low complexity iris recognition using curvelet transform," in IEEE International Conference on Informatics, Electronics & Vision (ICIEV), Dhaka, Bangladesh, pp.548-553, 2012.
[23] Proenca H., Alexandre L., "Iris Recognition: An Analysis of the Aliasing Problem in the Iris Normalization Stage," in IEEE International Conference on Computational Intelligence and Security, Guangzhou, China, pp.1771-1774, 2006.
[24] Kaur B., Singh S., Kumar J., "Robust Iris Recognition Using Moment Invariants," Wireless Personal Communications, Vol.99, Issue.2, pp.799-828, 2017.
[25] Nguyen K., Fookes C., Ross A., Sridharan S., "Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective," IEEE Access, Vol.6, pp.18848-18855, 2018.
[26] Otaibi Nouf S. A., "Non ideal iris recognition based elastic snakes and graph matching model," International Journal of Modern Communication Technologies and Research, Vol.5, Issue.12, pp.7-12, 2017.
[27] Liu M., Zhou Z., Shang P., Xu D., "Fuzzified Image Enhancement for Deep Learning in Iris Recognition," IEEE Transactions on Fuzzy Systems, Vol.28, Issue.1, pp.92-99, 2020.
[28] Hosseini S.M., Araabi B.N., Soltanian-Zadeh H., "Shape Analysis of Stroma for Iris Recognition," in: Lee S-W., Li S.Z. (eds), Advances in Biometrics, Springer Berlin Heidelberg, Vol.4642, pp.790-799, 2007.
[29] Alaslani M.G., Elrefaei L.A., "Convolutional Neural Network Based Feature Extraction for IRIS Recognition," International Journal of Computer Science and Information Technology, Vol.10, Issue.2, pp.65-78, 2018.
[30] Pavaloi I., Ignat A., "Iris recognition using statistics on pixel position," in IEEE E-Health and Bioengineering Conference (EHB), Sinaia, Romania, pp.422-425, 2017.
[31] Wang Z., Li C., Shao H., Sun J., "Eye Recognition With Mixed Convolutional and Residual Network (MiCoRe-Net)," IEEE Access, Vol.6, pp.17905-17912, 2018.
[32] Abiyev R.H., Altunkaya K., "Personal Iris Recognition Using Neural Network," International Journal of Security and its Applications, Vol.2, Issue.2, pp.41-50, 2008.
[33] Umer Saiyed, Dhara Bibhas, Chanda Bhabatosh, "Iris Recognition using Multiscale Morphologic Features," Pattern Recognition Letters, Vol.65, pp.67-74, 2015.
[34] Chackalackal M.S., Basart J.P., "NDE X-Ray Image Analysis Using Mathematical Morphology," in: Thompson D.O., Chimenti D.E. (eds), Review of Progress in Quantitative Nondestructive Evaluation, Springer, Boston, MA, pp.721-728, 1990.
[35] Suzuki S., Abe K., "Topological Structural Analysis of Digitized Binary Images by Border Following," Computer Vision, Graphics, and Image Processing, Vol.30, pp.32-46, 1985.
[36] A.K. Bhatia, H. Kaur, "Security and Privacy in Biometrics: A Review," International Journal of Scientific Research in Computer Science and Engineering, Vol.1, Issue.2, pp.33-35, 2013.
[37] Proenca H., Filipe S., Santos R., Oliveira J., Alexandre L.A., "The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-the-Move and At-a-Distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.32, Issue.8, pp.1529-1535, 2010.
[38] Proença H., Alexandre L.A., "UBIRIS: A Noisy Iris Image Database," in International Conference on Image Analysis and Processing (ICIAP 2005), Vol.3617, pp.970-977, 2005.
[39] Huang G., Liu Z., van der Maaten L., Weinberger K.Q., "Densely Connected Convolutional Networks," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, pp.2261-2269, 2017.
[40] Ross A., Sunder M.S., "Block based texture analysis for iris classification and matching," IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pp.30-37, 2010.
