
Republic of Iraq

University of Al-Qadisiyah
College of Computer Science and Information Technology
Computer Science Department

Currency Recognition System


A graduation project submitted to the Computer Science Department
in partial fulfillment of the requirements for the degree of
Bachelor in Computer Science

By
Rasha Mohammed, Zubaida Asaad, Alaa Jaber, Balsam Basim

Supervisor

L. Mohammed Hamzah Abed

Dedication
As we take our final steps in university life, we must pause and look back on the years we spent within the university with our esteemed professors, who gave us so much and put great effort into building the tomorrow in which the nation rises anew...
Before we move on, we offer the highest expressions of thanks, gratitude, appreciation and love to those who carried the most sacred message in life...
To those who paved for us the path of science and knowledge, to all of our distinguished professors...
"Be a scholar; if you cannot, then be a learner; if you cannot, then love the scholars; and if you cannot, then do not hate them."

We extend special appreciation and thanks to the distinguished teacher Mohammed Hamzah Abed.
In the name of Allah, the Most Gracious, the Most Merciful
"They said: Glory be to You; we have no knowledge except what You have taught us. Indeed, You are the All-Knowing, the All-Wise."

The words of Allah Almighty are true.

Surah Al-Baqarah, verse (32)
Abstract:
Many different currencies belong to countries around the world. Currency recognition based on the
visible and invisible watermarks of paper currency is a technology used for reliable currency
classification. It can help a person detect the currency easily and, where possible, convert it to
other currencies without any supervision. The Iraqi currency is used in this project for recognition
and identification, and all Iraqi paper denominations are tested and recognized. The samples used
include different view sides and blurry images. Because complex designs are used for currency
security, recognition is difficult; feature extraction and detection are used for this purpose,
providing efficiency, high speed and low complexity, and the approach can be used with different
currencies.

Contents
Chapter one............................................................................................................................6
1-1 Introduction............................................................................................................................. 6
1-2 Literature review..................................................................................................................... 7
1-3 Structure of typical paper currency recognition system...........................................................8
1-4 Data set.................................................................................................................................... 9
1-5 The aim of the project............................................................................................................10
1-6 Summary............................................................................................................................... 11
Chapter two..........................................................................................................................12
2-1 Introduction........................................................................................................................... 13
2-2 Pre-processing....................................................................................................................... 13
2-3 Colored image....................................................................................................................... 14
2-3-1 YCbCr Color Space........................................................................................................ 15
2-3-2 LUV color space.............................................................................................................16
2-4 Feature extraction...................................................................................................................17
2-4-1 Color feature extraction.................................................................................................. 17
Mean............................................................................................................................................ 18
Standard Deviation...................................................................................................................... 19
2-5 Edge detection using Sobel algorithm...................................................................................19
2-6 Summary............................................................................................................................... 20
Chapter three......................................................................................................................21
3-1 Introduction........................................................................................................................... 22
3-1-1 Training.......................................................................................................................... 22
3-1-2 Testing............................................................................................................................ 22
3-1-3 MATLAB functions used................................................................................................23
3-2 Run of program..................................................................................................................... 25
4- Conclusion and future work....................................................................................................28

Chapter one
Introduction

1-1 Introduction
The design of currency differs from one denomination to another in size, color and pattern [1].
Recognizing currency is not an easy task, especially for those who work in banks and exchange
offices; vending machines used for coffee and fast food, as well as ATMs, also need currency
recognition [2]. Proper software makes this task easier for both people and machines, and this
system aims to do that job by processing an image of each note. The image processing approach is
implemented with MATLAB to detect the features of paper currency. Image processing involves
changing the nature of an image in order to improve its pictorial information for human
interpretation. The proposed system works on two images: the original image of the paper currency
and the test image on which verification is to be performed.

1-2 Literature review
Debnath et al. (2010) [3] used an ensemble neural network (ENN) for currency recognition.
Negative correlation learning is used to train the individual neural networks (NNs) in the ENN.
Notes come in different conditions, such as noisy and old notes, which a machine cannot easily
recognize; a system developed using an ENN can identify them easily and correctly. For testing they
used notes of different denominations: 2, 5, 10, 20, 50, 100 and 500 taka. The note image is first
converted to gray scale and then compressed, and the compressed image is given to the system as
input for recognition. The system developed using the ENN can identify noisy as well as old
currency notes, and with independent training there is less chance of misclassification.
Jahangir and Raja (2007) [4] used a neural network recognition method to recognize Bangladeshi
currency. They implemented the method on cheap hardware that can be used in different places. The
system takes an image of the banknote as input; the notes are scanned using inexpensive sensors and
trained for recognition using the back-propagation algorithm. If the note is flipped, correct
recognition is still guaranteed because an axis-symmetric mask is used in the preprocessing stage.
In their experiments, eight taka notes were recognized successfully.
Singh et al. (2011) [5] present a heuristic analysis of Indian currency notes and of the digits of
their serial numbers in order to recognize them. To identify a character in a currency image, the
features of that image must be extracted, and it is important to extract the correct features from
different notes. For this reason, a heuristic analysis of the characters is performed before feature
extraction in currency recognition.

1-3 Structure of typical paper currency recognition system:
The system presented is designed to recognize paper currency. The input to the system is an image
acquired by a scanner containing the paper currency, and its output is the recognition result for
that note. The structure of the system is shown in Fig (1-1).

Banknotes collecting

Banknotes scanning

Image processing

Feature extraction

Classification using Euclidean distance

Recognition results
Figure (1-1) Structure of typical paper currency recognition system.

1-4 Data set:


The currency of the Republic of Iraq is classified into seven categories, each with its own colors
and features, ranging from 250 to 50,000 Iraqi dinars. For this project a database was collected
using a scanner to capture images of each category from both the front and the rear side. The
database consists of 140 images, divided into two classes (front and rear) for each category: ten
images of the front side and ten of the rear side per banknote. The images were captured in
different conditions, such as old, new, partly damaged, or carrying noise such as handwriting.
Each Iraqi banknote has its own dominant color, as shown in Table (1-1).

Table (1-1) IQ currency paper color.

Category      Major color

250 F         Blue
250 R         Blue
500 F         Bluish green
500 R         Bluish green
1,000 F       Bluish brown
1,000 R       Bluish brown
5,000 F       Dark blue
5,000 R       Dark blue
10,000 F      Bluish green
10,000 R      Bluish green
25,000 F      Red
25,000 R      Red
50,000 F      Violet
50,000 R      Violet

1-5 The aim of the project:


The main goal of this project is to provide a system that can recognize Iraqi banknotes by applying
different methods and techniques, helping bankers and employees who work in money exchange to
recognize each note; this makes the work more efficient and easier and saves time. The currency
recognition system has been developed to classify a banknote into its correct class and to
recognize the note quickly and correctly no matter how it looks, whether it is old, new or noisy.
It can be used in places such as shops, bank counters, automated teller machines, vending machines,
etc. It is not easy for a bank teller to recognize many different notes, so a currency recognition
system in the bank can reduce the human effort.

1-6 Summary:
In this chapter we gave a general overview of the currency recognition system. We also presented a
short literature review of how other authors have used this technology to build systems for
recognizing banknotes, showed the structure of a typical paper currency recognition system, and
briefly discussed pre-processing. We then described the data set of Iraqi currency and the dominant
color of each banknote, which is used later in color feature extraction. Finally, we stated the
goals and the aim of the project.

Chapter two
Theory Background

2-1 Introduction:
Technology has become an important and commonly used part of our lives. This chapter shows how the
system helps to recognize paper currency, making it easier for bank employees and people who work
in the currency field to distinguish between banknotes with little effort.

2-2 Pre-processing:
In the proposed system a high-resolution scanner is used to acquire the image. The
acquired image of a paper currency is first converted to gray scale image.
Conversion to gray scale facilitates further pre-processing. The task of pre-
processing is achieved by converting colored currency image into grayscale, then
black-white image after that. The edge of the image is filtered using Prewitt
method or other methods. Different stages of an image are shown in Fig (2-1).

Figure (2-1) Stages in a paper currency recognition system: (a) original image, (b) gray scale
image, (c) black and white image.
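A minimal MATLAB sketch of these pre-processing steps, assuming the Image Processing Toolbox and a
hypothetical file name note.jpg:

rgb = imread('note.jpg');          % scanned banknote image (hypothetical file name)
gray = rgb2gray(rgb);              % colored image converted to gray scale
bw = imbinarize(gray);             % gray scale to black-and-white (use im2bw on older MATLAB releases)
edges = edge(gray, 'prewitt');     % edge filtering with the Prewitt method
imshowpair(bw, edges, 'montage');  % display the two results side by side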

This project presents an Iraqi paper currency recognition system: it is designed to recognize Iraqi
currency only and is not concerned with other currencies. The system is also not concerned with
verifying the validity of the paper currency (i.e. verifying that the note is genuine and not
fake); that is usually done by other means, such as sensing the magnetic strip embedded inside the
note.

2-3 Colored image

A color model is a mathematical model that represents colors as 3D or 4D numbers, the color
components. The resulting set of colors is called a color space, and many color spaces are used in
practice.

The red, green and blue (RGB) color space is widely used throughout computer graphics. Red, green
and blue are the three primary additive colors: the individual components are added together to
form a desired color, and they are represented by a three-dimensional Cartesian coordinate system,
as shown in the figure below:

Figure (2-2) The RGB color cube.

The diagonal of the cube indicated in the figure, where each primary component is present in equal
amounts, represents the various gray levels. The RGB values for 100% amplitude, 100% saturated
color bars (a common video test signal) are usually given in tabular form. Multimedia devices
(e.g., TV sets) use additive mixing of the primary colors (red, green, blue), and the results are
compatible with the human color space. MATLAB represents colors as RGB values. However, RGB signals
are not efficient for storage and transmission because of the significant mutual redundancy between
the components.

The RGB color space is the most prevalent choice in computer graphics because color displays use
red, green and blue to create the desired color, so choosing RGB simplifies the architecture and
design of a system. A system designed around the RGB color space can also take advantage of a large
number of existing software routines, since this color space has been in use for many years.
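As a small illustration (a sketch added here, not part of the original project), the RGB components
of an image can be inspected directly in MATLAB; peppers.png is a sample image shipped with the
Image Processing Toolbox:

rgb = imread('peppers.png');                    % sample RGB image shipped with MATLAB
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3); % the three additive components
% Pixels with equal R, G and B values lie on the gray diagonal of the RGB cube:
grayLevels = uint8(mean(double(rgb), 3));       % equal-weight average of the components
imshow(grayLevels);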

2-3-1 YCbCr Color Space


The YCbCr color space was developed during the development of a world –wide-
digital component video standard. Y is defined to have a nominal 8-bit range 16-
235, Cb and Cr are defined to have nominal range of 16-240. There are several
YCbCr sampling format. YCbCr convert the primary colors (Red, Green and Blue)
into perceptually meaningful information. It is a subtractive (not additive) model.
YCbCr separates out a luminance signal (Y) that can be stored with a high
resolution or transmitted at high bandwidth, and two chrominance components
(Cb, Cr) that can be down sampled or compressed. The component Cb represents
the difference between the blue component and a reference value. Cr represents the
difference between the red component and a reference value. Green is achieved by
using a combination of these three values. YCbCr is used in video and image
formats that uses data compression like MPEG and JPEG.

Converting RGB to YCbCr (computer system considerations)

If the RGB data has a range of 0-255, as is common in computer systems, the following equations may
be more convenient to use:

Y = 0.257 R +0.504 G +0.098 B +16 Equation (2-1)

Cb = - 0.148 R - 0.291 G +0.439 B +128 Equation (2-2)

Cr = 0.439 R - 0.368 G - 0.071 B +128 Equation (2-3)

R = 1.164 (Y -16) + 1.596 (Cr-128) Equation (2-5)

G= 1.164 (Y -16) – 0.813 (Cr -128) – 0.391 (Cb-128) Equation (2-6)

B= 1.164 (Y -16) +2.018 (Cb -128) Equation (2-7)

Note that the 8-bit YCbCr and RGB data should be saturated at the 0 and 255 levels to avoid
underflow and overflow wrap-around problems.
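A short MATLAB sketch of the forward conversion in Equations (2-1) to (2-3); the built-in
rgb2ycbcr function implements essentially the same transform, and the file name is an assumption:

rgb = 255 * im2double(imread('note.jpg'));   % hypothetical input image, scaled to the 0-255 range
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
Y  =  0.257*R + 0.504*G + 0.098*B + 16;      % Equation (2-1)
Cb = -0.148*R - 0.291*G + 0.439*B + 128;     % Equation (2-2)
Cr =  0.439*R - 0.368*G - 0.071*B + 128;     % Equation (2-3)
ycbcr = uint8(cat(3, Y, Cb, Cr));            % uint8 saturates at 0 and 255, avoiding wrap-around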

2-3-2 LUV color space

CIELUV is an Adams chromatic valence color space and an update of the CIE 1964 (U*, V*, W*) color
space (CIEUVW). The differences include a slightly modified lightness scale and a modified uniform
chromaticity scale in which one of the coordinates, v', is 1.5 times as large as v in its 1960
predecessor. CIELUV and CIELAB were adopted simultaneously by the CIE when no clear consensus could
be formed behind only one of the two color spaces. CIELUV uses Judd-type (translational) white-point
adaptation (in contrast with CIELAB, which uses a "wrong" von Kries transform). This can produce
useful results when working with a single illuminant, but it can predict imaginary colors (i.e.,
colors outside the spectral locus) when used as a chromatic adaptation transform.

The translational adaptation transform used in CIELUV has also been shown to perform poorly in
predicting corresponding colors.

2-4 Feature extraction:
Feature extraction involves transforming the input data into a set of features that can uniquely
represent an image. This set of features is also called a feature vector. Visual information plays
a pivotal role in our society and will play an increasingly pervasive role in our lives [6].
Features such as color, texture and shape [7] are used to extract the relevant information from the
input image. Feature extraction provides methods with which characters can be identified uniquely
and with a high degree of accuracy.

2-4-1 Color feature extraction


Color is an important feature that makes the recognition of images by humans possible. We use color
to tell the difference between objects, places, and the time of day. Colors are usually defined in
three-dimensional color spaces such as RGB (red, green, blue), HSV (hue, saturation, value) or HSB
(hue, saturation, brightness). The most common technique for extracting color features is based on
the color histograms of images [8]. A color histogram describes the global distribution of colors
in an image; it is very easy to compute and is insensitive to small variations in the image. There
are two types of color histograms: global color histograms (GCHs) and local color histograms
(LCHs). A GCH represents a whole image with a single color histogram, while an LCH divides an image
into fixed blocks and takes the color histogram of each block. LCHs contain more information about
an image but are computationally expensive when comparing images. The GCH is the traditional method
for color-based image retrieval; however, it does not include information about the color
distribution within the regions of an image, so comparing GCHs may give inconsistent results [8].
There are two main drawbacks of the color histogram. First, it does not take spatial information
into account. Second, the histogram is neither unique nor robust: two different images with similar
color distributions give rise to very similar histograms, while images of the same scene under
different lighting conditions produce very different histograms. To deal with the first problem,
many researchers have suggested using the color correlogram to take the spatial information into
account. The correlogram is used efficiently for image indexing in content-based image retrieval:
it extracts not only the color distribution of the pixels in an image, as the color histogram does,
but also their spatial distribution. The auto-correlogram of image I for color Ci and distance k is
given by the equation below:
γ_Ci^(k)(I) ≡ Pr[ p2 ∈ I_Ci | p1 ∈ I_Ci, |p1 − p2| = k ]                Equation (2-8)

that is, the probability that a pixel p2 at distance k from a pixel p1 of color Ci also has the
color Ci.
The color correlogram therefore integrates both color information and spatial information. The use
of multi-resolution histograms for image retrieval is suggested in [9].
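A sketch of the simplest of these descriptors, a global color histogram, computed in MATLAB with
imhist on each channel; the file name and the choice of 32 bins per channel are assumptions:

rgb = imread('note.jpg');                  % hypothetical banknote image
nbins = 32;                                % histogram resolution per channel (assumed)
gch = [imhist(rgb(:,:,1), nbins);          % red channel histogram
       imhist(rgb(:,:,2), nbins);          % green channel histogram
       imhist(rgb(:,:,3), nbins)];         % blue channel histogram
gch = gch / sum(gch);                      % normalize so images of different sizes compare fairly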

Mean:
The mean is the arithmetic average of a set of values or of a distribution. It can be computed with
two forms of the formula, one for one-dimensional data and one for two-dimensional data such as an
image:

μ = (1/n) Σ_{i=1..n} x_i                Equation (2-9)

where
n = number of values in the data set
x = the data set

μ = (1/(N·M)) Σ_{i=1..N} Σ_{j=1..M} x(i, j)                Equation (2-10)

where
N and M = dimensions of the data set
x = the data set

Standard Deviation
The standard deviation is a kind of "mean of the mean," and it often helps to reveal the story
behind the data. To understand this concept it helps to know what statisticians call the normal
distribution of data. The standard deviation is a statistic that tells you how tightly all the
values of a data set are clustered around its mean [19]:

σ = sqrt( (1/(N·M)) Σ_{i=1..N} Σ_{j=1..M} (x(i, j) − μ)² )                Equation (2-11)

The mean and standard deviation can also be used during a reconstruction phase, as a guideline for
estimating the compression ratio.

Skewness is a measure of the asymmetry of the data around the sample mean. If the skewness is
negative, the data are spread out more to the left of the mean than to the right; if it is
positive, the data are spread out more to the right. The skewness of the normal distribution (or of
any perfectly symmetric distribution) is zero. The skewness of a distribution is defined as

s = E[(x − μ)³] / σ³                Equation (2-12)

where μ is the mean of x, σ is its standard deviation and E[·] denotes the expected value.

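A minimal MATLAB sketch computing these three statistics as a feature vector for one banknote image
(skewness requires the Statistics and Machine Learning Toolbox; the file name is an assumption):

gray = im2double(rgb2gray(imread('note.jpg')));   % hypothetical banknote image
v = gray(:);                                      % flatten the 2-D image into a vector
featureVector = [mean(v), std(v), skewness(v)];   % Equations (2-10), (2-11) and (2-12)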
2-5 Edge detection using Sobel algorithm


Image edge detection is the process of locating the edges in an image, which is important for
finding the approximate absolute gradient magnitude at each point of an input grayscale image. The
difficulty of obtaining an appropriate absolute gradient magnitude for the edges lies in the method
used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transforming the
2-D pixel array into a statistically uncorrelated data set enhances the removal of redundant data
and, as a result, reduces the amount of data required to represent the digital image. The Sobel
edge detector uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction
and the other estimating the gradient in the y-direction. The Sobel detector is quite sensitive to
noise in the picture and tends to highlight noisy pixels as edges; because it also reduces the
amount of data needed to represent the image, it is often recommended where large amounts of image
data must be transferred.
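A sketch of the two 3x3 Sobel masks and their use in MATLAB; the built-in edge(gray,'sobel') call
wraps the same idea, and the file name is an assumption:

gray = im2double(rgb2gray(imread('note.jpg')));  % hypothetical banknote image
Gx = [-1 0 1; -2 0 2; -1 0 1];                   % mask estimating the gradient in the x-direction
Gy = Gx';                                        % mask estimating the gradient in the y-direction
gradMag = sqrt(conv2(gray, Gx, 'same').^2 + conv2(gray, Gy, 'same').^2);  % gradient magnitude
edges = edge(gray, 'sobel');                     % built-in Sobel edge detector for comparison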

2-6 Summary
In the second chapter we discussed the theoretical background of the currency recognition system
and how features can be extracted from a banknote. We also examined the color space models used for
the images and edge detection based on the Sobel technique.

Chapter three
Practical Part

3-1 Introduction
This chapter investigates the practical implementation of the proposed work.

3-1-1 Training:
1- First of all, we gather some pictures randomly.
2- Resize the pictures to a specific size; we picked (92*160) as a fixed size.
3- Enhance the picture to get a better version of the added picture.
4- Transform the RGB of the picture to LUV.
5- Detect the edges using the Sobel function.
6- Extract the features of the pictures; we extracted the mean, standard deviation (std) and the
skewness.
7- Store the results of all the previous steps in a database to use later for comparison (a sketch
of these steps follows below).
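A minimal MATLAB sketch of this training pipeline; the folder name train, the label taken from the
file name, and the database file currencyDB.mat are assumptions for illustration, not details from
the original project:

files = dir(fullfile('train', '*.jpg'));                 % step 1: gathered training pictures (assumed folder)
featureDB = zeros(numel(files), 3);                      % one row of features per banknote image
labels = cell(numel(files), 1);
c1 = makecform('srgb2xyz');  c2 = makecform('xyz2uvl');  % RGB -> XYZ -> LUV-type transforms
for k = 1:numel(files)
    rgb = imread(fullfile('train', files(k).name));
    rgb = imresize(rgb, [92 160]);                       % step 2: fixed size
    rgb = cat(3, medfilt2(rgb(:,:,1)), medfilt2(rgb(:,:,2)), medfilt2(rgb(:,:,3)));  % step 3: enhancement
    luv = applycform(applycform(im2double(rgb), c1), c2);  % step 4: RGB to LUV
    e = edge(rgb2gray(rgb), 'sobel');                    % step 5: edge map (its statistics could be added too)
    v = luv(:);                                          % step 6: mean, std and skewness as features
    featureDB(k, :) = [mean(v), std(v), skewness(v)];
    labels{k} = files(k).name;                           % step 7: label stored alongside the features
end
save('currencyDB.mat', 'featureDB', 'labels');           % the "database" used later for comparison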

3-1-2 Testing:
The testing part performs the same steps as the training part; the only difference is in the
storing step (7):
1- First of all, we gather some pictures randomly.
2- Resize the pictures to a specific size; we picked (92*160) as a fixed size.
3- Enhance the picture to get a better version of the added picture.
4- Transform the RGB of the picture to LUV.
5- Detect the edges using the Sobel function.
6- Extract the features of the pictures; we extracted the mean, standard deviation (std) and the
skewness.
7- Store the results of all the previous steps in a vector to use for comparison.

The database from the training part is then compared with the vector from the testing part to
obtain a result that shows the value of the entered picture and its status (the currency is found
or not). The figure below shows the operation of the work described above, and a sketch of this
comparison follows the figure.

Figure (3-1) The proposed system workflow.
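A sketch of this comparison step using the Euclidean distance named in Figure (1-1); the test file
name, the stored database currencyDB.mat and the acceptance threshold are assumptions for
illustration:

load('currencyDB.mat', 'featureDB', 'labels');            % database saved by the training sketch above
rgb = imresize(imread('test_note.jpg'), [92 160]);        % hypothetical test image (steps 1-2)
rgb = cat(3, medfilt2(rgb(:,:,1)), medfilt2(rgb(:,:,2)), medfilt2(rgb(:,:,3)));  % step 3
luv = applycform(applycform(im2double(rgb), makecform('srgb2xyz')), makecform('xyz2uvl'));  % step 4
v = luv(:);
testVector = [mean(v), std(v), skewness(v)];              % step 6 feature vector
d = sqrt(sum((featureDB - testVector).^2, 2));            % Euclidean distance to every stored note (bsxfun before R2016b)
[dmin, idx] = min(d);
if dmin < 0.05                                            % assumed acceptance threshold
    fprintf('Currency found: %s (distance %.4f)\n', labels{idx}, dmin);
else
    disp('Currency not found');
end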

3-1-3 MATLAB functions used


 imresize: this function is used to resize the image to the size we want.
B = imresize(A,scale) returns image B that is scale times the size of A. The
input image A can be a grayscale, RGB, or binary image. If A has more than
two dimensions, imresize only resizes the first two dimensions. If scale is in
the range [0, 1], B is smaller than A. If scale is greater than 1, B is larger
than A. By default, imresize uses bicubic interpolation.

 medfilt2: this function is used to enhance the picture by filtering it. B = medfilt2(A) performs
median filtering of the image A in two dimensions. Each output pixel contains the median value in a
3-by-3 neighborhood around the corresponding pixel in the input image.

 Makecform: The makecform function supports conversions between
members of the family of device-independent color spaces defined by the
Commission Internationale de l'Éclairage (International Commission on
Illumination, or CIE). makecform also supports conversions to and from the
sRGB and CMYK color spaces. To perform a color space transformation,
pass the color transformation structure created by makecform as an
argument to the applycform function.
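For this project the relevant conversion is RGB to LUV (training step 4). Since makecform has no
single sRGB-to-LUV transform type, a sketch goes through the XYZ space; the file name is an
assumption:

rgb = im2double(imread('note.jpg'));                 % hypothetical banknote image
cform1 = makecform('srgb2xyz');                      % sRGB to the device-independent XYZ space
cform2 = makecform('xyz2uvl');                       % XYZ to u'v'L, a LUV-type representation
luv = applycform(applycform(rgb, cform1), cform2);   % apply the two transformations in sequence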

 mean: Average or mean value of array. M = mean(A) returns the mean of the elements of A along the
first array dimension whose size does not equal 1.
-If A is a vector, then mean(A) returns the mean of the elements.
-If A is a matrix, then mean(A) returns a row vector containing the mean of each column.
-If A is a multidimensional array, then mean(A) operates along the first array dimension whose size
does not equal 1, treating the elements as vectors. This dimension becomes 1 while the sizes of all
other dimensions remain the same.

 std: Standard deviation. S = std(A) returns the standard deviation of the elements of A along the
first array dimension whose size does not equal 1.
-If A is a vector of observations, then the standard deviation is a scalar.
-If A is a matrix whose columns are random variables and whose rows are observations, then S is a
row vector containing the standard deviations corresponding to each column.
-If A is a multidimensional array, then std(A) operates along the first array dimension whose size
does not equal 1, treating the elements as vectors. The size of this dimension becomes 1 while the
sizes of all other dimensions remain the same.
-By default, the standard deviation is normalized by N-1, where N is the number of observations.

 skewness: y = skewness(X) returns the sample skewness of X. For vectors, skewness(x) is the
skewness of the elements of x. For matrices, skewness(X) is a row vector containing the sample
skewness of each column. For N-dimensional arrays, skewness operates along the first nonsingleton
dimension of X.
y = skewness(X,flag) specifies whether to correct for bias (flag = 0) or not (flag = 1, the
default). When X represents a sample from a population, the skewness of X is biased; that is, it
will tend to differ from the population skewness by a systematic amount that depends on the size of
the sample. You can set flag = 0 to correct for this systematic bias.
y = skewness(X,flag,dim) takes the skewness along dimension dim of X. skewness treats NaNs as
missing values and removes them.

3-2 Run of program

We used a MATLAB GUI as the graphical interface of our program, as shown in Figure (3-2).

Figure (3-2) Interface of the program.

The window has the following components:

 Input: where we input the picture that we want to recognize.
 Status: shows whether the currency is found or not.
 Currency value: gives the value of the entered currency paper.
 The picture itself is shown in the white area at the top.
As an example, we enter a random picture of a banknote, such as the 250-dinar note:

Figure (3-3) 250 IQ dinar front.


The system then gives the status and the value of the entered picture:

Figure (3-4) shows that the currency is found.


The currency is found and the value is (250).

If we enter a different currency (not Iraqi), such as the US dollar:

Figure (3-5) 100 US dollar front.

the system will not recognize the note:

Figure (3-6) shows that the currency is not found.

4- Conclusion and future work
The currency recognition system uses the features of each banknote, which are unique, and detects
its edges using the Sobel operator. We used several pictures of each banknote, placed them in a
database that we created, and extracted the features of each one. When we enter the note that we
want to recognize, the system compares its features with the features of the banknotes stored in
the database; if the entered note has the same or partly similar features as one of the notes in
the database, the system shows its value.
As future work we intend to use better hardware, a bigger database and more features in order to
build a system that can recognize more currencies, and also to use a deep learning model to extract
features and train the proposed system.

References
[1] Rubeena Mirza and Vinti Nanda, "Paper Currency Verification System Based on Characteristic
Extraction Using Image Processing," International Journal of Engineering and Advanced Technology,
vol. 1, no. 3, ISSN: 2249-8958, February 2012.
[2] Junfang Guo, Yanyun Zhao and Anni Cai, "A Reliable Method for Paper Currency Recognition Based
on LBP," Proceedings of the 2nd IEEE International Conference on Network Infrastructure and Digital
Content, Beijing, 2010.
[3] K. K. Debnath, S. U. Ahmed and Md. Shahjahan, "A paper currency recognition system using
negatively correlated neural network ensemble," J. Multimedia, vol. 5, no. 6, pp. 560-567, 2010.
[4] N. Jahangir and A. Raja, "Bangladeshi banknote recognition by neural network with axis
symmetrical masks," Proceedings of the 10th International Conference on Computer and Information
Technology, p. 105, 2007.
[5] P. Singh, G. Krishan and S. Kotwal, "Image processing based heuristic analysis for enhanced
currency recognition," Int. J. Adv. Technol., vol. 2, no. 1, pp. 82-89, 2011.
[6] "MPEG-7 Overview (version 10)," ISO/IEC JTC1/SC29/WG11, Tech. Rep., 2004; D. Zhang and G. Lu,
"Review of shape representation and description techniques," Pattern Recognition, vol. 37,
pp. 1-19, 2004.
[7] Fuhui Long, Hongjiang Zhang and David Dagan Feng, "Fundamentals of Content-Based Image
Retrieval," 2003.
[8] Young-Jun Song, Won-bae Park, Dong-woo Kim and Jae-Hyeong Ahn, "Content-based image retrieval
using new color histogram," Proceedings of the 2004 International Symposium on Intelligent Signal
Processing and Communication Systems, 18-19 Nov. 2004.
[9] P. Buch and N. Patel, "Content Based Image Retrieval: A Review," Proceedings of KITE,
pp. 48-51, 2011.

