Implementation of Vector Quantization
using Content Addressable Memory
Architecture
Under the guidance of
Dr. Indrajit Chakrabarti
Department of Electronics and Electrical
Communication Engineering
Indian Institute of Technology, Kharagpur
Kharagpur- 721302, WB
India
Submitted by
Ujjwal Jain
03EC3204
ABSTRACT
Digital image coding using Vector Quantization (VQ) based techniques provides low bit
rates and high-quality coded images, at the expense of high computational demands. This
computational requirement, which arises from the codebook search during encoding, has
hindered the wide application of VQ.
The encoding search complexity can be reduced by partitioning a large codebook into
smaller on-chip memories of a concurrent VLSI chip.
Hence a variant of vector quantization, Multi-Stage Vector Quantization (MSVQ), is
implemented, in which vector quantization is performed more than once; this reduces
both the computational complexity and the storage area.
Concurrency is introduced into the system by implementing a Content Addressable
Memory (CAM). An array of L x N cells is connected in parallel and pipelined along the
vector dimension L and the codebook size N, respectively. The content addressable
memory computes the distortion between the input vector and the codewords; the least
distortion is found, and the index of the corresponding codeword is passed on to the next
stage for a similar computation. This 3-stage break-up reduces both the computational
complexity and the storage requirement.
The regular and iterable structure makes a VLSI implementation of the architecture
possible.
INTRODUCTION
1.1 Quantization:
In general, Quantization of the amplitude of a uniformly sampled signal is the
discretization of its amplitude according to some function. It introduces some distortion
of the waveform, or a loss of signal fidelity.
In terms of image compression, Quantization is a compression technique achieved by
mapping a range of values to a single quantum value. By reducing the number of
discrete symbols in a given stream, the stream becomes more compressible.
There are a number of ways in which this Quantization can be achieved.
A very simple technique is to reduce the number of colors used to represent an image.
This reduces the number of bits needed to represent a pixel of the image and hence the
length of the bit stream.
In general, Quantization can be classified into two categories:
1). Scalar Quantization
2). Vector Quantization
1.2 Scalar Quantization:
The color Quantization example given above is an example of Scalar Quantization. In
Scalar Quantization we quantize, or approximate, each input value to one of a set of
discrete values. Put simply, if the inputs to the quantizer are scalars, it is called Scalar
Quantization.
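As a sketch, a uniform scalar quantizer can be written in a few lines of Python; the step size of 32 used here is an arbitrary choice for illustration.

```python
def scalar_quantize(sample, step):
    """Uniform scalar quantizer: map the sample to the nearest multiple of step."""
    return int(round(sample / step)) * step

# With a step of 32, an 8-bit gray level collapses to one of only a few
# quantum values, so fewer bits are needed to represent each pixel.
levels = [scalar_quantize(s, 32) for s in [13, 100, 171]]   # -> [0, 96, 160]
```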
1.3 Vector Quantization :
In this technique we consider the joint quantization of a block of signal samples or a
block of signal parameters.
In vector quantization we group the source output into blocks or vectors.
For example, we can treat L consecutive samples of speech as the components of an
L-dimensional vector. Or, we can take a block of L pixels from an image and treat each
pixel value as a component of a vector of size or dimension L. This vector of source
outputs forms the input to the vector quantizer. At both the encoder and the decoder of
the vector quantizer, we have a set of L-dimensional vectors called the codebook of the
vector quantizer. The vectors in this codebook, known as code-vectors, are selected to be
representative of the vectors we generate from the source output. Each code-vector is
assigned a binary index. At the encoder, the input vector is compared to each code-vector
in order to find the code-vector closest to the input vector. The elements of this
code-vector are the quantized values of the source output. In order to inform the decoder
which code-vector was found to be the closest to the input vector, we transmit or store
the binary index of that code-vector. Because the decoder has exactly the same codebook,
it can retrieve the code-vector given its binary index.
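The encoder search and decoder lookup described above can be sketched as follows; the tiny two-dimensional codebook is hypothetical and serves only to illustrate the procedure.

```python
def encode(vector, codebook):
    """Return the binary index of the codeword closest to the input vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))

def decode(index, codebook):
    """Decoding is just a table lookup into the same codebook."""
    return codebook[index]

codebook = [(0, 0), (0, 8), (8, 0), (8, 8)]   # 4 codewords, dimension L = 2
idx = encode((6, 7), codebook)                # closest codeword is (8, 8) -> index 3
```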
FIG 1.1 THE VECTOR QUANTIZATION PROCEDURE
Although the encoder may have to perform a considerable amount of computation in
order to find the reproduction vector closest to the vector of source outputs, the decoding
consists of a simple table lookup. This makes vector quantization a very attractive
encoding scheme for applications in which the resources available for decoding are
considerably less than the resources available for encoding. For example, in multimedia
applications, considerable computational resources may be available for the encoding
operation.
1.4 Compression rate with VQ
Suppose we have a codebook of size K and an input vector of dimension L. In order to
inform the decoder of which code-vector was selected, we need ⌈log2 K⌉ bits.
For example, if the codebook contains 256 code-vectors, we need 8 bits to specify which
of the 256 code-vectors was selected at the encoder. As each code-vector contains the
reconstruction values of L source output samples, the number of bits per sample is
⌈log2 K⌉ / L. If each scalar constituting the input vector is represented with t bits, the
rate is ⌈log2 K⌉ / (L x t) of the original, i.e. the compression ratio achieved using VQ is
(L x t) / ⌈log2 K⌉.
Suppose we have a grayscale image in which each pixel is represented with 8 bits, and
the input vector dimension is 16. The compression ratios for different codebook sizes are
then as follows:
Fig 1.2 Summary of compression measures for image compression.

Codebook Size   Bits to select a codeword   Bits per pixel   Compression Ratio
16              4                           0.25             32:1
64              6                           0.375            21.33:1
256             8                           0.5              16:1
1024            10                          0.625            12.8:1
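The entries of the table follow directly from the formulas above; a quick check in Python (with the same L = 16 and t = 8):

```python
import math

def vq_measures(K, L=16, t=8):
    """Bits to select a codeword, bits per pixel, and compression ratio
    for codebook size K, vector dimension L, and t bits per original sample."""
    bits = math.ceil(math.log2(K))
    bpp = bits / L
    ratio = t / bpp              # same as (L * t) / bits
    return bits, bpp, ratio

# e.g. K = 256 gives 8 bits per codeword, 0.5 bits/pixel, 16:1 compression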
Designing Codebook
2.1 Voronoi Region
A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of
vectors Y = {y_i : i = 1, 2, ..., N}. Each vector y_i is called a code vector or a codeword,
and the set of all the codewords is called a codebook. Associated with each codeword y_i
is a nearest-neighbor region called the Voronoi region, defined by

    V_i = { x ∈ R^k : ||x - y_i|| <= ||x - y_j|| for all j }

The set of Voronoi regions partitions the entire space R^k such that

    V_1 ∪ V_2 ∪ ... ∪ V_N = R^k   and   V_i ∩ V_j = ∅ for all i ≠ j
As an example, take vectors in the two-dimensional case, without loss of generality.
Figure 2.1 shows some vectors in space. Associated with each cluster of vectors is a
representative codeword. Each codeword resides in its own Voronoi region. These
regions are separated by imaginary lines in Figure 2.1 for illustration. Given an input
vector, the codeword chosen to represent it is the one in the same Voronoi region.
Figure 2.1: Codewords in 2-
dimensional space. Input vectors
are marked with an x, codewords
are marked with red circles, and
the Voronoi regions are separated
with boundary lines.
The representative codeword is determined to be the one closest in Euclidean distance to
the input vector. The Euclidean distance is defined by

    d(x, y_i) = sqrt( Σ_{j=1}^{k} (x_j - y_ij)^2 )

where x_j is the jth component of the input vector, and y_ij is the jth component of the
codeword y_i.
These representative vectors are found using iterative algorithms. One of the most
widely used is the Linde-Buzo-Gray (LBG) algorithm.
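A minimal sketch of the LBG iteration (assignment step followed by centroid update), shown on one-dimensional training data for brevity; in practice it runs on L-dimensional vectors and starts from a chosen initial codebook.

```python
def lbg(training, codebook, iterations=20):
    for _ in range(iterations):
        # Assignment step: group training points by their nearest codeword.
        clusters = [[] for _ in codebook]
        for x in training:
            i = min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
            clusters[i].append(x)
        # Update step: move each codeword to the centroid of its cluster
        # (an empty cluster keeps its old codeword).
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(clusters)]
    return codebook

final = lbg([1.0, 2.0, 9.0, 10.0], [0.0, 5.0])   # converges to [1.5, 9.5]
```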
2.5 Multi-Stage Vector Quantization
To encode pictures with an acceptable level of distortion, a single-stage VQ requires a
fairly large codebook. Since MSVQ for speech coding did not yield favorable results, it is
generally assumed that the same would hold for image coding. However, we have shown
that good-quality color image encoding can be achieved using MSVQ, performing
comparably to a single-stage VQ using codebooks of moderate size. In MSVQ (Fig. 2.2),
the original image is encoded by the first-stage VQ, and the difference between the
original and the reconstructed image is then encoded again by a second-stage VQ. The
process is repeated until the desired picture quality is obtained. The decoder consists of
look-up tables (LUTs) and adders to reconstruct the signal. A real-time implementation
of MSVQ requires much less hardware than a single-stage VQ because smaller codebooks
can be used at each stage.
Fig 2.2 Multi-Stage Vector Quantization.
2.6 Advantages:
Let the codebook sizes of the three stages of the vector quantizer be L1, L2 and L3. Then
the effective size of the overall codebook is L1 x L2 x L3, while the number of
comparisons required is only L1 + L2 + L3, which is also the number of codewords that
must be stored.
Suppose we have a 3-stage vector quantizer in which each stage has a codebook of size
32. We would have to store only 96 codewords, yet the effective codebook size would be
32 x 32 x 32 = 32768. The computational savings are of the same order.
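The stage-by-stage structure can be sketched as follows; the one-dimensional codebooks are hypothetical and only illustrate how each stage quantizes the residual left by the previous one.

```python
def nearest(x, codebook):
    """Index of the codeword closest to x (squared-error distortion)."""
    return min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)

def msvq_encode(x, stages):
    indices, residual = [], x
    for codebook in stages:
        i = nearest(residual, codebook)
        indices.append(i)
        residual -= codebook[i]          # pass the quantization error onward
    return indices

def msvq_decode(indices, stages):
    # The decoder is just look-up tables and adders.
    return sum(cb[i] for cb, i in zip(stages, indices))

stages = [[0.0, 8.0], [0.0, 2.0], [0.0, 0.5]]
idx = msvq_encode(9.7, stages)   # residuals shrink roughly 9.7 -> 1.7 -> -0.3
```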
Implementation of VQ Encoder
Vector quantization essentially involves, for each input vector, a search operation to
obtain the best-matching codeword. Traditionally, the search mechanism is implemented
sequentially: each vector is compared with the codewords one at a time. For K input
vectors of dimension L and a codebook of size N, the search complexity is O(KLN). This
is heavily computation-intensive, and real-time implementation of the VQ algorithm is
therefore difficult. A content addressable architecture employs parallelism along the
vector dimension L and the codebook size N, so that matching is performed in parallel.
A CAM is a collection of storage elements which can be accessed simultaneously and in
parallel on the basis of their data content rather than by a specific address or location.
The schematic of a CAM structure is shown in the figure. The main component is the
CAM array B_ij : i = 1, 2, ..., n; j = 1, 2, ..., b, consisting of n x b cells organized as n
words with b bits per word. Each bit cell consists of a flip-flop with associated logic
gates for pattern matching and read/write control.
The circuit diagram of the basic unit of the CAM is shown; it is built with NAND gates.
The bit cell effectively compares the input bit with the stored bit and reports the result as
a match signal. The bit B is stored in the flip-flop and is entered through the input lines J
and K. The input lines Q1 and Q2 are used to compare the search bit with the stored bit
value.
To write or store a bit, or to mask writing, J and K take the following values:
i). J = 1, K = 0 and enable = 1; the stored bit is 1,
ii). J = 0, K = 1 and enable = 1; the stored bit is 0,
iii). J = 0 and K = 0; writing is masked.
To perform comparison reading or to mask a bit, Q1 and Q2 take the following values:
i). Q1 = 1[0], Q2 = 0[1]; content addressable reading is initiated, with
Match = 1 if the stored bit is 1[0], and
= 0 otherwise,
ii). Q1 = 0, Q2 = 0; the bit is masked for comparison.
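A behavioral software model of these write and compare rules (a sketch of the cell's truth tables, not of the NAND-gate circuit itself) might look like:

```python
class BitCell:
    def __init__(self):
        self.bit = 0

    def write(self, J, K, enable=1):
        if enable and J == 1 and K == 0:
            self.bit = 1                 # store 1
        elif enable and J == 0 and K == 1:
            self.bit = 0                 # store 0
        # J = K = 0: writing is masked, the stored bit is unchanged

    def match(self, Q1, Q2):
        if Q1 == 0 and Q2 == 0:
            return 1                     # bit masked for comparison: always matches
        searched = 1 if Q1 == 1 else 0   # Q1=1,Q2=0 searches for 1; Q1=0,Q2=1 for 0
        return 1 if self.bit == searched else 0
```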
The match signal and the enable signal are common to all the bits in a word. The word
cell is used to compare a word with an external search argument stored in the input
register. To facilitate searching on selected attributes of the search argument, the masking
register is used to enable or disable specific bits. Whenever there is a match in all of the
bit positions, the result is stored in the response register.
CAM Array for Pattern Matching
Traditionally, CAM arrays have been employed in the design of high-speed pattern
recognition. The set of templates is stored as words in the CAM array and the input
pattern is used as the search argument. The response register is initialized to 0. The
search argument is compared in parallel with all the words; if an exact match is obtained
with a particular word, the corresponding output line is activated and the result is stored
in the response register. If no exact match is found, the LSB is masked in the masking
register and the above steps are repeated until a match is obtained.
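The masked-search procedure can be sketched in software as follows; the 4-bit word width and the templates are hypothetical, and the parallel word comparison of the CAM is modeled here with a list comprehension.

```python
def cam_search(templates, argument, width):
    """Find a matching template, masking low-order bits one at a time if needed."""
    for masked_bits in range(width + 1):
        mask = ~((1 << masked_bits) - 1)         # mask out the lowest masked_bits bits
        responses = [i for i, w in enumerate(templates)
                     if (w & mask) == (argument & mask)]
        if responses:
            return responses[0]                  # index of a matching word
    return None

templates = [0b0110, 0b1010, 0b1100]
idx = cam_search(templates, 0b1011, 4)           # matches 0b1010 after masking the LSB
```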
This CAM-based search is applied in each of the three stages of the multi-stage vector
quantizer.
Results
Computer simulations are carried out on an image of size 256 x 256 pixels with 8 b/pixel.
The image is partitioned to yield blocks of 4 x 4 pixels, which are rearranged as
16-dimensional vectors. A new codebook is generated using the input vectors of the
image to be coded as the training set, starting with a universal codebook predesigned
from a separate set of images. The image vectors are then quantized using the new
codebook.
The maximum period comes out to be 5.732 and the maximum frequency is 187.33 MHz.
Here is the FPGA report generated by Xilinx ISE.
Here is the simulation, where the codebook size is reduced to 4 and the word size to 4
bits so that the result is easily visible.
References
• J. G. Proakis, Digital Communications.
• Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE
Trans. Commun., vol. COM-28, pp. 84-95, Jan. 1980.
• A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Dordrecht,
The Netherlands: Kluwer Academic Publishers, 1992.
• K. Tsang and B. W. Y. Wei, "A VLSI architecture for a real-time code book generator
and encoder of a vector quantizer," IEEE Trans. Very Large Scale Integration (VLSI)
Systems, vol. 2, no. 3, pp. 360-364, Sept. 1994.
• R. M. Gray, "Vector quantization," IEEE ASSP Mag., pp. 4-29, Apr. 1984.
• N. M. Nasrabadi and R. A. King, "Image coding using vector quantization: a review,"
IEEE Trans. Commun.
• M. Goldberg, P. R. Boucher, and S. Shlien, "Image compression using adaptive vector
quantization," IEEE Trans. Commun.
• G. Davidson and T. Stanhope, "Real-time speech coding with a VLSI vector
quantization processor," in Proc. IEEE ICASSP '85.
• H. Sun and H. Hsu, "A VLSI wavefront array for image vector quantization," in Proc.
30th Midwest Symp.
• P. A. Ramamoorthy and B. Potu, "Bit-serial systolic chip set for real-time image
coding," in Proc. IEEE ICASSP '87.
• N. M. Nasrabadi and Y. Feng, "Vector quantization of images based on Kohonen
self-organizing feature maps," in Proc. Int. Conf. Neural Networks.
• D. Chase and A. Gersho, "Real-time VQ codebook generation hardware for image
processing," in Proc. IEEE ICASSP '85.
• H. Abut, B. P. M. Tao, and J. L. Smith, "Vector quantizer architectures for speech and
image coding," in Proc. IEEE ICASSP '87.
• T. M. Liu and Z. Hu, "Hardware realization of a multistage speech waveform vector
quantizer," in Proc. IEEE Int. Conf.
• B. H. Juang and A. H. Gray, Jr., "Multiple stage vector quantization for speech
coding," in Proc. IEEE Int. Conf.
• V. Cuperman and A. Gersho, "Adaptive differential vector coding of speech," in Conf.
Rec. GLOBECOM.
• K. Dezhgosha and M. Jamali, "A VLSI architecture for real-time image coding using a
vector quantization based algorithm."
• A. Gersho and M. Yano, "Adaptive vector quantization by progressive codevector
replacement."
• S. Panchanathan and M. Goldberg, "A systolic array architecture for image coding
using vector quantization."
• S. Panchanathan and M. Goldberg, "A content addressable memory architecture for
image coding using vector quantization."
• A. Kataoka, T. Moriya, and S. Hayashi, "An 8 kbit/s speech coder based on conjugate
structure CELP," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, 1993,
pp. II-592–II-595.
• N. Farvardin, "A study of vector quantization for noisy channels," IEEE Trans. Inform.
Theory, vol. 36, pp. 799-809, July 1990.
• B. Ramamurthi and A. Gersho, "Classified vector quantization of images," IEEE Trans.
Commun., vol. COM-34, no. 11, pp. 1105-1115, Nov. 1986.
• P. Boucher, "Adaptive vector quantization of pictorial data," Master's thesis, Univ. of
Ottawa, Applied Science in Electrical Engineering, 1954.