
International Journal of Software Engineering and Its Applications

Vol. 5 No. 3, July, 2011

A New Lossless Image Compression Technique Based on Bose, Chaudhuri and Hocquenghem (BCH) Codes

Rafeeq Al-Hashemi, Israa Wahbi Kamal

Head of CS Dep., College of IT, AHU, Ma'an, Jordan, Rafiq_alhashimy@ahu.edu.jo
Dep. of CS, College of Science & IT, ZU, Zarqa, Jordan, esraawk@yahoo.com

Abstract

This paper introduces a new binary (bit-level) lossless image compression method based on the well-known error-correcting BCH codes. A BCH encoder converts a message of k bits into a codeword of length n by adding three parity bits; the decoder, in contrast, eliminates these parity bits after verifying the received message. The proposed method exploits this idea by dividing the image into blocks of 7 bits and passing each block through the BCH decoder, which removes the parity bits and reduces the block size to 4 bits. The output is twofold: the compressed image file and a file containing the keys. Each block is tested to determine whether it is a valid codeword. To distinguish the two cases during decompression, the proposed method records a 1 for a valid codeword and a 0 for an invalid one; these bits are saved in a separate file that serves as the key in the decoding stage. Huffman coding is then applied to the compressed file to increase the compression ratio. In addition, two algorithms are applied in sequence to the added-bit (key) file to reduce its size: RLE encoding first, whose output is passed through a second compression round using the Huffman algorithm. The resulting file is attached to the header of the compressed image file. The proposed method reduces the entropy of the generated binary sequence and therefore yields a higher compression ratio. Experimental results show that the compression algorithm is efficient and gives a good compression ratio without losing data, and that the use of BCH codes improves on Huffman coding in terms of compression ratio.

Keywords: image compression, BCH codes, RLE algorithm, parity check, Huffman algorithm

1. Introduction
Image compression has important applications in image transmission and image storage, despite the large-capacity storage devices currently available. An efficient method for storing and transmitting an image is therefore needed to reduce execution time and memory size. The general principle of image compression algorithms is to transform a sequence of binary digits into a new one that contains the same information in fewer digits, so that the file is as small as possible. An efficient image (or general data) compression algorithm is chosen according to measures such as compression size, compression ratio, and entropy. Compression size is the size of the new file in bits after compression is complete. Compression ratio is the quotient of the original file size in bits and the compression size in bits. Entropy, measured in bits/symbol, is here the quotient of the compression size in bits and the number of symbols in the original file [1].
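As a concrete illustration, the short Python sketch below computes these three scales for a pair of byte strings. The inputs are illustrative stand-ins for an original file and its compressed form, and "entropy" here follows the paper's definition (coded bits per original symbol) rather than Shannon entropy.

```python
# Minimal sketch of the three scales: compression size, compression
# ratio, and entropy in the paper's sense (bits per original symbol).
def compression_stats(original: bytes, compressed: bytes):
    compression_size = 8 * len(compressed)            # new file size in bits
    ratio = (8 * len(original)) / compression_size    # original bits / compressed bits
    entropy = compression_size / len(original)        # bits per original symbol
    return compression_size, ratio, entropy

size, ratio, entropy = compression_stats(b"x" * 1000, b"x" * 700)
print(size, round(ratio, 3), entropy)                 # 5600 1.429 5.6
```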
The general idea for achieving error detection and correction is to add some redundancy (i.e., some extra bits) to a message. The receiver then uses these extra bits to check the consistency of the delivered message and to recover data that is determined to be erroneous. Such detecting/correcting codes can also be used in data compression. Hamming and BCH codes are block codes with fixed lengths, whereas the Huffman code is a variable-length code [7, 8, 11]. Typically, a block code takes k information bits and transforms them into an n-bit codeword by adding m redundant bits to the original data bits. The block length of such a code is n bits, so there are 2^k possible codewords.
Data compression falls into two major categories: lossless compression techniques and lossy compression techniques. In lossless compression, data can be compressed and restored without any loss of information. In lossy compression, the decompressed data may be an acceptable approximation of the original uncompressed data [12]. In this work we introduce a novel lossless image compression scheme, based on BCH codes, which works under the principle of removing the parity bits (redundant bits) from each codeword, leaving only the data bits. Using an (n, k) BCH code, a codeword of length n carrying k data bits (there are 2^k possible codewords) is compressed to k bits in the compressed file; a non-codeword (there are 2^n - 2^k possible non-codewords) is written to the compressed file as its original n bits. An indicator records whether or not each n bits of the original file were compressed to k bits.
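To make the codeword test concrete, the following sketch enumerates the 2^4 = 16 valid codewords of a (7, 4) code and checks membership by computing the syndrome. It assumes the standard generator polynomial g(x) = x^3 + x + 1 of the single-error-correcting (7, 4) code and a systematic layout with the message in the high four bits; the paper specifies neither, so both are assumptions.

```python
# Sketch of (7, 4) encoding and the codeword test, assuming the classic
# generator polynomial g(x) = x^3 + x + 1 (an assumption; the paper does
# not name its generator).

G = 0b1011  # g(x) = x^3 + x + 1

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2); bits are coefficients."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(msg4: int) -> int:
    """4-bit message -> 7-bit systematic codeword (message bits, then parity)."""
    shifted = msg4 << 3                       # multiply m(x) by x^3
    return shifted | poly_mod(shifted, G)     # append the 3 parity bits

def is_codeword(block7: int) -> bool:
    """A 7-bit block is a valid codeword iff its syndrome is zero."""
    return poly_mod(block7, G) == 0

# The 2^4 = 16 valid codewords among the 2^7 = 128 possible 7-bit blocks.
codewords = sorted(encode(m) for m in range(16))
print([format(c, "07b") for c in codewords])
assert all(is_codeword(c) for c in codewords)
```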

2. Related Works
Numerous compression studies examine the use of compression for different file types and application domains, using different algorithms. Popular approaches to encoding-decoding compression are Huffman coding, arithmetic coding, and Hamming codes. Many of these approaches apply to text files only and assign shorter codewords to the most frequent characters; other approaches depend upon a table built during the compression process.
In 1948, Shannon established the fundamental theory of entropy, which gives the smallest possible rate for lossless data compression [9]. Elias (1955) proposed an algorithm for fixed-length lossless data compression that uses the parity-check matrix of a linear block code to compress data via syndromes; the optimal decoding method is maximum-likelihood (ML) decoding [4]. Weinberger et al. (1999) present JPEG-LS, an algorithm for lossless image compression whose model is tuned for efficient performance in conjunction with an extended family of adaptively chosen Golomb-type codes and an embedded alphabet extension for coding low-entropy image regions [11]. Sharma et al. (2005) propose a global processing technique for training a Kohonen network on JPEG images; they use the neural network to compress the image file while keeping the reconstructed image identical to the original [10]. Zukoski et al. (2006) propose a compression technique for medical images that applies lossless compression in clinically relevant regions, as defined by radiologists, and lossy compression everywhere else; the technique is specific to this medical application domain [12]. Muthaiah et al. (2008) propose a technique based on cubic-spline interpolation; because it reconstructs the image approximately, it is a lossy technique whose efficiency depends upon its accuracy [5]. Nadarajan and Zukarnain (2008) evaluate the performance of the string-matching algorithms LZW and LZSS with respect to memory access; their simulation study found LZSS to be more efficient than LZW [6]. Chandra et al. (2009) propose binary merge coding for lossless compression of images in the spatial domain; the approach works under the principle of inter-pixel redundancy reduction and depends upon repeated values in consecutive pixel positions [3]. Saravanan and Ponalagusamy (2009) propose a technique that first reduces the number of source symbols by combining them into new symbols and then applies Huffman coding; because fewer source symbols remain, the amount of Huffman code to be generated is also reduced, which yields a better compression ratio [2].

3. The Proposed Method


The proposed method implements a lossless image compression scheme based on the BCH codes used for detecting/correcting errors in data transmission. The BCH encoding algorithm adds extra bits, called parity bits, whose role is to verify on receipt that the original message arrived correctly. BCH encoding converts a block of k bits into n bits by adding m parity bits, where m depends upon the size k of the message encoded into a codeword of length n. We selected n = 7; this value was chosen after many experiments in which it gave the best results. The applied algorithm is described below, and Figure (1) shows the block diagram of the technique.
3.1. Compression Steps
The proposed method takes the original image and performs the following steps (a code sketch of the pipeline is given after the list):

Step 1: A pre-processing step converts the image into binary numbers.
Step 2: Apply the (7, 4) BCH decoder to these binary numbers to convert each block of 7 bits into a 4-bit block. Note that not all 7-bit blocks are codewords; to distinguish codewords from non-codewords, we write an extra bit to an additional file, called the added-bit file: a 1 if the block is a codeword and a 0 if it is not. A block that is a valid codeword is converted to a 4-bit block, whereas an invalid codeword is moved as-is, without any compression, into the compressed binary file.
Step 3: Apply Huffman coding to the binary numbers to compress the image file.
Step 4: Apply two algorithms in sequence to the added-bit file: the Run-Length Encoding (RLE) algorithm and then the Huffman algorithm. The result is added to the header of the compressed binary file. This yields satisfactory results and high compression. Figure (2) represents the flowchart of the proposed method.
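The following hedged sketch ties Steps 1 through 4 together, reusing G, poly_mod, and is_codeword from the sketch in Section 1. The zero-padding of the final partial block is an assumption (the paper does not specify how the tail of the bit stream is handled), and the Huffman passes of Steps 3 and 4 are delegated to any off-the-shelf coder rather than reproduced here.

```python
# Hedged sketch of compression Steps 1-4; reuses G, poly_mod, is_codeword
# from the Section 1 sketch. Tail padding is an assumption.

def bch_compress(pixels: bytes) -> tuple[str, str]:
    bits = "".join(format(p, "08b") for p in pixels)   # Step 1: to binary
    data, key = [], []                                 # output stream, added-bit file
    for i in range(0, len(bits), 7):                   # Step 2: 7-bit blocks
        block = bits[i:i + 7].ljust(7, "0")            # zero-pad the tail (assumption)
        if is_codeword(int(block, 2)):
            data.append(block[:4])                     # keep only the 4 message bits
            key.append("1")
        else:
            data.append(block)                         # copy the block unchanged
            key.append("0")
    return "".join(data), "".join(key)

def rle(bits: str) -> list[tuple[str, int]]:
    """First pass of Step 4: run-length encode the added-bit file."""
    runs: list[tuple[str, int]] = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

# Pixels from the example in Figure 3. Step 3 would Huffman-code data_stream;
# Step 4 Huffman-codes rle(key_stream) and attaches it to the file header.
data_stream, key_stream = bch_compress(bytes([45, 10, 24, 20, 147, 201, 30, 35, 125]))
print(len(data_stream), key_stream, rle(key_stream))
```

Because most 7-bit blocks are not codewords (112 of the 128 possibilities), the added-bit file is dominated by zeros, which is what makes the RLE-then-Huffman pass of Step 4 effective.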

Figure 1. Proposed System Diagram



[Figure 2 (flowchart): the image is pre-processed and converted to binary; the binary stream is divided into blocks of 7 bits; the BCH algorithm is applied to each block and the block is tested for validity. If the block is a valid codeword, it is encoded by the BCH decoder into a 4-bit block and a 1 is written to the added-bit file; otherwise a 0 is written to the added-bit file and the block is copied unchanged to the compressed file. The output file is then compressed using Huffman codes; the added-bit file is compressed with the RLE algorithm and then the Huffman algorithm, and merged with the header of the compressed image file to give the final compressed image file.]

Figure 2. Proposed Method Flowchart

An example of the proposed encoding method is illustrated in Figure (3).


Sample of image pixels:

    45   10   24
    20  147  201
    30   35  125

Convert to binary:

    00101101 00001010 00011000
    00010100 10010011 11001001
    00011110 00100011 01111101

Convert to a vector:

    00101101 00001010 00011000 00010100 10010011 11001001 00011110 00100011 01111101

Resize the blocks to 7 bits:

    0010110 1000010 1000011 0000001 0100100 1001111 0010010 0011110 0010001 1011111 00000001

Implement the BCH code (blocks found to be valid codewords are reduced to 4 bits):

    0010110 1000010 0011 0001 0100 1111 0010010 0011110 0010001 1111 0001

The Huffman compression stage then produces the compressed file.

Figure 3. Proposed Encoding Method Example


3.2. Decompression Steps

A decompression procedure reconstructs the original image from the compressed image and is performed as follows (a matching code sketch is given after the list):

Step 1: Read the header of the compressed file and extract the added-bit file from it; then restore the added-bit file to its original form by applying first the Huffman decoder and then the RLE decoder.
Step 2: Decode the compressed image file using the Huffman algorithm.
Step 3: Apply the BCH encoder. In this stage the file is read bit by bit, and the length of each block depends upon the corresponding value in the added-bit file: if the added bit is 1, the block size is 4 bits; otherwise it is 7 bits. The BCH encoder expands each 4-bit block back to its original 7 bits, restoring the parity bits that were removed by the BCH decoding.
Step 4: Finally, the image is returned to its original state without any loss of data.
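A matching sketch of Step 3 of the decompression follows, under the same assumptions as the compression sketch, reusing encode from the Section 1 sketch and the data_stream/key_stream it produced; the Huffman and RLE decoding of Steps 1 and 2 are again left to standard coders.

```python
# Hedged sketch of decompression Step 3; reuses encode (re-attach parity)
# from the Section 1 sketch and the streams from the compression sketch.

def bch_decompress(data_stream: str, key_stream: str) -> str:
    bits, pos = [], 0
    for flag in key_stream:                   # the key bit selects the block size
        if flag == "1":                       # valid codeword: stored as 4 bits
            msg = int(data_stream[pos:pos + 4], 2)
            bits.append(format(encode(msg), "07b"))   # restore the 3 parity bits
            pos += 4
        else:                                 # non-codeword: stored as 7 bits
            bits.append(data_stream[pos:pos + 7])
            pos += 7
    return "".join(bits)                      # the original 7-bit block stream

# Round-trip check against the compression sketch (tail padding included).
restored = bch_decompress(data_stream, key_stream)
original_bits = "".join(format(p, "08b") for p in bytes([45, 10, 24, 20, 147, 201, 30, 35, 125]))
assert restored.startswith(original_bits)     # lossless up to the padding
```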

4. Results
This section analyzes and discusses the results obtained by running the algorithms described above on a set of test images. The proposed method uses images commonly used in the field of image processing, namely Airplane, Baboon, F-18, Lena, Peppers, and Barbara, as its test set; the algorithms can be applied to any image and are not limited to these.

Figure 4. Entropy Comparison

The results of the method on the entire set of test images are shown in Table 1; they show an average compression ratio of 1.42. Figure 4 displays the entropy of the original images compared to the entropy achieved by the Huffman method and by our method. Figure 5 presents a sample comparison between the original image, the image compressed using Huffman, and the image compressed using the proposed method, which shows that our method outperforms the Huffman method.


Table 1. Compression Ratio Compared to Huffman

    Image      BCH Ratio   Huffman Ratio
    F18        2.145       1.42
    Barbara    1.109       1.043
    Lena       1.14        1.07
    Airplane   1.28        1.19
    Average    1.4173      1.1795

Figure 5. Compression Comparison

In the proposed method, the (7, 4) BCH code has 16 codewords, in addition to the Huffman codewords. The method searches for occurrences of these codewords in the original image file; the number of codewords found in each test image is shown in Table 2.
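The counts in Table 2 can in principle be reproduced by classifying every 7-bit block of an image's bit stream with the is_codeword test from the earlier sketch; the blocking and tail padding below follow the same assumptions as the compression sketch.

```python
# Count valid codewords, non-codewords, and total 7-bit blocks in an
# image's bit stream, as tabulated in Table 2; `pixels` is any raw byte
# sequence (e.g. a grey-scale image).
def count_codewords(pixels: bytes) -> tuple[int, int, int]:
    bits = "".join(format(p, "08b") for p in pixels)
    blocks = [bits[i:i + 7].ljust(7, "0") for i in range(0, len(bits), 7)]
    valid = sum(is_codeword(int(b, 2)) for b in blocks)
    return valid, len(blocks) - valid, len(blocks)
```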

Table 2. Number of BCH Codewords in the Test Images

    Image      No. of Codewords   No. of Non-Codewords   No. of Blocks
    Baboon     4475               30754                  35229
    Airplane   2601               15123                  17724
    Lena       2402               15322                  17724
    Peppers    2343               15381                  17724
    Barbara    3655               22499                  26154


5. Conclusion
We have presented an efficient lossless image compression approach that utilizes BCH coding for compression. The use of the BCH code improves the results of the Huffman algorithm in terms of increasing the compression ratio. The experimental compression ratios demonstrate that the proposed implementation possesses a fairly good compression capacity, and the experiments confirm that the proposed technique produces higher lossless compression than Huffman coding alone. The proposed technique is therefore suitable for the compression of text, image, and video files and would be useful in numerous network applications.

References
[1] A. Hovhannisyan, Compression of Lossless Compression Models, M.Sc. Thesis in EE, Texas, USA, (1999).
[2] C. Saravanan and R. Ponalagusamy, Lossless Grey-scale Image Compression using Source Symbols Reduction and Huffman Coding, International Journal of Image Processing, Vol. 3, Issue 5, (2009).
[3] N. S. Chandra, M. B. Raju, M. Arya Bahanu and B. R. Vikram, Binary Merge Coding for Lossless Image Data Compression, Journal of Computer Science, Vol. 5, No. 5, (2009), pp. 388-391.
[4] P. Elias, Coding for Noisy Channels, IRE Convention Record, Vol. 4, (1955), pp. 37-46.
[5] R. Muthaiah, K. NeelaKantan, V. Sharma and A. Arora, Image Compression and Reconstruction using Cubic Spline Interpolation Technique, American Journal of Applied Sciences, Vol. 5, No. 11, (2008).
[6] K. Nadarajan and Z. A. Zukarnain, Analysis of String Matching Compression Algorithm, Journal of Computer Science, Vol. 4, No. 3, (2008), pp. 205-210.
[7] G. Navarro and M. Raffinot, Compressed Pattern Matching Algorithms for Lempel-Ziv-Welch (LZW) Compression, (1999).
[8] G. Navarro and M. Raffinot, A General Practical Approach to Pattern Matching over Ziv-Lempel Compressed Text, Proceedings of the 10th Annual Symposium on Combinatorial Pattern Matching, (1999).
[9] C. E. Shannon, A Mathematical Theory of Communication, The Bell System Technical Journal, Vol. 27, July and October, (1948), pp. 379-423, 623-656.
[10] D. K. Sharma, L. Gaur and D. Okunbor, Image Compression and Feature Extraction using Kohonen's SOM Neural Network, Journal of Strategic E-Commerce, Vol. 5, No. 1, (2005).
[11] M. J. Weinberger and G. Seroussi, The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS, IEEE Int'l Conference on Image Processing, (1999), pp. 68-72, USA.
[12] M. J. Zukoski, T. Boult and T. Iyriboz, A Novel Approach to Medical Image Compression, Int. J. Bioinformatics Research and Applications, Vol. 2, No. 1, (2006).

Authors
Dr. Rafeeq Al-Hashemi is currently an Assistant Professor of Computer Science at Al-Hussein Bin Talal University in Jordan. He received his PhD in Computer Science from the University of Technology. His research interests include text mining, data mining, and image compression. He is the Chairman of Computer Science at the University.

Dr. Israa W. Kamal is an Associate Professor in the Department of Computer Science at Zarqa University, Jordan. She graduated from the University of Technology, Baghdad, Iraq, with a BSc, MSc, and PhD in Computer Science in 1990, 1996, and 2004 respectively. She has published two journal papers, mainly in the areas of software engineering, artificial intelligence, coding theory, and data compression. Her current research is in image compression.
