UNIT 5 Alg Struct 2

The document covers advanced topics in theoretical computer science, focusing on algebraic structures, residue arithmetic, and coding theory. It discusses error detection and correction techniques, including parity checks, checksums, and cyclic redundancy checks (CRC), along with their applications in data transmission. The importance of efficient encoding and decoding methods for reliable communication is emphasized throughout the text.


THEORETICAL COMPUTER SCIENCE – 20MSS31

ALGEBRAIC STRUCTURES - II
TOPICS COVERED:

• Residue arithmetic for Computers


• Coding theory
• Error Detection and Error Correction
• Distance between Code words
• Minimum distance and weight
• Group Code
• Linear Code
• Cyclic code
• Problems under Encoding and Decoding Techniques
RESIDUE ARITHMETIC
FOR COMPUTERS
DEFINITION:
Residue arithmetic is often referred to as modular arithmetic in the context of computer science and
the theory of computation.
This concept is particularly useful in the theory of computation for a variety of reasons,
including efficiency, data representation, and algorithm design.
Residue Number System
Any non-negative integer x >= 0 in a general positional number system can be written as
x = Σ (i = 0 to n) a_i · w^i
where a_i are the set of permissible digits and w^i are the set of weights.
Example: 191 = 1·100 + 9·10 + 1·1
= 1·10^2 + 9·10^1 + 1·10^0
Such a number system is called a fixed base number system.
A number system which does not have a fixed base is called a mixed base number system; it is
also called a residue number system.
Positional (or Weighted) Number System:
▪ A positional number system is also known as weighted number system. As the
name implies there will be a weight associated with each digit.
▪ According to its position of occurrence in the number, each digit is weighted
▪ Few examples of positional number system are decimal number system, Binary
number system, octal number system, hexadecimal number system, BCD, etc.
Non-Positional (or Non-weighted) Number System:

▪ A non-positional number system is also known as a non-weighted number system. A digit's value is
independent of its position.
▪ Non-positional number systems are used for shift-position encoding and error detection purposes.
▪ A few examples of non-weighted number systems are gray code, cyclic code, linear code, excess-3 code,
etc.
Advantages of fixed base number systems over non-fixed base:
▪ Algebraic comparison of two numbers can be easily done

▪ The range of these number systems can be easily extended by adding more digits

▪ Overflow detection is easy

▪ The residue number system, however, is not convenient for the representation of fractions
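As an illustration of these ideas, here is a minimal sketch of residue arithmetic in Python; the moduli (5, 7, 8) and the sample values 23 and 45 are assumed example values, not taken from the text.

```python
# Residue number system sketch: a number is represented by its residues
# with respect to pairwise-coprime moduli, and addition is performed on
# each residue independently (no carries propagate between positions).
MODULI = (5, 7, 8)  # pairwise coprime; represents integers 0..279 uniquely

def to_residues(x):
    """Represent x by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def add_residues(a, b):
    """Add two residue representations component-wise."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

x, y = 23, 45
assert to_residues(x) == (3, 2, 7)
assert add_residues(to_residues(x), to_residues(y)) == to_residues(x + y)
```

Because each residue is processed independently, such additions can be carried out in parallel, which is one reason residue arithmetic interests computer designers.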
FERMAT'S LITTLE THEOREM

This theorem states that if p is a prime number, then for any integer a,
the number a^p − a is an integer multiple of p.
Here p is a prime number:
a^p ≡ a (mod p).
Special Case: If a is not divisible by p, Fermat's little theorem is equivalent
to the statement that a^(p−1) − 1 is an integer multiple of p:
a^(p−1) ≡ 1 (mod p)
OR
a^(p−1) % p = 1
Here a is not divisible by p.
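The theorem can be checked numerically; pow(a, e, p) is Python's built-in modular exponentiation, and the prime p = 13 is an assumed example value.

```python
# Fermat's little theorem: for a prime p and any a not divisible by p,
# a^(p-1) ≡ 1 (mod p); in general a^p ≡ a (mod p) for every integer a.
p = 13  # an assumed example prime

for a in range(1, p):              # each a in 1..p-1 is not divisible by p
    assert pow(a, p - 1, p) == 1   # special case: a^(p-1) mod p = 1

for a in range(2 * p):             # general form holds for all integers a
    assert pow(a, p, p) == a % p
```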
CODING THEORY
INTRODUCTION:

▪ Codes have been in use since the times of the ancient rulers of kingdoms.

▪ In the modern era, codes are used in data compression, cryptography, error coding and, more recently, in

network coding.

▪ Codes are studied by various scientific disciplines such as information theory, electrical engineering,

mathematics, and computer science for the purpose of designing efficient and reliable data transmission methods.
USES OF CODING THEORY:
1. Source encoding consists of compression of the data in order to transmit it more efficiently.

2. This is common practice in Internet transactions.

3. Channel encoding adds extra data bits to make the transmission of data more robust and secure.

4. Cell phones also use coding techniques to correct for fading and noisy high-frequency radio transmission.

5. Data modems, telephone transmissions and NASA all employ channel coding techniques to get the bits through,
for example the turbo codes and LDPC codes.
DIAGRAMMATIC REPRESENTATION:
SOME BASIC CODING TECHNIQUES:

▪ Transmitted messages, like data from a satellite, invariably suffer from


noise.

▪ It is important to encode a message in such a way that in spite of noise


one should be able to decode a message to its original form.

▪ Most messages are sent as digital sequences of 0's and 1's, such as

10101 or 1010011.
EXAMPLE:
▪ Let us assume that we want to send the message 1011 in binary for a real word, such as 'buy', or a sentence
such as 'buy stocks'.
▪ One way of encoding 1011 would be to attach a binary "tail" to it so that if the message gets distorted in
transit, we can detect the error.
▪ One such tail could be a 1 or 0, depending on whether we have an odd or an even number of 1's in the word.
▪ This way all encoded words will have an even number of 1's.
▪ So 1011 will be encoded as 10111. This is called even parity.
▪ Now if this is distorted, we know that an error has occurred, because we have received an odd
number of 1's.
▪ Similarly, if we ensure an odd number of 1's in the word at the time of encoding, then it is called odd parity.
▪ In general, this error-detecting code is called a parity check; it is simple yet very useful.
The American mathematician Claude E. Shannon (1916–2001), together with Richard Hamming
and colleagues at Bell Laboratories, founded the field of coding theory.
ESSENTIAL ASPECTS:
Coding theory typically involves the removal of redundancy and the
correction (or detection) of errors in the transmitted data.

There are essentially two aspects to coding theory, namely,

i) Data compression (also called, source coding) and


ii) Error correction (also called channel coding).

COMMUNICATION PROCESS:
The process of communication involves transmitting information between source and destination.
The three essential parts in an ideal communication system are
(1) Transmitter
(2) Channel and
(3) Receiver
ELEMENTS OF COMMUNICATION PROCESS:

1. Sender: The person who conveys his thoughts, message or ideas to the receiver is known as the sender. He is
at the starting point of the communication system and represents the source of communication.
2. Message: The subject matter of communication is termed as messages. It includes ideas, feelings, suggestions,
order, etc., which a sender wants to convey to the receiver.
3. Encoding: The process of converting messages into communication symbols, which may be understood by the
receiver. It includes words, pictures, gestures, symbols, etc. Encoding translates the internal thought of the sender
into a language which can be understandable.
4. Media: The path, channel or medium through which encoded message is transmitted to the receiver is known
as media. It is the carrier of the message.
5. Decoding: The process of translating the encoded message into an effective language, which can be understood
by the receiver is known as decoding. In this, the encoded symbols of the sender are converted.
6. Receiver: The person who receives the message of the sender is known as the receiver.
7. Feedback: In order to complete the process of communication, feedback is essential. The process of reversal of
communication in which the receiver expresses his reaction to the sender of the message is known as feedback.
Feedback ensures that the receiver has received and understood the message.
8. Noise: Any obstruction or hindrance which hampers the communication process is known as noise. The
hindrance may affect the sender, message or receiver.
ELEMENTS OF COMMUNICATION PROCESS:

The transmitter translates the message into a signal and the receiver translates it back into a message.
Communication is only possible if the fields of experience of sender and receiver overlap
ILLUSTRATION OF COMMUNICATION
ERROR DETECTION
WHAT IS AN ERROR ?

• An error is a condition in which the output information does not match the input information.
• Eg:- A 0 bit may change to 1 or vice versa.

ERROR DETECTING CODES


• Whenever a message is transmitted, it may get scrambled by noise or data may get corrupted.
• To avoid this, we use error-detecting codes which are additional data added to a given digital message
to help us detect if an error occurred during transmission of the message.
• A simple example of error-detecting code is parity check.
ERROR CORRECTING CODES

• We can also send some extra data to figure out the original message from the corrupted message that we
received.
• Error-correcting codes also deploy the same strategy as error-detecting codes but additionally,
such codes also detect the exact location of the corrupt bit.
• In error-correcting codes, parity check has a simple way to detect errors along with a
sophisticated mechanism to determine the corrupt bit location. Once the corrupt bit is located, its
value is reverted (from 0 to 1 or 1 to 0) to get the original message.

HOW TO DETECT & CORRECT ERRORS?

• To detect and correct the errors, additional bits are added to the data bits at the time of transmission.
• The additional bits are called parity bits. They Allow Detection or Correction of the errors.
• The data bits along with parity bits form the code word.
PARITY CHECKING OF ERROR DETECTION
• This is the simplest method for detecting and correcting errors.
• The MSB of an 8-bit word is used as the parity bit and the remaining 7 bits are used as data or
message bits.

• The parity of the transmitted 8-bit word can be either even parity or odd parity.

• Even parity -- Even parity means the number of 1's in the given word including the parity bit should
be even (2,4,6,....).

• Odd parity -- Odd parity means the number of 1's in the given word including the parity bit should be
odd (1,3,5,....).
USE OF PARITY BIT

• The parity bit can be set to 0 and 1 depending on the type of the parity required.
• For even parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is even.
• For odd parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is odd.

HOW DOES ERROR DETECTION TAKE PLACE ?


• Parity checking at the receiver can detect the presence of an error if the parity of the received signal is
different from the expected parity.

• That means, if it is known that the parity of the transmitted signal is always going to be "even" and if
the received signal has an odd parity, then the receiver can conclude that the received signal is not
correct.

• If an error is detected, then the receiver will ignore the received byte and request for retransmission of
the same byte to the transmitter.
CHECKSUM
• This is a block code method where a checksum is created based on the data values in the
data blocks to be transmitted using some algorithm and appended to the data.
• When the receiver gets this data, a new checksum is calculated and compared with the
existing checksum. A non-match indicates an error.

ERROR DETECTION BY CHECKSUM


• For error detection by checksums, data is divided into fixed sized frames or segments.

• Sender’s End − The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.

• Receiver’s End − The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.
• If the result is zero, the received frames are accepted; otherwise they are discarded.
-> At the Sender side, the data is divided into equal subunits of n-bit length by the
checksum generator. A subunit is generally 16 bits long. These subunits are
then added together using the one’s complement method. This sum is of n bits. The
resultant sum is then complemented. This complemented sum, which is called the
checksum, is appended to the end of the original data unit and is then transmitted to the
Receiver.

-> The Receiver, after receiving data + checksum, passes it to the checksum checker.


The checksum checker divides this data unit into subunits of equal length
and adds all these subunits, including the checksum as one of the
subunits. The resultant sum is then complemented. If the complemented result is
zero, it means the data is error-free. If the result is non-zero, the data
contains an error and the Receiver rejects it.
EXAMPLE:

If the data unit to be transmitted is 10101001 00111001, the following


procedure is used at Sender site and Receiver site.

SENDER SIDE:
10101001 subunit 1
00111001 subunit 2
11100010 sum (using 1's complement addition)
00011101 checksum (complement of sum)

DATA TRANSMITTED TO RECEIVER:

10101001 00111001 00011101
RECEIVER SITE :

10101001 subunit 1
00111001 subunit 2
00011101 checksum
11111111 sum
00000000 sum's complement

The result is zero, which means there is no error.
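The sender and receiver computations above can be reproduced with 1's-complement arithmetic on 8-bit subunits (a sketch; the function names are ours):

```python
# 1's-complement checksum over 8-bit subunits, as in the worked example.
BITS = 8
MASK = (1 << BITS) - 1

def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & MASK) + (total >> BITS)   # wrap the end-around carry
    return total

def checksum(words):
    return ones_complement_sum(words) ^ MASK       # complement of the sum

data = [0b10101001, 0b00111001]
cs = checksum(data)
assert format(cs, "08b") == "00011101"             # matches the sender side
# Receiver: the sum of data + checksum is all 1s, so its complement is 0.
assert ones_complement_sum(data + [cs]) == MASK
```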


CYCLIC REDUNDANCY CHECKS (CRC’S)
• The Cyclic Redundancy Check (CRC) is the most powerful method for error detection and
correction.
• Given a k-bit message, the transmitter creates an (n – k)-bit sequence called the frame check
sequence (FCS).
• The resulting frame, consisting of n bits, is exactly divisible by some predetermined number.
• Modulo-2 arithmetic is used: binary addition with no carries, just like the XOR operation.
• Redundancy means duplication.
• The redundancy bits used by CRC are derived by dividing the data unit by a fixed divisor. The
remainder is the CRC.

QUALITIES OF CRC
• It should be exactly one bit shorter than the divisor.
• Appending it to the end of the data unit should make the resulting bit sequence exactly divisible by the
divisor.
CRC GENERATOR & CHECKER

• A string of n 0s is appended to the data unit. The number n is one less than the number of bits in the
fixed divisor.
• The new data unit is divided by the divisor using binary (modulo-2) division; the
remainder resulting from the division is the CRC.
• The CRC of n bits obtained in step 2 replaces the appended 0s at the end of the data unit.
EXAMPLE
An example generator polynomial is of the form x^3 + x + 1. This generator polynomial represents
key 1011. Another example is x^2 + 1, which represents key 101.
n : Number of bits in data to be sent
from sender side.
k : Number of bits in the key obtained
from generator polynomial.
Example:
Message D = 1010001101 (10 bits)
Predetermined P = 110101 (6 bits)
FCS R = to be calculated, 5 bits
Hence n = 15, k = 10 and (n – k) = 5
The message is multiplied by 2^5 (appending five 0s), giving 101000110100000.
The product is divided by P.

The remainder R = 01110 is added to 2^5·D to give T = 101000110101110, which is sent.


Suppose that there are no errors and the receiver receives T intact. The received frame is
divided by P.

Since there is no remainder, it is assumed there have been no errors.
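The modulo-2 long division above can be sketched in Python; mod2_remainder performs the XOR division and crc_fcs appends the n − k zero bits first (the function names are ours):

```python
def mod2_remainder(bits, divisor):
    """Remainder of modulo-2 (XOR) long division on bit strings."""
    reg = [int(b) for b in bits]
    d = [int(b) for b in divisor]
    for i in range(len(bits) - len(divisor) + 1):
        if reg[i]:                        # XOR the divisor in when the lead bit is 1
            for j in range(len(divisor)):
                reg[i + j] ^= d[j]
    return "".join(str(b) for b in reg[-(len(divisor) - 1):])

def crc_fcs(data, divisor):
    """Append n-k zeros, divide, and return the remainder (the FCS)."""
    return mod2_remainder(data + "0" * (len(divisor) - 1), divisor)

fcs = crc_fcs("1010001101", "110101")
assert fcs == "01110"                     # R from the worked example
# Receiver divides the transmitted frame T = D followed by R by the same P:
assert mod2_remainder("1010001101" + fcs, "110101") == "00000"
# Data word 100100 with key 1101 gives remainder 001 (Illustration 1 below)
assert crc_fcs("100100", "1101") == "001"
```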


ILLUSTRATION - 1
Data word to be sent - 100100
Key - 1101 [ Or generator polynomial x3 + x2 + 1]

Sender Side:
Therefore, the remainder is 001 and hence the encoded
data sent is 100100001.

Receiver Side:
Code word received at the receiver side 100100001

Therefore, the remainder is all zeros. Hence, the


data received has no error.
ILLUSTRATION - 2
Data word to be sent - 100100
Key - 1101

Sender Side:
Therefore, the remainder is 001 and hence the
code word sent is 100100001.

Receiver Side
Let there be an error in transmission media
Code word received at the receiver side - 100000001

Since the remainder is not all zeroes, the error


is detected at the receiver side.
ERROR CORRECTION
WHAT IS ERROR CORRECTION ?

Error correction codes are used to detect and repair mistakes that occur during data
transmission from the transmitter to the receiver.

Two types :

1)Forward error correction

2)Backward error correction


BACKWARD ERROR CORRECTION:
▪ When an error is detected, the receiver requests that the sender retransmit the
complete data unit.

FORWARD ERROR CORRECTION:


▪ In this scenario, the error-correcting code is used by the receiver, which
automatically corrects the mistakes.
▪ A single extra bit can identify but not repair a mistake.
▪ To correct mistakes, the specific location of the error must be known.
▪ It can correct even a single-bit mistake.
▪ For example,
▪ an error-correcting algorithm will identify which one of seven bits is incorrect. We need
to add some more redundant bits to do this.
THE NUMBER OF REDUNDANT BITS IS CALCULATED USING THE
FORMULA:
2^r >= d + r + 1

• The above formula is used to compute the value of r.

• For example,

• if the value of d is 4, the smallest r that fulfils the above relation is 3.
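Finding the smallest r that satisfies 2^r ≥ d + r + 1 can be sketched directly:

```python
# Smallest number of redundant (parity) bits r satisfying 2^r >= d + r + 1,
# where d is the number of data bits.
def parity_bits_needed(d):
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

assert parity_bits_needed(4) == 3    # the example above: d = 4 gives r = 3
assert parity_bits_needed(7) == 4    # d = 7 gives r = 4 (used for 1011001 later)
```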
ERROR CORRECTION TECHNIQUES:

Hamming code:
Parity bits :
A bit that is added to the original binary data to make sure the total no of 1s is even or
odd
(in case of even or odd parity)
Even Parity :
To check for even parity ,if the total no of 1s is even,the parity bit value is 0.
Odd Parity:
If the total no occurrences of 1s is odd.
Then parity bit value is 1.
TO DETERMINE THE VALUE OF PARITY BITS :
• Rule: the value of each parity bit is determined by a sequence of bit positions that it alternately checks
and skips

• For R1: Check 1 bit , skip 1 bit, continue,. :(1,3,5,7,9,.....)

• For R2: Check 2 bit ,skip 2 bit ,continue ,. :(2,3,6,7,10,11,....)

• For R4: Check 4 bit, skip 4 bits ,continue,.. :(4,5,6,7,12,13,14,15,....)


EXAMPLE :
• If the data to be transmitted is 1011001: number of data bits = 7; thus, number of redundant bits = 4;
total bits = 7 + 4 = 11

• Redundant bits are always placed at positions that correspond to powers of 2, so the redundant bits
will be placed at positions 1, 2, 4 and 8

• Redundant bits will be placed here.


Thus now, all the 11 bits will look like this:
R1:
Here r1, r2, r4 and r8 are the redundant bits.
Determining the parity bits:

We look at bits 1,3,5,7,9,11 to calculate r1.


In this case, the number of 1s in these bits together is even, so we set the r1 bit to 0 to maintain even parity.
R2:

We look at bits 2,3,6,7,10,11 to calculate r2.


In this case, because the number of 1s in these bits together is odd, we make the r2 bit equal to 1 to maintain even
parity.
R4:

We look at bits 4,5,6,7 to calculate r4.


In this case, because the number of 1s in these bits together is odd,
we make the r4 bit equal to 1 to maintain even parity.
Example: A 7-bit Hamming code is received as 1011011. Assume even parity and state
whether the received code is correct or wrong; if wrong, locate the faulty bit.
1 0 1 1 0 1 1
D7 D6 D5 P4 D3 P2 P1

Detecting Error:
Analyzing bits 1,3,5,7:
P1 D3 D5 D7
1 0 1 1 (the number of 1’s is odd)
An error occurs,
hence P1 = 1
Analyzing bits 2,3,6,7:
P2 D3 D6 D7
1 0 0 1 (the number of 1’s is even)
No error occurred,
hence P2 = 0

1 0 1 1 0 1 1
D7 D6 D5 P4 D3 P2 P1

Analyzing bits 4,5,6,7:


P4 D5 D6 D7
1 1 0 1 (the number of 1’s is odd)
An error occurs,
hence P4 = 1

Here P1 and P4 are equal to 1, so the code contains an error.


Correcting errors :
The error code is,

1 0 1
P4 P2 P1

1. To correct the error we must know the decimal value of this


binary number
2. The decimal equivalent of 101 is 5
3. Invert the 5th bit of the code to get the correct code

The corrected code is :


1 0 0 1 0 1 1
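The detection-and-correction steps above can be sketched as follows (the received word is written from position 7 down to position 1, matching the D7 … P1 layout):

```python
# Syndrome decoding of the 7-bit Hamming example above.
def correct_hamming7(received):
    # bits[i] holds the bit at position i (1-based); the string is written
    # from position 7 down to position 1, so reverse it first
    bits = [0] + [int(b) for b in reversed(received)]
    syndrome = 0
    for j in range(3):                      # parity checks P1, P2, P4
        p = 2 ** j
        if sum(bits[k] for k in range(1, 8) if k & p) % 2:
            syndrome |= p                   # a failed check contributes 2^j
    if syndrome:
        bits[syndrome] ^= 1                 # flip the bit at that position
    return "".join(str(b) for b in reversed(bits[1:]))

assert correct_hamming7("1011011") == "1001011"   # P4 P2 P1 = 101 -> flip bit 5
assert correct_hamming7("1001011") == "1001011"   # a valid word is unchanged
```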
DISTANCE BETWEEN TWO CODE WORDS
WHAT IS A CODE WORD?
A code word is a specific sequence of binary digits (0s and 1s) used to represent
data or information.

IMPORTANCE OF CODEWORDS

•ENCRYPTION
●In encryption, code words are used to represent plaintext data in a secure form, known as ciphertext.
•DATA COMPRESSION
●In data compression, longer sequences of data are replaced by shorter code words, reducing the amount of data that
needs to be transmitted or stored.
●Code words are used in data compression algorithms to represent information more efficiently.
•Error Detection and Correction
●The code words contain redundancy that allows the receiver to detect errors.
●Error-correcting codes are used to detect and correct errors that may occur during data transmission.
●Eg: Hamming codes
DISTANCE BETWEEN CODEWORDS

Hamming Distance :
The Hamming distance between code words is the number of positions at which the
corresponding bits are different.
USAGE OF HAMMING DISTANCE

• Text Analysis
• Error Detection and Correction
• Genetics and Bioinformatics
MINIMUM DISTANCE AND WEIGHT
MINIMUM DISTANCE:
In coding theory, the minimum distance of a code is a measure of its
error-correcting capability. It is defined as the smallest number of positions at
which two distinct codewords differ. A larger minimum distance implies that the
code can correct more errors, which is important in applications such as data
transmission and storage. The minimum distance helps in determining the code's
ability to detect and correct errors.

WEIGHT:
In coding theory, the weight of a codeword is a measure of how "heavy" the codeword is in terms of the
number of nonzero elements (usually 1s) it contains. It's a fundamental concept for certain types of codes,
especially in binary codes.
The weight of a codeword is calculated by counting the number of nonzero elements within the codeword
EXAMPLE
• Here are the three codewords:
• Codeword A: 001100
• Codeword B: 101001
• Codeword C: 110110
• To calculate the minimum distance between these codewords, you look at each bit position
and count how many differences there are between any two codewords. The minimum
distance is the smallest number of differences.

Distance(A, B):
•1st bit: A = 0, B = 1
•2nd bit: A = 0, B = 0
•3rd bit: A = 1, B = 1
•4th bit: A = 1, B = 0
•5th bit: A = 0, B = 0
•6th bit: A = 0, B = 1
The number of differences between A and B is 3.
Distance(A, C):
•1st bit: A = 0, C = 1
•2nd bit: A = 0, C = 1
•3rd bit: A = 1, C = 0
•4th bit: A = 1, C = 1
•5th bit: A = 0, C = 1
•6th bit: A = 0, C = 0
The number of differences between A and C is 4.
Distance(B, C):
•1st bit: B = 1, C = 1
•2nd bit: B = 0, C = 1
•3rd bit: B = 1, C = 0
•4th bit: B = 0, C = 1
•5th bit: B = 0, C = 1
•6th bit: B = 1, C = 0
The number of differences between B and C is 5.
• To calculate the weight of each codeword, you simply count the number of non-zero
elements, which are '1s' in the binary representation. Here's the weight for each of the
codewords:

• Weight(A): Count of '1's in Codeword A = 2


• Weight(B): Count of '1's in Codeword B = 3
• Weight(C): Count of '1's in Codeword C = 4
• So, the weight of Codeword A is 2, the weight of Codeword B is 3, and the weight of
Codeword C is 4.
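These counts can be verified mechanically (the codeword values follow the per-bit listings above):

```python
# Hamming distance = number of differing positions; weight = number of 1s.
def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def weight(u):
    return u.count("1")

A, B, C = "001100", "101001", "110110"
assert hamming_distance(A, B) == 3
assert hamming_distance(A, C) == 4
assert hamming_distance(B, C) == 5
assert (weight(A), weight(B), weight(C)) == (2, 3, 4)
# the minimum distance of the set is the smallest pairwise distance
assert min(hamming_distance(A, B), hamming_distance(A, C),
           hamming_distance(B, C)) == 3
```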
GROUP CODE
DEFINITION:
A group code is a type of code used in coding theory: a block code whose code words form a group under component-wise addition.

USE:
To efficiently encode data for error correction, data
transmission and storage purposes

TYPES:
Two types:
1. Linear code
2. Cyclic code
PROBLEMS:
1. To find the parity check matrix
2. To find the codewords generated by an encoding function
CONSTRUCTION OF A GROUP CODE:
Group codes can be constructed from special generator matrices, resembling
linear block codes, whose code words contain elements of the same group.
EXAMPLE:
CODE WORDS GENERATED ARE:
LINEAR CODE
INTRODUCTION:
Linear codes are a type of error-correcting code where the XOR (or addition in a finite field) of any
two code words is also a code word.

->Linear codes play a crucial role in various communication systems, such as in the transmission and
storage of digital information, where errors may occur due to noise or other factors.
-> The minimum distance property is particularly important for the error-detection and
error-correction capabilities of linear codes.

APPLICATIONS :
COMMUNICATIONS:
-> Satellite and deep space communications
-> digital audio and video transmissions

STORAGE:
-> Computer memory (RAM)
-> single error correcting and double error detecting code
DEFINITION OF LINEAR CODE:
A code C is considered a linear code if the sum of any two words v and w in C is also a
word in C, for example
C = {000, 111, 010, 101}
ZERO WORD REQUIREMENT:
Every linear code must contain the zero word (the additive identity).

LINEAR CODE AS A SUBSPACE:


A linear code (group code) is a subspace (or subgroup) of k^n , where k is a finite field and n is the
length of the code words.

UTILIZING SUBSPACE KNOWLEDGE:


Subspace concepts are employed to improve techniques for encoding and decoding in linear codes.

MINIMUM DISTANCE OF A LINEAR CODE:


The minimum distance of a linear code is defined as the minimum weight of any nonzero code word.
The weight of a code word is the number of non-zero elements in it.
EXAMPLE of a linear code over the binary field (GF(2)) :
Let's define a linear code C as follows:

C={000,111,010,101}

Here, each element in C is a binary word of length 3. This set C forms a linear code over GF(2).
Now, let's check if it satisfies the properties of a linear code:

1. Closure under Addition (or XOR in binary):


Take any two words in C and add them.
For example, take v=111 and w=010. The sum is v+w=101, which is also in C.

2. Contains Zero Word:


The zero word in this case is 000, and it is indeed in C.
3. Linear Code as a Subspace:
This set C is a subspace of GF(2)^3 since it forms a closed set under
addition and scalar multiplication over GF(2).
NOTE: The vector space GF(2)^3 represents all possible binary vectors of length 3. In GF(2)
(Galois Field of order 2), the only possible elements are 0 and 1.
Elements in GF(2)^3: {000, 001, 010, 011, 100, 101, 110, 111}

4. Minimum Distance:
C = {000, 111, 010, 101}
The minimum weight of any nonzero codeword is 1 (e.g., the weight of 010),
so the minimum distance of this code is 1, which determines the ability of the code to
detect and correct errors.
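The properties checked above can be verified in a few lines (a sketch over GF(2)):

```python
# Checking the linear-code properties of C = {000, 111, 010, 101} over GF(2).
def xor_words(u, v):
    return "".join(str(int(a) ^ int(b)) for a, b in zip(u, v))

C = {"000", "111", "010", "101"}
assert "000" in C                                   # contains the zero word
for u in C:
    for v in C:
        assert xor_words(u, v) in C                 # closed under addition
# minimum distance = minimum weight of a nonzero codeword
assert min(w.count("1") for w in C if w != "000") == 1
```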
ADVANTAGES :

-> It is the easiest and simplest technique to detect and correct errors.
-> Error Probability is reduced
-> Faster encoding and less storage
-> Error Patterns are easily describable

DISADVANTAGES :

-> The transmission bandwidth requirement is higher


-> Extra bits reduce the bit rate of the transmitter and also reduce its power efficiency
CYCLIC CODE
DEFINITION:
Cyclic codes are a type of linear block code used in coding theory to detect and correct
errors in digital communications. They are particularly useful in situations where errors
occur in a cyclic pattern, such as in data transmitted over a noisy channel.

PROPERTIES:
•Linearity property
•Cyclic shifting

EXAMPLE:
i) {0000, 0110, 1001, 1111} ii) {0000, 0101, 1010, 1111}
SYSTEMATIC CODEWORDS
The message or data is included in the encoded message.
Formula:
C(x) = x^(n-k) · m(x) + p(x), where p(x) = Remainder [x^(n-k) · m(x) / g(x)]

NON-SYSTEMATIC CODEWORDS
The message or data is not included directly in the encoded message.
Formula:
C(x) = m(x) · g(x)
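The systematic formula can be sketched with bit strings (MSB first). The generator g(x) = x^3 + x + 1 (key 1011) and the message 1001 are assumed example values:

```python
def mod2_remainder(bits, divisor):
    """Remainder of modulo-2 (XOR) polynomial division on bit strings."""
    reg = [int(b) for b in bits]
    d = [int(b) for b in divisor]
    for i in range(len(bits) - len(divisor) + 1):
        if reg[i]:
            for j in range(len(divisor)):
                reg[i + j] ^= d[j]
    return "".join(str(b) for b in reg[-(len(divisor) - 1):])

def systematic_encode(m, g):
    shifted = m + "0" * (len(g) - 1)      # x^(n-k) · m(x)
    p = mod2_remainder(shifted, g)        # p(x) = Remainder[x^(n-k)·m(x)/g(x)]
    return m + p                          # C(x) = x^(n-k)·m(x) + p(x)

cw = systematic_encode("1001", "1011")
assert cw == "1001110"                    # the message bits appear unchanged
assert mod2_remainder(cw, "1011") == "000"  # every codeword is divisible by g(x)
```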
GENERATOR AND PARITY-CHECK MATRICES:

Cyclic codes can be represented using generator and parity-check matrices. These matrices are
used to encode and decode messages transmitted. The generator matrix is used to generate the
code words from the message, while the parity-check matrix is used to check for errors in the
received code words.

GENERATOR MATRICES:

A generator matrix is a matrix that can be used to generate all the codewords of a cyclic code. It
is a k x n matrix, where k is the number of message bits and n is the length of the
code. In systematic form the generator matrix can be written as

G = [ Identity | Parity ]

where each row of the parity part is obtained as Remainder [x^(n-k) · x^i / g(x)].
PARITY-CHECK MATRICES:
A parity-check matrix is an (n-k) x n matrix that can be used to check whether a given vector is
a valid codeword of a cyclic code. If the generator matrix is in systematic form G = [ I | P ],
the parity-check matrix is H = [ P^T | I ]. The rows of
this matrix are the parity-check equations of the code.
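Under the assumption of a (7,4) code with g(x) = x^3 + x + 1 (an example generator, not one fixed by the text), the systematic G = [I | P] and H = [P^T | I] can be built and checked like this:

```python
def mod2_remainder(bits, divisor):
    """Remainder of modulo-2 (XOR) polynomial division on bit strings."""
    reg = [int(b) for b in bits]
    d = [int(b) for b in divisor]
    for i in range(len(bits) - len(divisor) + 1):
        if reg[i]:
            for j in range(len(divisor)):
                reg[i + j] ^= d[j]
    return reg[-(len(divisor) - 1):]

n, k, g = 7, 4, "1011"                      # assumed (7,4) cyclic code
G = []
for i in range(k):                          # G = [ I_k | P ], one row per unit message
    msg = ["0"] * k
    msg[i] = "1"
    parity = mod2_remainder("".join(msg) + "0" * (n - k), g)
    G.append([int(b) for b in msg] + parity)
P = [row[k:] for row in G]
H = [[P[c][r] for c in range(k)] +          # H = [ P^T | I_(n-k) ]
     [1 if c == r else 0 for c in range(n - k)]
     for r in range(n - k)]
# every row of G is a codeword, so H · c^T = 0 (mod 2) for each of them
for c in G:
    assert all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)
```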

APPLICATIONS OF CYCLIC CODES:


Wireless Communication
Cyclic codes are also used in wireless communication systems, such as Bluetooth and Wi-Fi. These codes
help to ensure reliable transmission and reception of data, even in the presence of noise and interference.
Error-Correcting Codes
Cyclic codes are a type of error-correcting code, which means they can detect and correct errors in data
transmission. These codes are used in a variety of applications, from satellite communication to digital
television broadcasting.
Data Storage
Cyclic codes are commonly used in data storage systems such as hard disk drives and flash memory. These
codes can detect and correct errors that may occur during data transfer and storage, ensuring data integrity.
PROBLEMS UNDER ENCODING AND
DECODING TECHNIQUES
ENCODING
▪ In computers, encoding is the process of arranging a sequence of characters (letters,
numbers, punctuation, and certain symbols) into a specialized format for efficient
transmission or storage.

▪ The purpose of encoding is to transform data so that it can be properly consumed by a


different type of system.

DECODING
▪ Decoding is the opposite process -- the conversion of an encoded format back into the
original sequence of characters.

▪ The purpose of decoding is to convert the encoded data into its actual form.
OSGOOD-SCHRAMM MODEL OF COMMUNICATION
ADVANTAGES OF OSGOOD –SCHRAMM MODEL

• It is a dynamic model.
• No separate sender or receiver.
• Assumes communication is circular in nature.

DISADVANTAGES

• Does not account for semantic noise


• Prone to misinterpretation
SHANNON –WEAVER TRANSMISSION MODEL
EXAMPLE OF SHANNON – WEAVER MODEL

A manager calls his assistant and says “I want to see you,” but there is noise in the
background, say a car horn. The assistant doesn’t hear the manager properly and
inquires, “What do you want?”.
Sender – Manager
Encoder – Telephone (Manager’s)
Channel – Cable
Noise – Car Horn
Decoder – Telephone (Assistant’s)
Receiver – Assistant
Feedback – question from assistant
HAMMING CODE

▪ The hamming code uses the number of redundant bits depending on the number of information
bits in the message.

▪ Let n be the number of information or data bits; then the number of redundant bits p is determined
by:

2^p >= n + p + 1

▪ Generated Hamming Code contains Data bits and parity bits.


Encode the binary data 10010 into an even-parity Hamming code

• Number of data bits n = 5


• Then the number of parity bits is p = 4, since 2^4 = 16 >= 5 + 4 + 1 = 10
