DC Practical Manual TE

The document outlines the Digital Communication Lab syllabus for TE ETC students, detailing various experiments to be conducted, including studies on BPSK, BFSK, and error control coding. It specifies the objectives, apparatus, and procedures for each experiment, emphasizing practical applications and theoretical concepts in digital communication. Additionally, it includes links to virtual labs for supplementary experiments and certification details for students who complete the lab work.


Class: TE (ETC); Subject: Digital Communication

INDEX

SUBJECT: DIGITAL COMMUNICATION LAB, (TE-ETC, SEM-V)

Exp. No. Date Title Pages Remark Signature
Group A (Any Two)
1 Study of BPSK transmitter & receiver using
suitable hardware setup/kit.
2 Study of BFSK transmitter & receiver using
suitable hardware setup/kit.
3 Study of Baseband receiver performance in
presence of Noise using suitable hardware
setup/kit.
Group B (Any Two)
4 Study of Error Control Coding using suitable
hardware setup/kit.
5 Study of DSSS transmitter and receiver using
suitable hardware setup/kit.
Group C (Any Three)
6 Simulation study of Performance of M-ary PSK.
7 Simulation study of Performance of M-ary
QAM.
8 Simulation Study of performance of BPSK
receiver in presence of noise.
Group D (Any Three)
9 Simulation study of Source Coding technique.
10 Simulation study of various Entropies and
mutual information in a communication system.
11 Simulation Study of Linear Block codes.
12 Simulation Study of cyclic codes.
Virtual LAB Links:
1. Link: https://www.etti.unibw.de/labalive/index/digitalmodulation/
2. Link: https://vlab.amrita.edu/index.php?sub=59&brch=163&sim=262&cnt=970
Note: Additional 2 experiments to be performed using the virtual labs.

This is to certify that Mr. / Ms. a student of TE ETC has


performed the above mentioned 12 experiments in the Digital Communication Laboratory of
the Rajiv Gandhi College of Engineering, Karjule Harya in the academic year 2023-24 for
Sem. I.

STAFF INCHARGE DATE HEAD OF DEPT


(Miss. Patil V. V.)


EXPERIMENT NO: 01 DATE:

TITLE: Study of BPSK transmitter & receiver using suitable hardware setup/kit.
OBJECTIVE : To study concept of BPSK Generation and Reception

APPARATUS: 1. PSK Generator / Receiver kit Keshtronica


2. DSO, Dual Channel, CRO 60 MHz

THEORY

BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the simplest form of phase
shift keying (PSK). It uses two phases which are separated by 180° and so can also be termed 2-
PSK. It does not particularly matter exactly where the constellation points are positioned, and in
this figure they are shown on the real axis, at 0° and 180°. This modulation is the most robust of all
the PSKs since it takes the highest level of noise or distortion to make the demodulator reach an
incorrect decision. It is, however, only able to modulate at 1 bit/symbol (as seen in the figure) and
so is unsuitable for high data-rate applications.

In the presence of an arbitrary phase-shift introduced by the communications channel, the


demodulator is unable to tell which constellation point is which. As a result, the data is often
differentially encoded prior to modulation.

BPSK is functionally equivalent to 2-QAM modulation.

The general form for BPSK follows the equation:

sn(t) = √(2Eb/Tb) · cos(2πfc·t + π(1 − n)),   n = 0, 1

This yields two phases, 0 and π. In the specific form, binary data is often conveyed with the
following signals:

s0(t) = √(2Eb/Tb) · cos(2πfc·t + π) = −√(2Eb/Tb) · cos(2πfc·t)   for binary "0"

s1(t) = √(2Eb/Tb) · cos(2πfc·t)   for binary "1"

Where fc is the frequency of the carrier-wave.

Hence, the signal-space can be represented by the single basis function

ϕ(t) = √(2/Tb) · cos(2πfc·t)

where 1 is represented by +√Eb·ϕ(t) and 0 is represented by −√Eb·ϕ(t). This assignment is, of course,
arbitrary.

The use of this basis function is shown at the end of the next section in a signal timing diagram.
The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator would produce.
The bit-stream that causes this output is shown above the signal (the other parts of this figure are
relevant only to QPSK).

BIT ERROR RATE

The bit error rate (BER) of BPSK in AWGN can be calculated as:

Pb = Q(√(2Eb/N0))   or equivalently   Pb = (1/2) · erfc(√(Eb/N0))

Since there is only one bit per symbol, this is also the symbol error rate.
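As a quick numerical illustration of this expression (a minimal MATLAB sketch, not part of the kit procedure; the Eb/N0 range is an assumed example), the theoretical BPSK bit error rate can be tabulated with the erfc function:

% Theoretical BPSK BER over AWGN (illustrative sketch)
EbN0dB = 0:2:10;                  % assumed Eb/N0 range in dB
EbN0 = 10.^(EbN0dB/10);           % convert to linear scale
Pb = 0.5*erfc(sqrt(EbN0));        % Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))
disp([EbN0dB.' Pb.'])             % tabulate Eb/N0 (dB) against bit error rate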

QUADRATURE PHASE SHIFT KEYING (QPSK)

Sometimes this is known as quaternary PSK, quadriphase PSK, 4-PSK, or 4-QAM. (Although the
root concepts of QPSK and 4-QAM are different, the resulting modulated radio waves are exactly
the same.) QPSK uses four points on the constellation diagram, equispaced around a circle. With
four phases, QPSK can encode two bits per symbol, shown in the diagram with Gray coding to
minimize the bit error rate (BER), which is sometimes misperceived as twice the BER of BPSK.

The mathematical analysis shows that QPSK can be used either to double the data rate compared
with a BPSK system while maintaining the same bandwidth of the signal, or to maintain the data
rate of BPSK but halve the bandwidth needed. In this latter case, the BER of QPSK is exactly the
same as the BER of BPSK, and believing otherwise is a common confusion when considering or
describing QPSK.

Given that radio communication channels are allocated by agencies such as the Federal
Communications Commission giving a prescribed (maximum) bandwidth, the advantage of QPSK
over BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth compared to
BPSK - at the same BER. The engineering penalty that is paid is that QPSK transmitters and
receivers are more complicated than the ones for BPSK. However, with modern electronics
technology, the penalty in cost is very moderate.

As with BPSK, there are phase ambiguity problems at the receiving end, and differentially encoded
QPSK is often used in practice. The following diagram shows the BPSK receiver.

Procedure:-

1] Connect O/P of pattern Gen to ‘OR’ Gate.

2] Connect O/P of OR gate to I/P of transmitter i.e.(i/p of MULT. Block).

3] Connect O/P of ‘MULT’ Block i.e. BPSK O/P to I/P of 1496 Sq. ckt.

4] Connect O/P of 1496 Sq. ckt to I/P of B.P. Filter.

5] Connect O/P of BP Filter to I/P of ÷2 N/W.

6] Connect O/P of ÷2 N/W to I/P1 of phase comparator.

7] Connect BPSK O/P to I/P2 of phase comparator.

8] Switch on the power supply.

9] Observe signals at different test points together with the I/P bit pattern.

10] Observe filter O/P & COMP. Block O/P. The O/P of COMP. Block is receiver detected O/P.


Study of BPSK transmitter & receiver using hardware kit.


EXPERIMENT NO: 02 DATE:

TITLE: Study of FSK transmitter & receiver using suitable hardware setup/kit.

OBJECTIVE : 1. To study concept of FSK Generation


2. To study concept of FSK Reception
APPARATUS : Sr. No. Apparatus Range
1. FSK ST2106 & 07
2. DSO Dual Channel, CRO 60 MHz
3. 8 bit Digital Data Generator ST2101
THEORY
Frequency Shift Keying: In frequency shift keying, the carrier frequency is shifted in steps (i.e.
from one frequency to another) corresponding to the digital modulation signal. If the higher
frequency is used to represent a data '1' & lower frequency a data '0', the resulting Frequency shift
keying waveform appears as shown in figure.
Thus
Data = 1 high frequency
Data = 0 low frequency

FSK Waveform
On a closer look at the FSK waveform, it can be seen that it can be represented as the sum of two
ASK waveforms.
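This view can be checked with a short MATLAB sketch that builds an FSK waveform from two ASK waveforms; the bit pattern, bit rate, carrier frequencies and sampling rate below are assumed illustration values, not parameters of the kit:

% FSK waveform as the sum of two ASK waveforms (illustrative sketch)
fs = 10e3; f1 = 1e3; f0 = 500;      % assumed sampling and carrier frequencies (Hz)
Rb = 100;                           % assumed bit rate (bits/s)
bits = [1 0 1 1 0];                 % example data pattern
t = 0:1/fs:length(bits)/Rb - 1/fs;  % time axis
d = bits(floor(t*Rb)+1);            % NRZ data signal sampled on the time axis
ask1 = d .* cos(2*pi*f1*t);         % ASK: high carrier keyed by the data
ask0 = (1-d) .* cos(2*pi*f0*t);     % ASK: low carrier keyed by the inverted data
fsk = ask1 + ask0;                  % FSK = sum of the two ASK waveforms
plot(t, fsk); xlabel('time (s)'); ylabel('amplitude'); title('FSK waveform');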


FSK Demodulation: The demodulation of the FSK waveform can be carried out by a phase locked loop.
As known, the phase locked loop tries to 'lock' to the input frequency. It achieves this by generating
corresponding output voltage to be fed to the voltage controlled oscillator, if any frequency
deviation at its input is encountered. Thus the PLL detector follows the frequency changes &
generates proportional output voltage. The output voltage from PLL contains the carrier
components. Therefore the signal is passed through the low pass filter to remove them. The
resulting wave is too rounded to be used for digital data processing. Also, the amplitude level may
be very low due to channel attenuation. The signal is 'Squared Up' by feeding it to the voltage
comparator. Figure shows the functional blocks involved in FSK demodulation.

FSK Bandwidth: Since the amplitude change in the FSK waveform does not matter, this modulation
technique is very reliable even in noisy and fading channels. But there is always a price to be paid to
gain that advantage. The price in this case is widening of the required bandwidth. The bandwidth
increase depends upon the two carrier frequencies used and the digital data rate. Also, for a given
data rate, the higher the frequencies and the more they differ from each other, the wider the required
bandwidth. The bandwidth required is at least double that of ASK modulation. This means that
fewer communication channels are available in a given band of frequencies.


PROCEDURE:
1. Make the connection according to the circuit diagram.
2. Connect Binary Data Generator to the FSK modulator with desired data pattern output to CRO.
3. Connect FSK modulator output on CRO.
4. Now demodulate the FSK modulator output at receiver side.
5. Find the transmitted data pattern on CRO.


EXPERIMENT NO: 03 DATE:

TITLE: Study of Baseband receiver performance in presence of Noise using suitable


hardware setup/kit.

OBJECTIVE : To study a matched filter and its error probability calculation.

APPARATUS: Matched Filter study kit

THEORY: A data transmission system using binary encoding transmits a sequence of 1s and 0s. These bits
may be represented in a number of ways. In a PSK system we transmit an in-phase sine wave for logic 1
and a 180 degree out-of-phase wave for logic 0. When this data is received it is corrupted by noise, and
there is a finite probability that the receiver will make an error in determining logic 1 and logic 0. To
reduce the probability of error we use the concept of correlators or a matched filter as the optimum filter.
In a matched filter we integrate the input data for one bit period. At the end of integration, if the output is
more than a certain level we decide that the bit is logic 1, otherwise logic 0. It is instructive to note that the
integrator filters signal and noise such that the signal voltage varies linearly with time while the noise increases
more slowly.
In the circuit 8-bit data is given to parallel to serial converter. Bit duration of this data can
be varied by basic clock output. Signal coming out of parallel to serial converter is fed to PSK
Generator. Amplitude of output of PSK Generator can be varied by using nearby pot. Now we have
not used any filter after PSK gen. so noise generated by multiplication process is there with the
desired PSK signal. Now this PSK signal is fed to the receiver.
Output of filter is fed to serial to parallel converter. Output of receiver can be observed on
LED’s provided on the panel.
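The integrate-and-dump idea described above can also be sketched in MATLAB (a minimal simulation, not a model of this particular kit; the samples per bit and noise level are assumed values):

% Integrate-and-dump (matched filter) detection of bipolar bits (illustrative sketch)
Ns = 100;                            % assumed samples per bit period
bits = randi([0 1], 1, 1000);        % random data bits
tx = repelem(2*bits-1, Ns);          % bipolar NRZ waveform (+1 / -1)
sigma = 5;                           % assumed noise standard deviation
rx = tx + sigma*randn(size(tx));     % received signal with additive Gaussian noise
rxMat = reshape(rx, Ns, []);         % one column per bit period
stat = sum(rxMat, 1);                % integrate (sum) over each bit period
bhat = stat > 0;                     % dump and decide against a zero threshold
Pe = mean(bhat ~= bits);             % measured error probability
fprintf('Measured error probability = %f\n', Pe);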

PROCEDURE:
1. Switch on the power supply.
2. Observe clock output on CRO & connect it to i/p of control block.
3. Set bit pattern as 00100101 using dip switch. “1”on the switch is LSB. When s/w is
to on position o/p is “1”.
4. Observe o/p of p/s block on CRO Measure bit period by varying pot nearby clock.
Make bit period maximum.
5. Connect o/p of p/s block to i/p of PSK gen. Observe o/p of PSK gen.
6. Keep pot nearby PSK gen. to such a position that o/p of PSK gen. will be 500 mVp-
p in amplitude
7. Observe the o/p of noise gen. o/p & set min. amplitude (i.e. 0V).
8. Connect o/p of noise gen. to i/p2 of adder block & PSK generator O/P to i/p 1 adder
block.
9. Connect O/P of adder block to i/p of matched filter observe the o/p of matched filter
10. Keep S/W above receiver latch to LE
11. Observe O/P on LEDs the same data should be at o/p.
12. Now go on varying clock frequency slowly increase the level, every time measure
the bit period. Observe at what bit period 1st error comes. Also observe that if you
increase amplitude of PSK gen. error vanishes.
13. By keeping pot of amplitude of PSK generator to lowest position & by varying clock
period measure no. of errors vs bit period. To observe stable readings make switch

above receiver latch to ground position. For particular bit period take no of readings
by moving this s/w from LE to ground.
14. Plot the graph of bit period Vs error probability.
15. In 8 bits if one bit is in error then error probability is 12.5%. If we increase no. of
bits more accurate results are obtained
Practical Kit Front Panel


EXPERIMENT NO: 04 DATE:

TITLE: Study of Error Control Coding using suitable hardware setup/kit.


OBJECTIVE: To study Hamming code for error detection and correction.

APPARATUS: Digital Multimeter, SB 224 Error Detection Kit (Make: SINCOM)

Theory: Error Detection and Correction

In mathematics, computer science, telecommunication, and information theory, error detection and
correction has great practical importance in maintaining data (information) integrity across noisy
channels and less-than-reliable storage media.

Definitions of error detection and error correction:

• Error detection is the ability to detect the presence of errors caused by noise or other impairments
during transmission from the transmitter to the receiver.

• Error correction is the additional ability to reconstruct the original, error-free data. There are two
basic ways to design the channel code and protocol for an error correcting system:

• Automatic repeat-request (ARQ): The transmitter sends the data and also an error detection code,
which the receiver uses to check for errors, and requests retransmission of erroneous data. In many
cases, the request is implicit; the receiver sends an acknowledgement (ACK) of correctly received
data, and the transmitter re-sends anything not acknowledged within a reasonable period of time.

• Forward error correction (FEC): The transmitter encodes the data with an error correcting code
(ECC) and sends the coded message. The receiver never sends any messages back to the
transmitter. The receiver decodes what it receives into the "most likely" data. The codes are
designed so that it would take an "unreasonable" amount of noise to trick the receiver into
misinterpreting the data.

It is possible to combine the two, so that minor errors are corrected without retransmission, and
major errors are detected and a retransmission requested. The combination is called hybrid
automatic repeat-request.

Error detection schemes

In telecommunication, a redundancy check is extra data added to a message for the purposes of
error detection. Several schemes exist to achieve error detection, and generally they are quite
simple. All error detection codes (which include all error-detection-and-correction codes) transmit
more bits than were in the original data. Most codes are "systematic": the transmitter sends a fixed
number of original data bits, followed by fixed number of check bits (usually referred to as
redundancy in the literature) which are derived from the data bits by some deterministic algorithm.
The receiver applies the same algorithm to the received data bits and compares its output to the
received check bits; if the values do not match, an error has occurred at some point during the
transmission. In a system that uses a "nonsystematic" code, such as some raptor codes, data bits are
transformed into at least as many code bits, and the transmitter sends only the code bits.


Parity schemes

A parity bit is an error detection mechanism that can only detect an odd number of errors. The
stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity
bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity;
odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate
the error, and even correct it if the error affects a single bit: this is the principle behind the
Hamming code.

There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of
bit errors (one, three, five, and so on). If even numbers of bits (two, four, six and so on) are flipped,
the parity bit appears to be correct, even though the data is corrupt.

Hamming distance based checks

If we want to detect d bit errors in an n bit word we can map every n bit word into a bigger n+d+1
bit word so that the minimum Hamming distance between each valid mapping is d+1. This way, if
one receives a n+d+1 word that doesn't match any word in the mapping (with a Hamming distance
x <= d+1 from any word in the mapping) it can successfully detect it as an erroneous word. Even
more, d or fewer errors will never transform a valid word into another, because the Hamming
distance between each valid word is at least d+1, and such errors only lead to invalid words that are
detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors successfully using
the above method on every n bit word. In fact, we can detect a maximum of m*d errors if every n
word is transmitted with maximum d errors.

Error correction: Automatic repeat request

Automatic Repeat-request (ARQ) is an error control method for data transmission which makes use
of error detection codes, acknowledgment and/or negative acknowledgement messages and
timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the
receiver to the transmitter to indicate that it has correctly received a data frame. Usually, when the
transmitter does not receive the acknowledgment before the timeout occurs (i.e. within a reasonable
amount of time after sending the data frame), it retransmits the frame until it is either correctly
received or the error persists beyond a predetermined number of retransmissions.

A few types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ and Selective Repeat
ARQ. Hybrid ARQ is a combination of ARQ and forward error correction.

Error-correcting code

An error-correcting code (ECC) or forward error correction (FEC) code is redundant data that is
added to the message on the sender side. If the number of errors is within the capability of the code
being used, the receiver can use the extra information to discover the locations of the errors and
correct them. Since the receiver does not have to ask the sender for retransmission of the data, a
back-channel is not necessary in forward error correction, so it is suitable for simplex
communication such as broadcasting. Error correcting codes are used in computer data storage, for
example CDs, DVDs and in dynamic RAM. It is also used in digital transmission, especially
wireless communication, since wireless communication without FEC often would suffer from


packet-error rates close to 100%, and conventional automatic repeat request error control would
yield very low goodput.

Hamming Code - Error Detection and Error Correction

The hamming code technique, which is an error-detection and error-correction technique, was
proposed by R.W. Hamming. Whenever a data packet is transmitted over a network, there are
possibilities that the data bits may get lost or damaged during transmission.

Let's understand the Hamming code concept with an example: Let's say you have received a 7-bit
Hamming code which is 1011011.

First, let us talk about the redundant bits. The redundant bits are some extra binary bits that are not
part of the original data, but they are generated & added to the original data bit. All this is done to
ensure that the data bits don't get damaged and if they do, we can recover them.

Now the question arises, how do we determine the number of redundant bits to be added? We use
the formula 2^r >= m + r + 1, where r = number of redundant bits and m = number of data bits.

From the formula we can make out that there are 4 data bits and 3 redundancy bits, referring to the
received 7-bit hamming code.

Hamming Code: Error Detection

As we go through the example, the first step is to identify the bit position of the data & all the bit
positions which are powers of 2 are marked as parity bits (e.g. 1, 2, 4, 8, etc.). The following image
will help in visualizing the received hamming code of 7 bits.

First, we need to detect whether there are any errors in this received hamming code.

Step 1: For checking parity bit P1, use the check one and skip one method, which means: start
from P1, skip P2, take D3, skip P4, take D5, skip D6 and take D7. This way we will have the
following bits.

As we can observe, the total number of 1's is odd, so we write the value of parity check P1 =
1. This means an error is present.

Step 2: Check for P2 but while checking for P2, we will use check two and skip two method,
which will give us the following data bits. But remember since we are checking for P2, so we have
to start our count from P2 (P1 should not be considered).


As we can observe, the number of 1's is even, so we write the value of P2 = 0. This
means there is no error in this group.

Step 3: Check for P4, but while checking for P4 we will use the check four and skip four method,
which will give us the following data bits. But remember, since we are checking for P4, we have
to start our count from P4 (P1 & P2 should not be considered).

As we can observe, the number of 1's is odd, so we write the value of P4 = 1. This
means an error is present.

So, from the above parity analysis, P1 & P4 are not equal to 0, so we can clearly say that the
received hamming code has errors. The parity checks P0, P1, and P2 are obtained by the following equations,

P0 = D1 ⊕ D2 ⊕ D4

P1 = D1 ⊕ D3 ⊕ D4

P2 = D2 ⊕ D3 ⊕ D4

Hamming Code: Error Correction

Since we found that the received code has an error, we must now correct it. To correct the error,
first form the error word E from the parity checks, E = P4 P2 P1 = 101.

Now we have to determine the decimal value of this error word 101, which is 5 (2^2 * 1 + 2^1 * 0 + 2^0
* 1 = 5). We get E = 5, which states that the error is in the fifth bit position. To correct it, just invert the
fifth bit. So the corrected code word will be 1001011.
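The same parity checks and correction can be sketched in MATLAB for the received word used above (the position ordering P1, P2, D3, P4, D5, D6, D7 counted from the right of the written word is an assumption consistent with the parity values obtained above):

% Hamming (7,4) syndrome check and single-bit correction for 1011011 (illustrative sketch)
rxWord = '1011011';                   % received code word as written in the example
r = double(fliplr(rxWord)) - '0';     % r(1)=P1, r(2)=P2, r(3)=D3, ..., r(7)=D7 (assumed ordering)
p1 = mod(r(1)+r(3)+r(5)+r(7), 2);     % check P1: positions 1,3,5,7
p2 = mod(r(2)+r(3)+r(6)+r(7), 2);     % check P2: positions 2,3,6,7
p4 = mod(r(4)+r(5)+r(6)+r(7), 2);     % check P4: positions 4,5,6,7
errPos = p4*4 + p2*2 + p1;            % error word E = P4 P2 P1 as a decimal position
if errPos > 0
    r(errPos) = 1 - r(errPos);        % invert the bit in error
end
corrected = fliplr(char(r + '0'));    % back to the original writing order
fprintf('E = %d, corrected code word = %s\n', errPos, corrected);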


Procedure:

1. Study the circuit provided on the Front panel of Kit.


2. Switch on power supply.
3. Apply I/P Data to Data Transmitter.
4. Connect parity and data O/P of data transmitter to data receiver.
5. Create an error in a data bit using the error switch given on the kit.
6. Observe the data bit before correction and after correction.


Experiment No: 05 Date:

TITLE: Study of DSSS transmitter and receiver using suitable hardware setup/kit.

OBJECTIVE: To study direct sequence spread spectrum BPSK modulation and demodulation
technique.

APPARATUS :
Sr. No. Apparatus Range
1. DSSS kit
2. DSO Dual Channel, 60 MHz

THEORY: Spread Spectrum techniques were and are still used in military applications,
because of their high security, and their less susceptibility to interference from other parties. In this
technique, multiple users share the same bandwidth, without significantly interfering with each
other. The spreading waveform is controlled by a Pseudo-Noise (PN) sequence, which is a binary
random sequence. This PN is then multiplied with the original baseband signal, which has a lower
frequency, yielding a spread waveform that has noise-like properties. In the receiver, the
opposite happens: the pass band signal is first demodulated, and then despread using the
same PN waveform. An important factor here is the synchronization between the two generated
sequences. In this report, I will try to illustrate the design process of such a system, and then come
up with a full circuit design.
Pseudo Noise (PN)
As we mentioned earlier, PN is the key factor in DS-SS systems. A Pseudo Noise or
Pseudorandom sequence is a binary sequence with an autocorrelation that resembles, over a period,
the autocorrelation of a random binary sequence. It is generated using a Shift Register, and a
Combinational Logic circuit as its feedback. The Logic Circuit determines the PN words. In this
design the so-called Maximum-Length PN sequence is used. It is a sequence of period 2^m − 1
generated by a linear feedback shift register, which has feedback logic of only modulo-2 adders
(XOR Gates). Some properties of the Maximum-Length sequences are:
In each period of a maximum–length sequence, the number of 1s is always one more than
the number of 0s. This is called the Balance property.

Circuit Diagram:


Among the runs of 1s and 0s in each period of such sequence, one–half the runs of each
kind are of length one, one–fourth are of length two, one–eighth are of length three, and so on. This
is called the Run property.
The autocorrelation function of such a sequence is periodic and binary valued. This is called
the Correlation property.
A block diagram of a Maximum-Length PN generator is shown in fig. with a 4-bit register
and one modulo-2 adder. This has a period of 2^4 − 1 = 15, and it was the configuration used in this
design as we will show later.
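A small MATLAB sketch of such a 4-bit maximal-length generator is given below; the feedback taps (x^4 + x^3 + 1) and the initial register state are assumptions for illustration and may differ from the kit's wiring:

% Maximum-length PN sequence from a 4-bit linear feedback shift register (illustrative sketch)
reg = [1 0 0 0];                     % assumed non-zero initial state of the shift register
pn = zeros(1, 15);                   % period of a 4-bit m-sequence is 2^4 - 1 = 15
for n = 1:15
    pn(n) = reg(4);                  % output chip is taken from the last stage
    fb = xor(reg(3), reg(4));        % modulo-2 addition of the assumed taps 3 and 4
    reg = [fb reg(1:3)];             % shift and insert the feedback bit
end
disp(pn)                             % contains eight 1s and seven 0s (balance property)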
Direct Sequence - Spread Spectrum

In Direct Sequence-Spread Spectrum the baseband waveform is multiplied by the PN


sequence. The PN is produced using a PN generator. Frequency of the PN is higher than the Data
signal. This generator consists of a shift register, and a logic circuit that determines the PN signal.
After spreading, the signal is modulated and transmitted. The most widely used modulation scheme is
BPSK (Binary Phase Shift Keying). The equation that represents this DS-SS signal is shown in
eq. (1.1), and the block diagram is shown in fig.

s(t) = A · m(t) · p(t) · cos(2πfc·t + θ)        ... (1.1)

where m(t) is the data sequence, p(t) is the PN spreading sequence, fc is the carrier
frequency, and θ is the carrier phase angle at t = 0. Each symbol in m(t) represents a data symbol and
has a duration of Ts . Each pulse in p(t) represents a chip, and has a duration of Tc. The transitions
of the data symbols and chips coincide such that the ratio Ts to Tc is an integer. The waveforms
m(t) and p(t) are shown in fig.(1.3). Here we notice the higher frequency of the spreading signal
p(t). The resulting spread signal is then modulated using the BPSK scheme. The carrier frequency
fc should have a frequency at least 5 times the chip frequency p(t).
In the demodulator section, we simply reverse the process. We demodulate the BPSK
signal first, low pass filter the signal, and then despread the filtered signal to obtain the original
message. The process is described by the following equations:

s(t) · cos(2πfc·t + θ) = (A/2) · m(t) · p(t) + (A/2) · m(t) · p(t) · cos(4πfc·t + 2θ)        ... (1.2)

[(A/2) · m(t) · p(t)] · p(t) = (A/2) · m(t),   since p(t) · p(t) = 1                          ... (1.3)

Circuit Diagram:

As shown in eq.(1.2) and eq.(1.3) when we multiply two cosine signals together, we will
obtain two expressions, one of which has twice the frequency of the original message. And this part
can be removed by a LPF. The output is mss(t) as shown in fig.(1.4). This design is based on
Coherent Detection BPSK, so we don’t have to worry about carrier synchronization issues.
As for the PN sequence in the receiver, it was mentioned earlier that it should be an exact
replica of the one used in the transmitter, with no delay, because any misalignment might cause severe errors in the
incoming message. After the signal gets multiplied with the PN sequence, the signal despreads, and
we obtain the original bit signal m(t), that was transmitted. The block diagram of the receiver is
shown in fig. (1.4).
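The whole chain (spreading, BPSK modulation, coherent demodulation, low pass filtering and despreading) can be sketched in a few MATLAB lines; the bit rate, chip rate, carrier frequency and PN chips below are assumed illustration values, with the carrier kept at least 5 times the chip frequency as recommended above:

% DS-SS with BPSK: spread, modulate, demodulate, despread (illustrative sketch)
fs = 200e3; fc = 50e3;                   % assumed sampling and carrier frequencies (Hz)
Rb = 1e3; Rc = 10e3;                     % assumed bit rate and chip rate (Tc = Tb/10)
bits = [1 0 1 1];                        % example message bits
pn = randi([0 1], 1, Rc/Rb);             % random chips stand in for one PN period per bit
m = repelem(2*bits-1, fs/Rb);            % bipolar message waveform m(t)
p = repmat(repelem(2*pn-1, fs/Rc), 1, length(bits));   % bipolar spreading waveform p(t)
t = (0:length(m)-1)/fs;
tx = m .* p .* cos(2*pi*fc*t);           % transmitted DS-SS BPSK signal
bb = tx .* (2*cos(2*pi*fc*t));           % coherent demodulation; 2fc term averaged out below
chips = mean(reshape(bb, fs/Rc, []), 1); % average over each chip (acts as a low pass filter)
desp = chips .* repmat(2*pn-1, 1, length(bits));       % multiply by the same PN sequence
bhat = mean(reshape(desp, Rc/Rb, []), 1) > 0;          % integrate over each bit and decide
disp(bhat)                               % recovered bits (should equal the message bits)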

PROCEDURE:

1) Switch on the power supply.


2) Observe o/p of PN sequence generator, P1, P2, on CRO. i.e. P1 = 10000, P2 = 10100.
3) Connect o/p of PN sequence generator to PN sequence i/p of transmitter multiplier block.
4) Connect either P1 or P2 to pattern i/p of transmitter multiplier block.
5) Observe o/p of transmitter multiplier block which looks like random signal.
6) Connect o/p of transmitter multiplier block to i/p of PSK transmitter.
7) Observe o/p of PSK transmitter block together with carrier of PSK transmitter on XY mode
of CRO. You can observe two cross lines corresponding to 0 &180 phases i.e. BPSK signal.
8) Connect PSK transmitter o/p to i/p of 1496 squaring circuits & i/p2 of PSK receiver.


9) Observe o/p of 1496 squaring circuits & o/p of band pass filter. Adjust it properly using pot
provided near B.P. filter section.
10) Connect o/p frequency divider to i/p 1 of PSK receiver.
11) Observe o/p of PSK receiver & o/p of filter & comparator.
12) Connect o/p of filter & comparator to receiver multiplier block & also connect PN sequence
to receiver multiplier block.
13) Observe o/p of receiver multiplier block which is our transmitted pattern.

OBSERVATIONS AND GRAPHS:


1) Observe & plot o/p of PN sequence generator & pattern P1, P2.
2) Observe & plot o/p of transmitter multiplier block.
3) Observe o/p of PSK transmitter block together with carrier of PSK transmitter on XY mode
of CRO.
4) Observe & plot o/p of 1496 squaring circuits & o/p of band pass filter.
5) Observe & plot o/p of PSK receiver.
6) Observe & plot o/p of filter & comparator.


EXPERIMENT NO: 06 DATE:

TITLE: Simulation study of Performance of M-ary PSK.


OBJECTIVE : Simulation study of Performance of M-ary PSK.

APPARATUS : 1. MATLAB Software

2. PC

M-ary PSK

M-ary Encoding: The word binary denotes two bits. M simply denotes a digit that corresponds to the
number of conditions, levels, or combinations possible for a given number of binary variables. This
is the type of digital modulation technique used for data transmission in which, instead of one bit,
two or more bits are transmitted at a time. As a single signal is used for multiple bit transmission,
the channel bandwidth is reduced.

M-ary Equation

If a digital signal can take four conditions, such as four voltage levels, frequencies, phases or
amplitudes, then M = 4. The number of bits necessary to produce a given number of conditions is
expressed mathematically as

N=log2M, Where,

N is the number of bits necessary. M is the number of conditions, levels, or combinations possible
with N bits.
The above equation can be re-arranged as 2^N = M. For instance, with two bits, 2^2 = 4 conditions are
possible. In BPSK we transmit each bit individually depending on whether b(t) is logic 0 or logic 1.
We transmit one or another sinusoid for the bit time Tb, the sinusoids differing in phase by
2π/2 = 180°. In QPSK two bits are grouped depending on which of the four two-bit words develops. We
transmit one or another of four sinusoids of duration 2Tb, the sinusoids differing in phase by
2π/4 = 90°. The scheme can be extended to N bits. For this, N bits are grouped into N-bit
symbols extending over the time NTb.
There are 2^N = M possible symbols, which defines M-ary PSK.

M-ary PSK: The M symbols are represented by sinusoids of duration NTb = Ts. The M symbols
differ from one another by the phase,

sm(t) = √(2Es/Ts) · cos(2πfc·t + 2πm/M),   m = 0, 1, ..., M−1        ... (1)

The waveforms of eq. (1) are represented by the dots in the signal-space figure. The coordinate axes are
orthogonal waveforms.


M-ary PSK Modulator: At the transmitter the bit stream is applied to serial to parallel converter

 The converter stores N bits of a symbol.


 These N bits are then presented at once on N output lines in parallel of the converter.
 The converter output remains unchanged for the duration NTb of a symbol, during which time the
converter is assembling a new group of N bits.
 For each symbol time, the converter output is updated.
 The converter output is applied to a D/A converter.
 The D/A converter output is voltage which depends on the symbol Sm (m=0,1,….M-1).
 Finally D/A converter output is applied as a control input to a special type of constant
amplitude sinusoidal signal source whose phase Фm is determined by V(Sm).
 The output is a fixed-amplitude sinusoidal waveform whose phase has a one-to-one
correspondence to the assembled N-bit symbol.
 This phase can change once per symbol time.


M-ary PSK Demodulator: The demodulation technique employed is Synchronous demodulation

• The scheme for generating the carrier at the demodulator is known as the carrier recovery circuit.
The carrier recovery circuit consists of a device to raise the received signal to the Mth power,
• a BPF whose pass band is centred around Mf0, and a frequency divider.
• Only the waveforms of the signals at the outputs of the carrier recovery circuit are relevant, not
their amplitudes.
 The recovered carrier is then multiplied with the received signal, which is then applied to
integrators.
 The integrators extend their integration over the same symbol time period; again a bit synchronizer is
needed.
 The integrator outputs are voltages whose amplitudes are proportional to TsPe and TsP0
respectively and change at the symbol rate.
 Finally TsPe and TsP0 are applied to A/D converter which reconstructs the digital N bit
signal which constitutes the transmitted signal.

% MATLAB program for M ary PSK BER vs SNR


clc;
clear all;
close all;
k=input('enter the no of bits');
M = 2^k; %size
N = k*10^3; % number of symbols
% k = log2(M); % b/symbol
a = [0:M-1]*2*pi/M; % phase values
SNRdB = [0:2:20]; % SNR range
sdB = SNRdB + 10*log10(k);
% binary to Gray code conversion
b = [0:M-1];
map =bitxor(b,floor(b/2));
[tt ind] = sort(map);
c = zeros(1,N);
for i = 1:length(SNRdB)

bits = rand(1,N*k,1)>0.5; % random 1's and 0's


% binary-to-decimal conversion matrix
bin2DecMatrix = ones(N,1)*(2.^((k-1):-1:0)) ;
shape= reshape(bits,k,N).';
% k-bit binary groups to decimal symbol values
G= (sum(shape.*bin2DecMatrix,2)).';
% Gray code mapping
dec = ind(G+1)-1; %
ph= dec*2*pi/M;
% modulation
d= exp(1i*ph);
s = d;
% AWGN
n = 1/sqrt(2)*(randn(1,N) + 1i*randn(1,N));
% receiver
r = s + 10^(-sdB(i)/20)*n;
% demodulation
e = angle(r);
% phase
e(e<0) = e(e<0) + 2*pi;
c = 2*pi/M*round(e/(2*pi/M)) ;
c(c==2*pi) = 0;
cd = round(c*M/(2*pi));
% Decimal to Gray conversion
f = map(cd+1);
cb = dec2bin(f,k) ;
cb = cb.';
cb = cb(1:end).';
cb = str2num(cb).' ;
% errors
Err(i) = size(find(bits- cb),2);
end
sBer=Err/(N*k);
tBer =(1/k)*erfc(sqrt(k*10.^(SNRdB/10))*sin(pi/M)); % theoretical M-PSK BER approximation
% plot
figure
semilogy(SNRdB,tBer,'rs-','LineWidth',2);
hold on
grid on
semilogy(SNRdB,sBer,'kx-','LineWidth',2);
legend('theory', 'simulation');
xlabel('SNR dB')
ylabel('Bit Error Rate')
title('BER VS SNR M-PSK')


Graph: M-PSK, BER VS SNR

Conclusion:


EXPERIMENT NO: 07 DATE:

TITLE: Simulation study of Performance of M-ary QAM.


OBJECTIVE : Simulation study of Performance of M-ary QAM.

APPARATUS: 1. MATLAB Software


2. PC

Quadrature Amplitude Modulation

It is possible to combine ASK, FSK and PSK; in particular, ASK and PSK can be combined to create Quadrature
amplitude modulation (QAM). Quadrature Amplitude Modulation or QAM (pronounced “kwam”)
is a digital modulation technique that uses the data to be transmitted to vary both the amplitude and
the phase of a sinusoidal waveform, while keeping its frequency constant. QAM is a natural
extension of binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK), both of
which vary only the phase of the waveform. The number of different waveforms (unique
combinations of amplitude and phase) used in QAM depends on the modem and may vary with the
quality of the channel. With 16-QAM, for example, 16 different waveforms are available. 64-QAM
and 256-QAM are also common. In all cases, each different waveform, or amplitude-phase
combination, is a modulation symbol that represents a specific group of bits. In the modulator,
consecutive data bits are grouped together four at a time to form quad bits and each quad bit is
represented by a different modulation symbol. In the demodulator, each different modulation
symbol in the received signal is interpreted as a unique pattern of 4 bits. Figure 7.1 shows all 16
QAM modulation symbols superposed on the same axes. Four different colours are used in the
figure and each colour is used for four different waveforms. Each waveform has a different
combination of phase and amplitude.

Figure 7.1. All QAM modulation symbols for 16-QAM.

QAM constellations

Figure 7.2 shows the constellation diagram for 16-QAM. The constellation
diagram is a pictorial representation showing all possible modulation symbols (or signal states) as a
set of constellation points. The position of each point in the diagram shows the amplitude and the
phase of the corresponding symbol. Each constellation point corresponds (is mapped) to a
different quadbit.



Figure 7.2. 16-QAM constellation (4 bits per modulation symbol).

Although any mapping between quadbits and constellation points would work under ideal
conditions, the mapping usually uses a Gray code to ensure that the quadbits corresponding to
adjacent constellation points differ only by one bit. This facilitates error correction since a small
displacement of a constellation point due to noise will likely cause only one bit of the demodulated
quadbit to be erroneous.
A typical QAM modulator
A QAM signal can be generated by independently amplitude-modulating two carriers in quadrature
(cos ωt and sin ωt), as shown in Figure 7.3.

Figure 7.3.Simplified block diagram of a QAM modulator.


The Serial to Parallel Converter groups the incoming data into quadbits. Each time four bits have
been clocked serially into its buffer, the Serial to Parallel Converter outputs one quadbit in parallel
at its four outputs. The starting point for grouping bits into quadbits is completely arbitrary.
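As a small illustration of the quadbit-to-constellation mapping described above (a sketch only; the exact bit assignment used by a particular modem may differ), the 16-QAM constellation can be built from two Gray-coded 4-level amplitudes, one for the cosine carrier and one for the sine carrier:

% 16-QAM constellation built from two Gray-coded 4-level axes (illustrative sketch)
levels = [-3 -1 1 3];                     % four amplitude levels per axis
grayMap = [0 1 3 2];                      % assumed Gray-coded 2-bit value for each level
pts = zeros(16,1); labels = cell(16,1);
idx = 1;
for iI = 1:4                              % in-phase (cosine) amplitude index
    for iQ = 1:4                          % quadrature (sine) amplitude index
        pts(idx) = levels(iI) + 1i*levels(iQ);
        labels{idx} = [dec2bin(grayMap(iI),2) dec2bin(grayMap(iQ),2)];  % quadbit label
        idx = idx + 1;
    end
end
plot(real(pts), imag(pts), 'o'); grid on;
text(real(pts)+0.15, imag(pts), labels);  % adjacent points differ by one bit per axis
xlabel('In-phase'); ylabel('Quadrature'); title('16-QAM constellation with Gray-coded quadbits');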


% MATLAB Code for QAM BER vs SNR


clear all;
close all;
N = 4*10^3; % number of symbols
M = 16; % size
k = log2(M); % bits/symbol
% for 16-QAM
Re = [-(2*sqrt(M)/2-1):2:-1 1:2:2*sqrt(M)/2-1];
Im = [-(2*sqrt(M)/2-1):2:-1 1:2:2*sqrt(M)/2-1];
k_QAM = 1/sqrt(10);
bdB = 3:1:13; % SNR range
sdB = bdB + 10*log10(k);
% binary to gray code
a = [0:k-1];
map = bitxor(a,floor(a/2));
[tt ind] = sort(map);
for i = 1:length(bdB)
c = rand(1,N*k,1)>0.5; % random 1's and 0's
d = reshape(c,k,N).';
bd = ones(N,1)*(2.^((k/2-1):-1:0)) ; % conversion from binary to decimal
% real
cRe = d(:,(1:k/2));
e = sum(cRe.*bd,2);
f = bitxor(e,floor(e/2));
% imaginary
cIm = d(:,(k/2+1:k));
g = sum(cIm.*bd,2);
h = bitxor(g,floor(g/2));
% mapping the Gray coded symbols into constellation
modRe = Re(f+1);
modIm = Im(h+1);
% constellation
mod = modRe + 1i*modIm;
s = k_QAM*mod;
% noise
n = 1/sqrt(2)*[randn(1,N) + 1i*randn(1,N)];
% receiver
r = s + 10^(-sdB(i)/20)*n;
% demodulation
r_re = real(r)/k_QAM;
r_im = imag(r)/k_QAM;
% rounding off
m = 2*floor(r_re/2)+1;
m(m>max(Re)) = max(Re);
m(m<min(Re)) = min(Re);

n= 2*floor(r_im/2)+1;
n(n>max(Im)) = max(Im);
n(n<min(Im)) = min(Im);
% To Decimal conversion
oRe = ind(floor((m+4)/2+1))-1;
oIm = ind(floor((n+4)/2+1))-1;
% To binary string
pRe = dec2bin(oRe,k/2);
pIm = dec2bin(oIm,k/2);
% binary string to number
pRe = pRe.';
pRe = pRe(1:end).';
pRe = reshape(str2num(pRe).',k/2,N).' ;
pIm = pIm.';
pIm = pIm(1:end).';
pIm = reshape(str2num(pIm).',k/2,N).' ;
% counting errors for real and imaginary
Err(i) = size(find([cRe- pRe]),1) + size(find([cIm - pIm]),1) ;
end
sBer = Err/(N*k);
tBer = (1/k)*3/2*erfc(sqrt(k*0.1*(10.^(bdB/10)))); % theoretical 16-QAM BER with Gray coding
% plot
figure
semilogy(bdB,tBer,'rs-','LineWidth',2);
hold on
semilogy(bdB,sBer,'kx-','LineWidth',2);
grid on
legend('theory', 'simulation');
xlabel('SNR dB')
ylabel('Bit Error Rate')
title('BER VS SNR')


Graph: QAM Plot BER VS SNR

Conclusion:


EXPERIMENT NO: 08 DATE:

TITLE: Simulation Study of performance of BPSK receiver in presence of noise.


OBJECTIVE : Simulation Study of performance of BPSK receiver in presence of noise.
APPARATUS: 1. MATLAB Software
2. PC

Binary Phase Shift Keying (BPSK)


The first modulation considered is binary phase shift keying. Binary Phase-shift keying (BPSK) is a
digital modulation scheme that conveys data by changing, or modulating, two different phases of a
reference signal (the carrier wave). The constellation points chosen are usually positioned with
uniform angular spacing around a circle. In this scheme during every bit duration, denoted by T,
one of two phases of the carrier is transmitted. These two phases are 180 degrees apart. This makes
these two waveforms antipodal. Any binary modulation where the two signals are antipodal gives
the minimum error probability (for fixed energy) over any other set of binary signals. The error
probability can only be made smaller (for fixed energy per bit) by allowing more than two
waveforms for transmitting information.
BPSK is a simple but significant carrier modulation scheme. The two time-limited energy signals
s1(t) and s2(t) are defined based on a single basis function ϕ1(t) = √(2/Tb) · cos(2πfc·t) as:

s1(t) = √Eb · ϕ1(t) = √(2Eb/Tb) · cos(2πfc·t)   for binary 1

s2(t) = −√Eb · ϕ1(t) = √(2Eb/Tb) · cos(2πfc·t + π)   for binary 0

Generation of BPSK: Consider a sinusoidal carrier. If it is modulated by a bi-polar bit stream


according to the scheme illustrated in Figure 8.1 below, its polarity will be reversed every time the
bit stream changes polarity. This, for a sinewave, is equivalent to a phase reversal (shift). The
multiplier output is a BPSK signal.

Figure 8.1: BPSK Generation

The information about the bit stream is contained in the changes of phase of the transmitted signal.
A synchronous demodulator would be sensitive to these phase reversals. The appearance of a BPSK
signal in the time domain is shown in Figure 8.2 (lower trace). The upper trace is the binary
message sequence.


Figure 8.2: a BPSK signal in the time domain.


There is something special about the waveform of Figure 8.2. The wave shape is ‘symmetrical’ at
each phase transition. This is because the bit rate is a sub-multiple of the carrier frequency ω/(2π).
In addition, the message transitions have been timed to occur at a zero-crossing of the carrier.
Whilst this is referred to as ‘special’, it is not uncommon in practice. It offers the advantage of
simplifying the bit clock recovery from a received signal. Once the carrier has been acquired then
the bit clock can be derived by division.
Bit error rate (BER) of a communication system is defined as the ratio of number of error bits and
total number of bits transmitted during a specific period. It is the likelihood that a single error bit
will occur within received bits, independent of rate of transmission. There are many ways of
reducing BER. Here, we focus on channel coding techniques.
A channel in mobile communications can be simulated in many different ways. The main
considerations include the effect of multipath scattering, fading and Doppler shift that arise from
the relative motion between the transmitter and the receiver. In our simulations, we have considered
the two most commonly used channels: the Additive White Gaussian Noise (AWGN) channel
where the noise gets spread over the whole spectrum of frequencies and the Rayleigh fading
channel.
BER has been measured by comparing the transmitted signal with the received signal and
computing the error count over the total number of bits. For any given modulation, the BER is
normally expressed in terms of signal to noise ratio (SNR).
Convolutional coder takes a binary input sequence and outputs a convolutionally encoded binary
sequence according to the specified parameters of the model, in which every K input bits are
encoded into N output bits. The rate of the coder is given by the ratio K/N.
In MATLAB, we focused only on BER performance in terms of signal to noise ratio per bit for
PSK, considering AWGN and Rayleigh channels. PSK signal was created with the help of
MATLAB function y = dmod(x, Fc, Fd, Fs, 'psk', M) which performs M-phase shift keying
modulation. X denotes values (modulating signal) of random bits generated with the help of
function randint(m, n), which generates an m-by-n binary matrix, Fc is carrier frequency and Fs is
sampling frequency. Such a signal was mixed with noise and later detected by convolving the
distorted signal and the signal from matched filter-representation of carrier recovery circuit. This
signal was passed through a decision device to get the final data. The bit error rate measurements
were carried out subsequently.


% Simulation Study of performance of BPSK receiver in presence of noise


% Initialization of Data and variables
clc;
clear all;
close all;
nr_data_bits=8192;
b_data=(rand(1,nr_data_bits))>.5; % uniform random values give equally likely 0s and 1s
b=[b_data];
d=zeros(1,length(b));
%Generation of BPSK
for n=1:length(b)
if(b(n)==0)
d(n)=exp(j*2*pi);
end
if(b(n)==1)
d(n)=exp(j*pi);
end
end
disp(d)
bpsk=d;
% Plotting of BPSK Data
figure(1);
plot(d,'o');
axis([-2 2 -2 2]);
grid on;
xlabel('real');
ylabel('imag');
title('BPSK constellation');
% Addition of Noise
BER1=[];
SNR1=[];
for SNR=0:24   % Eb/No range in dB
sigma=sqrt(10.0^(-SNR/10.0));
snbpsk=(real(bpsk)+sigma.*randn(size(bpsk)))+i.*(imag(bpsk)+sigma*randn(size(bpsk)));
% Plotting of BPSK data with Noise
figure(2);
plot(snbpsk,'o');
axis([-2 2 -2 2]);
grid on;
xlabel('real');
ylabel('imag');
title('Bpsk constellation with noise');

% Recovering of Data
%receiver
r=snbpsk;
bhat=[real(r)<0];
bhat=bhat(:)';
bhat1=bhat;
ne=sum(b~=bhat1);
BER=ne/nr_data_bits;
BER1=[BER1 BER];
SNR1=[SNR1 SNR];
end
% Plotting of BER graph of BPSK
figure(3);
semilogy(SNR1,BER1,'-*');
grid on;
xlabel('SNR=Eb/No(db)');
ylabel('BER');
title('Simulation of BER for BPSK ');
legend('BER-simulated');

OUTPUT:


Conclusion:


EXPERIMENT NO: 09 Date:_____________

Title: Simulation study of Source Coding technique.


AIM: a. To write a program to generate Shannon Fano Coding.
b. To write a program to generate Huffman Coding.
THEORY

 Shannon Fano Coding


Shannon Fano Coding is directed towards construction of reasonably efficient
separable binary codes.

Let [x] be the message to be transmitted and [p] be their corresponding


probabilities. The messages are first written in descending order of their probabilities. The
message set is then partitioned into the two most equiprobable subsets [x1] and [x2]. A 0 is
assigned to each message contained in one subset and a 1 is assigned to each message in the
other subset. The same procedure is repeated for subsets [x1] and [x2], i.e., [x1] will be
partitioned into two subsets [x11] and [x12] and [x2] will be partitioned into [x21] and
[x22]. The code words in [x11] will start with 00, [x12] will start with 01, and [x21] with
10, and so on.

 Huffman Coding
Huffman Coding uses the same principle as the Shannon Fano algorithm. This type of
coding makes the average number of digits per message as close as possible to the entropy. The messages are
arranged in order of decreasing probability. The two messages of lowest
probability are assigned binary 0 and 1.

The two probabilities are added and the sum is placed in the next stage such that the
probabilities remain in decreasing order. Again 0 and 1 are assigned to the last two probabilities; this
goes on till the final stage. The code is taken in reverse order by linking back through the levels for a particular
message.


ALGORITHM

 Shannon Fano Coding


1. Enter the number of messages.

2. Enter probability of each message.

3. Sort the message in decreasing order of probability.

4. Partition the message into two halves and continue till one element in the subset
remains.

5. Assign code for each partition block.

6. Calculate efficiency using the formula

Efficiency = H / L, where H = ∑ Pi log2(1/Pi) is the source entropy and L = ∑ Pi Li is the average code word length.

7. Stop.

 Huffman Coding
1. Start.

2. Enter the no. of messages and the value of probabilities.

3. Arrange in ascending order.

4. Go on adding the minimum 2 probabilities and also assign binary values 0 or 1 to each level.

5. Continue this till all probabilities are finished.

6. Start linking the binary levels so that code is generated.

7. Print the code accordingly in sequence.


/* C program to implement Shannon-Fano variable length coding */

#include <stdio.h>
#include <conio.h>
#include <string.h>
struct node
{
char sym[10];
float pro;
int arr[20];
int top;
}s[20];
typedef struct node node;
void prints(int l,int h,node s[])
{
int i;
for(i=l;i<=h;i++)
{
printf("\n%s\t%f",s[i].sym,s[i].pro);
}
}
void shannon(int l,int h,node s[])
{
float pack1=0,pack2=0,diff1=0,diff2=0;
int i,d,k,j;
if((l+1)==h || l==h || l>h)
{
if(l==h || l>h)
return;
s[h].arr[++(s[h].top)]=0;
s[l].arr[++(s[l].top)]=1;
return;
}
else
{
for(i=l;i<=h-1;i++)
pack1=pack1+s[i].pro;
pack2=pack2+s[h].pro;
diff1=pack1-pack2;
if(diff1< 0)
diff1=diff1*-1;
j=2;
while(j!=h-l+1)
{
k=h-j;
pack1=pack2=0;

for(i=l;i<=k;i++)
pack1=pack1+s[i].pro;
for(i=h;i>k;i--)
pack2=pack2+s[i].pro;
diff2=pack1-pack2;
if(diff2< 0)
diff2=diff2*-1;
if(diff2>=diff1)
break;
diff1=diff2;
j++;
}
k++;
for(i=l;i<=k;i++)
s[i].arr[++(s[i].top)]=1;
for(i=k+1;i<=h;i++)
s[i].arr[++(s[i].top)]=0;
shannon(l,k,s);
shannon(k+1,h,s);
}
}

void main()
{
int n,i,j;
float x,total=0;
char ch[10];
node temp;
clrscr();
printf("Enter How Many Symbols Do You Want To Enter\t: ");
scanf("%d",&n);
for(i=0;i< n;i++)
{
printf("Enter symbol %d ---> ",i+1);
scanf("%s",ch);
strcpy(s[i].sym,ch);
}
for(i=0;i< n;i++)
{
printf("\n\tEnter probability for %s ---> ",s[i].sym);
scanf("%f",&x);
s[i].pro=x;
total=total+s[i].pro;
if(total>1)
{
printf("\t\tThis probability is not possible.Enter new probability");


total=total-s[i].pro;
i--;
}
}
s[i].pro=1-total;
for(j=1;j<=n-1;j++)
{
for(i=0;i< n-1;i++)
{
if((s[i].pro)>(s[i+1].pro))
{
temp.pro=s[i].pro;
strcpy(temp.sym,s[i].sym);
s[i].pro=s[i+1].pro;
strcpy(s[i].sym,s[i+1].sym);
s[i+1].pro=temp.pro;
strcpy(s[i+1].sym,temp.sym);
}
}
}
for(i=0;i< n;i++)
s[i].top=-1;
shannon(0,n-1,s);
printf(" ");
printf("\n\n\n\tSymbol\tProbability\tCode");
for(i=n-1;i>=0;i--)
{
printf("\n\t%s\t%f\t",s[i].sym,s[i].pro);
for(j=0;j<=s[i].top;j++)
printf("%d",s[i].arr[j]);
}
printf("\n ");
getch();
}
/********************* OUTPUT **************************
Enter How Many Symbols Do You Want To Enter : 6
Enter symbol 1 ---> a
Enter symbol 2 ---> b
Enter symbol 3 ---> c
Enter symbol 4 ---> d
Enter symbol 5 ---> e
Enter symbol 6 ---> f
Enter probability for a ---> 0.3
Enter probability for b ---> 0.25
Enter probability for c ---> 0.20
Enter probability for d ---> 0.12


Enter probability for e ---> 0.08


Enter probability for f ---> 0.05

Symbol Probability Code


a 0.300000 00
b 0.250000 01
c 0.200000 10
d 0.120000 110
e 0.080000 1110
f 0.050000 1111
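As a quick check on this output (a MATLAB sketch consistent with the efficiency formula given in the algorithm; the probabilities and code lengths are taken from the run above):

% Efficiency of the Shannon-Fano code produced above (illustrative check)
P = [0.30 0.25 0.20 0.12 0.08 0.05];     % symbol probabilities from the run above
L = [2 2 2 3 4 4];                       % code word lengths from the run above
H = sum(P .* log2(1 ./ P));              % source entropy, about 2.36 bits/symbol
Lavg = sum(P .* L);                      % average code word length, 2.38 bits/symbol
eff = H / Lavg;                          % code efficiency, about 99.2 %
fprintf('H = %.3f bits, L = %.2f bits, efficiency = %.1f %%\n', H, Lavg, eff*100);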

% Write a MATLAB CODE for Huffman Source Coding Method.


clc;
clear all;
close all;
sig=1:4;
Symbols=[1 2 3 4];
P=[0.1 0.3 0.4 0.2];
dict = huffmandict(Symbols,P);
temp=dict;
for i = 1:length(temp)
temp{i,2} = num2str(temp{i,2});
end
disp(temp);
hcode= huffmanenco(sig,dict)
dhsig= huffmandeco(hcode, dict)

==========================================
Output

[1] '0 0 1'


[2] '0 1'
[3] '1'
[4] '0 0 0'

hcode =

0 0 1 0 1 1 0 0 0

dhsig =

1 2 3 4


Conclusion:


EXPERIMENT NO: 10 Date:_____________

Title: Simulation study of various Entropies and mutual information in a


communication system.

AIM: Write a program for determination of various entropies and mutual information of a given
channel. Test various types of channel such as
a) Noise free channel.
b) Error free channel
c) Binary symmetric channel
d) Noisy channel
Compare channel capacity of above channels.
Apparatus: PC, C or MATLAB software, Printer.

THEORY
The information emitted by a discrete memoryless source is related to the inverse
probability of occurrence.
I(x) = log2 ( 1/ Px)
Entropy of a discrete memoryless source is the measure of the average information
content per source symbol and is given by the expression-
n
H(xi) = ∑ Pi log2 ( 1/ Pxi)
i=1
Consider a memoryless channel where xi is the transmitted message and yi is the
received message. If noise is present in the system, then the uncertainty about the transmitted xi
when yi is received is log ( 1/ P(xi /yi) )
n m
 H(x/y) = ∑ ∑ P(xi ,yi) log ( 1/ P(xi /yi))
i=1 j=1
Similarly
n m
 H(Y/X) = ∑ ∑ P(xi ,yi) log ( 1/ P(yi /xi))
i=1 j=1

n m
 H(x, y) = ∑ ∑ P(xi ,yi) log ( 1/ P(xi ,yi))
i=1 j=1
H(xi ,yi) is the joint entropy, H(xi/yj) and H(yj/xi) is the conditional entropy.
Entropy is a measure of the average information content per source symbol.

Mutual Information
The mutual information I(X;Y) of a channel is the average amount of information about the transmitted symbol gained by observing the received symbol:

I(X;Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X,Y)   bits/symbol


Information Channels
An information channel is characterized by an input range of symbols {x1, x2, . . . , xU }, an output
range {y1, y2, . . . , yV } and a set of conditional probabilities P(yj /xi ) that determines the
relationship between the input xi and the output yj . This conditional probability corresponds to that
of receiving symbol yj if symbol xi was previously transmitted.
The set of probabilities P(yj /xi ) is arranged into a matrix Pch that characterizes completely the
corresponding discrete channel:
Pij = P(yj / xi)

Binary Symmetric Channel


The channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the
probability of receiving a 0 if a 1 is sent. This common transition probability is denoted by p.

Fig. Binary Symmetric Channel

Lossless Channel
A channel described by a channel matrix with only one nonzero element in each column is called a
lossless channel. In the lossless channel no source information is lost in transmission.

Fig. Lossless Channel

Deterministic Channel
A channel described by a channel matrix with only one nonzero element in each row is called a
deterministic channel.
Since each row has only one nonzero element, that element must be unity. Hence, when a given source symbol is sent, it is certain which output symbol will be received.


Fig. Deterministic Channel

Noiseless Channel
- Both lossless and deterministic.
- The channel matrix has only one element in each row and in each column, and this element is unity.
- The input and output alphabets are of the same size; that is, m = n.

Fig. Noiseless Channel

For a noiseless channel H(X/Y) = 0, so the mutual information equals the source entropy:

I(X;Y) = H(X) - H(X/Y)

       = H(X) + H(Y) - H(X,Y)
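
The identities above translate directly into a few lines of MATLAB. The following sketch is not part of the original manual; the joint probability matrix Pxy is an arbitrary illustrative choice (a binary symmetric channel with p = 0.1 and equiprobable inputs) and is assumed to contain no zero entries:

% Sketch: entropies and mutual information from a joint probability matrix
Pxy = [0.45 0.05; 0.05 0.45];        % illustrative joint matrix P(x,y)
Px  = sum(Pxy,2);                    % P(x): row sums
Py  = sum(Pxy,1);                    % P(y): column sums
Hx  = -sum(Px.*log2(Px));            % H(X)
Hy  = -sum(Py.*log2(Py));            % H(Y)
Hxy = -sum(Pxy(:).*log2(Pxy(:)));    % joint entropy H(X,Y)
Hx_y = Hxy - Hy;                     % H(X/Y)
Hy_x = Hxy - Hx;                     % H(Y/X)
Ixy  = Hx + Hy - Hxy                 % mutual information I(X;Y)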

ALGORITHMS

JOINT PROBABILITY MATRIX


1. Input the number of rows(m) and number of columns(n) of joint probability matrix.

2. Read P(xi,yi).

3. Calculate P(xi) as sum of rows.

4. Calculate P(yi) as sum of columns.

5. Calculate the entropies using

H(X) = ∑ (i=1 to n) P(xi) log2(1/P(xi))

H(Y) = ∑ (j=1 to m) P(yj) log2(1/P(yj))

H(X,Y) = ∑ (i=1 to n) ∑ (j=1 to m) P(xi, yj) log2(1/P(xi, yj))

H(Y/X) = H(X,Y) - H(X)

H(X/Y) = H(X,Y) - H(Y)

6. Calculate I(X;Y) = H(X) + H(Y) - H(X,Y).

7. Stop.

CONDITIONAL PROBABILITY MATRIX

1. Input the number of rows(m) and number of columns(n).

2. Read P(yi/ xi).

3. Read P(xi).

4. Calculate P(xi, yi) =P(yi/xi) P(xi).

5. Calculate P(yi) as sum of columns of P(xi, yi).

6. Calculate the entropies using

H(X) = ∑ (i=1 to n) P(xi) log2(1/P(xi))

H(Y) = ∑ (j=1 to m) P(yj) log2(1/P(yj))

H(X,Y) = ∑ (i=1 to n) ∑ (j=1 to m) P(xi, yj) log2(1/P(xi, yj))

H(Y/X) = H(X,Y) - H(X)

H(X/Y) = H(X,Y) - H(Y)

7. Calculate I(X;Y) = H(X) + H(Y) - H(X,Y).

8. Stop.
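
For the conditional-matrix case, a corresponding MATLAB sketch (again an illustration added here, not the manual's program) starts from the channel matrix of a binary symmetric channel and the input probabilities, and also evaluates the BSC capacity for comparison:

% Sketch: entropies, mutual information and capacity of a BSC
p    = 0.1;                          % transition probability (illustrative)
Pyx  = [1-p p; p 1-p];               % conditional (channel) matrix P(y/x)
Px   = [0.5; 0.5];                   % input probabilities P(x)
Pxy  = diag(Px)*Pyx;                 % joint matrix P(x,y) = P(x)P(y/x)
Py   = sum(Pxy,1);                   % output probabilities P(y)
Hx   = -sum(Px.*log2(Px));
Hy   = -sum(Py.*log2(Py));
Hxy  = -sum(Pxy(:).*log2(Pxy(:)));
Ixy  = Hx + Hy - Hxy                 % mutual information
Cbsc = 1 + p*log2(p) + (1-p)*log2(1-p)   % BSC capacity; equals Ixy for equiprobable inputs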


//* NOISE FREE CHANNEL C. Program*//


#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
int i,j,n;
float p[MAX],x,hxy,hx,hy,Ixy,a,b;
clrscr();
printf("\n\n\t\t NOISE FREE CHANNEL\n\n ");
printf("\n\n\t\tEnter the number of symbols: ");
scanf("%d",&n);
for(j=1;j<=n;j++)
{
printf("\t\tEnter probability: ");
scanf("%f",&p[j]);
}
a=0;
for (j=1;j<=n;j++)
{
a=a+p[j];
}
{
double x=1/(log10(2));
hxy=hx=hy=0;
for(j=1;j<=n;j++)
{
b=1/p[j];
hxy=hx=hy=hx+((p[j])*(log10(b))*(x));
Ixy=hx=hy=hxy;
}
printf("\n\t\tValue of hx is = %f",hx);
printf("\n\t\tValue of hy is= %f",hy);
printf("\n\t\tvalue of hxy is= %f",hxy);
printf("\n\t\tmutual information Ixy=%f",Ixy);
printf("\n\t\tthe conditional probabilities Hxby and Hybx =0");
}
getch();
}

NOISE FREE CHANNEL


Enter the number of symbols: 2
Enter probability: 0.5
Enter probability: 0.5
Value of hx is = 1.000000
Value of hy is= 1.000000
value of hxy is= 1.000000
mutual information Ixy=1.000000
the conditional probabilities Hxby and Hybx =0


//C. program for error free channel//


#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
int x,y;
float px,Hx,Hy,Hxy,Hxby,Hybx,M, Ix,p1,p2,p3;
clrscr();
printf("\n Error free channel");
printf("\n Enter the no of symbols:-");
scanf("\n%d",&x);
printf("\n Enter the no of output symbols:-");
scanf("\n%d",&y);
printf("\n Enter the joint probabilities:-");
scanf("\n%f%f%f",&p1,&p2,&p3);
px=p1+p2+p3;
M=1/(log10(2));
Hx=(px*log10(1/px)*M);
Hy=((p1*log10(1/p1)*M)+(p2*log10(1/p2)*M)+(p3*log10(1/p3)*M));
Hxy=Hy;
Hxby=Hxy-Hy;
Hybx=Hxy-Hx;
Ix=Hx-Hxby;
printf("\n\tentropy Hx=%f\tbits/symbol\n\n",Hx);
printf("\n\Hy=%f\n\n",Hy);
printf("\n\tentropy Hxy%f\tbits/symbol\n\n",Hxy);
printf("\n\tentropy Hxby=%f\tbits/symbol\n\n",Hxby);
printf("\n\tentropy Hybx=%f\tbits/symbol\n\n",Hybx);
printf("\n\tmutual information=%f\n",Ix);
getch();
}

Error free channel


Enter the no of symbols:-1
Enter the no of output symbols:-3
Enter the joint probabilities:-0.2 0.3 0.5
entropy Hx=0.000000 bits/symbol
entropy Hy=1.485475 bits/symbol
entropy Hxy=1.485475 bits/symbol
entropy Hxby=0.000000 bits/symbol
entropy Hybx=1.485475 bits/symbol
mutual information=0.000000


//C. Program for BINARY SYMMETRIC CHANNEL//

#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
float Hx,Hy,Hxy,Hxby,Hybx,M,Ixy,p1,p2,A,B,L,E,j1,j2,j3,j4;
clrscr();
printf("\n BINARY SYMMENTRY CHANNEL\n");
printf("\n Enter the prob:-");
scanf("\n%f",&p1);
printf("\n Enter the conditional prob:-");
scanf("\n%f",&A);
p2=1-p1;
B=1-A;
M=1/(log10(2));
L=p1+A-(2*p1*A);
E=1-L;
j1=p1*B;
j2=p1*A;
j3=p2*A;
j4=p2*B;
Hx=(p1*log10(1/p1)*M)+(p2*log10(1/p2)*M);
Hy=((L*log10(1/L)*M)+(E*log10(1/E)*M));
Hxy=(j1*log10(1/j1)*M)+(j2*log10(1/j2)*M)+(j3*log10(1/j3)*M)+(j4*log10(1/j4)*M);
Hxby=Hxy-Hy;
Hybx=Hxy-Hx;
Ixy=Hx-Hxby;
printf("\n\tentropy Hx=%f\tbits/symbol\n\n",Hx);
printf("\n\tentropy Hy=%f\tbits/symbol\n\n",Hy);
printf("\n\tentropy Hxy%f\tbits/symbol\n\n",Hxy);
printf("\n\tentropy Hxby=%f\tbits/symbol\n\n",Hxby);
printf("\n\tentropy Hybx=%f\tbits/symbol\n\n",Hybx);
printf("\n\tmutual information Ixy=%f\n",Ixy);
getch();
}


BINARY SYMMETRIC CHANNEL


Enter the prob:-0.5
Enter the conditional prob:-0.5
entropy Hx=1.000000 bits/symbol
entropy Hy=1.000000 bits/symbol
entropy Hxy=2.000000 bits/symbol
entropy Hxby=1.000000 bits/symbol
entropy Hybx=1.000000 bits/symbol
mutual information Ixy=0.000000

//C. Program for BINARY NON-SYMMETRIC CHANNEL//

#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
float Hx,Hy,Hxy,Hxby,Hybx,p1,p2,A,B,C,D,L,M,WL,WA,WB,I,E,j1,j2,j3,j4;
clrscr();
printf("\n BINARY NONSYMMENTRY CHANNEL\n");
printf("\n Enter the prob:-");
scanf("\n%f",&p1);
printf("\n Enter the conditional prob:-");
scanf("\n%f%f",&A,&B);
p2=1-p1;
M=1/(log10(2));
L=B+((1-A-B)*p1);
C=1-A;
D=1-B;
E=1-L;
j1=p1*A;
j2=p1*D;
j3=p2*C;
j4=p2*B;
WL=((L*log10(1/L)*M)+(E*log10(1/E)*M));
WA=(A*log10(1/A)*M)+(C*log10(1/C)*M);
WB=(B*log10(1/B)*M)+(D*log10(1/D)*M);
Hx=(p1*log10(1/p1)*M)+(p2*log10(1/p2)*M);
Hy=Hx;
Hxy=(j1*log10(1/j1)*M)+(j2*log10(1/j2)*M)+(j3*log10(1/j3)*M)+(j4*log10(1/j4)*M);
Hxby=Hxy-Hy;
Hybx=Hxy-Hx;
I=(WL-(p1*WA)-(p2*WB));
printf("\n\Hx=%f\n\n",Hx);
printf("\n\Hy=%f\n\n",Hy);
printf("\n\tentropy Hxy%f\tbits/symbol\n\n",Hxy);
printf("\n\tentropy Hxby=%f\tbits/symbol\n\n",Hxby);
printf("\n\tentropy Hybx=%f\tbits/symbol\n\n",Hybx);
printf("\n\tmutual information=%f\n",I);


getch();
}

BINARY NON-SYMMETRIC CHANNEL

Enter the prob:-0.6


Enter the conditional prob:-0.8
0.7
Hx=0.970951
Hy=0.970951
entropy Hxy=1.759305 bits/symbol
entropy Hxby=0.788355 bits/symbol
entropy Hybx=0.788355 bits/symbol
mutual information=0.185277

Conclusion:


EXPERIMENT NO: 11 Date:_____________

Title: Simulation Study of Linear Block codes.

AIM a. Given a generator matrix, write a program to generate the linear block code.

b. Given a generator matrix, write a program to decode and correct the error.

THEORY

* Coding
For an (n, k) block code a generator matrix G of order k x n is used:

        | G11  G12  ...  G1n |
   G =  | G21  G22  ...  G2n |
        | ...                |
        | Gk1  Gk2  ...  Gkn |

where k is the number of message bits and (n - k) is the number of parity bits.

G = [ Ik x k : Pk x (n-k) ]k x n

i.e. the generator matrix can be partitioned into an identity matrix and a parity matrix. The code vector is obtained by multiplying the data vector by the generator matrix:

[ c1 c2 ... cn ] = [ d1 d2 ... dk ] G

where d = data word, c = code word and G = generator matrix.

* Decoding
Let X be the transmitted code vector and R the received code vector. If an error pattern e occurred during transmission, then

R = X ⊕ e

Let HT be the transpose of the parity-check matrix H. The syndrome of the received vector is

S = R HT = (X ⊕ e) HT = e HT          (since X HT = 0)

so the syndrome depends only on the error pattern and can be used to locate and correct the error.

ALGORITHM

* Coding
1. Start
2. Input the parity matrix and the order (n, k) of the code.
3. Input the data word and multiply it by the generator matrix.
4. Generate the code words and print them.
5. Stop.


* Decoding
1. Start.
2. Input parity and order of code.
3. Input received code.
4. Calculate syndrome for received code.
5. Calculate syndrome for single bit error.
6. Compare the two syndromes to locate and correct the error.
7. Display corrected value if error is present else print the received code is correct.
8. Stop.

TEST RESULTS

* Coding
Consider a (6,3) block code with

      | 1 0 0 0 1 1 |
  G = | 0 1 0 1 0 1 |
      | 0 0 1 1 1 1 |

Data      Code
000       000 000
001       001 111
010       010 101
011       011 010
100       100 011
101       101 100
110       110 110
111       111 001
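
The code-word table for the (6,3) example can be generated with a few MATLAB lines (a sketch added here, not the manual's program); it simply forms c = d·G over GF(2) for all data words:

% Sketch: all code words of the (6,3) example
G = [1 0 0 0 1 1; 0 1 0 1 0 1; 0 0 1 1 1 1];   % generator matrix from the test results
d = de2bi(0:7, 3, 'left-msb');                 % all 3-bit data words
c = mod(d*G, 2)                                % corresponding code words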

* Decoding
Given (n, k) = (7, 4)

      | 1 0 0 0 1 1 0 |
  G = | 0 1 0 0 0 1 1 |
      | 0 0 1 0 1 0 1 |
      | 0 0 0 1 1 1 1 |

       | 1 1 0 |
       | 0 1 1 |
       | 1 0 1 |
  HT = | 1 1 1 |
       | 1 0 0 |
       | 0 1 0 |
       | 0 0 1 |


Error Vector Syndrome


000 0001 001
000 0010 010
000 0100 100
000 1000 111
001 0000 101
010 0000 011
100 0000 110
The received code = 0000001
S = [ R HT ] = [ 001 ]
Comparing it with HT, 7th bit is in error. Hence, corrected code is 0000000
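
The same syndrome check can be reproduced in MATLAB (a sketch added here, separate from the program that follows); HT is taken from the test results above:

% Sketch: single-error syndrome decoding of the (7,4) example
Ht  = [1 1 0; 0 1 1; 1 0 1; 1 1 1; 1 0 0; 0 1 0; 0 0 1];   % H transpose
R   = [0 0 0 0 0 0 1];                                      % received vector
S   = mod(R*Ht, 2)                                          % syndrome = 0 0 1
pos = find(ismember(Ht, S, 'rows'));                        % matching row of HT -> bit 7 in error
Rc  = R; Rc(pos) = ~Rc(pos);                                % flip the erroneous bit
Rc                                                          % corrected code word (all zeros)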


%Program for coding & decoding of linear block code

clc;
clear all;
close all;
%%coding of linear block code
n = 7; k = 4;
genmat = [1 0 0 0 1 1 0; 0 1 0 0 0 1 1; 0 0 1 0 1 1 1; 0 0 0 1 1 0 1 ];
disp('genmat=')
disp(genmat)
msg = [0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1];
code = encode(msg,n,k,'linear',genmat);
msg
code
%% decoding of linear block code
parmat = gen2par(genmat)
evt = syndtable(parmat); % Produce decoding table.
disp('Error vector table')
disp(evt)
recd = [0 0 0 1 1 0 0] % Suppose this is the received vector.
syndrome = rem(recd * parmat',2);
syndrome_de = bi2de(syndrome,'left-msb'); % Convert to decimal.
disp(['Syndrome = ',num2str(syndrome_de),...
' (decimal), ',num2str(syndrome),' (binary)'])
corrvect = evt(1+syndrome_de,:) % Correction vector
% Now compute the corrected codeword.
correctedcode = rem(corrvect+recd,2)

SAMPLE OUTPUT
genmat=
1 0 0 0 1 1 0
0 1 0 0 0 1 1
0 0 1 0 1 1 1
0 0 0 1 1 0 1
msg =
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1

code =

0 0 0 0 0 0 0
0 0 0 1 1 0 1
0 0 1 0 1 1 1
0 0 1 1 0 1 0


parmat =

1 0 1 1 1 0 0
1 1 1 0 0 1 0
0 1 1 1 0 0 1

Error vector table


0 0 0 0 0 0 0
0 0 0 0 0 0 1
0 0 0 0 0 1 0
0 1 0 0 0 0 0
0 0 0 0 1 0 0
0 0 0 1 0 0 0
1 0 0 0 0 0 0
0 0 1 0 0 0 0

recd =

0 0 0 1 1 0 0

Syndrome = 1 (decimal), 0 0 1 (binary)

corrvect =

0 0 0 0 0 0 1

correctedcode =

0 0 0 1 1 0 1

Conclusion:


EXPERIMENT NO: 12 Date:_____________

Title: Simulation Study of cyclic codes.

AIM a. To implement Systematic Cyclic Codes (n, k) encoder.

b. To implement decoding of Cyclic Codes.

THEORY

* Coding
Cyclic codes are a subclass of linear block codes. They have the property that a cyclic shift of one codeword produces another codeword.

e.g. X = (xn-1, xn-2, ..., x1, x0)

where xn-1, ..., x1, x0 are the individual code bits and X is the code vector. If the above code vector is shifted cyclically,

X' = (xn-2, xn-3, ..., x1, x0, xn-1)

and the next shift leads to the next code vector. A linear code is called a cyclic code if every cyclic shift of a code vector produces another code vector.

* Decoding
In a cyclic code also, some errors may occur during transmission. Syndrome decoding can be used to detect and correct them. The received code vector is represented by Y; if the error pattern is E, the transmitted code vector is X = Y ⊕ E.

For an (n, k) code, q = n - k.

A q-stage shift register generates the q-bit syndrome vector. Initially all shift-register contents are zero and the received code Y is entered bit by bit. The contents of the shift-register flip-flops keep changing, depending on the input bits and the generator-polynomial bits: where a polynomial bit is zero the register content is simply shifted ahead, otherwise the content is XOR-ed with the feedback bit and then shifted.

The syndrome vector is given by S = (Sq-1, Sq-2, Sq-3, ..., S1, S0)


ALGORITHMS

* Coding
1. Enter the order of the cyclic code as (n, k).
2. Enter the k message bits.
3. Enter the coefficients of the generator polynomial g(x).
4. Initialize the q = n - k shift registers with zero status.
5. Input the first message bit.
6. Close the feedback switch.
7. If the feedback bit is zero, the register contents are simply shifted by one position; otherwise they are XOR-ed with the generator-polynomial coefficients and then shifted.
8. Input the next message bit and repeat the same procedure.
9. When all message bits have been transmitted, transmit the check bits.

* Decoding
1. Enter the order of the code (n, k).
2. Enter the generator polynomial coefficients.
3. Enter the received code bits Y.
4. Initialize all shift registers S0, S1, ..., Sq-1 to zero status.
5. Input the received bits one by one and update

S0 = Y ⊕ Sq-1
S1 = S0 ⊕ Sq-1
S2 = S1 ⊕ Sq-1

(the XOR with Sq-1 appears only at positions where the corresponding generator-polynomial coefficient is 1).
6. The register contents are shifted by one position after each input bit.
7. Input the next received bit and repeat.
8. When all n bits have been entered, the shift registers hold the syndrome vector.

RESULTS
* Coding

Order of the generator polynomial = 3
Generator polynomial coefficients: P3 = 1, P2 = 0, P1 = 1, P0 = 1, i.e. g(x) = x^3 + x + 1
Order of the cyclic code: (7, 4)
Message bits: 1 1 0 0
Here q = n - k = 3, so there are 3 shift registers.
Code vector X = ( M3 M2 M1 M0 C2 C1 C0 )
X = [ 1 1 0 0 0 1 0 ]
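
The check bits above can be verified with a short MATLAB sketch (an addition, not the manual's program): the remainder of x^(n-k)·m(x) divided by g(x) over GF(2) gives C2 C1 C0. Since g(x) is monic, ordinary polynomial division followed by a mod-2 reduction of the remainder gives the same result:

% Sketch: systematic cyclic encoding of message 1 1 0 0 with g(x) = x^3 + x + 1
g  = [1 0 1 1];                  % generator polynomial coefficients, MSB first
m  = [1 1 0 0];                  % message bits M3 M2 M1 M0
mx = [m 0 0 0];                  % x^(n-k)*m(x): append n-k = 3 zeros
[~, r] = deconv(mx, g);          % ordinary polynomial division (monic divisor)
chk = mod(r(end-2:end), 2)       % check bits C2 C1 C0 = 0 1 0
X   = [m chk]                    % systematic code vector = 1 1 0 0 0 1 0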


* Decoding
Code order 7,4
Polynomial coefficient 1 0 1 1
Y=1001101
Shift   Y bit   S0 (Y ⊕ Sq-1)   S1 (S0 ⊕ S2)   S2 (= S1)

- - 0 0 0

1 1 1 0 0

2 0 0 1 0

3 0 0 0 1

4 1 0 1 0

5 1 1 0 1

6 0 1 0 0

7 1 1 1 0

S = ( S2, S1, S0 ) = ( 0, 1, 1)
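
The shift-register computation above can also be checked with a short MATLAB sketch (an addition, not part of the original manual); the feedback connections correspond to g(x) = x^3 + x + 1, i.e. S2 is updated without an XOR because g2 = 0:

% Sketch: syndrome shift register for Y = 1001101, g(x) = x^3 + x + 1
Y = [1 0 0 1 1 0 1];             % received code vector
s = [0 0 0];                     % [S0 S1 S2], initially zero
for k = 1:length(Y)
    fb = s(3);                                  % feedback bit S2
    s  = [xor(Y(k),fb), xor(s(1),fb), s(2)];    % S0 = Y ⊕ S2, S1 = S0 ⊕ S2, S2 = S1
end
syndrome = double(fliplr(s))     % (S2, S1, S0) = 0 1 1, as in the table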

% Program for coding & decoding of cyclic code


clc;
clear all;
close all;
%% coding of cyclic code
n = 7; k = 4;% Set codeword length and message length.
gpm =[1 1 0 1]
gp=poly2sym(gpm)
msg = [0 0 0 0; 0 0 0 1; 0 0 1 0; 0 0 1 1; 0 1 0 0; 0 1 0 1; 0 1 1 0; 0 1 1 1; 1 0 0 0; ...
       1 0 0 1; 1 0 1 0; 1 0 1 1; 1 1 0 0; 1 1 0 1; 1 1 1 0; 1 1 1 1]; % Message is a binary matrix.
code = encode(msg,n,k,'cyclic',gpm); % Code will be a binary matrix.
msg
code
%Decoding for cyclic code
genmat=[1 0 0 0 1 1 0; 0 1 0 0 0 1 1; 0 0 1 0 1 1 1; 0 0 0 1 1 0 1]
parmat = gen2par(genmat)
ht=transpose(parmat)
trt = syndtable(parmat); % Produce decoding table.
disp('error vector table')
disp(trt)
rm = [1 1 0 1 1 0 1] % Suppose this is the received vector.
rp=poly2sym(rm)
[q,r] = quorem(rp,gp)
disp('syndrome polynomial')


disp(r)
syndrome=sym2poly(r)
syndrome_de = bi2de(syndrome,'left-msb'); % Convert to decimal.
disp(['Syndrome = ',num2str(syndrome_de),...
' (decimal), ',num2str(syndrome),' (binary)'])
corrvect = trt(1+syndrome_de,:) % Correction vector
% Now compute the corrected codeword.
correctedcode = xor(rm,corrvect)

SAMPLE OUTPUT
gpm =

1 1 0 1

gp = x^3+x^2+1

msg =

0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
code =

0 0 0 0 0 0 0
1 0 1 0 0 0 1
1 1 1 0 0 1 0
0 1 0 0 0 1 1
0 1 1 0 1 0 0
1 1 0 0 1 0 1
1 0 0 0 1 1 0
0 0 1 0 1 1 1
1 1 0 1 0 0 0
0 1 1 1 0 0 1


0 0 1 1 0 1 0
1 0 0 1 0 1 1
1 0 1 1 1 0 0
0 0 0 1 1 0 1
0 1 0 1 1 1 0
1 1 1 1 1 1 1

genmat =
1 0 0 0 1 1 0
0 1 0 0 0 1 1
0 0 1 0 1 1 1
0 0 0 1 1 0 1
parmat =
1 0 1 1 1 0 0
1 1 1 0 0 1 0
0 1 1 1 0 0 1
ht =
1 1 0
0 1 1
1 1 1
1 0 1
1 0 0
0 1 0
0 0 1

error vector table


0 0 0 0 0 0 0
0 0 0 0 0 0 1
0 0 0 0 0 1 0
0 1 0 0 0 0 0
0 0 0 0 1 0 0
0 0 0 1 0 0 0
1 0 0 0 0 0 0
0 0 1 0 0 0 0
rm =

1 1 0 1 1 0 1

rp = x^6+x^5+x^3+x^2+1

q= x^3

r = 1+x^2
syndrome polynomial


S = 1+x^2
syndrome =
1 0 1
Syndrome = 5 (decimal), 1 0 1 (binary)
corrvect =
0 0 0 1 0 0 0
correctedcode =

1 1 0 0 1 0 1

Conclusion:
