DC Practical Manual
TITLE: Study of BPSK transmitter & receiver using suitable hardware setup/kit.
OBJECTIVE: To study the concept of BPSK generation and reception.
THEORY
BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the simplest form of phase
shift keying (PSK). It uses two phases which are separated by 180° and so can also be termed 2-
PSK. It does not particularly matter exactly where the constellation points are positioned, and in
this figure they are shown on the real axis, at 0° and 180°. This modulation is the most robust of all
the PSKs since it takes the highest level of noise or distortion to make the demodulator reach an
incorrect decision. It is, however, only able to modulate at 1 bit/symbol (as seen in the figure) and
so is unsuitable for high data-rate applications.
This yields two phases, 0 and π. In its specific form, binary data is often conveyed with the following signals:

s0(t) = √(2Eb/Tb) cos(2πfct + π) = −√(2Eb/Tb) cos(2πfct)   (for binary "0")
s1(t) = √(2Eb/Tb) cos(2πfct)                               (for binary "1")

where fc is the carrier frequency, Eb is the energy per bit and Tb is the bit duration.
The use of this basis function is shown at the end of the next section in a signal timing diagram.
The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator would produce.
The bit-stream that causes this output is shown above the signal (the other parts of this figure are
relevant only to QPSK).
The bit error rate (BER) of BPSK in AWGN can be calculated as:

Pb = Q( √(2Eb/N0) )

or

Pb = (1/2) erfc( √(Eb/N0) )
Since there is only one bit per symbol, this is also the symbol error rate.
Sometimes this is known as quaternary PSK, quadriphase PSK, 4-PSK, or 4-QAM. (Although the
root concepts of QPSK and 4-QAM are different, the resulting modulated radio waves are exactly
the same.) QPSK uses four points on the constellation diagram, equispaced around a circle. With
four phases, QPSK can encode two bits per symbol, shown in the diagram with gray coding to
minimize the bit error rate (BER) — sometimes misperceived as twice the BER of BPSK.
The mathematical analysis shows that QPSK can be used either to double the data rate compared
with a BPSK system while maintaining the same bandwidth of the signal, or to maintain the data-
rate of BPSK but halving the bandwidth needed. In this latter case, the BER of QPSK is exactly the
same as the BER of BPSK, and believing otherwise is a common confusion when considering or
describing QPSK.
Given that radio communication channels are allocated by agencies such as the Federal
Communications Commission giving a prescribed (maximum) bandwidth, the advantage of QPSK
over BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth compared to
BPSK - at the same BER. The engineering penalty that is paid is that QPSK transmitters and
receivers are more complicated than the ones for BPSK. However, with modern electronics
technology, the penalty in cost is very moderate.
As with BPSK, there are phase ambiguity problems at the receiving end, and differentially encoded
QPSK is often used in practice. The following diagram shows the BPSK receiver.
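Before the hardware procedure, the generation and coherent detection described above can be sketched in MATLAB (a minimal illustration with assumed carrier, bit-rate and sample-rate values; it is not a model of the kit's circuitry):

% BPSK modulation and coherent detection sketch (illustrative values).
fc = 8e3; Rb = 1e3; fs = 64e3;             % carrier, bit rate, sample rate (assumed)
bits = [1 0 1 1 0];                        % message bits
spb = fs/Rb;                               % samples per bit
nrz = repelem(2*bits - 1, spb);            % 0 -> -1, 1 -> +1
t = (0:numel(nrz)-1)/fs;
carrier = cos(2*pi*fc*t);
bpsk = nrz .* carrier;                     % carrier phase 0 or 180 degrees
% Coherent receiver: multiply by a local carrier, then integrate per bit
mixed = bpsk .* carrier;                   % = nrz.*(1 + cos(4*pi*fc*t))/2
rx = mean(reshape(mixed, spb, []), 1);     % integrate-and-dump per bit
bits_hat = rx > 0                          % detected bits: 1 0 1 1 0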
Procedure:-
3] Connect the O/P of the 'MULT' block (i.e. the BPSK O/P) to the I/P of the 1496 squaring circuit.
9] Observe the signals at different test points together with the I/P bit pattern.
10] Observe the filter O/P & the COMP. block O/P. The O/P of the COMP. block is the receiver's detected O/P.
TITLE: Study of FSK transmitter & receiver using suitable hardware setup/kit.
FSK Waveform
A closer look at the FSK waveform shows that it can be represented as the sum of two ASK
waveforms.
FSK Demodulator: The demodulation of an FSK waveform can be carried out by a phase-locked loop (PLL).
The PLL tries to 'lock' to the input frequency: whenever it encounters a frequency deviation at its
input, it generates a corresponding output voltage to be fed to the voltage-controlled oscillator.
Thus the PLL detector follows the frequency changes and generates a proportional output voltage.
The output voltage from the PLL contains carrier components, so the signal is passed through a
low-pass filter to remove them. The resulting wave is too rounded to be used for digital data
processing, and its amplitude may be very low due to channel attenuation. The signal is therefore
'squared up' by feeding it to a voltage comparator. The figure shows the functional blocks involved
in FSK demodulation.
FSK Bandwidth: Since amplitude changes in the FSK waveform do not matter, this modulation
technique is very reliable even in noisy and fading channels. But there is a price to be paid for
that advantage: a wider required bandwidth. The bandwidth increase depends upon the two carrier
frequencies used and the digital data rate; for a given data rate, the higher the frequencies and
the more they differ from each other, the wider the required bandwidth. The required bandwidth is
at least double that of ASK modulation, which means fewer communication channels in a given band
of frequencies.
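The decomposition into two ASK waveforms mentioned above can be verified with a short MATLAB sketch (mark/space frequencies and rates are assumed for illustration):

% FSK seen as the sum of two ASK waveforms (illustrative frequencies).
f1 = 2e3; f0 = 1e3; Rb = 250; fs = 32e3;   % mark, space, bit rate, sample rate (assumed)
bits = [1 0 1 1 0];
spb = fs/Rb;
d = repelem(bits, spb);                    % unipolar data stream
t = (0:numel(d)-1)/fs;
ask1 = d .* cos(2*pi*f1*t);                % ASK at f1, on for '1' bits
ask0 = (1-d) .* cos(2*pi*f0*t);            % ASK at f0, on for '0' bits
fsk = ask1 + ask0;                         % the FSK waveform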
PROCEDURE:
1. Make the connection according to the circuit diagram.
2. Connect the Binary Data Generator (set to the desired data pattern) to the FSK modulator and observe the pattern on the CRO.
3. Observe the FSK modulator output on the CRO.
4. Now demodulate the FSK modulator output at the receiver side.
5. Observe the recovered data pattern on the CRO.
THEORY: A data transmission system using binary encoding transmits a sequence of 1s and 0s. These
bits may be represented in a number of ways. In a PSK system we transmit an in-phase sine wave for
logic 1 and a 180° out-of-phase wave for logic 0. When this data is received it is corrupted by
noise, and there is a finite probability that the receiver will make an error in deciding between
logic 1 and logic 0. To reduce the probability of error we use correlators, or the matched filter,
as the optimum filter. In a matched filter we integrate the input data for one bit period. At the
end of integration, if the output is above a certain level we decide the bit is logic 1, otherwise
logic 0. It is instructive to note that the integrator filters signal and noise such that the
signal voltage grows linearly with time while the noise grows more slowly.
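A minimal MATLAB sketch of the integrate-and-dump idea described above (sample rate, bit period and noise level are assumed):

% Integrate-and-dump matched filter sketch: the signal component grows
% linearly with integration time, the noise grows more slowly.
fs = 10e3; Tb = 10e-3; spb = round(fs*Tb);  % sample rate and bit period (assumed)
b = 1;                                      % transmit a logic 1
s = (2*b-1)*ones(1, spb);                   % antipodal bit waveform
r = s + 0.8*randn(1, spb);                  % received bit plus noise
y = cumsum(r)/fs;                           % running integral over the bit
decision = y(end) > 0                       % compare final value with threshold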
        In the circuit, 8-bit data is given to a parallel-to-serial converter. The bit duration of
this data can be varied by the basic clock output. The signal coming out of the parallel-to-serial
converter is fed to the PSK generator. The amplitude of the PSK generator output can be varied
using the nearby pot. No filter is used after the PSK generator, so the noise generated by the
multiplication process is present along with the desired PSK signal. This PSK signal is then fed to
the receiver.
        The output of the filter is fed to a serial-to-parallel converter. The output of the
receiver can be observed on the LEDs provided on the panel.
PROCEDURE:
      1. Switch on the power supply.
      2. Observe clock output on CRO & connect it to i/p of control block.
3. Set the bit pattern as 00100101 using the DIP switch. "1" on the switch is the LSB. When a
   switch is in the ON position its o/p is "1".
4. Observe the o/p of the p/s block on the CRO. Measure the bit period by varying the pot near the
   clock. Make the bit period maximum.
      5. Connect o/p of p/s block to i/p of PSK gen. Observe o/p of PSK gen.
6. Keep the pot near the PSK gen. at such a position that the o/p of the PSK gen. is 500 mVp-p
   in amplitude.
7. Observe the o/p of the noise gen. & set it to minimum amplitude (i.e. 0 V).
8. Connect the o/p of the noise gen. to i/p2 of the adder block & the PSK generator O/P to i/p1 of
   the adder block.
      9. Connect O/P of adder block to i/p of matched filter observe the o/p of matched filter
      10. Keep S/W above receiver latch to LE
      11. Observe O/P on LEDs the same data should be at o/p.
12. Now go on increasing the clock frequency slowly; each time, measure the bit period. Observe at
    what bit period the 1st error comes. Also observe that if you increase the amplitude of the
    PSK gen., the error vanishes.
13. Keeping the amplitude pot of the PSK generator at its lowest position and varying the clock
    period, measure the number of errors vs. bit period. To observe stable readings, set the
    switch above the receiver latch to the ground position. For a particular bit period take a
    number of readings by moving this s/w from LE to ground.
           14. Plot the graph of bit period Vs error probability.
15. In 8 bits, if one bit is in error then the error probability is 12.5%. If we increase the
    number of bits, more accurate results are obtained.
           Practical Kit Front Panel
In mathematics, computer science, telecommunication, and information theory, error detection and
correction has great practical importance in maintaining data (information) integrity across noisy
channels and less-than-reliable storage media.
• Error detection is the ability to detect the presence of errors caused by noise or other impairments
during transmission from the transmitter to the receiver.
• Error correction is the additional ability to reconstruct the original, error-free data. There are two
basic ways to design the channel code and protocol for an error correcting system:
• Automatic repeat-request (ARQ): The transmitter sends the data and also an error detection code,
which the receiver uses to check for errors, and requests retransmission of erroneous data. In many
cases, the request is implicit; the receiver sends an acknowledgement (ACK) of correctly received
data, and the transmitter re-sends anything not acknowledged within a reasonable period of time.
• Forward error correction (FEC): The transmitter encodes the data with an error correcting code
(ECC) and sends the coded message. The receiver never sends any messages back to the
transmitter. The receiver decodes what it receives into the "most likely" data. The codes are
designed so that it would take an "unreasonable" amount of noise to trick the receiver into
misinterpreting the data.
It is possible to combine the two, so that minor errors are corrected without retransmission, and
major errors are detected and a retransmission requested. The combination is called hybrid
automatic repeat-request.
In telecommunication, a redundancy check is extra data added to a message for the purposes of
error detection. Several schemes exist to achieve error detection, and generally they are quite
simple. All error detection codes (which include all error-detection-and-correction codes) transmit
more bits than were in the original data. Most codes are "systematic": the transmitter sends a fixed
number of original data bits, followed by fixed number of check bits (usually referred to as
redundancy in the literature) which are derived from the data bits by some deterministic algorithm.
The receiver applies the same algorithm to the received data bits and compares its output to the
received check bits; if the values do not match, an error has occurred at some point during the
transmission. In a system that uses a "nonsystematic" code, such as some raptor codes, data bits are
transformed into at least as many code bits, and the transmitter sends only the code bits.
Parity schemes
A parity bit is an error detection mechanism that can only detect an odd number of errors. The
stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity
bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity;
odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate
the error, and even correct it if the error affects a single bit: this is the principle behind the
Hamming code.
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of
bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) is flipped,
the parity bit appears to be correct, even though the data is corrupt.
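A minimal sketch of the even-parity scheme (block contents are illustrative):

% Even-parity sketch: the parity bit makes the total number of 1s even,
% so any odd number of flipped bits is detected (even counts are not).
block = [1 0 1 1 0 1 1];
p = mod(sum(block), 2);                    % parity bit for even parity
tx = [block p];
rx = tx; rx(3) = ~rx(3);                   % flip one bit in transit
errorDetected = mod(sum(rx), 2) ~= 0       % true: the single error is detected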
If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger
(n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. This way, if
one receives an (n+d+1)-bit word that doesn't match any word in the mapping (with a Hamming
distance x <= d+1 from any word in the mapping), it can successfully be detected as an erroneous
word. Moreover, d or fewer errors will never transform a valid word into another, because the
Hamming distance between valid words is at least d+1; such errors only lead to invalid words that
are detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors successfully
using the above method on every n-bit word. In fact, we can detect a maximum of m*d errors if every
n-bit word is transmitted with at most d errors.
Automatic Repeat-request (ARQ) is an error control method for data transmission which makes use
of error detection codes, acknowledgment and/or negative acknowledgement messages and
timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the
receiver to the transmitter to indicate that it has correctly received a data frame. Usually, when the
transmitter does not receive the acknowledgment before the timeout occurs (i.e. within a reasonable
amount of time after sending the data frame), it retransmits the frame until it is either correctly
received or the error persists beyond a predetermined number of retransmissions.
A few types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ and Selective Repeat
ARQ. Hybrid ARQ is a combination of ARQ and forward error correction.
Error-correcting code
An error-correcting code (ECC) or forward error correction (FEC) code is redundant data that is
added to the message on the sender side. If the number of errors is within the capability of the code
being used, the receiver can use the extra information to discover the locations of the errors and
correct them. Since the receiver does not have to ask the sender for retransmission of the data, a
back-channel is not necessary in forward error correction, so it is suitable for simplex
communication such as broadcasting. Error-correcting codes are used in computer data storage, for
example in CDs, DVDs and dynamic RAM. They are also used in digital transmission, especially
wireless communication, since wireless communication without FEC would often suffer from
packet-error rates close to 100%, and conventional automatic repeat-request error control would
yield very low goodput.
The Hamming code technique, an error-detection and error-correction technique, was proposed by
R.W. Hamming. Whenever a data packet is transmitted over a network, there is a possibility that the
data bits may get lost or damaged during transmission.
Let's understand the Hamming code concept with an example: Let's say you have received a 7-bit
Hamming code which is 1011011.
First, let us talk about the redundant bits. The redundant bits are some extra binary bits that are not
part of the original data, but they are generated & added to the original data bit. All this is done to
ensure that the data bits don't get damaged and if they do, we can recover them.
Now the question arises: how do we determine the number of redundant bits to be added? We use the
formula 2^r >= m + r + 1, where r = number of redundant bits and m = number of data bits.
For the received 7-bit Hamming code, the formula gives 4 data bits and 3 redundancy bits
(2^3 = 8 >= 4 + 3 + 1 = 8).
As we go through the example, the first step is to identify the bit position of the data & all the bit
positions which are powers of 2 are marked as parity bits (e.g. 1, 2, 4, 8, etc.). The following image
will help in visualizing the received hamming code of 7 bits.
First, we need to detect whether there are any errors in this received hamming code.
Step 1: For checking parity bit P1, use the check-one-and-skip-one method: starting from P1, skip
P2, take D3, skip P4, take D5, skip D6 and take D7. This way we have the following bits.
As we can observe, the number of 1s is odd, so we write the value of the parity bit as P1 = 1. This
means an error is present.
Step 2: Check for P2. While checking for P2, we use the check-two-and-skip-two method, which gives
us the following data bits. Remember, since we are checking for P2, we start our count from P2
(P1 should not be considered).
As we can observe, the number of 1s is even, so we write the value of P2 = 0. This means there is
no error in this group.
Step 3: Check for P4. While checking for P4, we use the check-four-and-skip-four method, which
gives us the following data bits. Remember, since we are checking for P4, we start our count from
P4 (P1 & P2 should not be considered).
As we can observe, the number of 1s is odd, so we write the value of P4 = 1. This means an error is
present.
So, from the above parity analysis, P1 & P4 are not equal to 0, so we can clearly say that the
received Hamming code has errors. In general the check bits are obtained by the following
equations (⊕ denotes modulo-2 addition):
P1 = D3 ⊕ D5 ⊕ D7
P2 = D3 ⊕ D6 ⊕ D7
P4 = D5 ⊕ D6 ⊕ D7
Since we found that the received code has an error, we must now correct it. The error word E is
formed from the parity checks as E = P4 P2 P1 = 101. The decimal value of this error word 101 is
5 (2^2 × 1 + 2^1 × 0 + 2^0 × 1 = 5). We get E = 5, which states that the error is in the fifth bit
position. To correct it, just invert the fifth bit. So the correct code will be 1 0 0 1 0 1 1.
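The three parity checks of this worked example can be reproduced in a few lines of MATLAB (bit positions numbered from the right, as in the text):

% Syndrome check of the received 7-bit Hamming code 1011011 from the text.
% Positions are numbered 1..7 from the right; 1, 2, 4 are parity bits.
rx = [1 0 1 1 0 1 1];                      % codeword as written, left to right
bitsLSB = fliplr(rx);                      % bitsLSB(k) = bit at position k
c1 = mod(sum(bitsLSB([1 3 5 7])), 2);      % check 1, skip 1 -> 1
c2 = mod(sum(bitsLSB([2 3 6 7])), 2);      % check 2, skip 2 -> 0
c4 = mod(sum(bitsLSB([4 5 6 7])), 2);      % check 4, skip 4 -> 1
E = c4*4 + c2*2 + c1                       % error position = 5
bitsLSB(E) = ~bitsLSB(E);                  % invert the fifth bit
corrected = fliplr(bitsLSB)                % corrected codeword: 1 0 0 1 0 1 1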
Procedure:
TITLE: Study of DSSS transmitter and receiver using suitable hardware setup/kit.
 OBJECTIVE: To study direct sequence spread spectrum BPSK modulation and demodulation
technique.
APPARATUS              :
                              Sr. No.     Apparatus                          Range
                              1.          DSSS kit
                              2.          DSO                                Dual Channel, 60 MHz
        THEORY: Spread-spectrum techniques were, and still are, used in military applications
because of their high security and their low susceptibility to interference from other parties. In
this technique, multiple users share the same bandwidth without significantly interfering with each
other. The spreading waveform is controlled by a pseudo-noise (PN) sequence, which is a binary
random sequence. This PN sequence is multiplied with the original baseband signal, which has a
lower frequency, yielding a spread waveform that has noise-like properties. In the receiver the
opposite happens: the passband signal is first demodulated and then despread using the same PN
waveform. An important factor here is the synchronization between the two generated sequences. The
following sections illustrate the design process of such a system, leading to a full circuit
design.
Pseudo Noise (PN)
        As mentioned earlier, the PN sequence is the key factor in DS-SS systems. A pseudo-noise or
pseudorandom sequence is a binary sequence with an autocorrelation that resembles, over a period,
the autocorrelation of a random binary sequence. It is generated using a shift register with a
combinational logic circuit as its feedback; the logic circuit determines the PN words. This design
uses the so-called maximum-length PN sequence: a sequence of period 2^m − 1 generated by a linear
feedback shift register whose feedback logic consists only of modulo-2 adders (XOR gates). Some
properties of maximum-length sequences are:
        In each period of a maximum-length sequence, the number of 1s is always one more than
the number of 0s. This is called the Balance property.
Circuit Diagram:
        Among the runs of 1s and 0s in each period of such a sequence, one-half the runs of each
kind are of length one, one-fourth are of length two, one-eighth are of length three, and so on.
This is called the Run property.
        The autocorrelation function of such a sequence is periodic and binary valued. This is
called the Correlation property.
        A block diagram of a maximum-length PN generator is shown in the figure, with a 4-bit
register and one modulo-2 adder. This has a period of 2^4 − 1 = 15, and it is the configuration
used in this design, as shown later.
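A minimal MATLAB sketch of such a generator (the feedback taps are an assumption; the text does not specify which primitive polynomial the kit uses):

% Maximum-length PN sequence sketch: 4-bit LFSR with one modulo-2 adder
% (taps chosen for x^4 + x^3 + 1; an assumed primitive polynomial).
reg = [1 0 0 0];                           % nonzero initial state (assumed)
pn = zeros(1, 15);                         % period 2^4 - 1 = 15
for k = 1:15
    pn(k) = reg(4);                        % output bit
    fb = xor(reg(4), reg(3));              % single XOR feedback
    reg = [fb reg(1:3)];                   % shift
end
disp(pn)                                   % one full period
sum(pn)                                    % = 8: one more 1 than 0s (Balance)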
Direct Sequence - Spread Spectrum
The transmitted DS-SS BPSK signal can be written as

s(t) = m(t) p(t) cos(2πfct + θ)   (1.1)

        where m(t) is the data sequence, p(t) is the PN spreading sequence, fc is the carrier
frequency, and θ is the carrier phase angle at t = 0. Each symbol in m(t) represents a data symbol
and has a duration of Ts. Each pulse in p(t) represents a chip and has a duration of Tc. The
transitions of the data symbols and chips coincide, such that the ratio Ts to Tc is an integer. The
waveforms m(t) and p(t) are shown in fig.(1.3). Here we notice the higher frequency of the
spreading signal p(t). The resulting spread signal is then modulated using the BPSK scheme. The
carrier frequency fc should be at least 5 times the chip rate of p(t).
        In the demodulator section we simply reverse the process: we demodulate the BPSK signal
first, low-pass filter the signal, and then despread the filtered signal to obtain the original
message. The process is described by the following equations:
Circuit Diagram:
        As shown in eq.(1.2) and eq.(1.3), when we multiply two cosine signals together we obtain
two terms, one of which has twice the frequency of the original message; this part can be removed
by an LPF. The output is mss(t) as shown in fig.(1.4). This design is based on coherent-detection
BPSK, so we don't have to worry about carrier synchronization issues.
        As for the PN sequence in the receiver, it was mentioned earlier that it should be an exact
replica of the one used in the transmitter, with no delay, because a delay might cause severe
errors in the incoming message. After the signal is multiplied with the PN sequence, the signal
despreads and we obtain the original bit signal m(t) that was transmitted. The block diagram of the
receiver is shown in fig.(1.4).
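A baseband MATLAB sketch of the spreading and despreading operations (PN pattern and chips-per-symbol ratio are assumed; the BPSK up/down-conversion is omitted):

% Baseband DS-SS sketch: spread with a PN chip sequence, then despread
% with a synchronized replica (Ts/Tc = 7 assumed).
data = [1 -1 1];                           % antipodal data symbols m(t)
pn = [1 -1 -1 1 1 1 -1];                   % one PN period per symbol (assumed)
chips = kron(data, pn);                    % spreading: m(t).*p(t)
rx = chips;                                % ideal channel for illustration
despread = rx .* kron(ones(1,numel(data)), pn);        % multiply by same PN
recovered = mean(reshape(despread, numel(pn), []), 1)  % = 1 -1 1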
PROCEDURE:
   9) Observe the o/p of the 1496 squaring circuit & the o/p of the band-pass filter. Adjust it
       properly using the pot provided near the B.P. filter section.
   10) Connect o/p frequency divider to i/p 1 of PSK receiver.
   11) Observe o/p of PSK receiver & o/p of filter & comparator.
   12) Connect o/p of filter & comparator to receiver multiplier block & also connect PN sequence
   to receiver multiplier block.
   13) Observe o/p of receiver multiplier block which is our transmitted pattern.
M-ary PSK
M-ary Encoding: The word binary denotes two bits. M denotes a number corresponding to the number of
conditions, levels, or combinations possible for a given number of binary variables. M-ary is the
type of digital modulation used for data transmission in which, instead of one bit, two or more
bits are transmitted at a time. Since a single signal element carries multiple bits, the required
channel bandwidth is reduced.
M-ary Equation
If a digital signal is given four conditions, such as voltage levels, frequencies, phases or
amplitudes, then M = 4. The number of bits necessary to produce a given number of conditions is
expressed mathematically as

N = log2 M

where N is the number of bits necessary and M is the number of conditions, levels, or combinations
possible with N bits. The above equation can be rearranged as 2^N = M. For instance, with two bits,
2^2 = 4 conditions are possible. In BPSK we transmit each bit individually depending on whether
b(t) is logic 0 or logic 1.
We transmit one or the other sinusoid for the bit time Tb, the sinusoids differing in phase by
2π/2 = 180°. In QPSK, two bits are grouped, depending on which of four two-bit words develops, and
we transmit one of four sinusoids of duration 2Tb, the sinusoids differing in phase by
2π/4 = 90°. The scheme can be extended to N bits: the N bits are grouped into N-bit symbols
extending over the time NTb.
There are 2^N = M possible symbols; this is defined as M-ary PSK.
M-ary PSK: The M symbols are represented by sinusoids of duration NTb = Ts. The M symbols differ
from one another in phase by 2π/M.
The waveforms of eq.(1) are represented by the dots in the signal-space figure; the coordinate axes
are orthogonal waveforms.
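A minimal sketch of this M-ary phase mapping (zero phase offset assumed):

% M-ary PSK symbol mapping sketch: N bits select one of M = 2^N phases
% spaced 2*pi/M apart.
N = 3; M = 2^N;                            % 8-PSK as an example
m = 0:M-1;                                 % symbol indices
phases = 2*pi*m/M;                         % phase of each sinusoid
constellation = exp(1j*phases);            % points on the unit circle
disp([m.' rad2deg(phases).'])              % symbol index vs phase in degrees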
M-ary PSK Modulator: At the transmitter the bit stream is applied to a serial-to-parallel
converter.
   •   The scheme for generating the carrier at the demodulator is known as the carrier recovery
       circuit. It consists of a device to raise the received signal to the Mth power,
   •   a BPF whose passband is centred around Mf0, and a frequency divider.
   •   Only the waveforms of the signals at the outputs of the carrier recovery circuit are
       relevant, not their amplitudes.
   •   The recovered carrier is then multiplied with the received signal, which is then applied to
       integrators.
   •   The integrators extend their integration over the same time period; again a bit
       synchronizer is needed.
   •   The integrator outputs are voltages whose amplitudes are proportional to TsPe and TsPo
       respectively, and they change at the symbol rate.
   •   Finally TsPe and TsPo are applied to an A/D converter, which reconstructs the digital N-bit
       signal that constitutes the transmitted signal.
Conclusion:
It is possible to combine ASK, FSK and PSK. In particular, ASK and PSK can be combined to create
quadrature amplitude modulation (QAM). Quadrature Amplitude Modulation or QAM (pronounced "kwam")
is a digital modulation technique that uses the data to be transmitted to vary both the amplitude and
the phase of a sinusoidal waveform, while keeping its frequency constant. QAM is a natural
extension of binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK), both of
which vary only the phase of the waveform. The number of different waveforms (unique
combinations of amplitude and phase) used in QAM depends on the modem and may vary with the
quality of the channel. With 16-QAM, for example, 16 different waveforms are available. 64-QAM
and 256-QAM are also common. In all cases, each different waveform, or amplitude-phase
combination, is a modulation symbol that represents a specific group of bits. In the modulator,
consecutive data bits are grouped together four at a time to form quad bits and each quad bit is
represented by a different modulation symbol. In the demodulator, each different modulation
symbol in the received signal is interpreted as a unique pattern of 4 bits. Figure 7.1 shows all 16
QAM modulation symbols superposed on the same axes. Four different colours are used in the
figure and each colour is used for four different waveforms. Each waveform has a different
combination of phase and amplitude.
QAM constellations Figure 7.2 shows the constellation diagram for 16-QAM. The constellation
diagram is a pictorial representation showing all possible modulation symbols (or signal states) as a
set of constellation points. The position of each point in the diagram shows the amplitude and the
phase of the corresponding symbol. Each constellation point corresponds (is mapped to) to a
different quadbit.
Figure 7.2. 16-QAM constellation (4-bits per modulation symbol). Although any mapping between
quadbits and constellation points would work under ideal conditions, the mapping usually uses a
Gray code to ensure that the quadbits corresponding to adjacent constellation points differ only by
one bit. This facilitates error correction since a small displacement of a constellation point due to
noise will likely cause only one bit of the demodulated quadbit to be erroneous.
A typical QAM modulator
A QAM signal can be generated by independently amplitude-modulating two carriers in quadrature
(cos ωt and sin ωt), as shown in Figure 7.3.
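A minimal sketch of this quadrature modulator for one symbol interval (carrier frequency and symbol timing are assumed; the listing that follows it is a separate BER-simulation fragment):

% Quadrature modulator sketch (the Figure 7.3 idea): two amplitude-modulated
% carriers in quadrature; the levels for one 16-QAM symbol are illustrative.
fc = 4e3; fs = 64e3; Tsym = 1e-3;          % carrier, sample rate, symbol time (assumed)
t = 0:1/fs:Tsym-1/fs;
Iamp = 3; Qamp = -1;                       % one of the 16 level pairs from {-3,-1,1,3}
s = Iamp*cos(2*pi*fc*t) - Qamp*sin(2*pi*fc*t);  % s(t) = I cos(wt) - Q sin(wt)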
    % -- fragment: the earlier part of this 16-QAM BER script (bit generation,
    % Gray-coded level tables Re/Im and decode table 'ind', modulation, AWGN,
    % the loop over bdB, and slicing of the real part into 'm') is not shown --
    n = 2*floor(r_im/2)+1;        % slice imaginary part to the nearest odd level
    n(n>max(Im)) = max(Im);       % clip to the outermost constellation levels
    n(n<min(Im)) = min(Im);
    % To Decimal conversion
    oRe = ind(floor((m+4)/2+1))-1;
    oIm = ind(floor((n+4)/2+1))-1;
    % To binary string
    pRe = dec2bin(oRe,k/2);
    pIm = dec2bin(oIm,k/2);
    % binary string to number
    pRe = pRe.';
    pRe = pRe(1:end).';
    pRe = reshape(str2num(pRe).',k/2,N).' ;
       pIm = pIm.';
    pIm = pIm(1:end).';
    pIm = reshape(str2num(pIm).',k/2,N).' ;
    % counting errors for real and imaginary
    Err(i) = size(find([cRe- pRe]),1) + size(find([cIm - pIm]),1) ;
end
sBer = Err/(N*k);
tBer = (1/k)*3/2*erfc(sqrt(k*0.05*(10.^(bdB/10))));
% plot
figure
semilogy(bdB,tBer,'rs-','LineWidth',2);
hold on
semilogy(bdB,sBer,'kx-','LineWidth',2);
grid on
legend('theory', 'simulation');
xlabel('SNR dB')
ylabel('Bit Error Rate')
title('BER VS SNR')
Conclusion:
The information about the bit stream is contained in the changes of phase of the transmitted signal.
A synchronous demodulator would be sensitive to these phase reversals. The appearance of a BPSK
signal in the time domain is shown in Figure 8.2 (lower trace). The upper trace is the binary
message sequence.
% Recovering of Data
% (the preamble below is a reconstruction so this fragment runs: the original
% listing's bit generation, BPSK mapping and AWGN loop were missing)
nr_data_bits = 10000;                    % assumed number of bits
b = rand(1,nr_data_bits) > 0.5;          % random message bits
s = 1 - 2*b;                             % BPSK mapping: 0 -> +1, 1 -> -1
BER1 = []; SNR1 = [];
for SNR = 0:10                           % Eb/No in dB
snbpsk = s + sqrt(1/(2*10^(SNR/10)))*(randn(1,nr_data_bits)+1i*randn(1,nr_data_bits));
%receiver
r=snbpsk;
bhat=[real(r)<0];
bhat=bhat(:)';
bhat1=bhat;
ne=sum(b~=bhat1);
BER=ne/nr_data_bits;
BER1=[BER1 BER];
SNR1=[SNR1 SNR];
end
% Plotting of BER graph of BPSK
figure(3);
semilogy(SNR1,BER1,'-*');
grid on;
xlabel('SNR=Eb/No(db)');
ylabel('BER');
title('Simulation of BER for BPSK ');
legend('BER-simulated');
OUTPUT:
Conclusion:
                Huffman Coding
       Huffman coding uses a principle similar to that of the Shannon-Fano algorithm. This type of
coding makes the average number of digits per message approach the entropy. The messages are
arranged in order of decreasing probability. The two messages of lowest probability are assigned
binary 0 and 1.
       These two probabilities are then added and the sum is placed back in the list so that the
probabilities remain in decreasing order. Again 0 and 1 are assigned to the last two probabilities,
and this goes on till the final stage. The code is read in reverse order by following the links for
a particular message.
ALGORITHM (Shannon-Fano)
4.     Partition the messages into two subsets of nearly equal probability and continue till one
element remains in each subset.
7. Stop.
                 Huffman Coding
1.     Start.
4.     Go on adding the two minimum probabilities and assign binary values 0 or 1 to each level.
       Continue this till all probabilities are exhausted.
#include <stdio.h>
#include <conio.h>
#include <string.h>
struct node
{
char sym[10];
float pro;
int arr[20];
int top;
}s[20];
typedef struct node node;
void prints(int l,int h,node s[])
{
int i;
for(i=l;i<=h;i++)
{
printf("\n%s\t%f",s[i].sym,s[i].pro);
}
}
/* Shannon-Fano coding: recursively split s[l..h] into two groups of
   nearly equal total probability, appending 1 to the codes of one
   group and 0 to the codes of the other */
void shannon(int l,int h,node s[])
{
float pack1=0,pack2=0,diff1=0,diff2=0;
int i,k,j;
if((l+1)==h || l==h || l>h)
{
if(l==h || l>h)
return;
s[h].arr[++(s[h].top)]=0;
s[l].arr[++(s[l].top)]=1;
return;
}
else
{
for(i=l;i<=h-1;i++)
pack1=pack1+s[i].pro;
pack2=pack2+s[h].pro;
diff1=pack1-pack2;
if(diff1<0)
diff1=diff1*-1;
j=2;
while(j!=h-l+1)   /* find the split point k that best balances the halves */
{
k=h-j;
pack1=pack2=0;
for(i=l;i<=k;i++)
pack1=pack1+s[i].pro;
for(i=h;i>k;i--)
pack2=pack2+s[i].pro;
diff2=pack1-pack2;
if(diff2<0)
diff2=diff2*-1;
if(diff2>=diff1)
break;
diff1=diff2;
j++;
}
k++;
for(i=l;i<=k;i++)      /* append 1 to the upper group's codes */
s[i].arr[++(s[i].top)]=1;
for(i=k+1;i<=h;i++)    /* append 0 to the lower group's codes */
s[i].arr[++(s[i].top)]=0;
shannon(l,k,s);
shannon(k+1,h,s);
}
}
void main()
{
int n,i,j;
float x,total=0;
char ch[10];
node temp;
clrscr();
printf("Enter How Many Symbols Do You Want To Enter\t: ");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("Enter symbol %d ---> ",i+1);
scanf("%s",ch);
strcpy(s[i].sym,ch);
}
for(i=0;i<n;i++)
{
printf("\n\tEnter probability for %s ---> ",s[i].sym);
scanf("%f",&x);
s[i].pro=x;
total=total+s[i].pro;
if(total>1)   /* probabilities must not sum past 1 */
{
printf("\t\tThis probability is not possible. Enter new probability");
total=total-s[i].pro;
i--;
}
}
s[i].pro=1-total;
for(j=1;j<=n-1;j++)   /* bubble sort into increasing probability order */
{
for(i=0;i<n-1;i++)
{
if((s[i].pro)>(s[i+1].pro))
{
temp.pro=s[i].pro;
strcpy(temp.sym,s[i].sym);
s[i].pro=s[i+1].pro;
strcpy(s[i].sym,s[i+1].sym);
s[i+1].pro=temp.pro;
strcpy(s[i+1].sym,temp.sym);
}
}
}
for(i=0;i<n;i++)
s[i].top=-1;
shannon(0,n-1,s);   /* build the codes */
printf("\n\n\n\tSymbol\tProbability\tCode");
for(i=n-1;i>=0;i--)
{
printf("\n\t%s\t%f\t",s[i].sym,s[i].pro);
for(j=0;j<=s[i].top;j++)
printf("%d",s[i].arr[j]);
}
getch();
}
/********************* OUTPUT **************************
Enter How Many Symbols Do You Want To Enter : 6
Enter symbol 1 ---> a
Enter symbol 2 ---> b
Enter symbol 3 ---> c
Enter symbol 4 ---> d
Enter symbol 5 ---> e
Enter symbol 6 ---> f
Enter probability for a ---> 0.3
Enter probability for b ---> 0.25
Enter probability for c ---> 0.20
Enter probability for d ---> 0.12
==========================================
Output
hcode =
0 0 1 0 1 1 0 0 0
dhsig =
1 2 3 4
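The hcode and dhsig values above have the shape of output from MATLAB's Huffman routines; a minimal sketch that produces output of this form, assuming the Communications Toolbox and illustrative probabilities:

% Minimal Huffman encode/decode sketch (assumes Communications Toolbox).
symbols = 1:4;                             % source alphabet (assumed)
prob = [0.4 0.3 0.2 0.1];                  % illustrative probabilities
dict = huffmandict(symbols, prob);         % build the Huffman dictionary
hcode = huffmanenco([1 2 3 4], dict)       % encoded bit stream
dhsig = huffmandeco(hcode, dict)           % decodes back to 1 2 3 4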
Conclusion:
AIM: Write a program for the determination of various entropies and mutual information of a given
channel. Test various types of channels such as:
a) Noise-free channel
b) Error-free channel
c) Binary symmetric channel
d) Noisy channel
Compare the channel capacity of the above channels.
Apparatus: PC, C or MATLAB software, Printer.
THEORY
       The information emitted by a discrete memoryless source is related to the inverse of the
probability of occurrence:

                I(xi) = log2( 1/P(xi) )

       The entropy of a discrete memoryless source is the measure of the average information
content per source symbol and is given by the expression

                H(X) = Σ(i=1..n) P(xi) log2( 1/P(xi) )
       Consider a memoryless channel where xi is the transmitted message and yj is the received
message. If noise is present in the system, then the uncertainty about the transmission xi when yj
is received is log( 1/P(xi/yj) ), and

                H(X/Y) = Σ(i=1..n) Σ(j=1..m) P(xi,yj) log( 1/P(xi/yj) )

Similarly,

                H(Y/X) = Σ(i=1..n) Σ(j=1..m) P(xi,yj) log( 1/P(yj/xi) )

                H(X,Y) = Σ(i=1..n) Σ(j=1..m) P(xi,yj) log( 1/P(xi,yj) )

H(X,Y) is the joint entropy; H(X/Y) and H(Y/X) are the conditional entropies.
Entropy is a measure of the average information content per source symbol.
Mutual Information
The mutual information I(X;Y) of a channel is

                I(X;Y) = H(X) − H(X/Y)
Information Channels
An information channel is characterized by an input range of symbols {x1, x2, . . . , xU }, an output
range {y1, y2, . . . , yV } and a set of conditional probabilities P(yj /xi ) that determines the
relationship between the input xi and the output yj . This conditional probability corresponds to that
of receiving symbol yj if symbol xi was previously transmitted.
The set of probabilities P(yj/xi) is arranged into a matrix Pch that completely characterizes the
corresponding discrete channel:
Pij = P(yj/xi)
Lossless Channel
A channel described by a channel matrix with only one nonzero element in each column is called a
lossless channel. In the lossless channel no source information is lost in transmission.
Deterministic Channel
A channel described by a channel matrix with only one nonzero element in each row is called a
deterministic channel.
     Since each row has only one nonzero element, this element must be unity.
     Given the input, it is known with certainty which output symbol will be received.
Noiseless Channel
    Both lossless & deterministic.
    The channel matrix has only one element in each row and in each column, and this element
       is unity.
    The input and output are of the same size; that is, m = n
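The quantities computed by the C programs below can be expressed compactly in MATLAB; a minimal sketch for the binary symmetric channel (input distribution and crossover probability are illustrative):

% Entropies and mutual information of a binary symmetric channel,
% mirroring the C program below (p = P(x=0), a = crossover probability).
p = 0.5; a = 0.1;                          % illustrative values
Pxy = [p*(1-a) p*a; (1-p)*a (1-p)*(1-a)];  % joint matrix P(x,y)
Px = sum(Pxy,2); Py = sum(Pxy,1);          % marginals
H = @(v) -sum(v(v>0).*log2(v(v>0)));       % entropy helper
Hx = H(Px); Hy = H(Py); Hxy = H(Pxy(:));
Hxby = Hxy - Hy;                           % H(X/Y)
Ixy = Hx - Hxby                            % mutual information I(X;Y)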
ALGORITHMS
2. Read P(xi,yi).
               H(X) = Σ(i=1..n) P(xi) log2( 1/P(xi) )
               H(Y) = Σ(j=1..n) P(yj) log2( 1/P(yj) )
               H(X/Y) = Σ(i=1..n) Σ(j=1..m) P(xi,yj) log( 1/P(xi/yj) )
7. Stop.
3. Read P(xi).
8. Stop.
#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
  float Hx,Hy,Hxy,Hxby,Hybx,M,Ixy,p1,p2,A,B,L,E,j1,j2,j3,j4;
  clrscr();
  printf("\n BINARY SYMMENTRY CHANNEL\n");
  printf("\n Enter the prob:-");
  scanf("\n%f",&p1);
  printf("\n Enter the conditional prob:-");
  scanf("\n%f",&A);
  p2=1-p1;
  B=1-A;
  M=1/(log10(2));
  L=p1+A-(2*p1*A);
  E=1-L;
  j1=p1*B;
  j2=p1*A;
  j3=p2*A;
  j4=p2*B;
  Hx=(p1*log10(1/p1)*M)+(p2*log10(1/p2)*M);
  Hy=((L*log10(1/L)*M)+(E*log10(1/E)*M));
  Hxy=(j1*log10(1/j1)*M)+(j2*log10(1/j2)*M)+(j3*log10(1/j3)*M)+(j4*log10(1/j4)*M);
  Hxby=Hxy-Hy;
  Hybx=Hxy-Hx;
  Ixy=Hx-Hxby;
  printf("\n\tentropy Hx=%f\tbits/symbol\n\n",Hx);
 printf("\n\tentropy Hy=%f\tbits/symbol\n\n",Hy);
 printf("\n\tentropy Hxy%f\tbits/symbol\n\n",Hxy);
 printf("\n\tentropy Hxby=%f\tbits/symbol\n\n",Hxby);
 printf("\n\tentropy Hybx=%f\tbits/symbol\n\n",Hybx);
 printf("\n\tmutual information Ixy=%f\n",Ixy);
 getch();
 }
#include<stdio.h>
#include<conio.h>
#include<math.h>
#define MAX 10
void main()
{
  float Hx,Hy,Hxy,Hxby,Hybx,p1,p2,A,B,C,D,L,M,WL,WA,WB,I,E,j1,j2,j3,j4;
  clrscr();
  printf("\n BINARY NONSYMMENTRY CHANNEL\n");
  printf("\n Enter the prob:-");
  scanf("\n%f",&p1);
  printf("\n Enter the conditional prob:-");
  scanf("\n%f%f",&A,&B);
  p2=1-p1;
  M=1/(log10(2));
  L=B+((1-A-B)*p1);
  C=1-A;
  D=1-B;
  E=1-L;
  j1=p1*A;
  j2=p1*D;
  j3=p2*C;
  j4=p2*B;
WL=((L*log10(1/L)*M)+(E*log10(1/E)*M));
WA=(A*log10(1/A)*M)+(C*log10(1/C)*M);
WB=(B*log10(1/B)*M)+(D*log10(1/D)*M);
Hx=(p1*log10(1/p1)*M)+(p2*log10(1/p2)*M);
Hy=WL;   /* H(Y): entropy of the output distribution (L, 1-L) */
Hxy=(j1*log10(1/j1)*M)+(j2*log10(1/j2)*M)+(j3*log10(1/j3)*M)+(j4*log10(1/j4)*M);
Hxby=Hxy-Hy;
Hybx=Hxy-Hx;
I=(WL-(p1*WA)-(p2*WB));   /* I(X;Y) = H(Y) - H(Y/X) */
printf("\n\tHx=%f\n\n",Hx);
printf("\n\tHy=%f\n\n",Hy);
printf("\n\tentropy Hxy=%f\tbits/symbol\n\n",Hxy);
printf("\n\tentropy Hxby=%f\tbits/symbol\n\n",Hxby);
printf("\n\tentropy Hybx=%f\tbits/symbol\n\n",Hybx);
printf("\n\tmutual information=%f\n",I);
getch();
}
Conclusion:
AIM: a. Given a generator matrix, write a program to generate a linear block code.
b. Given a generator matrix, write a program to decode and correct errors.
THEORY
                 * Coding
       For an (n, k) block code, a generator matrix G of order (k × n) is used. The generator
matrix can be divided into an identity matrix and a parity matrix, G = [Ik | P]. The code vector is
obtained by multiplying the data word with the generator matrix:
[ x1 x2 ... xn ] = [ d1 d2 ... dk ] G
                  * Decoding
       Let us assume that RT is the received code and G is the transmitted code vector. If an
error occurred, then
RT = G ⊕ e
S = RT HT
= e HT
where e is the error vector.
ALGORITHM
                  * Coding
1.     Start
2.     Input parity and order of codes.
3.     Input the data word and multiply it with the generator matrix.
4.     Generate code and print them.
5.     Stop.
                  *   Decoding
1.      Start.
2.      Input parity and order of code.
3.      Input received code.
4.      Calculate syndrome for received code.
5.      Calculate syndrome for single bit error.
6.      Compare the two syndromes to locate and correct the error.
7.      Display corrected value if error is present else print the received code is correct.
8.      Stop.
TEST RESULTS
                 * Coding
Consider a (6,3) block code
                       100011
               G=      010101
                       001111
        Data          Code
        000           000 000
        001           001 111
        010           010 101
        011           011 010
        100           100 011
        101           101 100
        110           110 110
        111           111 001
                   * Decoding
     Given (n, k) = 7,4
                1000110
         G= 0100011
            0010101
            0001111
        HT = 1 1 0
              011
              101
              111
              100
              010
              001
clc;
clear all;
close all;
%%coding of linear block code
n = 7; k = 4;
genmat = [1 0 0 0 1 1 0; 0 1 0 0 0 1 1; 0 0 1 0 1 1 1; 0 0 0 1 1 0 1 ];
disp('genmat=')
disp(genmat)
msg = [0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1];
code = encode(msg,n,k,'linear',genmat);
msg
code
%% decoding of linear block code
parmat = gen2par(genmat)
evt = syndtable(parmat); % Produce decoding table.
disp('Error vector table')
disp(evt)
recd = [0 0 0 1 1 0 0] % Suppose this is the received vector.
syndrome = rem(recd * parmat',2);
syndrome_de = bi2de(syndrome,'left-msb'); % Convert to decimal.
disp(['Syndrome = ',num2str(syndrome_de),...
     ' (decimal), ',num2str(syndrome),' (binary)'])
corrvect = evt(1+syndrome_de,:) % Correction vector
% Now compute the corrected codeword.
correctedcode = rem(corrvect+recd,2)
SAMPLE OUTPUT
genmat=
   1   0 0  0          1    1    0
   0   1 0  0          0    1    1
   0   0 1  0          1    1    1
   0   0 0  1          1    0    1
msg =
   0   0 0  0
   0   0 0  1
   0   0 1  0
   0   0 1  1
code =
   0     0   0    0    0    0    0
   0     0   0    1    1    0    1
   0     0   1    0    1    1    1
   0     0   1    1    0    1    0
parmat =
   1     0   1    1    1    0    0
   1     1   1    0    0    1    0
   0     1   1    1    0    0    1
recd =
0 0 0 1 1 0 0
corrvect =
0 0 0 0 0 0 1
correctedcode =
0 0 0 1 1 0 1
Conclusion:
THEORY
*       Coding
        Cyclic codes are a subclass of linear block codes. They have the property that a cyclic
shift of one codeword produces another codeword.
Here X = (xn-1, xn-2, ..., x1, x0) is the code vector, where xn-1, ..., x1, x0 represent the
individual code bits. If this code is shifted cyclically, another code vector is obtained.
        A linear code is called a cyclic code if every cyclic shift of a code vector produces
another code vector.
*      Decoding
       In a cyclic code also, some errors may occur during transmission. Syndrome decoding can be
used to detect and correct the errors; the received code vector is represented by Y.
The shift register generates a q-bit syndrome vector. Initially all shift register contents are
zero. The received code Y is entered one bit at a time. The contents of the flip-flops of the shift
register keep changing depending on the input bits and the polynomial bits.
       There are two cases: if the polynomial bit is zero, the register contents are simply
shifted ahead; otherwise the entries are XORed with the feedback bit and then shifted.
The syndrome vector is given by S = (Sq-1, Sq-2, Sq-3, ..., S1, S0).
ALGORITHMS
*      Coding
1.     Enter order of cyclic code as (n, k).
2.     Enter input message bits as n.
3.     Enter coefficient of generator polynomial g(x).
4.     Initialize q = n-k shift register with zero status.
5.     Input the first message bit.
6.     Close on feedback switch
7.     If the generator coefficient is zero then simply shift by one bit position; otherwise XOR
with the feedback bit and shift.
8.     Input the next message bit and repeat the same procedure.
9.     When all message bits are transmitted then transmit check bits.
*      Decoding
1.     Enter order of code (n, k).
2.     Enter generator polynomial coeff.
3.     Enter received data code bits Y.
4.     Initialize all shift registers S0, S1, ..., Sq-1 to zero status.
5.     Input the message bits one by one, where
       S0 = Y1 ⊕ Sq-1
       S1 = S0 ⊕ Sq-1
       S2 = S1 ⊕ Sq-1
6.     The bits are shifted by one bit and output bit is transmitted.
7.     Input next message bit and output is obtained.
8.     When all input bits have been entered, the shift registers hold the syndrome vector.
RESULTS
*      Coding
       Order of polynomial = 3
       Generator polynomial coefficients:
              P3 = 1, P2 = 0, P1 = 1, P0 = 1   ( g(x) = x^3 + x + 1 )
       Order of Cyclic code (7,4)
       Message bits 1 1 0 0
       here q = n-k = 3
       there are 3 shift registers
       Code vector is X = ( M3 M2 M1 M0 C2 C1 C0 )
       X = [ 1 1 0 0 0 1 0 ]
*       Decoding
        Code order 7,4
        Polynomial coefficient 1 0 1 1
        Y=1001101
       Shift    Y bit    S0 (Y ⊕ S2)    S1 (S0 ⊕ S2)    S2 (= S1)
        -        -        0              0               0
        1        1        1              0               0
        2        0        0              1               0
        3        0        0              0               1
        4        1        0              1               0
        5        1        1              0               1
        6        0        1              0               0
        7        1        1              1               0

S = ( S2, S1, S0 ) = ( 0, 1, 1 )
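The shift-by-shift table above can be reproduced with a short MATLAB sketch of the syndrome register:

% Syndrome register sketch for the worked example: g = 1 0 1 1
% (x^3 + x + 1), received vector Y = 1 0 0 1 1 0 1, expected S = (0,1,1).
Y = [1 0 0 1 1 0 1];
S = [0 0 0];                               % [S0 S1 S2], initially zero
for k = 1:numel(Y)
    S = [xor(Y(k), S(3)), xor(S(1), S(3)), S(2)];  % one shift of the register
end
fprintf('S = (S2,S1,S0) = (%d,%d,%d)\n', S(3), S(2), S(1))  % (0,1,1)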
% -- fragment: the earlier part of this listing (division of the received
% polynomial rp by the generator gp giving quotient q and remainder r, the
% received vector rm, and the error-pattern table trt from syndtable) is not shown --
disp(r)
syndrome=sym2poly(r)   % remainder coefficients = the syndrome
syndrome_de = bi2de(syndrome,'left-msb'); % Convert to decimal.
disp(['Syndrome = ',num2str(syndrome_de),...
    ' (decimal), ',num2str(syndrome),' (binary)'])
corrvect = trt(1+syndrome_de,:) % Correction vector
% Now compute the corrected codeword.
correctedcode = xor(rm,corrvect)
SAMPLE OUTPUT
gpm =
1 1 0 1
gp = x^3+x^2+1
msg =
   0     0   0   0
   0     0   0   1
   0     0   1   0
   0     0   1   1
   0     1   0   0
   0     1   0   1
   0     1   1   0
   0     1   1   1
   1     0   0   0
   1     0   0   1
   1     0   1   0
   1     0   1   1
   1     1   0   0
   1     1   0   1
   1     1   1   0
   1     1   1   1
code =
   0     0   0   0   0   0   0
   1     0   1   0   0   0   1
   1     1   1   0   0   1   0
   0     1   0   0   0   1   1
   0     1   1   0   1   0   0
   1     1   0   0   1   0   1
   1     0   0   0   1   1   0
   0     0   1   0   1   1   1
   1     1   0   1   0   0   0
   0     1   1   1   0   0   1
   0   0     1   1    0    1    0
   1   0     0   1    0    1    1
   1   0     1   1    1    0    0
   0   0     0   1    1    0    1
   0   1     0   1    1    1    0
   1   1     1   1    1    1    1
genmat =
   1 0       0   0    1    1    0
   0 1       0   0    0    1    1
   0 0       1   0    1    1    1
   0 0       0   1    1    0    1
parmat =
   1 0       1   1    1    0    0
   1 1       1   0    0    1    0
   0 1       1   1    0    0    1
ht =
   1 1       0
   0 1       1
   1 1       1
   1 0       1
   1 0       0
   0 1       0
   0 0       1
rm =
1 1 0 1 1 0 1
rp = x^6+x^5+x^3+x^2+1
q= x^3
r = 1+x^2
syndrome polynomial
S = 1+x^2
syndrome =
   1 0 1
Syndrome = 5 (decimal), 1 0 1 (binary)
corrvect =
   0 0 0 1 0 0 0
correctedcode =
1 1 0 0 1 0 1
Conclusion: