
RN SHETTY TRUST®

RNS INSTITUTE OF TECHNOLOGY


Autonomous Institution Affiliated to Visvesvaraya Technological University, Belagavi
Approved by AICTE, New Delhi, Accredited by NAAC with 'A+' Grade
Channasandra, Dr. Vishnuvardhan Road, Bengaluru - 560 098
Ph: (080) 28611880, 28611881 URL: www.rnsit.ac.in
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

Question Bank
SUBJECT CODE AND TITLE: BEC 503 DIGITAL COMMUNICATION
SCHEME: 2022        BATCH: 2023-27
SEMESTER & SECTION: 5th B
FACULTY NAME: Dr. Smitha N

Q. No.    Question    (Marks, BT Level, CO)
Module 1
1. Define the Hilbert transform and explain the interpretation of the Hilbert transformer in the time domain and the frequency domain. (7 M, L2, CO1)
2. State and prove the properties of the Hilbert transform. (8 M, L2, CO1)
3. Find the Hilbert transform of x(t) = A rect(t/T). (6 M, L3, CO1)
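
A short Python sketch that can be used to check the closed-form answer numerically; the window, grid and test points are arbitrary illustrative choices, and scipy.signal.hilbert is used only because the imaginary part of its analytic-signal output is the Hilbert transform.

    import numpy as np
    from scipy.signal import hilbert

    # Numerical check for the Hilbert transform of A*rect(t/T)
    # (A, T, the time grid and the test points are arbitrary choices).
    A, T = 1.0, 1.0
    t = np.linspace(-20, 20, 40001)          # wide window to limit FFT edge effects
    x = np.where(np.abs(t) <= T / 2, A, 0.0)
    x_hat = np.imag(hilbert(x))              # analytic signal = x + j*x_hat

    # Closed form: x_hat(t) = (A/pi) * ln| (t + T/2) / (t - T/2) |
    def closed_form(tt):
        return (A / np.pi) * np.log(np.abs((tt + T / 2) / (tt - T / 2)))

    for tt in (0.6, 1.0, 2.0):               # points away from the pulse edges
        i = np.argmin(np.abs(t - tt))
        print(tt, x_hat[i], closed_form(tt)) # the two values should agree closely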
4. For the band-pass signal s(t) = Ac cos[2π fc t + φ(t)] m(t), find: (a) the pre-envelope, (b) the complex envelope, and (c) the in-phase and quadrature components. (8 M, L3, CO1)
5. Obtain the canonical representation of a band-pass signal and draw the schematic block diagrams for deriving the in-phase and quadrature components of a band-pass signal, followed by its reconstruction from the same components. (8 M, L2, CO1)
6. Obtain the polar representation of a band-pass signal and draw the illustrating phasor diagrams. (7 M, L3, CO1)
7. (a) Consider a low-pass signal g(t) whose spectrum G(f) is defined for -W ≤ f ≤ W. Sketch the spectral contents of g+(t) and g-(t).
(b) Define the pre-envelope of a real-valued signal. Given a band-pass signal s(t), sketch the amplitude spectra of the signal s(t), its pre-envelope and its complex envelope. (7 M, L3, CO1)
8. — (8 M, L3, CO1)
9. Explain in brief the AWGN model of a digital communication system. (6 M, L1, CO1)
10. With the AWGN model of a channel, explain the Gram-Schmidt orthogonalization procedure. (7 M, L2, CO1)
11. Explain the geometric representation of a set of M energy signals as linear combinations of N orthonormal basis functions. Illustrate the case N = 2 and M = 3 with the necessary diagram and expressions. (7 M, L2, CO1)
12. Mention the useful relations of the geometric representation of signals as vectors. Also show that the energy of a signal is the squared length of the corresponding signal vector. (8 M, L2, CO1)
13. With the aid of a neat diagram, explain the operation of the correlator receiver. (6 M, L1, CO1)
14. With supporting derivation of its impulse response and a neat diagram, explain the operation of the matched-filter receiver. (6 M, L1, CO1)
15. Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the four signals s1(t), s2(t), s3(t) and s4(t) shown in the figure (unit-amplitude rectangular pulses defined over sub-intervals of [0, T]). Sketch the resulting orthonormal basis functions, express each of the signals s1(t), s2(t), s3(t) and s4(t) in terms of the basis functions, and draw the constellation diagram. (10 M, L3, CO1)
16. Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the two signals s1(t) and s2(t):
s1(t) = √(2E/T) cos(2π f1 t), 0 ≤ t ≤ T, f1 = n/T, n an integer, n ≠ 0
s2(t) = √(2E/T) cos(2π f2 t), 0 ≤ t ≤ T, f2 = m/T, m an integer, m ≠ 0
Express each of the signals s1(t) and s2(t) in terms of the basis functions, and draw the constellation diagram. (10 M, L3, CO1)
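
A minimal numerical sketch of the Gram-Schmidt procedure on sampled signals; the function name, sampling grid and the particular values of T, E, n and m below are illustrative assumptions, not part of the question.

    import numpy as np

    def gram_schmidt(signals, dt):
        """Orthonormalize a list of sampled signals (modified Gram-Schmidt).
        Inner products are approximated by sums times the sample spacing dt."""
        basis = []
        for s in signals:
            g = s.astype(float)
            for phi in basis:                      # remove the part already spanned
                g = g - (np.sum(g * phi) * dt) * phi
            energy = np.sum(g * g) * dt
            if energy > 1e-12:                     # skip signals with no new component
                basis.append(g / np.sqrt(energy))
        return basis

    # Example in the spirit of question 16: two sinusoids with f1 = n/T and f2 = m/T
    T, E, dt = 1.0, 1.0, 1e-4
    t = np.arange(0.0, T, dt)
    s1 = np.sqrt(2 * E / T) * np.cos(2 * np.pi * (3 / T) * t)   # n = 3 (arbitrary)
    s2 = np.sqrt(2 * E / T) * np.cos(2 * np.pi * (5 / T) * t)   # m = 5 (arbitrary)
    basis = gram_schmidt([s1, s2], dt)
    print(len(basis))          # 2 orthonormal basis functions when f1 != f2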
17. What is maximum-likelihood detection? Derive the final ML decision rule starting from the maximum a posteriori probability (MAP) rule. Draw the correlator detector and ML decoder block diagram. (10 M, L2, CO1)

Module 2
1. Derive the expression for the probability of error for BPSK. (8 M, L2, CO2)
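
A Monte-Carlo sketch that can be used to compare the derived result Pe = Q(√(2Eb/N0)) against simulation; the seed, sample size and Eb/N0 points are arbitrary choices.

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(0)
    N = 1_000_000                                    # bits per Eb/N0 point
    for EbN0_dB in (2, 4, 6, 8):
        EbN0 = 10 ** (EbN0_dB / 10)
        bits = rng.integers(0, 2, N)
        tx = 2 * bits - 1                            # antipodal mapping, Eb = 1
        noise = rng.normal(0.0, sqrt(1 / (2 * EbN0)), N)   # noise variance N0/2
        rx_bits = (tx + noise > 0).astype(int)
        ber = np.mean(rx_bits != bits)
        theory = 0.5 * erfc(sqrt(EbN0))              # Q(sqrt(2*Eb/N0))
        print(f"Eb/N0 = {EbN0_dB} dB: simulated {ber:.2e}, theory {theory:.2e}")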
2. Draw the QPSK waveforms (odd sequence, even sequence, in-phase, quadrature-phase and QPSK) for the binary sequence 1 0 1 1 0 1 0 1. (8 M, L3, CO2)

3. Explain the operation of the coherent BPSK technique with its generator and receiver block diagrams. (8 M, L2, CO2)
4. Explain the operation of the coherent BFSK technique with its generator and receiver block diagrams. (7 M, L1, CO2)

5. Derive the expression for the probability of error for the coherent BFSK technique. (7 M, L2, CO2)
6. Define bandwidth and bandwidth efficiency for a digital modulation technique. Find the bandwidth, symbol rate and bandwidth efficiency of the BPSK, QPSK and 8-PSK schemes for an input data rate of 32 kbps. (8 M, L3, CO2)
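
A small Python sketch of the numbers asked for, assuming the first-null (null-to-null) bandwidth definition B = 2·Rs; a different bandwidth definition would scale the results.

    from math import log2

    Rb = 32_000                          # input bit rate from the question (bits/s)
    for name, M in (("BPSK", 2), ("QPSK", 4), ("8-PSK", 8)):
        Rs = Rb / log2(M)                # symbol rate (symbols/s)
        B = 2 * Rs                       # assumed null-to-null bandwidth (Hz)
        eta = Rb / B                     # bandwidth efficiency (bit/s/Hz)
        print(f"{name}: Rs = {Rs:.0f} sym/s, B = {B:.0f} Hz, eta = {eta:.2f} bit/s/Hz")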

7. What is a constellation diagram? Draw the constellation diagrams for BPSK, QPSK and 8-PSK. (8 M, L2, CO2)

8. Discuss the operation of the DPSK transmitter and receiver. (8 M, L1, CO2)
9. Draw the waveforms illustrating the detailed operation of DPSK up to demodulation for the sequence 1101100; also draw the sinusoidal waveform of the transmitted signal. (8 M, L2, CO2)
10. Explain M-ary PSK. Draw the constellation diagram for M = 8. (8 M, L2, CO2)
11. Explain M-ary QAM. Draw the constellation diagrams for M = 4 and M = 16. (8 M, L2, CO2)
12. Bring out the differences between coherent and non-coherent demodulation techniques. (4 M, L1, CO2)
13. Derive bandwidth efficiency. Tabulate and comment on the bandwidth efficiency of M-ary PSK signals for different values of M. (7 M, L2, CO2)
14. Binary data are transmitted over a microwave link at the rate of 10^6 bits/s, and the power spectral density of the noise at the receiver input is 10^-10 W/Hz. Find the average carrier power required to maintain an average probability of error Pe = 10^-4 for the following cases: (i) binary PSK, (ii) DPSK. Note: erfc(2.63) = 2×10^-4, or Q(3.7) = 10^-4. (6 M, L3, CO2)
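
A quick numerical sketch of the computation, using the hints quoted in the question; it treats the quoted noise PSD as the two-sided density N0/2 (an assumption — if it is read as N0 itself, the resulting powers halve).

    from math import log

    Rb = 1e6             # bit rate (bits/s)
    N0 = 2 * 1e-10       # assuming the quoted 1e-10 W/Hz is the two-sided PSD N0/2
    Pe = 1e-4            # target probability of error

    # Coherent BPSK: Pe = Q(sqrt(2*Eb/N0)); the hint Q(3.7) = 1e-4 gives sqrt(2*Eb/N0) = 3.7
    EbN0_bpsk = 3.7 ** 2 / 2
    P_bpsk = EbN0_bpsk * N0 * Rb                  # average carrier power, since Eb = P/Rb

    # DPSK: Pe = 0.5*exp(-Eb/N0), so Eb/N0 = ln(1/(2*Pe))
    EbN0_dpsk = log(1 / (2 * Pe))
    P_dpsk = EbN0_dpsk * N0 * Rb

    print(f"BPSK: Eb/N0 = {EbN0_bpsk:.2f}, P = {P_bpsk * 1e3:.2f} mW")
    print(f"DPSK: Eb/N0 = {EbN0_dpsk:.2f}, P = {P_dpsk * 1e3:.2f} mW")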
15. Explain, with a neat block diagram, the non-coherent receiver for the detection of binary FSK signals. (8 M, L1, CO2)
16. Explain octa-phase shift keying (8-PSK) with its constellation diagram and derive the probability of error. (8 M, L2, CO2)
18. Explain quadrature phase shift keying (QPSK) with its constellation diagram. Derive an expression for the probability of error. (10 M, L2, CO2)
19. An FSK system transmits binary data at a rate of 2 Mbps. During transmission, AWGN of zero mean and PSD 10^-20 W/Hz is added to the signal. The amplitude of the received signal is 1 µV. Determine the average probability of error assuming (a) coherent detection, (b) non-coherent detection. (8 M, L3, CO2)
Module 3
1. With a neat block diagram, explain the information communication system. Define (i) self-information (and justify why a logarithmic function is used to measure self-information), (ii) entropy, and (iii) rate of a source. State the properties of entropy and derive an expression for the average information content of symbols in a long independent sequence. (8 M, L2, CO3)

2. The international Morse code uses a sequence of dots and dashes to transmit letters of the English alphabet. The dash is represented by a current pulse of 3 units duration and the dot by a pulse of 1 unit duration. The probability of occurrence of a dash is 1/3 of the probability of occurrence of a dot.
(i) Calculate the information content of a dot and of a dash.
(ii) Calculate the average information in the dot-dash code.
(iii) Assume that the dot lasts 1 ms, which is the same time interval as the pause between symbols. Find the average rate of information transmission. (6 M, L3, CO3)

3. Prove that the entropy of the nth-order extension of a zero-memory source is given by H(S^n) = n·H(S). (6 M, L2, CO3)
4. A binary source emits an independent sequence of 0's and 1's with probabilities p and 1 - p, respectively. Plot the entropy of this source versus p (0 < p < 1). (6 M, L3, CO3)

5. Construct the Huffman code with minimum code variance for the probabilities {0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625}, and determine the code variance and code efficiency. (8 M, L3, CO3)
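
A compact Python sketch of the Huffman construction for these probabilities; the tie-breaking counter is only an approximation of the "combined symbol as high as possible" (minimum-variance) rule, and the helper name is an illustrative choice.

    import heapq, math
    from itertools import count

    def huffman_lengths(probs):
        """Return Huffman codeword lengths for a list of symbol probabilities."""
        tie = count()                       # tie-breaker: older nodes pop first
        heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            p1, _, a = heapq.heappop(heap)
            p2, _, b = heapq.heappop(heap)
            for i in a + b:
                lengths[i] += 1             # each merge adds one bit to every member
            heapq.heappush(heap, (p1 + p2, next(tie), a + b))
        return lengths

    probs = [0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625]
    L = huffman_lengths(probs)
    avg_len = sum(p * l for p, l in zip(probs, L))
    H = -sum(p * math.log2(p) for p in probs)
    print(L, avg_len, H, H / avg_len)       # lengths, average length, entropy, efficiency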

6. Explain code efficiency and code redundancy. (4 M, L2, CO3)


7. What is mutual information? Mention its properties. (4 M, L2, CO3)
8. Prove the identities:
(i) H(X, Y) = H(X) + H(Y) (when X and Y are independent)
(ii) H(X, Y) = H(X) + H(Y|X)
(iii) I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)
(8 M, L2, CO3)
9. State the information capacity law. (2 M, L1, CO3)
10. Explain the channel coding theorem and the source coding theorem. (6 M, L2, CO3)
11. A source emits one of four possible symbols during each signaling interval. The symbols occur with probabilities p0 = 0.4, p1 = 0.3, p2 = 0.2 and p3 = 0.1, which sum to unity as they should. Find the amount of information gained by observing the source emitting each of these symbols.
12. Consider the four codes (I, II, III and IV) listed in the accompanying table:
(a) Two of these four codes are prefix codes. Identify them and construct their individual decision trees.
(b) Apply the Kraft inequality to codes I, II, III and IV. Discuss your results in light of those obtained in part (a).

13. Consider the binary sequence 11101001100010110100. Use the Lempel-Ziv algorithm to encode this sequence, assuming that the binary symbols 0 and 1 are already in the codebook.
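
A Python sketch of one common way of carrying out the Lempel-Ziv parsing (longest phrase already in the codebook plus one innovation bit); the exact bookkeeping of dictionary indices and transmitted bits should follow the convention used in the course text.

    def lz_parse(bits, initial=("0", "1")):
        """Parse a binary string into Lempel-Ziv phrases, with 0 and 1 pre-loaded."""
        book = list(initial)
        phrases = []
        i = 0
        while i < len(bits):
            j = i + 1
            while j <= len(bits) and bits[i:j] in book:
                j += 1                       # extend while the prefix is already known
            phrase = bits[i:j]               # known prefix plus one new bit
            phrases.append(phrase)
            if phrase not in book:
                book.append(phrase)
            i = j
        return phrases, book

    phrases, book = lz_parse("11101001100010110100")
    print(phrases)                           # the parsed phrases, in order
    print(book)                              # final codebook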

14. Consider a sequence of letters of the English alphabet with their probabilities of occurrence as given. Compute two different Huffman codes for this alphabet: in one case, move a combined symbol in the coding procedure as high as possible; in the second case, move it as low as possible. For each of the two codes, find the average codeword length and the variance of the average codeword length over the ensemble of letters. Comment on your results.

15. Consider a discrete memoryless source with alphabet {s0, s1, s2} and statistics {0.7, 0.15, 0.15} for its output.
(a) Apply the Huffman algorithm to this source. Hence show that the average codeword length of the Huffman code equals 1.3 bits/symbol.
(b) Let the source be extended to order two. Apply the Huffman algorithm to the resulting extended source and show that the average codeword length of the new code equals 1.1975 bits/symbol.
(c) Extend the order of the extended source to three and reapply the Huffman algorithm; hence calculate the average codeword length.
(d) Compare the average codeword lengths calculated in parts (b) and (c) with the entropy of the original source.

16. Consider a binary symmetric channel characterized by the transition probability p. Plot the mutual information of the channel as a function of p1, the a priori probability of symbol 1 at the channel input. Do your calculations for the transition probabilities p = 0, 0.1, 0.2, 0.3 and 0.5.
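
A short Python sketch of the required plot; I(X;Y) is evaluated as H(Y) - H(Y|X) for a binary symmetric channel, and the grid of p1 values is an arbitrary choice.

    import numpy as np
    import matplotlib.pyplot as plt

    def Hb(q):
        """Binary entropy in bits, with 0*log(0) treated as 0."""
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

    p1 = np.linspace(0.001, 0.999, 400)          # a priori probability of symbol 1
    for p in (0.0, 0.1, 0.2, 0.3, 0.5):          # BSC transition probabilities
        py1 = p1 * (1 - p) + (1 - p1) * p        # P(Y = 1)
        I = Hb(py1) - Hb(np.full_like(p1, p))    # I(X;Y) = H(Y) - H(Y|X)
        plt.plot(p1, I, label=f"p = {p}")
    plt.xlabel("p1"); plt.ylabel("I(X;Y) [bits]"); plt.legend(); plt.show()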

17. —

Module 4
1. What are the different methods of controlling errors? Explain. (6 M, L2, CO4 & CO5)
2. What are the types of errors and the types of codes used in error control coding? (6 M, L2, CO4 & CO5)
3. For a (15,5) cyclic code, the generator polynomial is g(X) = 1 + X + X^2 + X^4 + X^5 + X^8 + X^10. (i) Draw the block diagrams of the encoder and the syndrome calculator. (ii) Find whether r(X) = 1 + X^4 + X^6 + X^8 + X^14 is a valid codeword or not. (8 M, L3, CO4 & CO5)
4. For a linear block code, the syndrome is given by:
S1 = r1 + r2 + r3 + r5
S2 = r1 + r2 + r4 + r6
S3 = r1 + r3 + r4 + r7
(i) Find the generator matrix. (ii) Find the parity check matrix. (iii) Draw the encoder circuit. (iv) How many errors can be detected and corrected? (10 M, L3, CO4 & CO5)
5. Define the G and H matrices and show that C·H^T = 0. (6 M, L2, CO4 & CO5)
6. Design a linear block code with a minimum distance of 3 and a message block size of 8 bits. (8 M, L3, CO4 & CO5)
7. For a (6,3) cyclic code, find: (i) the generator polynomial, (ii) the generator matrix, (iii) the parity check matrix, and (iv) the equation for the codewords. (8 M, L3, CO4 & CO5)
8. A (7,4) cyclic code has the generator polynomial g(x) = 1 + x + x^3. Calculate the syndromes for the received vectors R = [1 1 1 1 1 1 1] and R = [1 0 1 0 1 0 1], and draw the syndrome calculation circuit. (8 M, L3, CO4 & CO5)
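
A small Python sketch that evaluates the syndrome as the remainder of r(x) divided by g(x) over GF(2); the coefficient ordering (lowest degree first) and the function name are illustrative choices, and a different syndrome convention would reorder the bits.

    def syndrome(received, gen):
        """Remainder of r(x) / g(x) over GF(2).
        Polynomials are 0/1 coefficient lists, lowest degree first,
        e.g. g(x) = 1 + x + x^3  ->  [1, 1, 0, 1]."""
        r = list(received)
        for i in range(len(r) - 1, len(gen) - 2, -1):
            if r[i]:                              # cancel the leading term
                shift = i - (len(gen) - 1)
                for j, gbit in enumerate(gen):
                    r[shift + j] ^= gbit
        return r[:len(gen) - 1]                   # remainder, degree < deg g(x)

    g = [1, 1, 0, 1]                              # g(x) = 1 + x + x^3 for the (7,4) code
    print(syndrome([1, 1, 1, 1, 1, 1, 1], g))     # all-zero syndrome: a valid codeword
    print(syndrome([1, 0, 1, 0, 1, 0, 1], g))     # non-zero syndrome: not a codeword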
9. Explain the syndrome and its properties. (6 M, L2, CO4 & CO5)
10. Design a linear block code with a minimum distance of 3 and a message block size of 8 bits. (10 M, L3, CO4 & CO5)
11. Hamming codes are said to be perfect single-error-correcting codes. Justify the fact that Hamming codes are perfect. (6 M, L3, CO4 & CO5)
12. The generator polynomial of a (15,11) Hamming code is defined by g(x) = 1 + x + x^4. Develop the encoder and the syndrome calculator for this code, using a systematic form for the code. (10 M, L3, CO4 & CO5)

Module 5
1. For a (2,1,3) convolutional encoder with g(1) = 1101 and g(2) = 1011:
(i) Write the state transition table.
(ii) Draw the state diagram.
(iii) Draw the code tree.
(iv) Draw the trellis diagram.
(v) Find the encoded output for the message 11101 by traversing the code tree. (10 M, L3, CO4 & CO5)
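
A Python sketch of a rate-1/2 convolutional encoder that can be used to cross-check the output obtained from the code tree; the tap convention (first generator bit applied to the current input) and the zero flushing are assumptions that should be matched to the convention used in class.

    def conv_encode(bits, g1, g2):
        """Rate-1/2 convolutional encoder.
        g1, g2 are tap lists, first tap on the current input bit,
        e.g. g1 = [1, 1, 0, 1] for the generator 1101."""
        m = len(g1) - 1                       # encoder memory (3 here)
        state = [0] * m                       # shift-register contents, newest first
        out = []
        for b in bits + [0] * m:              # m tail zeros flush the encoder
            window = [b] + state
            v1 = sum(x & g for x, g in zip(window, g1)) % 2
            v2 = sum(x & g for x, g in zip(window, g2)) % 2
            out += [v1, v2]
            state = [b] + state[:-1]          # shift in the new bit
        return out

    msg = [1, 1, 1, 0, 1]
    print(conv_encode(msg, g1=[1, 1, 0, 1], g2=[1, 0, 1, 1]))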

2. A (7,4) cyclic code has the generator polynomial g(x) = 1 + x + x^3. Calculate the syndromes for the received vectors R = [1 1 1 1 1 1 1] and R = [1 0 1 0 1 0 1], and draw the syndrome calculation circuit. (10 M, L3, CO4 & CO5)
3. Obtain the output of the (2,1,2) convolutional encoder with g1 = 111 and g2 = 011 for the message 11101. Detail the contents of the shift register after every clock cycle. (8 M, L3, CO4 & CO5)
4. What are convolutional codes? How are they different from block codes? (6 M, L2, CO4 & CO5)
5. Consider a (3,1,2) convolutional encoder with g(1) = 110, g(2) = 101 and g(3) = 111.
(i) Draw the encoder diagram.
(ii) Find the codeword for the message sequence 11101 using the generator-matrix and transform-domain approaches. (10 M, L3, CO4 & CO5)
6. Explain Viterbi decoding with an example. (8 M, L2, CO4 & CO5)
7. For a (2,1,3) convolutional encoder with g1 = 1101 and g2 = 1011, draw the encoder block diagram, write down the state transition table, draw the code tree, and find the encoder output produced by the message 11101 by traversing the code tree. (10 M, L3, CO4 & CO5)
8. Explain the state diagram and the state transition table. (6 M, L2, CO4 & CO5)
9. Write a note on the trellis diagram with an example. (6 M, L2, CO4 & CO5)
10. Explain maximum-likelihood decoding of convolutional codes with a suitable example. (6 M, L2, CO4 & CO5)
11. — (10 M, L3, CO4 & CO5)

12. — (10 M, L3, CO4 & CO5)

13. — (10 M, L3, CO4 & CO5)

Course Coordinator Module Coordinator Program Coordinator/ HOD
