
1B Paper 6: Communications

Handout 1: Introduction, Signals, and Channels

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 23

Course Information

• Seven lectures; recordings will be accessible after each lecture
  via the Panopto block on Moodle
• Lecture handouts (both filled and unfilled) will be posted on
  Moodle: https://www.vle.cam.ac.uk
• Feedback via email (rv285@cam.ac.uk) or using the anonymous
  feedback facility

2 / 23
Topics
• Signals and Channels
• Analogue Modulation (AM, FM)
• Digitisation of Analogue Signals (sampling recap and
quantisation)
• Digital Signals and Modulation
• A brief introduction to Channel Coding
• Multiple Access

References:
S. Haykin and M. Moher, Introduction to Analog & Digital Communications, 2nd ed., John Wiley & Sons, 2007.
R. G. Gallager, Principles of Digital Communication, Cambridge University Press, 2008.
3 / 23

“Communications” teaching

             School          IA                  IB              IIA                IIB
Engineering                                      2P6 Comms       3F7 Information    4F5 Advanced
                                                                 Theory,            Communications
                                                                 3F4 Data           and Coding
                                                                 Transmission
Maths        Functions,      1P4 Fourier         2P6 Fourier     3F1 Random
             Trigonometry    Series              Transforms,     Processes
                                                 2P7 Probability
4 / 23
A Brief History
Analogue Communications
• Telephone: patented in 1876
• Radio: AM since the early 1900s, FM patented in the 1930s
• BBC broadcast analogue TV from 1936 to 2012

Digital Communications
• Telegraph: first optical/semaphore 1767, electrical 1816
• Mobile communications: GSM (1991) → 3G → 4G LTE
• Wi-Fi was first deployed in 1997, Bluetooth in 1998
• Asymmetric Digital Subscriber Line (ADSL), up to 4 Mbit/s, appeared in the early 2000s
• Digital Video Broadcasting (DVB): first broadcast in the UK in 1998. Since 2012, all broadcast TV in the UK has been digital
5 / 23

The Basic Idea


Communication: The process of delivering information from an
information source to a destination through a communication
channel.

Source → Transmitter → input waveform → Channel → output waveform → Receiver → Destination

More generally, we could have multiple sources delivering


information to multiple destinations through a common channel
6 / 23
Multiple Access Channel:
Several sources, each with its own transmitter, share a common channel feeding a single receiver and destination:

  Sources 1, 2, 3 → Transmitters 1, 2, 3 → Channel → Receiver → Destination

Broadcast Channel:
One source and transmitter; the channel output reaches several receivers, each with its own destination:

  Source → Transmitter → Channel → Receivers 1, 2, 3 → Destinations 1, 2, 3
7 / 23

For most of this course, we will focus on the point-to-point communication model:

  Source → Transmitter → input waveform → Channel → output waveform → Receiver → Destination
8 / 23
Block Diagram Components

• Source of information: May be analogue (voice, music, video),


or digital (e.g., e-mail, any file on your computer)
• Transmitter: translates the information into a signal suitable
for transmission over the channel
• Channel: medium used to transmit the signal to the receiver
- E.g., optical fibre, wireless channel, magnetic recording...
- May distort transmitted signal, e.g., add noise or attenuate it

• Receiver: reconstructs the source of information from the


received signal
• Destination: for whom the information is intended

9 / 23

Key Signal Properties

Two properties of signals that are important for communication:


1. Power
2. Bandwidth

Let us define these terms and understand why they are relevant.

10 / 23
Signal Energy

The energy of a signal x(t) is defined as

  Ex = ∫_{−∞}^{∞} |x(t)|² dt

If X(ω) is the Fourier transform of x(t), recall Parseval's theorem:

  Ex = ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω = ∫_{−∞}^{∞} |X(f)|² df

• ω = 2πf is the frequency in rad/s, f is the frequency in Hz
• |X(f)|² is the energy spectral density:
  think of |X(f)|² df as the energy of the signal in the frequency band [f, f + df]
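Parseval's theorem has a discrete analogue that is easy to check numerically. The sketch below (not part of the handout; the test signal is arbitrary) verifies that the energy computed in the time domain equals the energy computed from a DFT.

```python
import math, cmath

# Discrete analogue of Parseval's theorem: for the length-N DFT,
# sum_n |x[n]|^2 = (1/N) * sum_k |X[k]|^2. The test signal is arbitrary.
N = 64
x = [math.exp(-0.1 * n) * math.cos(2 * math.pi * 5 * n / N) for n in range(N)]

# Naive O(N^2) DFT, fine for small N
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

E_time = sum(v * v for v in x)              # energy computed in time
E_freq = sum(abs(v) ** 2 for v in X) / N    # energy computed in frequency
```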

11 / 23

Signal Power

For a signal x(t) whose energy is infinite, the power is defined as


  Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt

Why is signal power important?


• We are usually concerned about energy of the transmitted
signal per unit time, i.e., transmit power
• Lower transmit power implies longer battery life for your phone
• But lower transmit power also makes signal harder to detect
at the receiver in the presence of noise!
• Need clever Tx + Rx designs that make judicious use of
available transmit power
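As a quick numerical illustration of the power definition (a sketch, not from the handout: the amplitude and frequency are arbitrary), averaging |x(t)|² over an integer number of periods of a sinusoid A cos(2πf₀t) gives the familiar A²/2.

```python
import math

# Numerically estimate the power of x(t) = A cos(2*pi*f0*t) from the defining
# time average; the result should be A^2/2. A and f0 are illustrative.
A, f0 = 3.0, 10.0
T = 10.0                  # average over an integer number of periods
N = 100_000
dt = T / N
Px = sum((A * math.cos(2 * math.pi * f0 * k * dt)) ** 2 for k in range(N)) * dt / T
# Px should be close to A^2/2 = 4.5
```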

12 / 23
Bandwidth
The bandwidth of a signal is roughly the range of frequencies over
which its spectrum (Fourier transform) is non-zero.
[Figure: baseband spectrum |X(f)|, non-zero over −W ≤ f ≤ W]

• For real signals, bandwidth is measured as the range of positive frequencies, since |X(f)| is symmetric around 0 (as X(−f) = X*(f) for real x(t))
• In communications, signal bandwidth is typically specified in Hz

A signal is called low-pass or baseband if its spectral content is centred around f = 0.
• The bandwidth of the baseband signal above is W
• E.g., audio signals are baseband with bandwidth ≈ 20 kHz; voice signals in telephone systems have bandwidth ≈ 4 kHz
13 / 23

Passband signals
A signal is said to be passband if its spectral content is centred around ±fc, where fc ≫ 0

[Figure: passband spectrum |X(f)|, non-zero for fc − W ≤ |f| ≤ fc + W]

The bandwidth of this passband signal is 2W

Examples of passband signals:
• AM (amplitude-modulated) radio signals have bandwidth ≈ 10 kHz around fc ≈ 1 MHz
• Transmitted signals in a WiFi network have bandwidth ≈ 20 MHz around fc ≈ 2.4 GHz

14 / 23
[Figure: the rectangular pulse x(t) = rect(t/T), equal to 1 for −T/2 ≤ t ≤ T/2]

rect(t/T) is the rectangular pulse, which is 1 for −T/2 ≤ t ≤ T/2, and 0 elsewhere. What is its bandwidth?

15 / 23

Bandwidth – A sensible definition?

Many real-world signals are time-limited
⇒ These will not be strictly limited in frequency

The absolute bandwidth of rect(t/T) is infinite.

Other, more practical, definitions of bandwidth:
1. 90% bandwidth: the range of frequencies which contains 90% of the energy of the spectrum
2. 3-dB bandwidth: the range of frequencies over which the energy spectral density is within 3 dB of (i.e., at least half of) its peak value
3. Null-to-null bandwidth: the width of the "main lobe" of the spectrum, e.g., for the rect signal

• The "main-lobe" bandwidth of rect(t/T) is 1/T
• If we also include one side-lobe, the bandwidth of rect(t/T) is 2/T
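These definitions can be compared numerically for the rect pulse. The sketch below (assuming the standard result X(f) = T sinc(fT), with sinc(u) = sin(πu)/(πu), and total energy T) integrates the energy spectral density and shows the main lobe carries roughly 90% of the energy, so the null-to-null and 90% bandwidths nearly coincide here.

```python
import math

# Fraction of the energy of rect(t/T) inside the main lobe [-1/T, 1/T],
# computed by integrating |X(f)|^2 = (T*sinc(fT))^2. Total energy is T.
T = 1.0

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def band_energy(B, steps=100_000):
    # energy in [-B, B]; the integrand is even, so integrate over [0, B]
    df = B / steps
    total = 0.0
    for k in range(steps):
        f = (k + 0.5) * df
        total += (T * sinc(f * T)) ** 2 * df
    return 2.0 * total

frac_main_lobe = band_energy(1.0 / T) / T   # fraction inside the main lobe
# frac_main_lobe is about 0.90
```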

16 / 23
Thus, bandwidth is a measure of the extent of significant spectral
content of the signal
Bandwidth is a scarce resource, especially in mobile (cellular)
communication:
• Wireless bandwidth is licensed and regulated by OFCOM
• A company has to buy a slice of spectrum, say a few tens of MHz around fc ≈ 2 GHz, and restrict its transmitted signals to within that slice
• Passband 4G spectrum of a few tens of MHz has been auctioned for hundreds of millions of £ to telecom companies!
Wired channels such as telephone lines and USB cables act like
linear systems or filters:
• Their transfer function is roughly flat over a band of frequencies [−W, W] around 0, and then attenuates to 0 for higher frequencies.
• Therefore, transmitted signals need to be bandlimited to W

In both wired and wireless communication, we need good Tx + Rx designs that make optimal use of the available bandwidth
17 / 23

Communication Channels

What is a channel?
The medium used to transmit the signal from transmitter to
receiver.
• Introduces attenuation and noise
• So the received signal is a faded and noisy version of what the
transmitter sent
• Noise and attenuation can cause errors at the receiver

  Channel input → [ Communications Channel ] → Channel output

18 / 23
Some Real-world Channels

From “Fundamentals of Wireless Communication”, Tse and Viswanath, CUP 2005

1. Mobile Wireless Channel:


• There is distortion of the signal caused by multipath
propagation and mobility
• Exact type of distortion depends on the signal bandwidth
2. Optical Fibre Channel:
• Very large bandwidth, cheap production, low attenuation
• Cons: dispersion of optical pulses, expensive regenerators required
• Used in the core of the internet, for long-distance
communication networks
3. Electrical Wire Channel:
• Twisted pair cables (e.g., Ethernet) have limited bandwidth,
high attenuation; Cheap, used for short distances
19 / 23

Modelling a channel
KEY Q: How to model a channel ?
Channels are often modelled as linear systems with additive noise:
the channel output y(t) is generated from the input x(t) as

  y(t) = h(t) ∗ x(t) + n(t)

In the frequency domain:

  Y(f) = H(f) X(f) + N(f)

For example, the frequency response of a telephone cable may look like:

[Figure: |H(f)| roughly flat over −W ≤ f ≤ W, attenuating to zero outside]
20 / 23
Additive Noise Channel
If the input is restricted to the band where the channel H(f ) is
flat, then the channel is

Y (f ) = X (f ) + N(f )

or
y (t) = x(t) + n(t)
This is a very popular and useful model. What about n(t)?

n(t) is thermal noise at the Rx:


• Thermal noise is the noise generated by the thermal agitation
of electrons inside an electrical conductor
• Happens regardless of the applied voltage
• All receivers (WiFi, mobile phone, AM, FM,...) generate
thermal noise

21 / 23

Additive Gaussian Noise


x(t) y (t)

n(t)

Thermal noise n(t) is modelled as a Gaussian random process:


– At each time t, n(t) is a Gaussian random variable
– A rigorous description requires knowledge of random processes
(in 3F1)

• The additive Gaussian noise channel is the workhorse of


communication theory: good model for many real-world
communication systems

• Channels whose frequency response H(f ) is not flat are


important in practice, but outside the scope of this course
22 / 23
In the remainder of the course:
– we will learn how to design both analogue & digital
communication schemes (Tx + Rx)
– keeping in mind power and bandwidth constraints
– we'll then study how noise affects the performance of these
schemes

23 / 23
1B Paper 6: Communications
Handout 2: Analogue Modulation

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
ramji.v@eng.cam.ac.uk

Lent Term 2024

1 / 32

Modulation

Modulation is the process by which some characteristic of a carrier


wave is varied in accordance with an information bearing signal

A commonly used carrier is a sinusoidal wave, e.g., cos(2πfc t).
fc is called the carrier frequency.
• We are allotted a certain bandwidth centred around fc for our information signal
• E.g., BBC Cambridgeshire: fc = 96 MHz, information bandwidth ≈ 200 kHz
• Q: Why is fc usually large?
  A: Antenna size ∝ wavelength = c/fc ⇒ larger frequency, smaller antennas!

2 / 32
Analogue vs. Digital Modulation
Analogue Modulation: A continuous information signal x(t)
(e.g., speech, audio) is used to directly modulate the carrier wave.
We’ll study two kinds of analogue modulation:
1. Amplitude Modulation (AM) : Information x(t) modulates the
amplitude of the carrier wave
2. Frequency Modulation (FM): Information x(t) modulates the
frequency of the carrier wave
We’ll learn about:
– Power & bandwidth of AM & FM signals
– Tx & Rx design

In the last 4 lectures, we will study digital modulation:


• x(t) is first digitised into bits
• Digital modulation then used to transport bits across the
channel
3 / 32

Amplitude Modulation (AM)


• Information signal x(t), carrier cos(2πfc t)
• The transmitted AM signal is

  sAM(t) = [a0 + x(t)] cos(2πfc t)

• a0 is a positive constant chosen so that max_t |x(t)| < a0
• The modulation index of the AM signal is defined as

  mA = max_t |x(t)| / a0

"The percentage that the carrier's amplitude varies above and below its unmodulated level"

Why is the modulation index important?
mA < 1 is desirable because we can then extract the information signal x(t) from the modulated signal by envelope detection.
4 / 32
[Figure: an information signal x(t) (left) and the AM wave [a0 + x(t)] cos(2πfc t) (right), whose envelope traces a0 + x(t)]

[Figure: x(t) cos(2πfc t), i.e., an AM wave with modulation index > 1; phase reversals are marked]

When the modulation index > 1:
• Phase reversals occur
• x(t) cannot be detected by tracing the positive envelope
5 / 32

AM Receiver - Envelope Detector

[Figure: envelope detector circuit: a series diode, then capacitor C in parallel with load resistor RL; input sAM(t), output Vout(t)]

• On the positive half-cycle of the input signal, capacitor C


charges rapidly up to the peak value of input sAM (t)
• When input signal falls below this peak, diode becomes
reverse-biased: capacitor discharges slowly through load
resistor RL
• In the next positive half-cycle, when input signal becomes
greater than voltage across the capacitor, diode conducts
again until next peak value
• Process repeats . . .

Very inexpensive receiver, but envelope detection needs mA < 1.
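The charge/discharge behaviour described above can be sketched in discrete time: the output jumps up to the input when the (ideal) diode conducts, and otherwise decays exponentially through RL. This is only an illustrative model; all parameter values below (fc, fm, a0, mA, RC) are assumptions, not from the handout.

```python
import math

# Discrete-time sketch of the diode + RC envelope detector: follow the input
# upwards (diode conducting), decay exponentially otherwise. Idealised diode.
fs = 200_000.0                      # simulation rate, samples/sec
fc, fm = 10_000.0, 100.0            # carrier and message frequencies (Hz)
a0, mA = 1.0, 0.5                   # carrier offset and modulation index
RC = 2e-3                           # time constant: 1/fc << RC << 1/fm
dt = 1.0 / fs
decay = math.exp(-dt / RC)

v = 0.0
total_err, count = 0.0, 0
for k in range(int(0.05 * fs)):     # simulate 50 ms
    t = k * dt
    x = mA * a0 * math.cos(2 * math.pi * fm * t)      # message signal
    s = (a0 + x) * math.cos(2 * math.pi * fc * t)     # AM wave (mA < 1)
    v = s if s > v else v * decay   # charge to the peak, else discharge
    if t > 0.01:                    # ignore the start-up transient
        total_err += abs(v - (a0 + x))
        count += 1
mean_err = total_err / count        # small compared to the envelope swing
```

The choice 1/fc ≪ RC ≪ 1/fm matters: too small an RC gives large ripple between carrier peaks; too large an RC cannot follow the falling envelope.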


6 / 32
Circuit Diagram

[Figure: an AM wave input and the corresponding envelope detector output]

7 / 32

Spectrum of AM

Next, let’s look at the spectrum of sAM (t) = [a0 + x(t)] cos(2⇡fc t)

SAM(f) = F[sAM(t)]
       = F[ [a0 + x(t)] (e^{j2πfc t} + e^{−j2πfc t}) / 2 ]
       = (a0/2) [δ(f − fc) + δ(f + fc)] + (1/2) [X(f − fc) + X(f + fc)]
         (carrier)                        (information)

(F[·] denotes the Fourier transform operation)

8 / 32
Example

SAM(f) = (a0/2) [δ(f − fc) + δ(f + fc)] + (1/2) [X(f − fc) + X(f + fc)]

[Figure: X(f) is a baseband spectrum of height C over −W ≤ f ≤ W; SAM(f) has impulses of weight a0/2 at ±fc, plus copies of X(f) of height C/2 centred at ±fc]
9 / 32

Properties of AM

sAM(t) = [a0 + x(t)] cos(2πfc t)

SAM(f) = (a0/2) [δ(f − fc) + δ(f + fc)] + (1/2) [X(f − fc) + X(f + fc)]

1. Bandwidth: From the spectrum calculation, we see that if x(t) is a baseband signal with (one-sided) bandwidth W, the AM signal sAM(t) is passband with bandwidth

  BAM = 2W

2. Power: We now prove that the power of the AM signal is

  PAM = a0²/2 + PX/2

where PX is the power of x(t).
10 / 32
Power of AM signal
PAM = lim_{T→∞} (1/T) ∫_0^T [a0 + x(t)]² cos²(2πfc t) dt
    = lim_{T→∞} (1/T) ∫_0^T [a0 + x(t)]² (1 + cos(4πfc t))/2 dt
    = a0²/2 + PX/2 + lim_{T→∞} (1/T) ∫_0^T ([a0 + x(t)]²/2) cos(4πfc t) dt

(We assume that (1/T) ∫_0^T x(t) dt = 0, as a non-zero mean can be absorbed into a0.)

Now we show that the last term is ≈ 0:
• cos(4πfc t) is a high-frequency sinusoid with period Tc = 1/(2fc)
• g(t) = (a0 + x(t))²/2 is a baseband signal which changes much more slowly than cos(4πfc t)
11 / 32

Hence, with T = nTc, we have

(1/T) ∫_0^T g(t) cos(4πfc t) dt
  ≈ (1/(nTc)) [ g(0) ∫_0^{Tc} cos(4πfc t) dt + g(Tc) ∫_{Tc}^{2Tc} cos(4πfc t) dt + ... + g((n−1)Tc) ∫_{(n−1)Tc}^{nTc} cos(4πfc t) dt ]
  = 0,

since cos(4πfc t) integrates to zero over each of its periods.

Hence PAM = a0²/2 + PX/2.
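The result PAM = a0²/2 + PX/2 can be checked numerically for a zero-mean tone message (a sketch with illustrative values; for x(t) = a cos(2πfm t), PX = a²/2):

```python
import math

# Numerical check of P_AM = a0^2/2 + Px/2 for the tone x(t) = a*cos(2*pi*fm*t).
a0, a = 2.0, 1.0
fc, fm = 1000.0, 10.0
T = 1.0                          # integer number of carrier and message periods
N = 200_000
dt = T / N
P = 0.0
for k in range(N):
    t = k * dt
    s = (a0 + a * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)
    P += s * s * dt
P /= T
P_theory = a0 ** 2 / 2 + (a ** 2 / 2) / 2    # 2.0 + 0.25 = 2.25
```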

12 / 32
Double Sideband Suppressed Carrier (DSB-SC)
The power of the AM signal is

  PAM = a0²/2 + PX/2
        (carrier)

• The presence of a0 makes envelope detection possible, but requires extra power of a0²/2 corresponding to the carrier
• In DSB-SC, we eliminate the a0:
  we transmit only the sidebands, and suppress the carrier

[Figure: X(f) over −W ≤ f ≤ W, with lower and upper sidebands marked; Sdsb-sc(f) consists of copies of X(f) centred at ±fc, with no carrier impulses]
13 / 32

The transmitted DSB-SC waveform is

  sdsb-sc(t) = x(t) cos(2πfc t)

[Figure: X(f) with lower and upper sidebands; Sdsb-sc(f) consists of copies of X(f) centred at ±fc]

How do we recover x(t) at the receiver?
Phase reversals ⇒ we cannot use envelope detection

14 / 32
DSB-SC receiver

DSB-SC Receiver: Product Modulator + Low-pass filter

[Figure: sdsb-sc(t) → multiplier (× cos(2πfc t)) → v(t) → low-pass filter → x̂(t)]

Step 1: Multiplying the received signal by cos(2πfc t) gives

  v(t) = x(t) cos²(2πfc t) = x(t)/2 + x(t) cos(4πfc t)/2
         (low frequency)     (high frequency)
15 / 32

DSB-SC receiver

[Figure: the same receiver: sdsb-sc(t) → multiplier (× cos(2πfc t)) → v(t) → low-pass filter → x̂(t)]

Step 2: The low-pass filter eliminates the high-frequency component.

An ideal low-pass filter has H(f) = constant for −W ≤ f ≤ W, and zero otherwise
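The two receiver steps can be sketched end to end. Here a moving average one carrier period long stands in for the ideal low-pass filter, and all rates are illustrative; the local carrier is assumed perfectly synchronised.

```python
import math

# Sketch of the DSB-SC receiver: multiply by a synchronised local carrier,
# then low-pass filter (a crude moving average over one carrier period,
# which removes the x(t)*cos(4*pi*fc*t)/2 term).
fs = 100_000.0
fc, fm = 5_000.0, 100.0
dt = 1.0 / fs
N = int(0.03 * fs)
x = [math.cos(2 * math.pi * fm * k * dt) for k in range(N)]          # message
s = [x[k] * math.cos(2 * math.pi * fc * k * dt) for k in range(N)]   # DSB-SC
v = [s[k] * math.cos(2 * math.pi * fc * k * dt) for k in range(N)]   # product

L = int(fs / fc)                      # 20 samples = one carrier period
xhat = [2.0 * sum(v[k - L + 1:k + 1]) / L for k in range(L - 1, N)]

# xhat[i] approximates x at the centre of its averaging window
max_err = max(abs(xhat[i] - x[i + L // 2]) for i in range(len(xhat) - L))
```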

16 / 32
Properties of DSB-SC

sdsb-sc(t) = x(t) cos(2πfc t)

Sdsb-sc(f) = (1/2) (X(f + fc) + X(f − fc))

• Bandwidth of DSB-SC is Bdsb-sc = 2W, same as AM
• Power of DSB-SC is Pdsb-sc = PX/2
  (follows from the AM power calculation)
• DSB-SC requires less power than AM, as the carrier is not transmitted
• But the DSB-SC receiver is more complex than AM!
  We assumed that the receiver can locally generate a frequency-fc sinusoid that is synchronised perfectly in phase and frequency with the transmitter's carrier
• The effect of phase mismatch at the Rx is explored in the Examples Paper

17 / 32

Single Sideband Suppressed Carrier (SSB-SC)


DSB-SC transmits less power than AM. Can we also save
bandwidth?
• x(t) is real ⇒ X(−f) = X*(f)
  ⇒ we need to specify X(f) only for f > 0
• In other words, transmission of both sidebands is not strictly necessary: we could obtain one sideband from the other!

[Figure: X(f) over −W ≤ f ≤ W; Sssb-sc(f) keeps only the upper sideband, occupying fc ≤ |f| ≤ fc + W]

• Bandwidth is Bssb-sc = W, half of that of AM or DSB-SC!
• Power is Pssb-sc = PX/4, half of DSB-SC
18 / 32
Summary: Amplitude Modulation
[Figure: spectra of the four signals. The information signal X(f) occupies −W ≤ f ≤ W. AM: copies of X(f) around ±fc plus carrier impulses at ±fc. DSB-SC: the same copies without the carrier impulses. SSB-SC: only the upper sidebands, fc ≤ |f| ≤ fc + W]
19 / 32

You can now do Questions 1–5 on


Examples Paper 8.

20 / 32
Frequency Modulation (FM)
In FM, the information signal x(t) modulates the instantaneous
frequency of the carrier wave.
The instantaneous frequency f (t) is varied linearly with x(t):

f (t) = fc + kf x(t)

This translates to an instantaneous phase θ(t) given by

  θ(t) = 2π ∫_0^t f(u) du = 2πfc t + 2πkf ∫_0^t x(u) du

The modulated FM signal is

  sFM(t) = Ac cos(θ(t)) = Ac cos( 2πfc t + 2πkf ∫_0^t x(u) du )

• Ac is the carrier amplitude
• kf is called the frequency-sensitivity factor
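The phase integral above suggests a direct way to synthesise an FM wave: accumulate the instantaneous frequency sample by sample. A sketch with illustrative values (fc, kf and a tone message are assumptions):

```python
import math

# Generate an FM wave by accumulating the instantaneous phase
# theta(t) = 2*pi * integral of f(u) du, with f(t) = fc + kf*x(t).
fs = 1_000_000.0
fc, kf, fm = 10_000.0, 2_000.0, 100.0
dt = 1.0 / fs
N = int(0.01 * fs)                  # 10 ms, i.e., 100 carrier periods

theta = 0.0
s = []
for k in range(N):
    x = math.cos(2 * math.pi * fm * k * dt)       # message x(t)
    theta += 2 * math.pi * (fc + kf * x) * dt     # phase accumulates frequency
    s.append(math.cos(theta))                     # sFM(t) with Ac = 1

# The envelope is constant, so the power is Ac^2/2 whatever x(t) is
P = sum(v * v for v in s) / N
```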
21 / 32

Example
What information signal does this FM wave correspond to?
[Figure: an FM wave sFM(t) plotted over 0 ≤ t ≤ 10]

(a) a constant, (b) a ramp, (c) a sinusoid, (d) no clue
22 / 32
FM Demodulation
At the receiver, how do we recover x(t) from the FM wave?
(ignoring e↵ects of noise)
  sFM(t) = Ac cos( 2πfc t + 2πkf ∫_0^t x(u) du )

The derivative is

  dsFM(t)/dt = −2πAc [fc + kf x(t)] sin( 2πfc t + 2πkf ∫_0^t x(u) du )

• The derivative is a passband signal with amplitude modulation by [fc + kf x(t)]
• If fc is large enough, we can recover x(t) by envelope detection of dsFM(t)/dt!
• Hence the FM demodulator is a differentiator + envelope detector
• Differentiator: d/dt has frequency response j2πf. See the Haykin-Moher book for details on how to build a differentiator
23 / 32

Properties of FM
  sFM(t) = Ac cos( 2πfc t + 2πkf ∫_0^t x(u) du )

• Power of the FM signal = Ac²/2, regardless of x(t)
• Non-linearity: FM(x1(t) + x2(t)) ≠ FM(x1(t)) + FM(x2(t))
• FM is more robust to additive noise than AM.
  Intuitively, this is because the message is "hidden" in the frequency of the signal rather than the amplitude.
• But this robustness comes at the cost of increased transmission bandwidth
• What is the bandwidth of the FM signal sFM(t)?
  The spectral analysis is a bit complicated, but we will do it for a simple case: where x(t) is a sinusoid (a pure tone)

24 / 32
FM modulation of a tone
Consider FM modulation of a tone x(t) = ax cos(2πfx t). We have

  f(t) = fc + kf ax cos(2πfx t)
  θ(t) = 2πfc t + (kf ax / fx) sin(2πfx t)

• Δf = kf ax is called the frequency deviation:
  Δf is the max. deviation of the carrier frequency f(t) from fc
• β = kf ax / fx = Δf / fx is called the modulation index:
  β is the max. deviation of the carrier phase θ(t) from 2πfc t

Then the FM signal becomes

  sFM(t) = Ac cos( 2πfc t + β sin(2πfx t) )
25 / 32

The spectrum of the FM signal

We want to study the frequency spectrum of

  sFM(t) = Ac cos( 2πfc t + β sin(2πfx t) )

You will show in the Examples Paper that

  SFM(f) = (Ac/2) Σ_{n=−∞}^{∞} Jn(β) [ δ(f − fc − nfx) + δ(f + fc + nfx) ]

where Jn(β) = (1/2π) ∫_{−π}^{π} e^{j(β sin u − nu)} du

Jn(·) is called the nth-order Bessel function of the first kind.
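The integral definition of Jn(β) can be evaluated directly (a sketch, using a midpoint rule; β = 5 matches the example that follows). A useful sanity check is the identity Σn Jn(β)² = 1, which is consistent with the FM power being Ac²/2 for any β.

```python
import math

# Evaluate Jn(beta) from the integral definition above and check
# sum_n Jn(beta)^2 = 1 (power is spread over the spectral lines).
def J(n, beta, steps=20_000):
    du = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        u = -math.pi + (k + 0.5) * du
        # the imaginary part of e^{j(beta*sin(u) - n*u)} integrates to zero
        total += math.cos(beta * math.sin(u) - n * u) * du
    return total / (2 * math.pi)

beta = 5.0
power_sum = sum(J(n, beta) ** 2 for n in range(-20, 21))
# power_sum is close to 1 (terms with |n| > 20 are negligible for beta = 5)
```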

26 / 32
Plots of Jn(β) vs β

[Figure: J0(β), J1(β), J2(β), J3(β), J4(β) plotted for 0 ≤ β ≤ 15]
27 / 32

Example
What is the spectrum of the FM signal when x(t) is a pure tone and the modulation index β = 5?

[Figure: Jn(β) vs n for β = 5, plotted for n = 0, ..., 15]
28 / 32
The spectrum is

  SFM(f) = (Ac/2) Σ_{n=−∞}^{∞} Jn(5) [ δ(f − fc − nfx) + δ(f + fc + nfx) ]

[Figure: X(f) has impulses at ±fx; |SFM(f)| has impulses at ±(fc + nfx), weighted by Jn(5)]
29 / 32

Bandwidth of FM signals
To summarise, even for the case where x(t) has a single frequency
fx , the spectrum of the FM wave is rather complicated:
• There is a carrier component at fc , and components located
symmetrically on either side of fc at fc ± fx , fc ± 2fx ,. . .
• The absolute bandwidth is infinite, but . . . the side
components at fc ± nfx become negligible for large enough n

Carson's rule for the effective bandwidth of FM signals:

1. The bandwidth of an FM signal generated by modulating a single tone is

  BFM ≈ 2Δf + 2fx = 2Δf (1 + 1/β)

2. For an FM signal generated by modulating a general signal x(t) with bandwidth W, the bandwidth is BFM ≈ 2Δf + 2W

(Recall: for any FM wave, Δf is the frequency deviation around fc)
30 / 32
Example

BBC Radio Cambridgeshire: fc = 96 MHz and Δf = 75 kHz.

Assuming that the voice/music signals have bandwidth W = 15 kHz, we have

  β = Δf / W = 75/15 = 5

and the bandwidth

  BFM = 2(Δf + W) = 2(75 + 15) = 180 kHz,

while

  BAM = 2W = 30 kHz
FM signals have larger bandwidth than AM, but have better
robustness against noise.

31 / 32

Summary: Analogue Modulation

Amplitude Modulation with information signal of bandwidth W


• AM modulated signal: Bandwidth 2W , high power, simple Rx
using envelope detection
• DSB-SC: Bandwidth 2W , lower power, more complex Rx
• SSB-SC: Bandwidth W , even lower power, Rx similar to
DSB-SC

Frequency Modulation with information signal of bandwidth W :


• The FM signal has constant carrier amplitude ⇒ constant power
• The bandwidth of the FM signal depends on both β (equivalently Δf) and W, and can be significantly greater than 2W
• Better robustness to noise than AM as the information is
“hidden” in the phase

32 / 32
1B Paper 6: Communications
Handout 3: Digitisation of Analogue Signals

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 17

Two Types of Sources

1. Analogue: Continuous-time, continuous-amplitude sources,


e.g., speech, music

2. Digital: Can be represented as bits, e.g., email, computer


files, JPEG image files, mp3 music files
In general, a digital source is a discrete-time sequence of symbols drawn from a finite alphabet,
e.g., X1, X2, X3, ... with Xi ∈ {a, b, c, d} for all i

In this handout, we will learn how to effectively convert analogue signals into digital form

2 / 17
Digitisation of Analogue Signals

Digitisation:
The process by which an analogue signal is converted into digital
format, i.e., from a continuous signal (in time and amplitude) to a
discrete signal (in time and amplitude). It consists of
• Sampling (discretises the time axis)
• Quantisation (discretises the signal amplitude axis)

Digitisation is also called analogue-to-digital conversion (ADC).

Why do we want to do this?

3 / 17

Why Digital ?
There are many advantages of transmitting digital signals:
• Robustness: In analogue communication systems, the effects of channel noise, signal distortion etc. are cumulative.
  In contrast, regenerators can be used to recover and retransmit a digital signal exactly, before it degrades excessively.
• Performance: Powerful error-correcting codes can correct bit
errors that may occur in the transmission of digital signals
• Encryption: Digital communication systems can be made
highly secure by exploiting powerful encryption algorithms

Digital communication does increase system complexity.


But dramatic improvements in hardware technology have made design and implementation very cost-effective.

4 / 17
End-to-end Digital Communication System
Step 1: Digitisation: Sampling + Quantisation
  x(t) → (sampling) → x(nT) → (quantisation) → ...0101110111000... (bits)

Step 2: Transmission

  110 001 100 111 110 000 100 111 → Transmitter → input waveform → Channel → output waveform → Receiver

Channel noise can cause errors at the receiver. We’ll later see how
to deal with this using coding.

Let’s start with Step 1

5 / 17

Sampling
• Let x(t) be a continuous-time signal for −∞ < t < ∞
• Choose a sampling interval T, and read off the values of x(t) at times
  ..., −3T, −2T, −T, 0, T, 2T, 3T, ...
• The obtained values x(nT) are the samples of x(t)

[Figure: a continuous-time signal x(t) with its samples x(nT) marked]
6 / 17
Recovering x(t) from its samples
[Figure: the same signal x(t) shown with only its samples x(nT)]

• Can you recover x(t) from just the samples x(nT)?
  Yes, if the sampling rate 1/T > 2W, where W is the bandwidth of x(t) (in Hz)

Recovery is easier to understand in the frequency domain ...
7 / 17

Frequency domain interpretation of sampling


A continuous-time representation of the sampled signal is

  xs(t) = Σn x(nT) δ(t − nT) = x(t) Σn δ(t − nT)

Σn δ(t − nT) is periodic and can be expressed as a Fourier series

  Σn cn e^{j(2πn/T)t}  with  cn = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−j(2πn/T)t} dt = 1/T

Therefore

  xs(t) = (1/T) Σ_{n=−∞}^{∞} x(t) e^{j(2πn/T)t}

and

  Xs(f) = (1/T) Σ_{n=−∞}^{∞} X(f − n/T)
8 / 17
Anti-aliasing filter
[Figure: Xs(f) consists of copies of X(f) centred at 0, ±fs, ±2fs; the copies do not overlap provided fs − W > W]

X(f) can be recovered from Xs(f) using an "ideal reconstruction" or "anti-aliasing" filter

Nyquist Rate
Consider a signal x(t) with bandwidth W. Then, we can recover x(t) from its samples {x(nT)} provided that the sampling frequency fs = 1/T satisfies fs > 2W

The sampled version {x(nT)}, n ∈ Z, is a discrete-time signal, but not yet digital!
To represent the sampled signal using bits, we need to quantise it
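Ideal reconstruction interpolates the samples with shifted sinc pulses: x(t) = Σn x(nT) sinc((t − nT)/T). The sketch below (illustrative tone and rates; the infinite sum is truncated, so a small residual error remains) checks this for a tone sampled above the Nyquist rate.

```python
import math

# Nyquist sampling and (truncated) sinc reconstruction of a bandlimited tone.
W, fs = 10.0, 50.0            # signal bandwidth and sampling rate, fs > 2W
T = 1.0 / fs

def x(t):
    return math.sin(2 * math.pi * W * t + 0.3)

samples = [x(n * T) for n in range(-500, 501)]

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def x_rec(t):
    # x(t) = sum_n x(nT) sinc((t - nT)/T), truncated to 1001 terms
    return sum(samples[n + 500] * sinc(t / T - n) for n in range(-500, 501))

max_err = max(abs(x_rec(t) - x(t)) for t in (0.005, 0.013, 0.27, -0.41))
```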
9 / 17

Uniform Quantisation
The sampled signal can take continuous values. To convert it into digital form, we: 1) assign to each sample a discrete amplitude from the closest of a finite set of levels (with step size Δ), and 2) assign bits to those amplitudes

[Figure: samples of a signal mapped to the closest of eight uniformly spaced quantisation levels ±0.15, ±0.45, ±0.75, ±1.05, i.e., step size Δ = 0.3]
10 / 17
What the quantised signal looks like

[Figure: the quantised signal: a staircase waveform holding each sample at its nearest quantisation level]

Each sample x(nT ) is mapped to the nearest quantisation level


11 / 17

Sampling vs. Quantisation

Sampling is a lossless procedure as long as the sampling rate is


greater than the Nyquist rate:
) x(t) can be perfectly reconstructed from its samples x(nT )

Quantisation is always lossy: you cannot recover x(nT ) from its


quantised value!

If Q(z) denotes the quantised value of a sample x(nT) = z, the quantisation noise is defined as

  eQ(z) = z − Q(z)

If the quantiser step size is Δ, then eQ(z) lies in the interval [−Δ/2, Δ/2]

12 / 17
Quantisation Noise as a Random variable
  eQ(z) = z − Q(z)

We model eQ as a random variable uniformly distributed in [−Δ/2, Δ/2]. Why?
• If we quantise lots of samples and the step size is small, the set of samples quantised to a level Q will be approximately uniformly distributed in a length-Δ interval centred around Q.

[Figure: pdf of eQ: uniform with height 1/Δ over [−Δ/2, Δ/2]]

We can now easily compute the noise power

  NQ = E[eQ²] = ∫_{−Δ/2}^{Δ/2} u² (1/Δ) du = (1/Δ) [u³/3]_{−Δ/2}^{Δ/2} = Δ²/12

and its corresponding RMS value is Δ/√12
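The Δ²/12 result is easy to confirm by Monte Carlo (a sketch; the step size and input range are arbitrary):

```python
import random

# Monte Carlo check that the quantisation-noise power is Delta^2/12 when the
# error is uniform over [-Delta/2, Delta/2].
random.seed(0)
delta = 0.25

def quantise(z):
    return delta * round(z / delta)       # nearest level, step size delta

M = 200_000
acc = 0.0
for _ in range(M):
    z = random.uniform(-1.0, 1.0)
    e = z - quantise(z)                   # quantisation noise e_Q
    acc += e * e
NQ = acc / M                              # close to delta**2/12
```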
13 / 17

Signal to Quantisation Noise Ratio


• Assume that the signal to be quantised is a sinusoid taking values between −V and +V (in Volts)
• The signal power is then V²/2, and the RMS signal is V/√2
• The signal-to-noise ratio is

  SNR = signal power / noise power = (RMS signal)² / (RMS noise)² = (V²/2) / (Δ²/12)

• An n-bit uniform quantiser has 2ⁿ levels, and step size Δ = 2V/2ⁿ
  (it uses n bits to represent each sample of the signal)
• Hence the SNR can be written as

  SNR = (V²/2) / (Δ²/12) = 3 × 2^(2n−1), i.e., 1.76 + 6.02n dB

For fixed signal amplitude ±V:

• Larger n ⇒ smaller step size, better quality quantiser
• But larger n also means more bits to transmit !
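The 1.76 + 6.02n dB rule can be checked by actually quantising a full-scale sinusoid (a sketch; n = 8 and the mid-rise quantiser layout are illustrative choices):

```python
import math

# Quantise a full-scale sinusoid with an n-bit mid-rise uniform quantiser and
# compare the measured SNR with 1.76 + 6.02*n dB.
V, n = 1.0, 8
levels = 2 ** n
delta = 2 * V / levels

def quantise(z):
    i = math.floor(z / delta)                       # index of the level below z
    i = max(-levels // 2, min(levels // 2 - 1, i))  # clip to the 2^n levels
    return (i + 0.5) * delta                        # mid-rise output level

N = 100_000
sig = noise = 0.0
for k in range(N):
    z = V * math.sin(2 * math.pi * 37 * k / N)      # 37 cycles avoids lock-in
    e = z - quantise(z)
    sig += z * z
    noise += e * e
snr_db = 10 * math.log10(sig / noise)               # about 1.76 + 6.02*n dB
```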
14 / 17
Data Rate of the Digitised Source
Assuming we sample a signal x(t) having bandwidth W at Nyquist
rate, and we use an n-bit uniform quantiser, the digitised source
will have a rate of

R = n 2W bits per second

(Because we have 2W samples/sec., each represented with n bits)

Assume we want to digitise a speech signal whose bandwidth


W = 3.2kHz using Nyquist sampling and a 10-bit uniform
quantiser

Bit Rate R = 10 ⇥ 2 ⇥ 3200 = 64000 bits per second = 64 kbps

15 / 17

Non-uniform quantisation

Mobile phones use clever quantisers which reduce the bit-rate by a


factor of 5, from 64kbps (our uniform quantiser) to 13 kbps!
• Idea: smaller step sizes in the vicinity of smaller (or more frequently occurring) signal values, larger step sizes for larger (or rarer) signal values
• This is called non-uniform quantisation or sometimes,
companding (Examples Paper 9, Problem 1.b)

16 / 17
Summary

• We learned how to digitise band-limited, continuous-valued


sources: Sample at Nyquist rate, then quantise
• Sampling is a lossless operation (when sampling rate >
Nyquist rate), but quantisation is lossy
• Trade-off in Quantisation: Bits/sample (n) vs SNR:
Larger n, lower quantisation noise, but more bits to transmit
• We will next learn how to associate bits with signals in order
to transmit them across communication channels
• To transport bits across a channel, it doesn’t matter where
they came from!
You should now be able to do all of Examples Paper 8

17 / 17
1B Paper 6: Communications
Handout 4: Digital Baseband Modulation

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 42

Data Transmission
We have seen how analogue sources can be digitised. E.g., an MPEG or QuickTime file is a stream of bits

  → ...10110010001101010...

Now we have to transport those bits across a channel:

(Digitised source)
  110 001 100 111 110 000 100 111 → Transmitter → input waveform → Channel → output waveform → Receiver
2 / 42
[Figure: Tx = Channel Encoder followed by Modulator; Rx = Demodulator followed by Channel Decoder. Source bits 110001 are encoded to 010110100 and modulated; the channel output is demodulated to (possibly erroneous) bits 010010110, which the decoder maps back to 110001]

The transmitter (Tx) does two things:
1. Encoding: adding redundancy to the source bits to protect against noise
2. Modulation: transforming the coded bits into waveforms

The receiver (Rx) does:
• Demodulation: noisy output waveform → output bits
• Decoding: try to correct errors in the output bits and recover the source bits
3 / 42

Modulation/Demodulation
We’ll first consider the modulation and demodulation blocks
assuming that the channel encoder/decoder are fixed, and look at
the design of the channel encoder and decoder later.
(Encoded bits)
  10110100 → Modulator → x(t) → (+ n(t)) → y(t) → Demodulator → 10100110

We now study a digital baseband modulation technique called


Pulse Amplitude Modulation (PAM) & analyse its performance
over an Additive White Gaussian Noise (AWGN) channel
4 / 42
The Symbol Constellation
The digital modulation scheme has two basic components.
1. The first is a mapping from bits to real/complex numbers, e.g.

0! A, 1!A (binary symbols)


00 ! 3A, 01 ! A, 10 ! A, 11 ! 3A (4-ary symbols)

The set of values the bits are mapped to is called the constellation,
e.g., the 4-ary constellation above is { 3A, A, A, 3A}.
Once we fix a constellation, a sequence of bits can be uniquely
mapped to constellation symbols. E.g., with constellation { A, A}

0101110010 ! A, A, A, A, A, A, A, A, A, A

With constellation { 3A, A, A, 3A}, the same sequence of bits is


mapped as 01 01 11 00 10 ! A, A, 3A, 3A, A

In a constellation with M symbols, each symbol represents log2 M


bits
5 / 42
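The bits → symbols step above can be sketched in a few lines of Python; the 4-ary mapping and the value A = 1 follow the slide's example (illustrative choices):

```python
# Sketch of the bits -> symbols mapping for the 4-ary PAM constellation
# {-3A, -A, A, 3A} from the slide. A = 1.0 is an arbitrary choice.

A = 1.0
MAPPING = {"00": -3 * A, "01": -A, "10": A, "11": 3 * A}

def bits_to_symbols(bits: str) -> list:
    """Group the bit string into pairs and map each pair to a symbol."""
    assert len(bits) % 2 == 0
    return [MAPPING[bits[i:i + 2]] for i in range(0, len(bits), 2)]

symbols = bits_to_symbols("0101110010")
# 01 01 11 00 10 -> -A, -A, 3A, -3A, A
print(symbols)   # [-1.0, -1.0, 3.0, -3.0, 1.0]
```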

The Pulse Shape


2. The second component of Pulse Amplitude Modulation is a
unit-energy baseband waveform denoted p(t), called the pulse
shape. E.g., a sinc pulse or a rect pulse:

p(t) = (1/√T) sinc(πt/T)    or    p(t) = 1/√T for t ∈ [−T/2, T/2),  0 otherwise

T is called the symbol time of the pulse
A sequence of constellation symbols X0, X1, X2, . . . is used to
generate a baseband signal as follows

xb(t) = Σ_k Xk p(t − kT)

Thus we have the following important steps to associate bits with
a baseband signal xb(t):

. . . 0 1 0 1 1 1 0 0 1 0 . . . → X0, X1, X2, . . . → Σ_k Xk p(t − kT)
6 / 42
Rate of Transmission
The modulated baseband signal is xb(t) = Σ_k Xk p(t − kT).
With the rect pulse shape

p(t) = 1/√T for t ∈ [−T/2, T/2),  0 otherwise

and Xk ∈ {+A, −A}, xb(t) looks like a square wave:

[Figure: xb(t) stepping between +A/√T and −A/√T over successive
symbol intervals, shown for t up to about 4T]

Every T seconds, a new symbol is introduced by shifting the pulse
and modulating its amplitude with the symbol.

The transmission rate is 1/T symbols/sec or (log2 M)/T bits/second
7 / 42
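The construction of xb(t) from the symbols can be sketched in discrete time; the oversampling factor and symbol values below are illustrative choices, not values fixed by the notes:

```python
import numpy as np

# Discrete-time sketch of xb(t) = sum_k Xk p(t - kT) with the
# rectangular pulse p(t) = 1/sqrt(T) on [-T/2, T/2).

T = 1.0
ns = 8                                  # samples per symbol period
symbols = [1.0, -1.0, 1.0, 1.0]         # Xk in {+A, -A} with A = 1

pulse = np.full(ns, 1.0 / np.sqrt(T))   # samples of the rect pulse
xb = np.concatenate([Xk * pulse for Xk in symbols])

# One new symbol every T seconds -> rate 1/T symbols/s.
print(xb.shape)   # (32,)
```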

Desirable Properties of the Pulse Shape p(t)


p(t) is chosen to satisfy the following important objectives:
1. We want p(t) to decay quickly in time, i.e., the effect of
symbol Xk should not start much before t = kT or last much
beyond t = (k + 1)T

2. We want p(t) to be approximately band-limited.


For a fixed sequence of symbols {Xk}, the spectrum of xb(t) is

Xb(f) = F[ Σ_k Xk p(t − kT) ] = P(f) Σ_k Xk e^(−j2πfkT)

Hence the bandwidth of xb(t) is the same as that of the pulse p(t)

3. The retrieval of the information sequence from the noisy


received waveform xb (t) + n(t) should be simple and relatively
reliable. In the absence of noise, the symbols {Xk }k2Z should
be recovered perfectly at the receiver.
8 / 42
Orthonormality of pulse shifts
Consider the third objective, namely, simple and reliable detection.
To achieve this, the pulse is chosen to have the following
“orthonormal shifts” property:
∫_{−∞}^{∞} p(t − kT) p(t − mT) dt = 1 if k = m,  0 if k ≠ m     (1)

We’ll see how this property makes signal detection at the Rx simple
• This property is satisfied by the rect pulse shape

  p(t) = 1/√T for t ∈ [−T/2, T/2),  0 otherwise

• The sinc pulse p(t) = (1/√T) sinc(πt/T) also has orthonormal
  shifts! (You will show this in Examples Paper 9, Q.2)

9 / 42
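Property (1) can be checked numerically for the rect pulse (the sinc case is Examples Paper 9, Q.2); the grid spacing and the value of T below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of the orthonormal-shifts property (1) for the rect
# pulse: integral of p(t - kT) p(t - mT) dt = 1 if k == m, else 0.

T = 1.0
dt = 1e-3
t = np.arange(-2 * T, 6 * T, dt)

def p(t):
    """Unit-energy rect pulse supported on [-T/2, T/2)."""
    return np.where((t >= -T / 2) & (t < T / 2), 1.0 / np.sqrt(T), 0.0)

for k in range(3):
    for m in range(3):
        inner = float(np.sum(p(t - k * T) * p(t - m * T)) * dt)
        target = 1.0 if k == m else 0.0
        assert abs(inner - target) < 1e-2   # Riemann-sum tolerance
print("orthonormal shifts verified for the rect pulse")
```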

Time Decay vs. Bandwidth Trade-o↵


The first two objectives say that we want p(t) to:
1. Decay quickly in time
2. Be approximately band-limited
But . . . faster decay in time ⇔ larger bandwidth
Consider the pulse p(t) = (1/√T) sinc(πt/T)

[Figure: p(t) and its spectrum P(f)]

The sinc is perfectly band-limited to W = 1/(2T)
But decays slowly in time: |p(t)| ∼ 1/|t|
10 / 42
Next consider the rectangular pulse

p(t) = 1/√T for t ∈ [−T/2, T/2),  0 otherwise

[Figure: |P(f)|, a sinc-shaped spectrum]

This pulse is perfectly time-limited to the interval [−T/2, T/2).
But . . .
• Decays slowly in freq.: |P(f)| ∼ 1/|f|
• Main-lobe bandwidth = 1/T
11 / 42

In practice, the pulse shape is often chosen to have a root raised


cosine spectrum

[Figure: root raised cosine spectrum P(f), supported slightly beyond [−1/(2T), 1/(2T)]]

Bandwidth slightly larger than 1/(2T); decay in time |p(t)| ∼ 1/|t|²

A happy compromise!
• More on raised cosine pulses in 3F4
• For intuition, it often helps to envision xb (t) with a rect pulse,
though it is never used in practice

12 / 42
Reminder

Data ! constellation symbols ! continuous waveform


Thus we have the following important steps to associate bits with
a baseband signal xb (t):
X
. . . 0 1 0 1 1 1 0 0 1 0 . . . ! X0 , X1 , X2 , . . . ! Xk p(t kT )
k

13 / 42

PAM Demodulation
Now, assume that we have picked a constellation and a pulse shape
satisfying the objectives, and we transmit the baseband waveform
X
xb (t) = Xk p(t kT )
k

over a baseband channel y (t) = xb (t) + n(t)

xb (t) y (t)

n(t) noise

How does the receiver recover the information symbols


{X0 , X1 , X2 , . . .} from y (t)?
• This process is called demodulation
• We will see that the orthonormal shift property of p(t) leads
to a simple and elegant demodulator
14 / 42
Matched Filter Demodulator
Let us first understand the operation assuming no noise, i.e.,
X
y (t) = xb (t) = Xk p(t kT )
k

t = mT
Filter r(t)
y(t) r(mT )
h(t) = p(−t)

y(t) is passed through a filter with impulse response h(t) = p(−t).
This is called a matched filter. The filter output is

r(t) = y(t) ⋆ h(t) = xb(t) ⋆ h(t)   (assuming no noise)
     = ∫_{−∞}^{∞} xb(τ) h(t − τ) dτ
     = ∫_{−∞}^{∞} xb(τ) p(τ − t) dτ
     = Σ_k Xk ∫_{−∞}^{∞} p(τ − kT) p(τ − t) dτ
15 / 42

Matched filter output


t = mT
Filter r(t)
y(t) r(mT )
h(t) = p(−t)

r(t) = Σ_k Xk ∫_{−∞}^{∞} p(τ − kT) p(τ − t) dτ
     = Σ_k Xk ∫_{−∞}^{∞} p(u + t − kT) p(u) du   (using u = τ − t)
     = Σ_k Xk g(t − kT)

where
g(t) = ∫_{−∞}^{∞} p(u + t) p(u) du

Matched filter output r(t) is of the same form as the PAM signal, except
that the ‘pulse’ is now g(t)
16 / 42
Sampled matched filter output
t = mT
Filter r(t)
y(t) r(mT )
h(t) = p(−t)

By sampling the filter output at time t = mT, m = 0, 1, 2, . . ., you
get

r(mT) = Σ_k Xk g((m − k)T)

Because of the orthonormal shifts property of p(t),

g((m − k)T) = ∫_{−∞}^{∞} p(u + (m − k)T) p(u) du = 1 if k = m,  0 if k ≠ m

Therefore,

r(mT) = Σ_k Xk g((m − k)T) = Xm

Orthonormal shifts property is crucial for this demodulator to work!


17 / 42
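The whole noiseless chain — modulate, matched-filter, sample at t = mT — can be sketched in discrete time; with the rect pulse the symbols come back exactly. All numerical choices (ns, the symbol sequence) are illustrative:

```python
import numpy as np

# Noiseless matched-filter demodulation sketch with the rect pulse.
# Correlating y(t) against the time-reversed pulse and sampling at
# t = mT recovers each Xm, thanks to the orthonormal-shifts property.

ns = 16                                    # samples per symbol period (T = 1)
dt = 1.0 / ns
symbols = np.array([1.0, -1.0, -1.0, 1.0])

pulse = np.ones(ns)                        # rect pulse samples
pulse /= np.sqrt(np.sum(pulse ** 2) * dt)  # normalise to unit energy

y = np.concatenate([Xk * pulse for Xk in symbols])   # transmitted signal
r = np.convolve(y, pulse[::-1]) * dt                 # matched filter p(-t)

# In this indexing, g(t - kT) peaks at sample index (k + 1)*ns - 1:
recovered = r[ns - 1::ns][:len(symbols)]
print(np.round(recovered, 6))   # matches the transmitted symbols
```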

For two different choices of pulse p(t), we now visualize
• the transmitted PAM signal Σ_k Xk p(t − kT), and
• the matched filter output r(t) = Σ_k Xk g(t − kT)

18 / 42
Example 1: Transmitted signal
Rectangular pulse: p(t) = 1/√T for t ∈ [−T/2, T/2),  0 otherwise
Assume T = 1 and the symbols {X0 , X1 , X2 , X3 } = {1, 1, 1, 1}

[Figure: two panels over t ∈ [−1, 4], amplitudes between −1 and 1]

P3
Right panel shows the transmitted PAM signal k=0 Xk p(t kT )
Left panel shows each component of the sum separately
19 / 42

Example 1: Matched filter output


t = mT
Filter r(t)
y(t) r(mT )
h(t) = p(−t)

Matched filter output r(t) = Σ_{k=0}^{3} Xk g(t − kT)

g(t) = ∫_{−∞}^{∞} p(u + t) p(u) du = 1 + t/T for −T ≤ t ≤ 0,   1 − t/T for 0 ≤ t ≤ T

[Figure: two panels over t ∈ [−1, 4], amplitudes between −1 and 1]

Left: each component of the sum separately, Right: r (t)


20 / 42
Example 2

A practical choice: Root-raised cosine pulse p(t)

[Figure: root raised cosine spectrum P(f)]

Assume T = 10 and the symbols {X0, X1, X2, X3} = {1, 1, 1, 1}

Let us visualise:
• the transmitted PAM signal Σ_{k=0}^{3} Xk p(t − kT), and
• the matched filter output r(t) = Σ_{k=0}^{3} Xk g(t − kT)

21 / 42

Transmitted PAM signal Σ_{k=0}^{3} Xk p(t − kT): [figure]

Matched filter output r(t) = Σ_{k=0}^{3} Xk g(t − kT): [figure]

Figures from Principles of Digital Communication by B. Rimoldi, CUP 2016. 22 / 42


What happens when there is noise at
the receiver?

23 / 42

Demodulation with Noisy y (t)


Now consider the noisy case. The receiver gets y(t) = xb(t) + n(t)

[diagram: y(t) → matched filter h(t) = p(−t) → r(t), sampled at t = mT to give r(mT)]

The matched filter output is

r(t) = y(t) ⋆ h(t) = xb(t) ⋆ h(t) + n(t) ⋆ h(t)
     = Σ_k Xk g(t − kT) + ∫_{−∞}^{∞} n(τ) p(τ − t) dτ

Sampling at t = mT, m = 0, 1, 2, . . ., we now get

r(mT) = Xm + Nm

where Nm is the noise part of the filter output at time mT:

Nm = ∫_{−∞}^{∞} n(τ) p(τ − mT) dτ

24 / 42
Properties of the Noise
Let us denote r (mT ), the sampled output at time mT , by Ym .

Y m = X m + Nm , m = 0, 1, 2, . . .

Note that this is a discrete-time channel. We have converted the


continuous-time problem into a discrete-time one of detecting the
symbols Xm from the noisy outputs Ym .
• To do this, we first need to understand the properties of the
noise Nm . Recall that

Nm = ∫_{−∞}^{∞} n(τ) p(τ − mT) dτ

• Nm is a random variable whose distribution depends on the


statistics of the random process n(t).
You will learn about random processes and their characterisation in 3F1
& 3F4, but this is outside the scope of this course. For now, we will
directly specify the distribution of Nm and analyse the detection problem.
25 / 42

Y m = X m + Nm , m = 0, 1, 2, . . .
Modelling n(t) as a Gaussian process leads to the following
important characterisation of Nm:
• For each m, Nm is a Gaussian random variable with zero
mean, and variance σ² that can be estimated empirically
• Further N1, N2, . . . are independent
• Thus the sequence of random variables {Nm}, m = 0, 1, . . .
are independent and identically distributed as N(0, σ²).

Detection
• At the Rx, how do we detect the information symbol Xm from
Ym for m = 0, 1, . . .?
• Remember that each Xm belongs to the symbol constellation

26 / 42
Detection for Binary PAM
Let’s start with a simple binary constellation, then generalise.
Consider a constellation where each Xm ∈ {−A, A}. This is called
binary PAM or BPSK (‘Binary Phase Shift Keying’)
Y = X + N

The detection problem is now:

Given Y, how to decide whether X = A or X = −A?
Observe that:
Y = A + N if X = A and Y = −A + N if X = −A
• N is distributed as N(0, σ²)
• Therefore the pdf f(Y | X = A) is Gaussian with mean A and
variance σ²
• Similarly the pdf f(Y | X = −A) is Gaussian with mean −A
and variance σ²
Note: Adding a constant to a random variable just shifts the
mean, does not change the shape of the distribution
27 / 42

f(Y | X = −A)        f(Y | X = A)

        −A        A        Y

Let X̂ denote the decoded symbol. When the symbols A and −A
are a priori equally likely, the optimal detection rule is:
X̂ = A if f(Y | X = A) ≥ f(Y | X = −A)
X̂ = −A if f(Y | X = −A) > f(Y | X = A)
“Choose the symbol from which Y is most likely to have occurred”
• This decoder is called the maximum-likelihood decoder
• This decoder is intuitive and seems sensible, and is in fact, the
optimal detection rule when all the constellation symbols are
equally likely (we will not prove this here)
• It is then a special case of the Maximum a Posteriori (MAP)
detection rule, which minimises the probability of detection
error (discussed in 3F4)
28 / 42
f(Y | X = −A)        f(Y | X = A)

        −A        A        Y

The detection rule can be compactly written as

X̂ = arg max_{x ∈ {A, −A}} f(Y | X = x)
   = arg max_{x ∈ {A, −A}} (1/√(2πσ²)) e^{−(Y − x)²/(2σ²)}
   = arg min_{x ∈ {A, −A}} (Y − x)²

Thus the detection rule is just: X̂ = A if Y ≥ 0, X̂ = −A if Y < 0

“Choose the constellation symbol closest to the output Y ”
29 / 42

Decision Regions
The detection rule partitions the space of Y (the real line) into
decision regions.
For binary PAM, we just derived the following decision regions:

X̂ = −A for Y < 0,    X̂ = A for Y ≥ 0

Q: When does the detector make an error?

A: When X = A and Y < 0, or when X = −A and Y > 0

We will calculate the probability of error shortly, but let’s first find
the detection rule for general PAM constellations
30 / 42
Detection for General PAM Constellations
The detection rule can easily be extended to a general
constellation C
• E.g., C may be the 3-ary constellation {−2A, 0, 2A} or a 4-ary
constellation {−3A, −A, A, 3A}
• The maximum-likelihood principle is the same: “Choose the
constellation symbol from which Y is most likely to have
occurred ”

X̂ = arg max_{x ∈ C} f(Y | X = x)
   = arg max_{x ∈ C} (1/√(2πσ²)) e^{−(Y − x)²/(2σ²)} = arg min_{x ∈ C} (Y − x)²

Thus, the detection rule for any PAM constellation boils down to:
“Choose the constellation symbol closest to the output Y ”
31 / 42

Example: 3-ary PAM


Y = X + N,   N ∼ N(0, σ²)
What is the optimal detection rule and the associated decision
regions if X belongs to the 3-ary constellation {−2A, 0, 2A}?
The “nearest symbol to Y ” decoding rule yields

X̂ = −2A  if Y < −A
X̂ = 0    if −A ≤ Y < A
X̂ = 2A   if Y ≥ A

Decision regions on the Y-axis: (−∞, −A) → −2A,  [−A, A) → 0,  [A, ∞) → 2A

32 / 42
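The “nearest symbol” rule can be sketched directly; the binary and 3-ary constellations below mirror the handout's examples with A = 1 (illustrative values):

```python
# Minimal nearest-symbol (maximum-likelihood) detector sketch for any
# real PAM constellation: pick the symbol minimising (Y - x)^2.

def detect(y: float, constellation) -> float:
    return min(constellation, key=lambda x: (y - x) ** 2)

A = 1.0
binary = [-A, A]
ternary = [-2 * A, 0.0, 2 * A]

print(detect(0.4, binary))      # 1.0  (Y >= 0       -> A)
print(detect(-1.3, ternary))    # -2.0 (Y < -A       -> -2A)
print(detect(0.7, ternary))     # 0.0  (-A <= Y < A  -> 0)
```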
Probability of Detection Error
Y =X +N
Consider binary PAM with X ∈ {−A, A}. The decision regions are:

X̂ = −A for Y < 0,    X̂ = A for Y ≥ 0

The detector makes an error when X = A and Y < 0, or when
X = −A and Y > 0
The probability of detection error is
Pe = P(X̂ ≠ X)
   = P(X = A) P(X̂ = −A | X = A) + P(X = −A) P(X̂ = A | X = −A)
   = ½ P(X̂ = −A | X = A) + ½ P(X̂ = A | X = −A)
(The symbols are equally likely ⇒ P(X = A) = P(X = −A) = ½)   33 / 42

Let us first examine P(X̂ = A | X = −A)

P(X̂ = A | X = −A) = P(Y > 0 | X = −A)
                   = P(−A + N > 0 | X = −A)
                   = P(N > A | X = −A)  =(a)  P(N > A)

(a) is true because the noise random variable N is independent of
the transmitted symbol X. Similarly,

P(X̂ = −A | X = A) = P(Y < 0 | X = A)
                   = P(A + N < 0 | X = A)
                   = P(N < −A | X = A) = P(N < −A)

34 / 42
The probability of detection error is therefore

Pe = ½ P(X̂ = −A | X = A) + ½ P(X̂ = A | X = −A)
   = ½ P(N > A) + ½ P(N < −A)
   = P(N > A)          (b)
   = P(N/σ > A/σ)      (c)

• (b) holds due to the symmetry of the Gaussian pdf N(0, σ²):

[Figure: Gaussian pdf with equal tail areas P(N < −A) and P(N > A)]

• In (c), we have expressed the probability in terms of a
standard Gaussian random variable with distribution N(0, 1)
• Recall from 1B Paper 7 (Probability) that if N is distributed
as N(0, σ²) then N/σ is distributed as N(0, 1)
35 / 42

The Q-function
The error probability is usually expressed in terms of the
Q-function, which is defined as:
Q(x) = ∫_x^∞ (1/√(2π)) e^{−u²/2} du

[Figure: N(0, 1) pdf with the tail area Q(x) shaded to the right of x]

• Q(x) is the probability that a standard Gaussian N(0, 1)
random variable takes value greater than x
• Also note that Q(x) = 1 − Φ(x), where Φ(·) is the cdf of a
N(0, 1) random variable
36 / 42
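In code, Q(x) is conveniently computed from the complementary error function via the standard identity Q(x) = ½ erfc(x/√2):

```python
import math

# The Q-function via the complementary error function in the Python
# standard library: Q(x) = 0.5 * erfc(x / sqrt(2)).

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2))

print(Q(0.0))   # 0.5 -- half the N(0,1) mass lies above 0
```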
Pe in terms of the signal-to-noise ratio
The probability of detection error is therefore

Pe = P(N > A) = P(N/σ > A/σ) = Q(A/σ) = Q(√(Es/σ²))

where Es is the average energy per symbol of the constellation:

Es = ½ (A² + (−A)²) = A²

• For a binary constellation, each symbol corresponds to 1 bit.
⇒ the average energy per bit Eb is also equal to A² in this
case
• For an M-point constellation, the average energy per symbol
Es = Eb log2 M
• Eb/σ² is called the signal-to-noise ratio (snr) of the transmission
scheme
• Pe can be plotted as a function of the snr Eb/σ² . . .
37 / 42
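A small Monte-Carlo sketch is a useful sanity check on Pe = Q(A/σ); all parameter values (A, σ, number of trials, seed) are illustrative:

```python
import math
import random

# Monte-Carlo estimate of the binary-PAM error probability, compared
# against the formula Pe = Q(A / sigma).

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(0)
A, sigma, trials = 1.0, 1.0, 200_000
errors = 0
for _ in range(trials):
    X = random.choice((-A, A))
    Y = X + random.gauss(0.0, sigma)
    X_hat = A if Y >= 0 else -A          # nearest-symbol detection
    errors += (X_hat != X)

# Both values should be close to Q(1) ~ 0.159
print(round(errors / trials, 3), round(Q(A / sigma), 3))
```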

Pe vs snr for binary PAM


[Plot: Pe on a log scale (10^0 down to 10^−16) against snr Eb/σ² from −2 to 18 dB]

To get Pe of 10⁻³, we need snr Eb/σ² ≈ 9 dB


38 / 42
Error Probability vs Transmit Power

The probability of error for binary PAM decays rapidly as snr increases:
• Q(x) ≈ e^{−x²/2} for large x > 0 ⇒ Pe ≈ e^{−snr/2}

Can we set the snr Eb/σ² to be as high as we want, by increasing Eb?
(i.e., by increasing A since Eb = Es = A² for binary PAM)
• The problem is that transmitted power also increases!
• Intuition: 1 symbol transmitted every T seconds with average
energy Es ⇒ transmit power is Es/T
• Thus as you increase the snr, your battery drains faster!

39 / 42

Power of PAM signal


xb(t) = Σ_k Xk p(t − kT)

With any constellation the power of the baseband PAM signal
xb(t) is

Es/T = (Eb log2 M)/T ,

where
• Es is the average symbol energy of the constellation.
• Eb is the average energy per bit

Intuition:
• In each symbol period of length T , a symbol with average
energy Es modulates a unit energy pulse
• A rigorous calculation of power has to take into account the
fact that the transmitted symbols X1 , X2 , . . . are drawn
randomly from the constellation (done in 3F4)
40 / 42
Pulse Amplitude Modulation - The Key Points
10110100 10100100

xb (t) y (t)
Modulator + Demodulator

n(t)

PAM is a way to map a sequence of information bits to a


continuous-time baseband waveform
1. Pick a constellation, map the information bits to symbols
X1 , X2 , . . . in the constellation
2. These symbols then modulate the amplitude of a pulse shape
p(t) to generate the baseband waveform xb (t)
xb(t) = Σ_k Xk p(t − kT)
41 / 42

Desirable properties of the pulse shape p(t):

• p(t) should decay quickly in time; its bandwidth W shouldn’t
be too large
• Orthonormal shifts property for simple and reliable decoding
At the receiver, first demodulate then detect:
• The demodulator is a matched filter with IR h(t) = p(−t)
• Matched filter output is sampled at times . . . , 0, T, 2T, . . ..
At time mT, the output is
Ym = Xm + Nm
Nm is Gaussian noise with zero mean and variance σ² that
can be empirically estimated
• Detection rule: X̂m = the constellation symbol closest to Ym
Probability of detection error can be calculated:
• Decays exponentially with snr Es/σ²
• Es is average energy/symbol of the constellation; power of
PAM waveform xb(t) is Es/T
42 / 42
1B Paper 6: Communications
Handout 5: Digital Passband Modulation

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 22

Pulse Amplitude Modulation (recap)


Recall that the PAM signal carrying the information symbols
X1, X2, . . . is

xb(t) = Σ_k Xk p(t − kT)

• xb (t) is a baseband signal


(Recall: its bandwidth is the same as that of the pulse p(t))
• If the channel is a baseband channel, e.g., ethernet cable, we
can directly transmit xb (t)
But most channels are passband — we are only allowed to transmit
our signal over a fixed frequency band centred around a carrier
frequency fc .
E.g., a wireless channel may have fc = 2 GHz and channel
bandwidth = 10 MHz
How do you “up-convert” the PAM signal to passband ?
2 / 22
Baseband to Passband
The natural thing to do is to modulate the amplitude of a
high-frequency carrier with xb(t):

x(t) = xb(t) cos(2πfc t) = [ Σ_k Xk p(t − kT) ] cos(2πfc t)

x(t) y (t)

n(t)

• For lack of a standard name, we’ll call this passband


modulation scheme up-converted PAM.
• Note the similarity with analogue modulation (DSB-SC): here
the information signal is a “digital” waveform, which is
determined by the constellation symbols and a pulse.
3 / 22

Up-converted PAM with a rectangular pulse


p(t) = 1/√T for t ∈ (0, T],  0 otherwise

is also called “Amplitude Shift Keying” (ASK).

In this case, the transmitted passband waveform is

x(t) = Σ_k Xk p(t − kT) cos(2πfc t)
     = (1/√T) Xk cos(2πfc t)   for t ∈ [kT, (k + 1)T)

where Xk is chosen from a PAM constellation, say {−A, A}.
Note that a rectangular pulse (and consequently, ASK) is not
bandwidth efficient.
(You may encounter the terminology ASK in past Tripos questions. You
will not be examined on it, but recognise that it is just up-converted
PAM with a rectangular pulse shape.)
4 / 22
Demodulation of up-converted PAM at the Rx
First, down-convert via product modulator + low-pass filter
[diagram: y(t) × cos(2πfc t) → low-pass filter [−W, W] → yb(t)]

yb(t) = xb(t) + nb(t) = Σ_k Xk p(t − kT) + nb(t)

where nb (t) is baseband noise.


Next demodulate baseband waveform yb (t). We already know how
to do this: pass yb (t) through matched filter, and then sample at
times {mT }m2Z
t = mT
Filter r(t)
yb(t) r(mT )
h(t) = p(−t)

5 / 22

Detection

• The sampled output of the matched filter at time mT is

Ym = Xm + Nm

where Nm = ∫ nb(τ) p(τ − mT) dτ (see prev. handout for details)

• Nm is Gaussian with zero mean and variance σ²

(σ² can be empirically estimated)


• Maximum-likelihood detection: Choose X̂m to be the
constellation symbol closest to Ym

6 / 22
Spectrum of up-converted PAM
The transmitted waveform x(t) = xb(t) cos(2πfc t) has spectrum

X(f) = ½ [Xb(f − fc) + Xb(f + fc)]

Note that

xb(t) = Σ_k Xk p(t − kT)

is a real signal since both the pulse p(t) and the symbols {Xk} are
real-valued ⇒ Xb(−f) = Xb*(f)
• The spectrum X(f) for f ≥ 0 determines X(f) for f < 0

[Figure: |Xb(f)| occupying [−W, W]; |X(f)| with lower and upper
sidebands in the bands [fc − W, fc + W] and [−fc − W, −fc + W]]

• Sending both sidebands is redundant, since all the information
is contained in one   7 / 22

A bandwidth-efficient alternative to up-converted PAM


• One way to save bandwidth is to send only one sideband, just
like SSB-SC (see Amplitude Modulation handout)
• But since we are transmitting digital information, there is a
better way: make the information symbols complex-valued
The baseband waveform is again

xb(t) = Σ_k Xk p(t − kT)

but now the constellation from which the symbols Xk are drawn is
complex-valued (i.e., Xk are now two-dimensional)
xb(t) is now complex, but the passband signal we transmit has to
be real. It is generated as

x(t) = Re[ xb(t) e^{j2πfc t} ]
     = Re(xb(t)) cos(2πfc t) − Im(xb(t)) sin(2πfc t)

This is called Quadrature Amplitude Modulation (QAM)
8 / 22
Quadrature Amplitude Modulation
The upconverted QAM waveform that we transmit is

x(t) = Σ_k p(t − kT) [ Re(Xk) cos(2πfc t) − Im(Xk) sin(2πfc t) ]
     = Σ_k p(t − kT) |Xk| cos(2πfc t + φk)

where |Xk| and φk denote the magnitude and phase of the
complex symbol Xk
Thus, one can understand QAM in two ways:
1. QAM has two carriers, the cosine carries Re(Xk ) and the sin
carries the Im(Xk ).
2. In QAM, the information symbol modulates both the
amplitude and phase of the carrier; in up-converted PAM the
information symbol is real and only modulates the amplitude
• Pulse shape p(t) is the same as that for PAM
• The main di↵erence between QAM and PAM is the
constellation. In PAM, Im(Xk ) = 0. 9 / 22

Some typical QAM Constellations


Im(Xk ) Im(Xk )

Re(Xk ) Re(Xk )
A A A A

BPSK QPSK

A A

8-PSK 16-QAM

In “Phase Shift Keying” (PSK), the magnitude of Xk is constant,


and the information is in the phase of the symbol.
In a constellation with M symbols, each symbol corresponds to
log2 M bits
10 / 22
Average Energy per Symbol
Im(Xk )

Re(Xk )
A A A A A A

BPSK QPSK 8-PSK

For all the PSK constellations, average symbol energy Es = A²

For 16-QAM (a 4 × 4 grid with spacing d between adjacent points),
the average energy per symbol is

Es = 40d²/16 = 2.5d²

Average energy per bit Eb = Es /log2 M


11 / 22
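The Es = 2.5d² figure for 16-QAM can be reproduced by averaging |x|² over the 4 × 4 grid with spacing d (d = 2 is an arbitrary test value):

```python
# Average symbol energy of 16-QAM: points with real and imaginary
# parts in {-3d/2, -d/2, d/2, 3d/2}, so adjacent points are d apart.

d = 2.0
levels = [-3 * d / 2, -d / 2, d / 2, 3 * d / 2]
constellation = [complex(re, im) for re in levels for im in levels]

Es = sum(abs(x) ** 2 for x in constellation) / len(constellation)
print(Es, 2.5 * d ** 2)   # both 10.0
```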

The transmitted waveform is

x(t) = Σ_k p(t − kT) [ Xk^r cos(2πfc t) − Xk^i sin(2πfc t) ]

where Xk^r := Re(Xk) and Xk^i := Im(Xk)

x(t) y (t)

n(t)

• The energy per symbol is important because the power of
x(t) ∝ Es/T
• At the Rx, we have to down-convert y (t) to a baseband
waveform via product modulators and low-pass filter
• For QAM, we need two product modulators, one for the
cosine and the other for sine

12 / 22
At the receiver, we have

y(t) = Σ_k p(t − kT) [ Xk^r cos(2πfc t) − Xk^i sin(2πfc t) ] + n(t)

[diagram: y(t) × cos(2πfc t) → low-pass filter [−W, W] → y^r(t);
 y(t) × (−sin(2πfc t)) → low-pass filter [−W, W] → y^i(t)]

After down-converting, we get

y^r(t) = Σ_k Xk^r p(t − kT) + n^r(t)
y^i(t) = Σ_k Xk^i p(t − kT) + n^i(t)

where n^r(t) and n^i(t) are filtered (baseband) versions of n(t).

Next perform matched-filter demodulation of y^r(t) and y^i(t)
13 / 22

Demodulation

[diagram: y^r(t) → matched filter h(t) = p(−t), sampled at t = mT → Ym^r;
 y^i(t) → matched filter h(t) = p(−t), sampled at t = mT → Ym^i]

• The sampled outputs of the matched filters for m = 0, 1, . . .
are:

Ym^r = Xm^r + Nm^r
Ym^i = Xm^i + Nm^i

• It can be shown that Nm^r and Nm^i are each independent
Gaussians distributed as N(0, σ²) for each m
• For each m, we now have to detect the complex-valued
constellation symbol Xm = (Xm^r, Xm^i) from Ym = (Ym^r, Ym^i)
14 / 22
Detection
The discrete-time channel is

Y = X + N

where X, Y, N are now two-dimensional vectors, i.e., complex
numbers with X = (X^r, X^i) and Y = (Y^r, Y^i)
The optimal detection rule (assuming all constellation symbols are
equally likely) is the maximum-likelihood detection rule

X̂ = arg max_{x ∈ C} f(Y | x)

“Choose the symbol from which Y is most likely to have occurred”
The conditional distribution of Y = (Y^r, Y^i) given x = (x^r, x^i) is

f(Y | x) = (1/√(2πσ²)) e^{−(Y^r − x^r)²/(2σ²)} · (1/√(2πσ²)) e^{−(Y^i − x^i)²/(2σ²)}
         = (1/(2πσ²)) e^{−[(Y^r − x^r)² + (Y^i − x^i)²]/(2σ²)}

15 / 22

The optimal detector is therefore

X̂ = arg min_{x ∈ C} (Y^r − x^r)² + (Y^i − x^i)² = arg min_{x ∈ C} |Y − x|²

Choose the constellation symbol x closest to observed output Y

(Same detection principle as PAM, but the symbols are complex in
QAM)
Example 1: The decision regions for QPSK
Im(Y )

X̂ = p1
p4 p1

A
Re(Y )

p3 p2

16 / 22
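Nearest-symbol detection carries over to complex constellations using |Y − x|²; the QPSK constellation below (symbols at ±45°, ±135° with magnitude A = 1) is an illustrative choice:

```python
import math

# Nearest-symbol detection for a complex (QAM) constellation, using
# |Y - x|^2 as the distance metric.

A = 1.0
angles = (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4)
qpsk = [complex(A * math.cos(th), A * math.sin(th)) for th in angles]

def detect(Y: complex, constellation) -> complex:
    return min(constellation, key=lambda x: abs(Y - x) ** 2)

Y = complex(0.9, 0.2)       # noisy received point in the first quadrant
print(detect(Y, qpsk))      # the symbol with Re > 0, Im > 0
```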
X̂ = arg min |Y x|2
x2C

Example 2: The decision regions for 8-PSK


Im(Y )

p7
p6 p8
X̂ = p1
A
p5 Re(Y )
p1

p4 p2
p3

• Can similarly sketch the decision regions for other


constellations
• We can also calculate the probability of detection error
Pe = P(X̂ 6= X ) for various constellations
17 / 22

Im(Xk )
Im(Xk ) d

d
Re(Xk ) Re(Xk )
A A

16-QAM
8-PSK

We won’t do an exact calculation of QAM probability of error Pe,
but it can be shown that:
• Pe is given by a Q-function depending on d/σ, where d is the
separation between adjacent constellation points and σ² is the
variance of the Gaussian noise
• Pe decays exponentially with d²/(2σ²)

Suppose we increase the number of constellation points, e.g.,
16 QAM → 64 QAM → 256 QAM, while keeping Es constant
(Es = average energy per symbol):
• Transmission rate increases: 4 bits/symbol → 6 bits/sym.
→ 8 bits/sym. (good!)
• To keep Es constant, d has to decrease ⇒ Pe increases (bad!)
18 / 22
Note that both up-converted PAM and QAM are essentially
amplitude modulation for digital information:
• The (baseband) information signal xb(t) is generated from
bits via a constellation as

xb(t) = Σ_k Xk p(t − kT).

• The real and imaginary parts of xb(t) modulate the amplitude
of the carriers cos(2πfc t) and sin(2πfc t), respectively.

One can also transmit digital information by modulating the
phase/frequency of a carrier.
• Frequency shift keying (FSK) is one such method
• Binary FSK: In each symbol time [kT, (k + 1)T), transmit
one bit Xk via

x(t) = cos(2π(fc − Δf)t) if Xk = 0,
       cos(2π(fc + Δf)t) if Xk = 1.
19 / 22

Example of binary FSK waveform for information bits 1 0 1 0 1 . . .

(Image source: Wikipedia) 20 / 22


Quadrature Amplitude Modulation - The Key Points
10110100 10100100

x(t) y (t)
QAM Modulator + QAM Demodulator

n(t)
QAM is a technique to convert bits to a passband waveform:
1. QAM constellations are complex in general
2. Thus the baseband waveform is also complex:
xb(t) = Σ_k Xk p(t − kT)
3. We then up-convert and transmit a real passband waveform:

x(t) = Re[ xb(t) e^{j2πfc t} ]
     = Σ_k p(t − kT) [ Re(Xk) cos(2πfc t) − Im(Xk) sin(2πfc t) ]
21 / 22

At the receiver:
• Down-convert using product modulator + low-pass filter
(separately for the sine, cosine carriers)
• Demodulate the baseband waveforms using matched filter
• Detection rule: Pick the constellation symbol closest to the
(complex) output symbol
Properties of the QAM signal:
• Rate = 1/T QAM symbols/s or (log2 M)/T bits/s, where M is the
constellation size
• Passband bandwidth of QAM waveform = 2W , where W is
the bandwidth of the (baseband) pulse p(t)
• Note that up-converted PAM also has the same bandwidth
2W .

QAM is a very widely used modulation scheme. E.g., 4G LTE uses


QPSK/16-QAM/64-QAM, also used in optical fibre communication

You can now do Questions 1–5 in Examples Paper 9


22 / 22
1B Paper 6: Communications
Handout 6: Channel Coding

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 17

(digitised source bits)


110001 110001

Channel Encoder Channel Decoder

010110100 010010110

input output
Modulator Channel Demodulator
waveform waveform

• So far, we focused on the mod & demod blocks, and studied


two modulation schemes – PAM and QAM
• We also calculated the probability of symbol error for some of
these schemes
• Thus, for a fixed modulation scheme (e.g. QPSK), we can
estimate the probability that that a bit will be in error at the
output of the demodulator/detector
2 / 17
Binary Channel
110001 110001

Channel Encoder Channel Decoder

010110100 010010110
input output
Modulator Channel Demodulator
waveform waveform

• Every modulation scheme has an associated probability of bit


error, say p, that we can estimate theoretically or empirically
• For a fixed modulation scheme, the part of the system
enclosed by dashed lines can thus be considered an overall
binary channel with bit error probability p
3 / 17

Thus an equivalent representation of the communication system


for a fixed modulation scheme is
110001 110001

Channel Encoder Channel Decoder

010110100 010010110
Binary Channel

If the modulation scheme has a bit error probability p:


• A 0 input is flipped by the binary channel to a 1 with
probability p
• A 1 input is flipped by the binary channel to a 0 with
probability p
It is important to remember that the binary channel
• Is not the actual physical channel in the communication system
• Is the overall channel assuming that the modulation scheme is
fixed and we have estimated its bit error probability p
4 / 17
Binary Symmetric Channel (BSC)
As the binary channel flips each bit (0/1) with equal probability p,
it is called a Binary Symmetric Channel. Represented as:
1 p
X =0 Y =0

X =1 Y =1
1 p

P(Y = 0|X = 0) = 1 p, P(Y = 1|X = 1) = 1 p


P(Y = 1|X = 0) = p, P(Y = 0|X = 1) = p

p is the “crossover probability”; the channel is called BSC(p)


5 / 17

Channel Coding

Thus the system is now:

(source bits) (decoded bits)


110001 110001

Channel Encoder Channel Decoder

010110100 010010110
BSC(p)
(encoded bits) (received bits)

We will now study channel coding, which consists of adding


redundancy to the source bits at the transmitter to recover from
errors at the receiver

6 / 17
Repetition Code
.9
X =0 Y =0
.1
.1

X =1 Y =1
.9

The simplest channel code for the BSC is a (n, 1) repetition code:
• Encoding: Simply repeat each source bit n times (n is odd)
• Decoding: By “majority vote”. Declare 0 if greater than n/2
of the received bits are 0, otherwise decode 1
Example: (3, 1) Repetition Code
Source bits: 0 1 1 0 0...
Encoded bits: 000 111 111 000 000 . . .
Received bits: 001 101 111 011 000 . . .
Decoded bits: 0 1 1 1 0...
7 / 17
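The (n, 1) repetition code and its majority-vote decoder can be sketched as follows, reproducing the (3, 1) example on the slide:

```python
# (n,1) repetition code: repeat each bit n times; decode by majority
# vote. Implemented for bit strings, n odd.

def rep_encode(bits: str, n: int = 3) -> str:
    return "".join(b * n for b in bits)

def rep_decode(received: str, n: int = 3) -> str:
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return "".join("1" if b.count("1") > n // 2 else "0" for b in blocks)

print(rep_encode("01100"))            # 000111111000000
print(rep_decode("001101111011000"))  # 01110 -- the 4th bit is decoded
                                      # in error, as in the slide's example
```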

Decoding Errors and Data Rate


Q: With a (3, 1) repetition code, when is a decoded bit in error ?
A: When the channel flips two or more of the three encoded bits
The probability of decoding error when this code is used over a
BSC(0.1) is (3 choose 2)(0.1)²(0.9) + (3 choose 3)(0.1)³ = 0.028
The rate of the code is 1/3 (3 encoded bits for each source bit)

Q: With a (5, 1) repetition code, when is a decoded bit in error ?


A: When the channel flips three or more of the five encoded bits
The probability of decoding error is 0.0086 (Ex. Paper 9, Q.6)
The rate of the code is 1/5

• We’d like the rate to be as close to 1 as possible, i.e., fewer


redundant bits to transmit
• We’d also like the probability of decoding error to be as small
as possible

These two objectives are seemingly in tension . . .


8 / 17
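The decoding-error probabilities quoted above follow from the binomial tail: the decoder fails only when at least (n+1)/2 of the n bits are flipped. A quick check:

```python
from math import comb

# Probability that an (n,1) repetition code mis-decodes over BSC(p):
# at least (n+1)/2 of the n transmitted bits must be flipped.

def rep_error_prob(n: int, p: float) -> float:
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range((n + 1) // 2, n + 1))

print(round(rep_error_prob(3, 0.1), 4))   # 0.028
print(round(rep_error_prob(5, 0.1), 4))   # 0.0086
```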
Probability of Error vs Rate
.9
X =0 Y =0
.1
.1
.9
X =1 Y =1

(n, 1) Repetition Code


As we increase repetition code length n:
• A decoding error occurs only if at least (n + 1)/2 bits are
flipped ⇒ Probability of decoding error goes to 0 as n → ∞ (good!)
• Rate = 1/n, which also goes to 0 (bad!)

Can we have codes at strictly +ve code rate whose P(error) ! 0?


In 1948, it was proved that the answer is yes! (by Claude Shannon)

9 / 17

Block Codes
We’ll look at Shannon’s result shortly, but let’s first try to improve
on repetition codes using an idea known as block coding.
• In a block code, every block of K source bits is represented by
a sequence of N code bits (called the codeword)
• To add redundancy, we need N > K
• In a linear block code, the extra N K code bits are linear
functions of the K source bits

Example: The (N = 7, K = 4) Hamming code


Each 4-bit source block s = (s1 , s2 , s3 , s4 ), is encoded into 7-bit
codeword c = (c1 , c2 , c3 , c4 , c5 , c6 , c7 ) as follows:
• c1 = s1, c2 = s2, c3 = s3, c4 = s4
c5 = s1 ⊕ s2 ⊕ s3,  c6 = s2 ⊕ s3 ⊕ s4,  c7 = s1 ⊕ s3 ⊕ s4
where ⊕ denotes modulo-2 addition
• c5 , c6 , c7 are called parity check bits, and provide the
redundancy
10 / 17
The (7, 4) Hamming Code
E.g.:
For s = (0, 0, 1, 1), the codeword is (0, 0, 1, 1, 1, 0, 0)
For s = (0, 0, 0, 0), the codeword is (0, 0, 0, 0, 0, 0, 0)
The encoding operation can be represented pictorially as follows:
Example:

c5
c5 = 1

s1 s2 0
0
s3 1

c7 c6 c7 = 0 c6 = 0
s4 1

• For any Hamming codeword, the parity of each circle is even,


i.e., there must be an even number of ones in each circle
• For encoding, first fill up s1 , . . . , s4 , then c5 , c6 , c7 are easy
11 / 17

Rate and Encoding


• The rate of any (N, K) block code is K/N
• The rate of a (7, 4) Hamming code is 4/7 ≈ 0.571
• Note that the (N, 1) repetition code is a block code with
K = 1 and rate 1/N

Q: How do you encode a long sequence of source bits with an
(N, K ) block code?
A: Chop up the source sequence into blocks of K bits each;
transmit the N-bit codeword for each block over the BSC.
E.g., for the (7, 4) Hamming code, the source sequence

    s = . . . 1001 0010 1111 1010 0000 . . .

is divided into blocks of 4 bits; for each 4-bit block, the 7-bit
Hamming codeword can be found using the parity circles
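The chop-and-encode step can be sketched in a few lines (a hypothetical helper, assuming the source length is a multiple of 4):

```python
def hamming74_encode_stream(bits):
    """Split a source bit list into 4-bit blocks and emit the 7-bit
    Hamming codeword for each block (parity bits as defined above)."""
    out = []
    for i in range(0, len(bits), 4):
        s1, s2, s3, s4 = bits[i:i + 4]
        out += [s1, s2, s3, s4,
                s1 ^ s2 ^ s3, s2 ^ s3 ^ s4, s1 ^ s3 ^ s4]
    return out

print(hamming74_encode_stream([1, 0, 0, 1, 0, 0, 1, 0]))
# [1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
```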
12 / 17
Error Correction for the Hamming Code
The (7, 4) Hamming code can correct any single bit error (flip) in a
codeword.
Example: The codeword (0, 0, 1, 1, 1, 0, 0) (corresponding to source
bits (0, 0, 1, 1)) is transmitted over the BSC. Suppose the channel
flips the fourth bit so that the receiver gets r = (0, 0, 1, 0, 1, 0, 0).

Filling r = (r1 , . . . , r7 ) into the parity circles, we see that the
circles for r6 and r7 (drawn dashed in the slide figure) have odd
parity, while the circle for r5 has even parity.
Decoding Rule: If any circles have odd parity, flip exactly one bit
to make all of them have even parity
13 / 17

[Figure: the parity circles with bit r4 starred: it is the only bit
lying in both odd-parity circles but not in the even-parity one]

Flipping the starred bit r4 makes all the circles have even parity.
We thus recover the transmitted codeword (0, 0, 1, 1, 1, 0, 0).
• When the channel flips a single bit, at least one circle
  acquires odd parity
• This reveals a bit error, which we can correct by flipping the
  erroneous bit back

Q: When does the (7, 4) Hamming code make a decoding error?


A: When the channel flips two or more bits (Ex. Paper 9, Q.6b)
Thus Hamming codes have good rate (= 4/7), but also rather
high probability of decoding error
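The decoding rule can be sketched as a syndrome decoder (a sketch; the circle memberships follow the parity equations, and the function and variable names are mine):

```python
def hamming74_correct(r):
    """Correct up to one flipped bit in a received (7, 4) Hamming word.
    The syndrome records which parity circles have odd parity; the
    unique bit lying in exactly those circles is flipped back."""
    r = list(r)
    syndrome = (r[0] ^ r[1] ^ r[2] ^ r[4],   # c5 circle: s1, s2, s3, c5
                r[1] ^ r[2] ^ r[3] ^ r[5],   # c6 circle: s2, s3, s4, c6
                r[0] ^ r[2] ^ r[3] ^ r[6])   # c7 circle: s1, s3, s4, c7
    # Circle membership (c5-, c6-, c7-circle) of each bit position
    circles = [(1, 0, 1), (1, 1, 0), (1, 1, 1), (0, 1, 1),
               (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    if any(syndrome):
        r[circles.index(syndrome)] ^= 1
    return r

print(hamming74_correct([0, 0, 1, 0, 1, 0, 0]))  # [0, 0, 1, 1, 1, 0, 0]
```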
14 / 17
It’s natural to wonder:
• How to design better block codes than repetition/Hamming?
• How many errors can the best (N, K ) block code correct?

Shannon in 1948 . . .
1. Showed that any communication channel has a capacity,
   which is the maximum rate at which the probability of error
   can be made arbitrarily small.
2. Also gave a formula to compute the channel capacity

For example, Shannon’s result implies that for the BSC(0.1):


• There exist (N, K ) block codes with rate K /N ≈ 0.53 such that
  you can almost always recover the correct codeword from the
  noisy output sequence of the BSC(0.1)
• But N has to be very large — the block length has to be
several thousand bits long
• Practical codes with close-to-capacity performance have been
discovered in the last couple of decades (discussed in 3F7)
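For reference, Shannon's capacity formula for the BSC (stated here without derivation; it is covered in 3F7) is C = 1 − H2(p), where H2 is the binary entropy function. A quick check for p = 0.1:

```python
from math import log2

def bsc_capacity(p):
    """Capacity of the BSC(p): C = 1 - H2(p) bits per channel use,
    where H2(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 - (-p * log2(p) - (1 - p) * log2(1 - p))

print(round(bsc_capacity(0.1), 2))  # 0.53
```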
15 / 17

Channel Coding – The Key Points


[Figure: source bits 110001 → Channel Encoder → 010110100 →
Binary Channel → 010010110 → Channel Decoder → 110001]
• Once we fix a modulation scheme, we have a binary-input,
binary-output channel
• Channel coding is the act of adding redundancy to the source
bits to protect against bit errors introduced by the channel
• (N, K ) block code: K source bits → N code bits; (N − K )
  bits provide redundancy
• The rate of a block code is K /N. We want the code rate to
be high, but also correct a large number of errors
• We studied two simple block codes (repetition, Hamming)
and their encoding and decoding
16 / 17
Course survey

Please complete the survey:

http://to.eng.cam.ac.uk/teaching/surveys/IB_course.html

17 / 17
1B Paper 6: Communications
Handout 7: Multiple Access, Course Summary

Ramji Venkataramanan

Signal Processing and Communications Lab


Department of Engineering
rv285@cam.ac.uk

Lent Term 2024

1 / 20

Single-User Communication
[Figure: digitised source bits 110001 → Transmitter → Channel →
Receiver → 110001]

So far we have studied techniques for single-user communication.


Recall that:
• Transmitter does encoding & modulation
• Receiver does demodulation & decoding
• The user is allocated a channel of bandwidth B

What if there are many users needing to communicate to the


receiver using the same channel bandwidth?
• How do they share the channel?
• This is the problem of multiple access
2 / 20
Multiple-user communication is a typical scenario in wireless
networks and cellular communication

3 / 20

Multiple Access: The Main Ideas


Imagine that five of you (multiple users) each have a question to
ask me (receiver). What techniques can we use, such that I
understand all questions?
• One after the other, each using the whole bandwidth for a
fraction of the time. This is called time-division multiple
access
• All at the same time, but each with a different frequency.
  This is called frequency-division multiple access. (Each user
  communicates all the time using a fraction of the
  available bandwidth)
• All at the same time using the whole bandwidth, each with a
  different signature, i.e., a different language (known to the
  receiver). This is called code-division multiple access
These three techniques are abbreviated as TDMA, FDMA,
CDMA, respectively
4 / 20
We can think of each multiple-access technique as dividing up a
“box” among the users, by cutting along different axes

[Figure: a resource “box” with three axes: Time (frame duration
Tf ), Frequency, and Signature]
5 / 20

Time Division Multiple Access

In TDMA, multiple users are multiplexed in time, so that they


transmit one after the other, using the whole bandwidth B.
• Each of K users gets one slot in a frame of duration Tf
• K time slots in a frame, each of duration Tu = Tf /K

[Figure: a TDMA frame of duration Tf divided into K slots of
duration Tu : | User 1 | User 2 | · · · | User K −1 | User K |]

6 / 20
TDMA
[Figure: the resource box cut along the Time axis into K slots of
duration Tu : user i occupies slot i, using the whole bandwidth]

Each user gets 1/K of the box


GSM, a 2nd-generation standard for cellular networks, used
time division
7 / 20

Frequency Division Multiple Access

• In FDMA, K users are multiplexed in the frequency domain by
  allocating a fraction of the total bandwidth to each one
• They communicate simultaneously on non-overlapping
  frequency bands of width Bu ≤ B/K , so there is no interference

[Figure: K non-overlapping bands of width Bu along the frequency
axis, centred at carriers fc,1 , fc,2 , . . . , fc,K −1 , fc,K ]

8 / 20
• Can think of each user i as using carrier fc,i to transmit
  their signal xi (t), for i = 1, . . . , K

      sFDMA (t) = Σ_{i=1}^{K} xi (t) cos(2π fc,i t)

[Figure: FDMA transmitter: each user i’s baseband signal xi (t) is
multiplied by cos(2π fc,i t) and the K products are summed to form
sFDMA (t)]

• At the Rx, can separate xi (t) by multiplying sFDMA (t) by
  cos(2π fc,i t) and passing through a filter that is low-pass in
  the band [−Bu /2, Bu /2]
9 / 20

[Figure: the resource box cut along the Frequency axis into K bands
of width Bu : user i occupies band i for the whole frame duration Tf ]

Each user again gets 1/K of the box


A type of FDMA called Orthogonal Frequency Division
Multiplexing (OFDM) is used in 4G LTE cellular systems
10 / 20
Code Division Multiple Access
• In CDMA, each user is given a unique signature function
• The signatures are denoted ci (t), i = 1, . . . , K (K users)
These signatures are chosen to be orthogonal over each symbol
period T , i.e., for m = 0, 1, 2, . . .

    ∫_{mT}^{(m+1)T} ci (t) cj (t) dt  =  1 if j = i,  0 if j ≠ i
E.g., for K = 4 users, the signatures may be ±1 square waveforms:

[Figure: signature waveforms c1 (t), c2 (t), c3 (t), c4 (t): each a
±1 rectangular waveform on each symbol period [0, T ], the four
being mutually orthogonal]
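One standard choice of such orthogonal ±1 signatures (an assumed example, not necessarily the exact waveforms drawn in the notes) is the rows of a Walsh-Hadamard matrix, checked here for orthogonality at the chip level:

```python
import numpy as np

# Build the 4x4 Walsh-Hadamard matrix; its rows are the chip patterns
# of four mutually orthogonal +/-1 signatures over one symbol period.
H2 = np.array([[1, 1],
               [1, -1]])
H4 = np.kron(H2, H2)      # rows: ++++, +-+-, ++--, +--+

# Inner products of distinct rows are 0 (orthogonality); each row with
# itself gives 4 (the number of chips).
print(H4 @ H4.T)
```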
11 / 20

CDMA waveform
• Assume that user i wants to transmit a PAM signal

      xi (t) = Σ_k Xk,i p(t − kT )

  with a rectangular pulse p(t)
• The signals of the K users are multiplexed as

      sCDMA (t) = [ Σ_{i=1}^{K} ci (t) xi (t) ] cos(2π fc t)
• Thus each user i transmits their baseband signal xi (t) using
  the entire bandwidth B over the entire time frame of duration Tf
• At the Rx, after down-converting (using product modulator +
  low-pass filter), we get

      y (t) = Σ_{i=1}^{K} ci (t) xi (t) + noise

How to separate the users’ signals x1 (t), . . . , xK (t) at the receiver?


12 / 20
CDMA receiver
      y (t) = Σ_{i=1}^{K} ci (t) xi (t) + noise

• At the Rx, signal xj (t) can be separated by correlating with its


signature cj (t)
• Assuming no noise, multiplying y (t) by cj (t) and integrating
  over the m-th symbol period, we get

    ∫_{mT}^{(m+1)T} ( Σ_{i=1}^{K} ci (t) xi (t) ) cj (t) dt
        = Σ_{i=1}^{K} xi (mT ) ∫_{mT}^{(m+1)T} ci (t) cj (t) dt
        = xj (mT )

  where we have used (a) xi (t) is constant over each symbol
  period, and (b) the orthogonality property of the ci (t)’s
• When the number of users K is large, may only be able to
have approximately orthogonal signatures
• All 3G cellular standards use variants of CDMA
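A discrete-time sketch of this chain at baseband (the chip-level signatures and BPSK symbols are my own illustrative choices; carrier up- and down-conversion is omitted):

```python
import numpy as np

L = 4                                      # chips per symbol period
c = np.array([[1, 1, 1, 1],                # user 1 signature
              [1, 1, -1, -1]])             # user 2 signature (orthogonal)
X = np.array([[+1, -1, +1],                # user 1 symbols
              [-1, -1, +1]])               # user 2 symbols

# Multiplexed baseband signal: sum over users of signature * symbol
y = sum(np.concatenate([sym * cu for sym in Xu]) for cu, Xu in zip(c, X))

def despread(y, cu):
    """Correlate y with signature cu over each symbol period; the
    normalisation by cu.cu plays the role of the integral above."""
    return np.array([y[k * L:(k + 1) * L] @ cu / (cu @ cu)
                     for k in range(len(y) // L)])

print(despread(y, c[0]))   # user 1 symbols: [ 1. -1.  1.]
print(despread(y, c[1]))   # user 2 symbols: [-1. -1.  1.]
```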
13 / 20

The Big Picture


[Figure: the core network (routers and servers connected by
fibre/satellite links), with “last mile” copper/fibre/wireless
connections at the edge and wireless links to cellular devices]
• The core of the internet consists of routers and servers


(data-centres) connected by high-speed optical fibre links
• At the edge, computers are connected by copper wires (DSL)
or fibre, and wireless mobile devices connected to wi-fi or
cellular (e.g., 3G/4G) networks
• The digital communication design principles we studied in the
course apply to each point-to-point link of the big network
(regardless of the kind of channel)
• Multiple-access schemes are relevant for wi-fi and cellular
networks
14 / 20
Cellular Networks

The network is divided into cells; roughly speaking, there is one


base station per cell.
• Each user communicates with the base station in their cell;
the base stations are connected to the internet & phone
network via high-speed links.
• When a user moves from one cell to another, there is hand-off
• Multiple-access schemes such as FDMA or CDMA are needed
  for users to simultaneously communicate with their
  base-station; e.g., adjacent cells may use different frequency
  bands to avoid interference.
15 / 20

Representing and communicating any source of information with


bits (digitisation) seems routine now . . .
– But it was revolutionary in 1948, when Claude Shannon wrote a
paper called “A Mathematical Theory of Communication”

– The digital revolution of the last few decades has its roots in
Shannon’s work.
For more on this, watch the documentary film ‘The Bit
Player’: https://thebitplayer.com

16 / 20
Course Summary
1. Power, Bandwidth (Baseband vs Passband) are important
resources for communication
2. Communication channels can be modelled as linear systems
(filters) + noise. If we communicate over a frequency band
where the channel frequency response is flat, then we get an
additive noise channel.
3. Analogue Communication: continuous-time information signal
x(t) directly modulates the carrier
• Variants of Amplitude Modulation: Power, bandwidth, receiver
structures
• FM: Constant power but requires larger bandwidth than AM;
Carson’s rule for FM bandwidth; more robust to noise
4. Digitisation: To convert an analogue source x(t) (e.g.,
   speech/music) to digital:
       x(t) --sampling--> {x(nT )}n∈Z --quantisation--> . . . 0100111 . . .
   Important tradeoff between number of quantiser levels and
   signal-to-quantisation noise ratio
17 / 20

[Figure: digitised source bits 110001 → Channel Encoder →
010110100 → Modulator → input waveform → Channel → output
waveform → Demodulator → 010010110 → Channel Decoder → 110001]

Digital Communication: two key parts – modulation and coding


5. Modulation: Converting bits into a waveform suitable for
transmission over the channel
– PAM for baseband: Tx & Rx structures, bandwidth, power,
performance analysis (probability of detection error)
– QAM for passband: more bandwidth-efficient than PAM
6. Coding: Adding redundancy to source bits to make them
robust to channel errors
– An (N, K ) block code: K source bits → N code bits (N > K )
– Two simple block codes: (N, 1) repetition code and (7, 4)
Hamming code
18 / 20
To conclude, Information Engineering is about:
• Communication: Representing information compactly, and
transmitting it reliably over noisy channels
• Signal Processing : Algorithms to extract clean signals from
noisy data (e.g. GPS, medical imaging)
• Control: E.g., gyroscope in your phone, auto-pilot in an
aircraft, autonomous driving
• Statistical Inference & Machine Learning: Extracting and
learning essential features from data to make useful
predictions. E.g., voice recognition, autocomplete, . . .

Paper 6 lays the foundation for many of these topics

19 / 20

Relevant Past Tripos Questions (Communications)


From 1B Paper 6:
• 2014-2022, Questions 5 (last part) and 6
• 2013, Question 5
• 2012, Questions 5 and 6
• 2011, Questions 5 and 6, parts (a) and (b)
• 2010, Questions 5 and 6
• 2009, Questions 5 and 6 [note: In 6(c), SNR is defined differently
  from what we have in Handout 4]
• 2008, Questions 5(e) and 6
• 2007, Questions 4 and 5(a), (b)
• 2006, Question 5 (b),(c)
• 2005, Question 5
• 2004, Question 6 (a), (b), (c)
• 2003, Question 5 excluding the final two lines of part (d)
• 2002, Question 5 excluding part (a)
• 2002, Question 6 except part (c).

20 / 20
