
Part 7: Linear Block Codes,
Polynomial Codes/Cyclic Codes,
Convolutional Codes, and
Viterbi Decoding
Introduction

© Po-Ning Chen@ece.nctu IDC7-2


Introduction
o Error correction versus error detection
n There is an alternative system approach to achieving
reliable transmission other than forward error correction
(FEC).
n Named for its system structure, it is called automatic repeat
request (ARQ), a combination of error detection and
noiseless feedback.

© Po-Ning Chen@ece.nctu IDC7-3


Introduction
o Classifications of ARQ
n ARQ with stop-and-wait strategy
o After transmitting a codeword, the transmitter stops and
waits for the feedback before moving on to the next block of
message bits.
n Continuous ARQ with pullback
o The transmitter continues its transmissions until a
retransmission request is received, at which point it stops and
pulls back to the incorrectly transmitted codeword.
n Continuous ARQ with selective repeat
o Only retransmit the codewords that are incorrectly transmitted.

© Po-Ning Chen@ece.nctu IDC7-4


Linear Block Codes
o Linear code
n A code is linear if any two codewords in the code can
be added in modulo-2 arithmetic to produce a third
codeword in the code.
n The codewords of a linear code can always be obtained
through a “linear” operation in the sense of modulo-2
arithmetic.

© Po-Ning Chen@ece.nctu IDC7-5


Linear Block Codes
o For a linear code, there exists a k-by-n generator matrix G
such that c = mG, with all arithmetic performed modulo 2.

o Generator matrix G is said to be in the canonical form if its


k rows are linearly independent.

© Po-Ning Chen@ece.nctu IDC7-6
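The encoding rule c = mG and the linearity property can be illustrated with a minimal Python sketch. The particular systematic (7, 4) generator G = [I_4 | P] below is a common textbook choice, assumed here for illustration rather than taken from these slides:

```python
# Systematic (7,4) Hamming generator G = [I_4 | P]; this P is one common
# choice, assumed here for illustration.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
G = [I4[i] + P[i] for i in range(4)]

def encode(m, G):
    """c = m G in modulo-2 arithmetic (c_j = sum_i m_i G_ij mod 2)."""
    return [sum(mi * gij for mi, gij in zip(m, col)) % 2
            for col in zip(*G)]

c1 = encode([1, 0, 1, 1], G)
c2 = encode([0, 1, 1, 0], G)
# Linearity: the mod-2 sum of two codewords is again a codeword.
c3 = [(a + b) % 2 for a, b in zip(c1, c2)]
print(c3 == encode([1, 1, 0, 1], G))   # True
```

Because encoding is a matrix multiplication over GF(2), the mod-2 sum of any two codewords equals the codeword of the mod-2 sum of their messages, which is exactly the linearity property above.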


What will happen if one row is linearly dependent on other rows?

Suppose

Then

© Po-Ning Chen@ece.nctu IDC7-7


Hence, the number of distinct codewords is at most 2^(k-1) (not the anticipated 2^k).

© Po-Ning Chen@ece.nctu IDC7-8


Linear Block Codes
o Parity-check matrix H
n The parity-check matrix of a canonical generator matrix
is an (n-k)-by-n matrix satisfying G H^T = 0,

where the rows of H are linearly independent.


n Then, the codewords (or error-free receptions) should
satisfy the (n-k) parity-check equations c H^T = 0.

© Po-Ning Chen@ece.nctu IDC7-9


Linear Block Codes
o Syndrome s
n The reception r = c + e may be erroneous (with error pattern e).

n With the help of the parity-check matrix, we obtain
s = r H^T = (c + e) H^T = e H^T.
© Po-Ning Chen@ece.nctu IDC7-10


Linear Block Codes
o In short, syndromes are the possible symptoms observed at
the output of the parity-check examination.
n They are similar to disease symptoms that can be observed and
examined from outside the body.
n Based on the symptoms, a doctor diagnoses (possibly) what
disease occurs inside the body.
o Likewise, based on the syndrome, the receiver "diagnoses" which bit is
erroneous (i.e., ill), so that the receiver can correct the "ill"
bit.
n Notably, the syndrome only depends on the error pattern, and is
completely independent of the transmitted codeword.

© Po-Ning Chen@ece.nctu IDC7-11


Linear Block Codes
o Properties of syndromes
n The syndrome depends only on the error pattern, and not on
the transmitted codeword.

n All error patterns that differ by a codeword


have the same syndrome s.

© Po-Ning Chen@ece.nctu IDC7-12
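The identity s = r H^T = e H^T can be checked numerically. A small sketch, assuming the same illustrative (7, 4) matrices G = [I_4 | P] and H = [P^T | I_3] (so that G H^T = 0 mod 2):

```python
# Illustrative (7,4) Hamming matrices (an assumed textbook choice).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]
H = [[P[j][i] for j in range(4)] + [1 if i == j else 0 for j in range(3)]
     for i in range(3)]           # H = [P^T | I_3], so G H^T = P + P = 0

def encode(m):
    """c = m G (mod 2)."""
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

def syndrome(r):
    """s = r H^T (mod 2): inner product of r with each row of H."""
    return [sum(a * b for a, b in zip(r, row)) % 2 for row in H]

c = encode([1, 0, 1, 1])
e = [0, 0, 0, 0, 1, 0, 0]                # single-bit error pattern
r = [(a + b) % 2 for a, b in zip(c, e)]  # reception r = c + e

print(syndrome(c))                        # [0, 0, 0]: codewords pass all checks
print(syndrome(r) == syndrome(e))         # True: s depends only on e
```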


0. It suffices to fix the error pattern e and vary the codewords
when determining the coset.
1. All elements in a coset have the same syndrome, since
(e + c) H^T = e H^T + c H^T = e H^T.

2. There are 2^k elements in a coset, since e + c1 = e + c2 iff c1 = c2,

i.e., two coset elements are the same iff the two codewords are the
same.

3. Cosets are disjoint.

4. The number of cosets is 2^(n-k), i.e., the number of syndromes


is 2^(n-k).
Thus, a syndrome (with only (n-k) unknowns) cannot uniquely
determine the error pattern (with n unknowns).
© Po-Ning Chen@ece.nctu IDC7-13
Linear Block Codes
o Systematic code
n A code is systematic if the message bits are a part of the
codewords.
n The remaining part of a systematic code is called parity
bits.

n Usually, message bits are transmitted first because the


receiver can do “direct hard decision” when “necessary”.

© Po-Ning Chen@ece.nctu IDC7-14


Linear Block Codes
o For a systematic code, the generator matrix and parity-check
matrix take the forms G = [I_k | P] and H = [P^T | I_(n-k)]
(up to a reordering of columns).

n Example. (n, 1) repetition codes

© Po-Ning Chen@ece.nctu IDC7-15


Linear Block Codes
o Error correcting capability of a linear block code
n Hamming distance (for “0”- “1” binary sequences)

n Minimum (pair-wise) Hamming distance dmin

© Po-Ning Chen@ece.nctu IDC7-16


n Operational meaning of the minimum (pair-wise)
Hamming distance: there exists at least one

pair of codewords whose
distance is d_min.

n Minimum distance decoder

© Po-Ning Chen@ece.nctu IDC7-17


n Based on the minimum distance decoder, if the received
vector and the transmitted codeword differ in at most
⌊(d_min - 1)/2⌋ positions,
namely, if the number of 1's in the (true) error pattern is at

most this number, then no error in the decoding decision
occurs.
n If the received vector and the transmitted codeword
differ in t positions, where t exceeds this number,

then an erroneous decision is possibly made (such as when the

transmitted codeword is one of the pairs at distance d_min).
© Po-Ning Chen@ece.nctu IDC7-18
n We therefore name this quantity ⌊(d_min - 1)/2⌋ the error
correcting capability of a code (not limited to linear block codes).

n The error correcting capability of a linear block code


can be easily determined from the minimum Hamming
weight of its codewords. (This does not hold for non-
linear codes!)

n By linearity, d_min equals the minimum Hamming weight
over all nonzero codewords.
© Po-Ning Chen@ece.nctu IDC7-19
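The weight-based determination of d_min can be verified by brute force over all 2^k messages. A sketch, again assuming the illustrative (7, 4) Hamming generator used above:

```python
from itertools import product

# Illustrative systematic (7,4) Hamming generator (assumed choice).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]

def encode(m):
    """c = m G (mod 2)."""
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

# For a linear code, d_min equals the minimum Hamming weight over the
# nonzero codewords -- no pairwise comparison is needed.
d_min = min(sum(encode(list(m))) for m in product([0, 1], repeat=4) if any(m))
t = (d_min - 1) // 2                 # error-correcting capability
print(d_min, t)                      # 3 1
```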


Linear Block Codes
o Syndrome decoding for (n, k) linear block codes

© Po-Ning Chen@ece.nctu IDC7-20


Linear Block Codes
o Syndrome decoding using standard array for an (n, k) block
code
n The first row of the standard array is the code itself, whose
syndrome r H^T = 0.

n For a reception r, find the minimum-weight element in the coset
indexed by the syndrome r H^T.


This element is usually named the coset leader.
The coset leader of each coset is fixed and known before the reception of r.
© Po-Ning Chen@ece.nctu IDC7-21
Linear Block Codes
o Example: (7,4) Hamming codes

© Po-Ning Chen@ece.nctu IDC7-22


Linear Block Codes
o Decoding table

© Po-Ning Chen@ece.nctu IDC7-23
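The decoding table can be built by mapping each syndrome to its coset leader. A sketch for the illustrative (7, 4) Hamming matrices, where the coset leaders are exactly the zero and single-bit error patterns:

```python
# Illustrative (7,4) Hamming matrices (assumed choice, as before).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]
H = [[P[j][i] for j in range(4)] + [1 if i == j else 0 for j in range(3)]
     for i in range(3)]

def encode(m):
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

def syndrome(r):
    return [sum(a * b for a, b in zip(r, row)) % 2 for row in H]

# Decoding table: syndrome -> coset leader.  For this perfect code the
# leaders are the zero pattern plus the 7 single-bit error patterns.
table = {(0, 0, 0): [0] * 7}
for i in range(7):
    e = [0] * 7
    e[i] = 1
    table[tuple(syndrome(e))] = e

def decode(r):
    """Add the coset leader indexed by the syndrome to the reception."""
    leader = table[tuple(syndrome(r))]
    return [(a + b) % 2 for a, b in zip(r, leader)]

c = encode([1, 1, 0, 1])
r = c[:]
r[2] ^= 1                            # flip one bit
print(decode(r) == c)                # True: the single error is corrected
```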


Linear Block Codes
o Appendix: The notion of perfect code
n The (7, 4) Hamming code is a binary perfect code: the coset
leaders form a perfect ball (of radius 1).

All of the 2^7 = 128
binary sequences are
confined within the 2^4 =
16 non-overlapping
balls of radius 1.
© Po-Ning Chen@ece.nctu IDC7-24


Linear Block Codes
o Dual code

n Every (n, k) linear block code with generator matrix G


and parity-check matrix H has an (n, n-k) dual code
with generator matrix H and parity-check matrix G.

© Po-Ning Chen@ece.nctu IDC7-25


Polynomial Codes/Cyclic Codes
o Polynomial expression of a linear code
n Polynomial code: A special type of linear codes.
o Represent a code vector [c0, c1, ..., c(n-1)] as a code
polynomial of degree (at most) n-1,

c(X) = c0 + c1 X + c2 X^2 + ... + c(n-1) X^(n-1),

where X is called the indeterminate.

© Po-Ning Chen@ece.nctu IDC7-26


Polynomial Codes/Cyclic Codes
n Generator polynomial (of a polynomial code)

n The code polynomials (of a polynomial code) include

all polynomials of degree at most n-1 that are divisible
by g(X), and hence can be expressed as c(X) = m(X) g(X).
© Po-Ning Chen@ece.nctu IDC7-27


Polynomial Codes/Cyclic Codes
o Example of a (6, 3) polynomial code

© Po-Ning Chen@ece.nctu IDC7-28


Polynomial Codes/Cyclic Codes
o Property of a polynomial code

© Po-Ning Chen@ece.nctu IDC7-29


Polynomial Codes/Cyclic Codes
o Example of a (6, 3) polynomial code (continued)
(Figure: the message blocks and the codewords they encode to.)

© Po-Ning Chen@ece.nctu IDC7-30


Polynomial Codes/Cyclic Codes

o Encoder for systematic polynomial codes

© Po-Ning Chen@ece.nctu IDC7-31


o Example of the usual long division
(Figure: one division step — multiply the divisor by the leading
quotient term, subtract, and update the temporary remainder and
temporary quotient.)

© Po-Ning Chen@ece.nctu IDC7-32


(Figure: the division step repeats until the last term is shifted into the
shift register, further updating the temporary remainder and temporary
quotient.)

© Po-Ning Chen@ece.nctu IDC7-33


© Po-Ning Chen@ece.nctu IDC7-34
o An alternative realization of the long division (with the shift
register containing not the remainder but the “lower power
coefficients”)

© Po-Ning Chen@ece.nctu IDC7-35


© Po-Ning Chen@ece.nctu IDC7-36
© Po-Ning Chen@ece.nctu IDC7-37
(Figure: systematic encoder built from an (n-k)-stage shift register
(D ... D) with feedback taps g1, g2, ..., g(n-k-1); a tap is present when
gi = 1 and absent when gi = 0. The feedback gate is on for the first k
clocks and off for the remaining (n-k) clocks; the output switch is down
(message bits) for the first k clocks and up (parity bits) for the
remaining (n-k) clocks. After k clocks, the register contains the parity
bits of the codeword.)
© Po-Ning Chen@ece.nctu IDC7-38


Polynomial Codes/Cyclic Codes

o Decoder for polynomial codes


n How to find the syndrome polynomial s(X) of
polynomial codes with respect to the received word
polynomial r(X)?
o Recall that syndromes are the possible symptoms observed
at the output of the parity-check examination.
o If there is no error in transmission, r(X) mod g(X) = 0.
Indeed, s(X) = r(X) mod g(X).

© Po-Ning Chen@ece.nctu IDC7-39
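The computation s(X) = r(X) mod g(X) over GF(2) can be sketched directly, assuming for illustration the generator g(X) = 1 + X + X^3 of the (7, 4) cyclic Hamming code:

```python
def gf2_mul(a, b):
    """Multiply polynomials a(X) b(X) over GF(2); a[i] = coeff of X^i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def gf2_mod(a, g):
    """Remainder of a(X) divided by g(X) over GF(2) (long division)."""
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:                              # cancel the leading term
            for j, gj in enumerate(g):
                a[i - (len(g) - 1) + j] ^= gj
    return a[:len(g) - 1]

g = [1, 1, 0, 1]                  # g(X) = 1 + X + X^3 (assumed example)
c = gf2_mul([1, 0, 1, 1], g)      # a code polynomial m(X) g(X)
print(gf2_mod(c, g))              # [0, 0, 0]: error-free reception
c[4] ^= 1                         # error polynomial e(X) = X^4
print(gf2_mod(c, g))              # nonzero syndrome s(X) = e(X) mod g(X)
```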


Polynomial Codes/Cyclic Codes
n Relation of syndrome polynomial s(X), error polynomial
e(X) and received vector polynomial r(X) for systematic
polynomial codes.

© Po-Ning Chen@ece.nctu IDC7-40


Polynomial Codes/Cyclic Codes
n Relation of syndrome polynomial s(X), error polynomial
e(X) and received vector polynomial r(X) for general
polynomial codes.

© Po-Ning Chen@ece.nctu IDC7-41


Polynomial Codes/Cyclic Codes

(Figure: syndrome calculator — the received bits are shifted into an
(n-k)-stage register (D ... D) with feedback taps g1, ..., g(n-k-1); a tap
is present when gi = 1 and absent when gi = 0. After all received bits
are shifted in, the register contains the syndrome.)
© Po-Ning Chen@ece.nctu IDC7-42


Polynomial Codes/Cyclic Codes
o Example of a (6, 3) polynomial code (continued)
(Figure: syndrome computation for the example code.)

© Po-Ning Chen@ece.nctu IDC7-43


© Po-Ning Chen@ece.nctu IDC7-44
Polynomial Codes/Cyclic Codes
o To decode a systematic code, simply add the coset leader
corresponding to the syndrome polynomial to the received
vector polynomial.

© Po-Ning Chen@ece.nctu IDC7-45


Polynomial Codes/Cyclic Codes
o Definition of cyclic codes
n Cyclic property: Any cyclic shift of a codeword is also
a codeword.
n Linearity property: Any sum of two codewords is also
a codeword.

o A cyclic code is also a polynomial code.


n See Theorem 5.3 in Shu Lin and Daniel J. Costello, Jr.,
Error Control Coding, 2nd edition, Prentice Hall, 2004.

© Po-Ning Chen@ece.nctu IDC7-46


Polynomial Codes/Cyclic Codes
o A cyclic code is a special type of polynomial code with g(X) dividing
X^n + 1.
o Proof: Suppose c(X) is a non-zero code polynomial.

Then

© Po-Ning Chen@ece.nctu IDC7-47


Therefore,

A cyclic codeword must contain at least two 1's; i.e., (100...000)

cannot be a codeword of a cyclic code (except trivially when g(X) = 1).
Therefore, i in the above proof must be smaller than n.

A polynomial code is cyclic iff its generator polynomial divides X^n + 1.

© Po-Ning Chen@ece.nctu IDC7-48
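The divisibility criterion can be checked numerically. A sketch verifying that g(X) = 1 + X + X^3 divides X^7 + 1, and that cyclic shifts of a codeword of the code it generates remain codewords (both polynomials are assumptions for illustration):

```python
def gf2_mod(a, g):
    """Remainder of a(X) / g(X) over GF(2); a[i] = coefficient of X^i."""
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - (len(g) - 1) + j] ^= gj
    return a[:len(g) - 1]

n, g = 7, [1, 1, 0, 1]            # g(X) = 1 + X + X^3 (assumed example)
# g(X) divides X^7 + 1, so the (7, 4) polynomial code it generates is cyclic.
print(gf2_mod([1, 0, 0, 0, 0, 0, 0, 1], g))   # X^7 + 1 -> remainder [0, 0, 0]

c = [1, 1, 0, 1, 0, 0, 0]         # the codeword g(X) itself, padded to length 7
shifted = c[-1:] + c[:-1]         # one cyclic shift
print(gf2_mod(shifted, g))        # [0, 0, 0]: still divisible by g(X)
```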


Polynomial Codes/Cyclic Codes
o Parity-check polynomial of cyclic codes
n After proving that g(X) must divide X^n + 1, we can define
the parity-check polynomial of a cyclic code through
g(X) h(X) = X^n + 1,

where h(X) is a polynomial of degree k with h0 = hk = 1.


n Since the degrees of g(X) and h(X) are respectively n-k
and k, and g(n-k) = hk = 1, we may deduce that
© Po-Ning Chen@ece.nctu IDC7-49


Polynomial Codes/Cyclic Codes
n Multiplying both sides by a(X) yields:

© Po-Ning Chen@ece.nctu IDC7-50


© Po-Ning Chen@ece.nctu IDC7-51
© Po-Ning Chen@ece.nctu IDC7-52
Observation.
The parity-check matrix arranges its entries according to the coefficients of
the parity-check polynomial in reverse order, whereas the generator
matrix arranges its entries according to the coefficients of the generator
polynomial.

© Po-Ning Chen@ece.nctu IDC7-53


Polynomial Codes/Cyclic Codes
o Remarks
n The generator matrix and parity-check matrix derived
previously are not for systematic codes.
n We can manipulate these two matrices by adding selected
rows to other rows so as to make them
"systematic".

© Po-Ning Chen@ece.nctu IDC7-54


Polynomial Codes/Cyclic Codes
o Examples: Hamming code and Maximum-length code

n Irreducible polynomial: A polynomial that cannot be


further factored.
o All three of the above are irreducible.
n Primitive polynomial: An irreducible polynomial of
degree m that divides X^n + 1 for n = 2^m - 1 but does
not divide X^n + 1 for any n < 2^m - 1.
o All three irreducible polynomials are primitive.

© Po-Ning Chen@ece.nctu IDC7-55


Polynomial Codes/Cyclic Codes
n Example of cyclic codes: (7, 4, 3) Hamming code
o Any cyclic code generated by a primitive polynomial
is a Hamming code of minimum pairwise distance 3.
n Example of cyclic codes: (2^m - 1, m, 2^(m-1)) maximum-
length code
o The maximum-length code is a dual code of the
Hamming code.
o In other words, it is a cyclic code with a primitive
parity-check polynomial.
o It is a code of minimum distance 2^(m-1).

© Po-Ning Chen@ece.nctu IDC7-56


o It is named the maximum-length code because the
codeword length for m information bits has been
pushed to the maximum (or equivalently, the number
of codewords for a code of length n has been pushed to
the minimum). For example,

there are in total 7 codewords if c is originally


nonzero. Adding the all-zero codeword gives 8 = 2^3
codewords. This makes the (7, 3) maximum-length code.
© Po-Ning Chen@ece.nctu IDC7-57
o So, every nonzero codeword in a maximum-length
code is a circular shift of every other nonzero
codeword.

© Po-Ning Chen@ece.nctu IDC7-58
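The weight structure of the (7, 3) maximum-length code can be confirmed by enumeration. A sketch assuming the primitive parity-check polynomial h(X) = 1 + X + X^3, which gives the generator g(X) = (X^7 + 1)/h(X) = 1 + X + X^2 + X^4:

```python
from itertools import product

def gf2_mul(a, b):
    """Multiply polynomials over GF(2); a[i] = coefficient of X^i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

# (7, 3) maximum-length code: generator g(X) = (X^7 + 1)/h(X) with the
# assumed primitive parity-check polynomial h(X) = 1 + X + X^3.
g = [1, 1, 1, 0, 1]                      # g(X) = 1 + X + X^2 + X^4
weights = set()
for m in product([0, 1], repeat=3):
    if any(m):
        c = gf2_mul(list(m), g)          # codeword polynomial m(X) g(X)
        weights.add(sum(c))
print(weights)                           # {4}: every nonzero codeword has weight 2^(m-1)
```

Since all seven nonzero codewords share the same weight, they are necessarily circular shifts of one another, as the slide states.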


n Example of cyclic codes: Cyclic redundancy check (CRC)
codes
o Cyclic codes are extremely well-suited for error
detection owing to the simplicity of their implementation
and their superior error-detection capability.
o Binary (n, k, d_min) CRC codes can detect:
n all contiguous error bursts of length n - k or less;
n all but a fraction 2^-(n-k-1) of contiguous error bursts of
length n - k + 1;
n all combinations of d_min - 1 or fewer errors;
n all error patterns with an odd number of errors, if the
generator polynomial g(X) has an even number of
nonzero coefficients.

© Po-Ning Chen@ece.nctu IDC7-59
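The burst-detection claim for lengths up to n - k can be verified exhaustively for a small code. A sketch, assuming the (7, 4) cyclic code with g(X) = 1 + X + X^3 (so n - k = 3):

```python
def gf2_mod(a, g):
    """Remainder of a(X) / g(X) over GF(2); a[i] = coefficient of X^i."""
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - (len(g) - 1) + j] ^= gj
    return a[:len(g) - 1]

n, g = 7, [1, 1, 0, 1]           # assumed (7, 4) code: n - k = 3 check bits
missed = 0
for start in range(n):           # every contiguous burst of length <= n - k
    for pattern in range(1, 1 << 3):
        e = [0] * n
        for j in range(3):
            if start + j < n and (pattern >> j) & 1:
                e[start + j] = 1
        # A burst with zero syndrome would go undetected.
        if any(e) and gf2_mod(e, g) == [0] * 3:
            missed += 1
print(missed)                    # 0: all bursts of length <= n - k are detected
```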


n Example of cyclic codes: Bose-Chaudhuri-Hocquenghem
(BCH) codes
n A special type of BCH codes: primitive (n, k, d_min)
BCH codes, whose parameters satisfy n = 2^m - 1,
n - k <= mt, and d_min >= 2t + 1 (for some m >= 3 and
t < 2^(m-1)).

n The (n, k, 3) Hamming codes can be described as BCH


codes.
n The table in the next slide illustrates the generator
polynomial g(X) of some binary BCH codes.

© Po-Ning Chen@ece.nctu IDC7-60


n = block length
k = number of message bits
t = number of errors that are guaranteed correctable

© Po-Ning Chen@ece.nctu IDC7-61


Convolutional Codes
o For block codes, the encoding and decoding must be performed
on a block-by-block basis. Hence, a big buffer for the entire
message block and codeword block is required.
o In contrast, for convolutional codes, since the inputs and
outputs are related through a convolution operation of a
"finite-order" filter, only a buffer of size equal to the filter
order is required.
n "Convolution" here is defined using modulo-2 arithmetic
operations.
n “Convolution” is defined based on modulo-2 arithmetic
operations.

© Po-Ning Chen@ece.nctu IDC7-62


Convolutional Codes
(Figure: (n, k, m) = (2, 1, 2) convolutional encoder — the input bit
enters a two-stage shift register (D D); path 1 and path 2 each form a
modulo-2 sum of register taps, and an interleaver buffers two output
bits at a time.)

Note that here, n is not the codeword length, k is not the message length, and m is not the
minimum distance between codewords. See the next slide for their definitions.

© Po-Ning Chen@ece.nctu IDC7-63


Convolutional Codes
o (n, k, m) convolutional codes
n k input bits will produce n output bits.
n The most recent m “k-message-bit input blocks” will be
recorded (buffered).
n The n output bits will be given by a linear combination of the
buffered input bits.
n Constraint length K of a convolutional code
o It is the number of k-message-bit shifts, over which a single k-
message-bit block influences the encoder output.
o In other words, the encoder output depends on the current input
message block and the previous K - 1 input message blocks.
o Based on the definition, constraint length = m+1.

© Po-Ning Chen@ece.nctu IDC7-64
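The (2, 1, 2) example above can be sketched in a few lines, assuming the classic rate-1/2 generators 1 + D + D^2 and 1 + D^2 (octal 7, 5), which match the trellis labels used in the following slides:

```python
def conv_encode(bits, m=2, g=(0b111, 0b101)):
    """Rate-1/2 (2, 1, 2) convolutional encoder with (assumed) generators
    1 + D + D^2 and 1 + D^2 (octal 7, 5)."""
    state, out = 0, []
    for b in bits + [0] * m:                  # append m zeros to flush
        state = ((state << 1) | b) & 0b111    # (x[t-2], x[t-1], x[t])
        for gen in g:                         # one output bit per generator
            out.append(bin(state & gen).count('1') & 1)
    return out

print(conv_encode([1]))   # [1, 1, 1, 0, 1, 1]: impulse response 11 10 11
```

Note that the km flushing zeros are included automatically, so kL = L message bits produce n(L + m) = 2(L + 2) output bits, matching the effective-rate discussion above.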


Convolutional Codes
n Effective code rate of a convolutional code
o In practice, km zeros are often appended at the end of
an information sequence to clear the shift register
contents.
o Hence, kL message bits will produce n(L+m) output
bits.
o The effective code rate is therefore given by:

o Since L is often much larger than m,


n k/n is named the code rate of a convolutional code.

© Po-Ning Chen@ece.nctu IDC7-65


Convolutional Codes
o Polynomial expression of convolutional codes
n Example (Slide IDC 7-63)
o D is used instead of X because the flip-flop (i.e., one
time-unit delay) is often denoted by D.

© Po-Ning Chen@ece.nctu IDC7-66


Convolutional Codes

(Figure: the (2, 1, 2) encoder of Slide IDC7-63 redrawn — a two-stage
shift register (D D) feeding an interleaver that buffers two output bits
at a time.)

© Po-Ning Chen@ece.nctu IDC7-67


Convolutional Codes
o Graphical expressions of convolutional codes
n Code tree
(Figure: the code tree of the (2, 1, 2) encoder; each node branches on
input 0 or 1, each branch is labeled with its two output bits, and the
encoder states a, b, c, d recur down the tree.)
n Code trellis
(Figure: the trellis obtained by merging code-tree nodes in the same
state, since nodes in the same state generate the same "next code
symbol".)
© Po-Ning Chen@ece.nctu IDC7-68


Convolutional Codes
n Code trellis (continued)
(Figure: the trellis over levels 0, 1, 2, ..., L, L+1, L+2 with states
a = 00, b = 10, c = 01, d = 11; solid branches carry input 0 and dashed
branches carry input 1, each labeled with its two output bits. Two zeros
are appended to clear the shift-register contents.)

© Po-Ning Chen@ece.nctu IDC7-69


Convolutional Codes
n State diagram
(Figure: the four states a = 00, b = 10, c = 01, d = 11 with branches
labeled by the two output bits: a→a: 00, a→b: 11, b→c: 10, b→d: 01,
c→a: 11, c→b: 00, d→c: 01, d→d: 10.)
© Po-Ning Chen@ece.nctu IDC7-70


Maximum Likelihood Decoding
of Convolutional Codes
o Likelihood function (i.e., probability function)

o Maximum-likelihood decoding

o For equal prior probabilities, ML decoding minimizes the


error rate.

© Po-Ning Chen@ece.nctu IDC7-71


Maximum Likelihood Decoding
of Convolutional Codes
o Minimum distance decoding
n For an additive noise,

© Po-Ning Chen@ece.nctu IDC7-72


Maximum Likelihood Decoding
of Convolutional Codes
n Example (of distance function): AWGN

n Example (of distance function): BSC

© Po-Ning Chen@ece.nctu IDC7-73


Maximum Likelihood Decoding
of Convolutional Codes
o Viterbi algorithm (for minimum distance decoding over
code trellis)
n Optimality

(Figure: a trellis section with branch metrics; one survivor path (solid
line) is kept for each intermediate node.)
© Po-Ning Chen@ece.nctu IDC7-74
(Figure: Viterbi decoding example. Assume that the all-zero sequence
is transmitted and the received sequence begins 01 00 01 ...; solid lines
denote information bit 0 and dashed lines denote information bit 1. At
each level, the path metrics are updated and only the minimum-metric
survivor is kept at each state.)
© Po-Ning Chen@ece.nctu IDC7-75


(Figure: the Viterbi example continued over the next received blocks,
showing the updated path metrics and the survivor kept at each state.)
© Po-Ning Chen@ece.nctu IDC7-76
(Figure: a second Viterbi example with received sequence beginning
11 00 01 ..., again assuming that the all-zero sequence is transmitted;
solid lines denote information bit 0 and dashed lines denote
information bit 1. It is possible that the ML codeword, i.e., the
minimum-distance codeword, is not equal to the transmitted codeword.)
© Po-Ning Chen@ece.nctu IDC7-77
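The add-compare-select recursion illustrated above can be written out as a short hard-decision Viterbi decoder. A sketch for the (7, 5) rate-1/2 code assumed earlier (the encoder is repeated so the example is self-contained):

```python
def conv_encode(bits, m=2, g=(0b111, 0b101)):
    """Rate-1/2 encoder, generators 1+D+D^2 and 1+D^2 (octal 7, 5)."""
    state, out = 0, []
    for b in bits + [0] * m:                  # append m zeros to flush
        state = ((state << 1) | b) & 0b111
        out += [bin(state & gen).count('1') & 1 for gen in g]
    return out

def viterbi(received, m=2, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoding with the Hamming-distance metric."""
    n_states = 1 << m
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)     # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                  # extend each survivor
                full = ((s << 1) | b) & 0b111
                out = [bin(full & gen).count('1') & 1 for gen in g]
                d = metric[s] + sum(x != y for x, y in zip(r, out))
                ns = full & (n_states - 1)
                if d < new_metric[ns]:        # add-compare-select
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:-m]                      # drop the flushing zeros

msg = [1, 0, 1, 1, 0]
rx = conv_encode(msg)
rx[3] ^= 1                                    # one channel error
print(viterbi(rx) == msg)                     # True: the error is corrected
```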


Maximum Likelihood Decoding
of Convolutional Codes
o Free distance of convolutional codes
n Under binary symmetric channels (BSCs), a
convolutional code with free distance dfree can correct t
errors iff dfree is greater than 2t.

n Question: How to determine dfree?


n Answer: By signal-flow graph.

© Po-Ning Chen@ece.nctu IDC7-78
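Besides the signal-flow graph, d_free can also be found numerically as the minimum-weight path that leaves the all-zero state and first returns to it. A Dijkstra-style sketch for the assumed (7, 5) code (an alternative to the transfer-function method, not the method used in these slides):

```python
from heapq import heappush, heappop

def free_distance(m=2, g=(0b111, 0b101)):
    """Minimum output weight over all paths that leave the all-zero state
    and first return to it (shortest path on the encoder state graph)."""
    n_states = 1 << m
    # The first step must leave state 0 with input 1.
    full = 1
    w0 = sum(bin(full & gen).count('1') & 1 for gen in g)
    heap = [(w0, full & (n_states - 1))]
    best = {}
    while heap:
        w, s = heappop(heap)
        if s == 0:                  # remerged with the all-zero path
            return w
        if s in best and best[s] <= w:
            continue
        best[s] = w
        for b in (0, 1):
            full = ((s << 1) | b) & ((n_states << 1) - 1)
            ww = w + sum(bin(full & gen).count('1') & 1 for gen in g)
            heappush(heap, (ww, full & (n_states - 1)))
    return None

print(free_distance())  # 5 for the (7, 5) code
```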


State diagram
(Figure: the state diagram of the (2, 1, 2) encoder, repeated as the
starting point for the signal-flow graph.)
© Po-Ning Chen@ece.nctu IDC7-79
Example. Input 100 generates a codeword of
length 3 (branches) with weight 5.
Input 10100 generates a codeword
of length 5 (branches) with weight 6.

Signal-flow graph
(Figure: the state diagram split at the all-zero state into a source a0
and a sink a1, with branch gains a0→b: 11 (D^2 L), b→c: 10 (DL),
b→d: 01 (DL), c→a1: 11 (D^2 L), c→b: 00 (L), d→c: 01 (DL),
d→d: 10 (DL). The zero-weight self-loop at a is dropped, since a
zero-weight input generates a zero-weight code pattern.
Exponent of D = Hamming weight on the branch;
exponent of L = length of the branch.)
© Po-Ning Chen@ece.nctu IDC7-80
(Figure: the signal-flow graph repeated, with branch gains
a0→b: 11 (D^2 L), b→c: 10 (DL), b→d: 01 (DL), c→a1: 11 (D^2 L),
c→b: 00 (L), d→c: 01 (DL), d→d: 10 (DL).)
© Po-Ning Chen@ece.nctu IDC7-81


© Po-Ning Chen@ece.nctu IDC7-82
Maximum Likelihood Decoding
of Convolutional Codes
o Since the distance transfer function T(D, 1) = D^5 + 2D^6 +
4D^7 + ... enumerates the number of codewords at each given
distance, it follows that d_free = 5, the smallest exponent of D.

o A convolutional code may be subject to catastrophic error


propagation.
n Catastrophic error propagation = a finite number of
transmission errors may cause an infinite number of decoding
errors.
n A code with potential catastrophic error propagation is named
a catastrophic code.

© Po-Ning Chen@ece.nctu IDC7-83


(Figure: a catastrophic (2, 1, 2) encoder, its trellis, and its signal-flow
graph. Besides the usual zero-weight self-loop at the all-zero state, the
state diagram contains a loop, labeled 00 (L), that is driven by a
non-zero-weight input yet generates a zero-weight code pattern.)
© Po-Ning Chen@ece.nctu IDC7-84


© Po-Ning Chen@ece.nctu IDC7-85
n Alternative definition of catastrophic codes: A code for which an
infinite weight input causes a finite weight output
o In terms of the state diagram, a catastrophic code will have a
loop corresponding to a nonzero input for which all the output
bits are zeros.
n It can be proved that a systematic convolutional code cannot be
catastrophic.
n Unfortunately, the free distance of systematic convolutional codes
is usually smaller than that of nonsystematic convolutional codes.

(Table: maximum free distances attainable for convolutional codes of rate 1/2.)

© Po-Ning Chen@ece.nctu IDC7-86


Maximum Likelihood Decoding
of Convolutional Codes
o Asymptotic coding gain
n Hard decision decoding
o Section 6.3 (cf. Slide IDC 1-30) has established that for
BPSK transmission, the hard-decision error probability for each
code bit is p = Q(√(2 r Eb/N0)), where r is the code rate.

o The error of convolutional codes (particularly at high


SNR) is dominated by the "two codewords" whose
pairwise Hamming distance is equal to d_free.
o Thus, the (code)word error rate (WER) can be analyzed
via an equivalent binary symmetric channel (BSC) with
crossover probability p.
© Po-Ning Chen@ece.nctu IDC7-87
(cf. the next slide)

© Po-Ning Chen@ece.nctu IDC7-88


For your information, assuming dfree is odd, we derive

© Po-Ning Chen@ece.nctu IDC7-89


n Soft decision decoding (can be analyzed via an equivalent
binary-input additive white Gaussian noise channel)
o WER of convolutional codes (particularly at high SNR) is
dominated by the “two codewords” whose pairwise
Hamming distance is equal to dfree.

© Po-Ning Chen@ece.nctu IDC7-90


n Based on the decision rule

© Po-Ning Chen@ece.nctu IDC7-91


o Asymptotic coding gain (here, asymptotic = at high SNR) Ga
n The performance gain due to coding (i.e., the
performance gain of a coded system against an uncoded
system)

© Po-Ning Chen@ece.nctu IDC7-92


o Asymptotic coding gain (at high SNR)

© Po-Ning Chen@ece.nctu IDC7-93
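The standard closed-form expressions for Ga (as given, e.g., in Haykin's treatment of convolutional codes; stated here as the commonly cited results rather than reproduced from the slide equations) are:

```latex
G_a =
\begin{cases}
10\log_{10}\!\left(\dfrac{r\, d_{\mathrm{free}}}{2}\right)\ \text{dB}, & \text{hard-decision decoding},\\[6pt]
10\log_{10}\!\left(r\, d_{\mathrm{free}}\right)\ \text{dB}, & \text{soft-decision decoding},
\end{cases}
```

so at high SNR, soft-decision decoding buys about 3 dB over hard-decision decoding.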


o Asymptotic coding gain (at high SNR)

(Figure: error probability (log-scale) versus SNR for the uncoded and
coded systems; at high SNR, the horizontal gap between the two curves
approaches Ga.)
© Po-Ning Chen@ece.nctu IDC7-94
