Perfect Secrecy
Chester Rebeiro
IIT Madras
Reference: Stinson, Chapter 2
Encryption

[Figure: Alice sends "Attack at Dawn!!" to Bob over an untrusted communication link. The plaintext is encrypted (E) under key K into the ciphertext "#%AR3Xf34^$"; Bob decrypts (D) it with the same key K to recover "Attack at Dawn!!". Mallory observes the link.]

How do we design ciphers?
CR 2
Cipher Models
(What are the goals of the design?)

• Computational Security: "My cipher can withstand all attacks with complexity less than 2^2048. The best attacker with the best computational resources would take 3 centuries to attack my cipher."

• Provable Security (hardness relative to a tough problem): "If my cipher can be broken, then large numbers can be factored easily."

• Unconditional Security: "My cipher is secure against all attacks irrespective of the attacker's power. I can prove this!!" This model is also known as Perfect Secrecy.

Can such a cryptosystem be built? We shall investigate this.
Analyzing Unconditional Security
• Assumptions
– Ciphertext only attack model
The attacker only has information about the
ciphertext. The key and plaintext are secret.
• We first analyze a single encryption, then relax this assumption by analyzing multiple encryptions with the same key
Encryption

[Figure: ek maps the plaintext set P to the ciphertext set C.]

• For a given key, the encryption function ek defines an injective mapping from the plaintext set P to the ciphertext set C
• We assume that the key and plaintext are independent
• Alice picks a plaintext x ∈ P and encrypts it to obtain a ciphertext y ∈ C
Plaintext Distribution

• Let X be a discrete random variable over the set P
• Alice chooses x from P based on some probability distribution
  – Let Pr[X = x] be the probability that x is chosen
  – This probability may depend on the language

Example plaintext set P = {a, b, c}:
    Pr[X=a] = 1/2, Pr[X=b] = 1/3, Pr[X=c] = 1/6

Note: Pr[a] + Pr[b] + Pr[c] = 1
Key Distribution

• Alice & Bob agree upon a key k chosen from a key set K
• Let K be a random variable denoting this choice

Example keyspace K = {k1, k2}:
    Pr[K=k1] = 3/4, Pr[K=k2] = 1/4

There are two keys in the keyset, thus there are two possible encryption mappings, ek1 and ek2.
Ciphertext Distribution

• Let Y be a discrete random variable over the set C = {P, Q, R}
• The probability of obtaining a particular ciphertext y depends on the plaintext and key probabilities:

    Pr[Y = y] = Σ_k Pr(k) · Pr(d_k(y))

With the plaintext distribution Pr[X=a] = 1/2, Pr[X=b] = 1/3, Pr[X=c] = 1/6, the key distribution Pr[K=k1] = 3/4, Pr[K=k2] = 1/4, and the mappings ek1: a→R, b→Q, c→P and ek2: a→Q, b→R, c→P:

    Pr[Y = P] = Pr(k1)·Pr(c) + Pr(k2)·Pr(c) = (3/4 · 1/6) + (1/4 · 1/6) = 1/6
    Pr[Y = Q] = Pr(k1)·Pr(b) + Pr(k2)·Pr(a) = (3/4 · 1/3) + (1/4 · 1/2) = 3/8
    Pr[Y = R] = Pr(k1)·Pr(a) + Pr(k2)·Pr(b) = (3/4 · 1/2) + (1/4 · 1/3) = 11/24

Note: Pr[Y=P] + Pr[Y=Q] + Pr[Y=R] = 1
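This computation can be checked mechanically. The sketch below uses exact rational arithmetic, reads the two mapping tables off the diagrams, and applies Pr[Y=y] = Σ_k Pr(k)·Pr(d_k(y)); the labels a, b, c, P, Q, R, k1, k2 are the slide's own.

```python
from fractions import Fraction as F

# Encryption tables read off the two mapping diagrams:
# e_k1: a->R, b->Q, c->P   and   e_k2: a->Q, b->R, c->P
enc = {
    'k1': {'a': 'R', 'b': 'Q', 'c': 'P'},
    'k2': {'a': 'Q', 'b': 'R', 'c': 'P'},
}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}
pr_k = {'k1': F(3, 4), 'k2': F(1, 4)}

# Pr[Y=y] = sum over keys k of Pr[K=k] * Pr[X = d_k(y)]
pr_y = {}
for k, table in enc.items():
    for x, y in table.items():
        pr_y[y] = pr_y.get(y, 0) + pr_k[k] * pr_x[x]

# Pr[Y=P] = 1/6, Pr[Y=Q] = 3/8, Pr[Y=R] = 11/24, and they sum to 1
assert sum(pr_y.values()) == 1
```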
Attacker’s Probabilities
• The attacker wants to determine the plaintext x
• Two scenarios
– Attacker does not have y (a priori probability)
• Probability of determining x is simply Pr[x]
• Depends on the plaintext distribution (e.g. language characteristics)
– Attacker has y (a posteriori probability)
• Probability of determining x is simply Pr[x|y]
A Posteriori Probabilities

• How to compute the attacker's a posteriori probability Pr[X = x | Y = y]?
  – Bayes' Theorem:

    Pr[x | y] = (Pr[x] × Pr[y | x]) / Pr[y]

  where Pr[x] is the probability of the plaintext, Pr[y] is the probability of this ciphertext, and Pr[y | x], the probability that y is obtained given x, depends on the keys which provide such a mapping:

    Pr[y | x] = Σ_{k : d_k(y) = x} Pr[k]
Pr[y|x]

With the mappings ek1: a→R, b→Q, c→P and ek2: a→Q, b→R, c→P, and the keyspace Pr[K=k1] = 3/4, Pr[K=k2] = 1/4:

    Pr[P|a] = 0      Pr[Q|a] = Pr[k2] = 1/4    Pr[R|a] = Pr[k1] = 3/4
    Pr[P|b] = 0      Pr[Q|b] = Pr[k1] = 3/4    Pr[R|b] = Pr[k2] = 1/4
    Pr[P|c] = 1      Pr[Q|c] = 0               Pr[R|c] = 0
Computing A Posteriori Probabilities

Using Pr[x | y] = (Pr[x] × Pr[y | x]) / Pr[y] with the plaintext probabilities (Pr[X=a] = 1/2, Pr[X=b] = 1/3, Pr[X=c] = 1/6), the ciphertext probabilities (Pr[Y=P] = 1/6, Pr[Y=Q] = 3/8, Pr[Y=R] = 11/24), and the Pr[y|x] table:

    Pr[a|P] = 0       Pr[b|P] = 0       Pr[c|P] = 1
    Pr[a|Q] = 1/3     Pr[b|Q] = 2/3     Pr[c|Q] = 0
    Pr[a|R] = 9/11    Pr[b|R] = 2/11    Pr[c|R] = 0

If the attacker sees ciphertext P, then she knows the plaintext was c.
If the attacker sees ciphertext R, then she knows a is the most likely plaintext.
Not a good encryption mechanism!!
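As a sanity check, the a posteriori table can be recomputed from Bayes' theorem. This is a sketch using the same mappings and probabilities as the slides:

```python
from fractions import Fraction as F

enc = {'k1': {'a': 'R', 'b': 'Q', 'c': 'P'},   # mappings from the slides
       'k2': {'a': 'Q', 'b': 'R', 'c': 'P'}}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}
pr_k = {'k1': F(3, 4), 'k2': F(1, 4)}

# Pr[y|x] = sum of Pr[k] over keys that map x to y
pr_y_given_x = {(y, x): sum(pk for k, pk in pr_k.items() if enc[k][x] == y)
                for x in pr_x for y in 'PQR'}
# Pr[y] = sum_x Pr[x] * Pr[y|x]
pr_y = {y: sum(pr_x[x] * pr_y_given_x[(y, x)] for x in pr_x) for y in 'PQR'}
# Bayes: Pr[x|y] = Pr[x] * Pr[y|x] / Pr[y]
post = {(x, y): pr_x[x] * pr_y_given_x[(y, x)] / pr_y[y]
        for x in pr_x for y in 'PQR'}

assert post[('c', 'P')] == 1          # ciphertext P reveals plaintext c
assert post[('a', 'R')] == F(9, 11)   # a is the most likely plaintext for R
```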
Perfect Secrecy
• Perfect secrecy achieved when
a posteriori probabilities = a priori probabilities
Pr[ x | y ] = Pr[ x]
i.e. the attacker learns nothing from the ciphertext
Perfect Secrecy Example

• Find the a posteriori probabilities for the following scheme
• Verify that it is perfectly secret.

Plaintext distribution: Pr[X=a] = 1/2, Pr[X=b] = 1/3, Pr[X=c] = 1/6
Keyspace: Pr[K=k1] = 1/3, Pr[K=k2] = 1/3, Pr[K=k3] = 1/3

[Figure: three encryption mappings ek1, ek2, ek3 from {a, b, c} to {P, Q, R}, one per key.]
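The exact three mappings are hard to read off the figure, so the sketch below assumes they form a Latin square (each plaintext reaches each ciphertext under exactly one key), which is what the diagram depicts; with uniform keys, any such square gives Pr[x|y] = Pr[x]:

```python
from fractions import Fraction as F

# Assumed Latin-square mappings: one key per (plaintext, ciphertext) pair
enc = {'k1': {'a': 'P', 'b': 'Q', 'c': 'R'},
       'k2': {'a': 'Q', 'b': 'R', 'c': 'P'},
       'k3': {'a': 'R', 'b': 'P', 'c': 'Q'}}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}
pr_k = {k: F(1, 3) for k in enc}  # uniform keys

pr_yx = {(y, x): sum(pr_k[k] for k in enc if enc[k][x] == y)
         for x in pr_x for y in 'PQR'}
pr_y = {y: sum(pr_x[x] * pr_yx[(y, x)] for x in pr_x) for y in 'PQR'}

# Perfect secrecy: a posteriori equals a priori for every (x, y) pair
for x in pr_x:
    for y in 'PQR':
        assert pr_x[x] * pr_yx[(y, x)] / pr_y[y] == pr_x[x]
```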
Observations on Perfect Secrecy

Perfect secrecy iff (follows from Bayes' theorem):

    Pr[Y = y | X = x] = Pr[Y = y]

Perfect indistinguishability:

    ∀ x1, x2 ∈ P :  Pr[Y = y | X = x1] = Pr[Y = y | X = x2]

Perfect secrecy has nothing to do with the plaintext distribution. Thus a crypto-scheme achieves perfect secrecy irrespective of the language used in the plaintext.
Shift Cipher with a Twist
• Plaintext set : P = {0,1,2,3 …, 25}
• Ciphertext set : C = {0,1,2,3 …, 25}
• Keyspace : K = {0,1,2,3 …, 25}
• Encryption Rule : eK(x) = (x + K) mod 26
• Decryption Rule : dK(y) = (y − K) mod 26
  where K ∈ K and x ∈ P

The Twist : the key changes after every encryption
The Twisted Shift Cipher is Perfectly Secure

Keys are chosen with uniform probability, Pr[K = k] = 1/26. Then for any y ∈ C:

    Pr[Y = y] = Σ_k Pr[K = k] · Pr[X = d_k(y)] = (1/26) Σ_{x ∈ P} Pr[X = x] = 1/26

(the inner sum is 1 because it runs over all values of x).

For every pair of y and x there is exactly one key with ek(x) = y, namely k = (y − x) mod 26, and the probability of that key is 1/26:

    Pr[Y = y | X = x] = 1/26 = Pr[Y = y]

Hence Pr[x | y] = Pr[x], and the cipher is perfectly secret.
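The argument can also be checked numerically for an arbitrary plaintext distribution; the distribution below is a made-up example, not from the slides:

```python
from fractions import Fraction as F

# Hypothetical, deliberately non-uniform plaintext distribution over Z_26
w = [(i % 5) + 1 for i in range(26)]
pr_x = [F(wi, sum(w)) for wi in w]
assert sum(pr_x) == 1

# Uniform key: Pr[K=k] = 1/26; decryption is d_k(y) = (y - k) mod 26
pr_y = [sum(F(1, 26) * pr_x[(y - k) % 26] for k in range(26))
        for y in range(26)]

# Every ciphertext is equally likely, whatever the plaintext distribution
assert all(p == F(1, 26) for p in pr_y)
```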
Shannon’s Theorem
If |K| = |C| = |P| then the system provides perfect secrecy iff
(1) every key is used with equal probability 1/|K|, and
(2) for every x ∈ P and y ∈ C, there exists a unique key k ∈ K such that ek(x) = y
Intuition :
Every y ∈ C can result from any of the possible plaintexts x
Since |K| = |P| there is exactly one mapping from each plaintext to y
Since each key is equi-probable, each of these mappings is equally probable
One Time Pad
(Vernam's Cipher)

The plaintext, key, and ciphertext are all bit strings of length L. The key is chosen uniformly from a keyspace of size 2^L, so Pr[K = k] = 1/2^L. The plaintext block and key are combined with exclusive-or:

    Encryption : y = x ⊕ k
    Decryption : x = y ⊕ k
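A minimal byte-level sketch of the one-time pad (XOR with a fresh uniform key):

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    """XOR data with key; the same function encrypts and decrypts."""
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"Attack at Dawn!!"
key = os.urandom(len(msg))   # fresh uniform key for this one message
ct = otp(msg, key)
assert otp(ct, key) == msg   # y XOR k = x
```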
One Time Pad (Example)

[Figure: a worked example of one-time-pad encryption.]
One Time Pad is Perfectly Secure

• Proof using indistinguishability

Since y = x ⊕ k, ciphertext y is obtained from plaintext x exactly when the key is k = x ⊕ y:

    Pr[Y = y | X = x] = Pr[K = x ⊕ y] = 1/2^L

Therefore, for all x1, x2 ∈ P:

    Pr[Y = y | X = x1] = 1/2^L = Pr[Y = y | X = x2]

This implies perfect indistinguishability that is independent of the plaintext distribution.
Limitations of Perfect Secrecy

• Key must be at least as long as the message
  – Limits applicability if messages are long
• Key must be changed for every encryption
  – If the same key is used twice, then an adversary can compute the ex-or of the messages:

    x1 ⊕ k = y1
    x2 ⊕ k = y2
    x1 ⊕ x2 = y1 ⊕ y2

  The attacker can then do language analysis to determine x1 and x2
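The two-time-pad leak is easy to demonstrate: reusing a key makes it cancel under XOR. The two messages below are illustrative:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k = os.urandom(16)                      # the SAME key, mistakenly reused
x1 = b"Attack at Dawn!!"
x2 = b"Retreat at Noon!"
y1, y2 = xor(x1, k), xor(x2, k)

# The key cancels, leaking the XOR of the two plaintexts to the attacker
assert xor(y1, y2) == xor(x1, x2)
```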
Computational Security
• Perfect secrecy is difficult to achieve in practice
• Instead we use a crypto-scheme that cannot be
broken in reasonable time with reasonable success probability
• This means,
– Security is only achieved against adversaries that run in
polynomial time
– Attackers can potentially succeed, but only with a very small
probability (attackers need to be very lucky to succeed)
Quantifying Information
Quantifying Information
• Alice thinks of a number (0 or 1)
• The choice is denoted by a discrete random variable X
• What is the information in X?
• What is Mallory's uncertainty about X?
  – Depends on the probability distribution of X
Uncertainty

• Let's assume Mallory knows the probability distribution of X.
• If Pr[X = 1] = 1 and Pr[X = 0] = 0
  – Then Mallory can determine X with 100% accuracy
• If Pr[X = 0] = 0.75 and Pr[X = 1] = 0.25
  – Mallory will guess X as 0, and gets it right 75% of the time
• If Pr[X = 0] = Pr[X = 1] = 0.5
  – Mallory's guess would be similar to a uniformly random guess; she gets it right half the time.

[Figure: Mallory's uncertainty plotted against Pr[X=0], peaking at Pr[X=0] = 0.5.]
Entropy
(Quantifying Information)

• Suppose we consider a discrete R.V. X taking values from the set {x1, x2, x3, …, xn}, each symbol occurring with probability {p1, p2, p3, …, pn}
• Entropy is defined as the minimum number of bits (on average) required to represent a string from this set:

    H(X) = Σ_{i=1}^{n} p_i log2 (1/p_i)

  where p_i is the probability that the ith symbol occurs, and log2(1/p_i) is the number of bits used to encode the ith symbol.
What is the Entropy of X?

Let Pr[X=0] = p and Pr[X=1] = 1 − p. Then

    H(X) = −p log2 p − (1−p) log2 (1−p)

Using lim_{p→0} (p log p) = 0:

    H(X) at p=0 is 0, at p=1 is 0, and at p=0.5 is 1

[Figure: H(X) versus p, rising from 0 at p = 0 to a maximum of 1 at p = 0.5 and falling back to 0 at p = 1.]
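A small sketch of the binary entropy function, with the limiting convention 0·log 0 = 0:

```python
from math import log2

def H(p: float) -> float:
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):  # lim p->0 of p*log2(p) is 0
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

assert H(0.0) == 0.0 and H(1.0) == 0.0   # no uncertainty at the extremes
assert H(0.5) == 1.0                     # maximum uncertainty: 1 bit
assert 0.0 < H(0.25) < 1.0               # a biased coin lies in between
```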
Properties of H(X)
• If X is a random variable which takes on values {1, 2, 3, …, n} with probabilities p1, p2, p3, …, pn, then
  1. H(X) ≤ log2 n
  2. When p1 = p2 = p3 = … = pn = 1/n, then H(X) = log2 n
Example: an 8-face die.
If the die is fair, then we obtain the maximum entropy of 3 bits.
If the die is unfair, then the entropy is < 3 bits.
Entropy and Coding
• Entropy quantifies Information content
“Can we encode a message M in such a way that the
average length is as short as possible and hopefully
equal to H(M)?”
Huffman Codes :
  allocate more bits to the least probable symbols
  allocate fewer bits to the most probable symbols
Example

• S = {A, B, C, D} are 4 symbols
• Probability of occurrence: P(A) = 1/8, P(B) = 1/2, P(C) = 1/8, P(D) = 1/4
• Encoding: A : 111, B : 0, C : 110, D : 10

[Figure: the Huffman tree. The root splits 0 → B (1/2) and 1 → a subtree of weight 1/2; that subtree splits 0 → D (1/4) and 1 → a subtree of weight 1/4, which splits 0 → C (1/8) and 1 → A (1/8).]

To decode, with each bit traverse the tree from the root until you reach a leaf.

Decode this? 1101010111
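The decoding walk can be sketched with the code table instead of the explicit tree: because the code is prefix-free, accumulating bits until they match a codeword is equivalent to walking root-to-leaf.

```python
from fractions import Fraction as F

code = {'A': '111', 'B': '0', 'C': '110', 'D': '10'}   # from the slide
leaf = {v: k for k, v in code.items()}                 # codeword -> symbol

def decode(bits: str) -> str:
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in leaf:          # reached a leaf of the Huffman tree
            out.append(leaf[cur])
            cur = ''
    assert cur == '', 'bit string ended mid-codeword'
    return ''.join(out)

assert decode('1101010111') == 'CDDA'

# Average codeword length equals the entropy H(S) = 1.75 bits
prob = {'A': F(1, 8), 'B': F(1, 2), 'C': F(1, 8), 'D': F(1, 4)}
assert sum(len(code[s]) * prob[s] for s in code) == F(7, 4)
```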
Example :
Average Length and Entropy

• S = {A, B, C, D} are 4 symbols
• Probability of occurrence: p(A) = 1/8, p(B) = 1/2, p(C) = 1/8, p(D) = 1/4
• Encoding: A : 111, B : 0, C : 110, D : 10
• Average length of the Huffman code:
    3·p(A) + 1·p(B) + 3·p(C) + 2·p(D) = 1.75
• Entropy:
    H(S) = (1/8) log2 8 + (1/2) log2 2 + (1/8) log2 8 + (1/4) log2 4 = 1.75
Measuring the Redundancy in a Language

• Let S be the set of letters in a language (e.g. S = {A, B, C, D})
• S^k = S × S × … × S (k times) is the set representing messages of length k
• Let S^(k) be a random variable over S^k
• The average information in each letter is given by the rate of S^(k):

    r_k = H(S^(k)) / k

• r_k for English is between 1.0 and 1.5 bits/letter
Measuring the Redundancy in a
Language
• Absolute Rate : The maximum amount of information per
character in a language
– the absolute rate of language S is R = log2 |S|
– For English, |S| = 26, therefore R = 4.7 bits / letter
• Redundancy of a language is

    D = R − r_k

  – For English, when r_k = 1, then D = 3.7, i.e. around 79% redundant
Example (One-letter analysis)

• Consider a language with 26 letters of the set S = {s1, s2, s3, …, s26}. Suppose the language is characterized by the following probabilities. What is the language redundancy?

    P(s1) = 1/2,  P(s2) = 1/4
    P(si) = 1/64  for i = 3, 4, …, 10
    P(si) = 1/128 for i = 11, 12, …, 26

Absolute rate: R = log2 26 = 4.7

Rate of the language for one-letter analysis:

    r1 = H(S^(1)) = Σ_{i=1}^{26} P(si) log2 (1/P(si))
       = (1/2) log2 2 + (1/4) log2 4 + 8·(1/64) log2 64 + 16·(1/128) log2 128
       = 1/2 + 1/2 + 6/8 + 7/8 = 2.625

Language redundancy:

    D = R − r1 = 4.7 − 2.625 = 2.075

The language is ~44% redundant (D/R ≈ 0.44).
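The arithmetic can be reproduced as follows (exact fractions for the distribution, floats for the logs):

```python
from fractions import Fraction as F
from math import log2

# The slide's one-letter distribution over 26 symbols
p = [F(1, 2), F(1, 4)] + [F(1, 64)] * 8 + [F(1, 128)] * 16
assert len(p) == 26 and sum(p) == 1

r1 = sum(float(pi) * log2(1 / float(pi)) for pi in p)  # H(S^(1))
R = log2(26)                                           # absolute rate
D = R - r1                                             # redundancy

assert abs(r1 - 2.625) < 1e-12
assert 2.07 < D < 2.08          # D = log2(26) - 2.625, about 2.075
assert 0.43 < D / R < 0.45      # the language is ~44% redundant
```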
Example (Two-letter analysis)

• In the set S = {s1, s2, s3, …, s26}, suppose the digram probabilities are as below. What is the language redundancy?

Conditional probabilities:
    P(s_{i+1} | s_i) = P(s_{i+2} | s_i) = 1/2  for i = 1 to 24
    P(s26 | s25) = P(s1 | s25) = P(s1 | s26) = P(s2 | s26) = 1/2
    all other probabilities are 0

Joint (digram) probabilities, using P(si, sj) = P(sj | si) × P(si):
    P(s1, s2) = P(s1, s3) = 1/4
    P(s2, s3) = P(s2, s4) = 1/8
    P(si, s_{i+1}) = P(si, s_{i+2}) = 1/128  for i = 3, 4, …, 10
    P(si, s_{i+1}) = P(si, s_{i+2}) = 1/256  for i = 11, 12, …, 24
    P(s25, s26) = P(s25, s1) = P(s26, s1) = P(s26, s2) = 1/256

Rate of the language for two-letter analysis:

    r2 = H(S^(2)) / 2
       = (1/2) [ 2·(1/4) log2 4 + 2·(1/8) log2 8 + 16·(1/128) log2 128 + 32·(1/256) log2 256 ]
       = (1/2) [ 1 + 3/4 + 7/8 + 1 ] = 3.625 / 2 = 1.8125

Language redundancy:

    D = R − r2 = 4.7 − 1.8125 = 2.9

The language is ~60% redundant.
Observations

Single-letter analysis: r1 = H(S^(1)) = 2.625; D = 2.075
Two-letter analysis: H(S^(2)) = 3.625; r2 = 1.8125; D = 2.9

• H(S^(2)) − H(S^(1)) = 1 bit
  – why?
• As we increase the message size
  – The rate reduces; each letter conveys less information
  – Redundancy increases
Conditional Entropy

• Suppose X and Y are two discrete random variables; the conditional entropy is defined as

    H(X | Y) = Σ_y p(y) Σ_x p(x|y) log2 (1/p(x|y))
             = Σ_x Σ_y p(x, y) log2 (p(y) / p(x, y))

  (derive the second form using the fact that p(x|y) = p(x, y) / p(y))

• Conditional entropy means …
  – the remaining uncertainty about X given Y
  – H(X|Y) ≤ H(X), with equality when X and Y are independent
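As an illustration (reusing the earlier two-key toy scheme), H(X|Y) can be computed from the joint distribution and compared with H(X):

```python
from fractions import Fraction as F
from math import log2

# Joint distribution p(x, y) for the earlier two-key scheme
# (e_k1: a->R, b->Q, c->P with Pr 3/4; e_k2: a->Q, b->R, c->P with Pr 1/4)
enc = {F(3, 4): {'a': 'R', 'b': 'Q', 'c': 'P'},
       F(1, 4): {'a': 'Q', 'b': 'R', 'c': 'P'}}
pr_x = {'a': F(1, 2), 'b': F(1, 3), 'c': F(1, 6)}

joint = {}
for pk, table in enc.items():
    for x, y in table.items():
        joint[(x, y)] = joint.get((x, y), 0) + pk * pr_x[x]

pr_y = {}
for (x, y), p in joint.items():
    pr_y[y] = pr_y.get(y, 0) + p

H_X = -sum(float(p) * log2(float(p)) for p in pr_x.values())
# H(X|Y) = -sum_{x,y} p(x,y) log2( p(x,y) / p(y) )
H_XgY = -sum(float(p) * log2(float(p) / float(pr_y[y]))
             for (x, y), p in joint.items() if p > 0)

assert H_XgY <= H_X  # conditioning never increases uncertainty
```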
Joint Entropy

• Suppose X and Y are two discrete random variables, and p(x, y) is the value of the joint probability distribution when X = x and Y = y
• Then the joint entropy is given by

    H(X, Y) = Σ_y Σ_x p(x, y) log2 (1/p(x, y))

• The joint entropy is the average uncertainty of the two random variables
Entropy and Encryption

[Figure: message m (distribution M^n) and key k (distribution K) enter the encryption E, producing ciphertext c (distribution C^n); n is the length of the message/ciphertext.]

• There are three entropies: H(M^(n)), H(K), H(C^(n))
• Message Equivocation: if the attacker can view n ciphertext letters, what is his uncertainty about the message?

    H(M^(n) | C^(n)) = Σ_{c ∈ C^n} p(c) Σ_{m ∈ M^n} p(m|c) log2 (1/p(m|c))
Entropy and Encryption

• Key Equivocation: if the attacker can view n ciphertext letters, what is his uncertainty about the key?

    H(K | C^(n)) = Σ_{c ∈ C^n} p(c) Σ_{k ∈ K} p(k|c) log2 (1/p(k|c))
Unicity Distance

    H(K | C^(n)) = Σ_{c ∈ C^n} p(c) Σ_{k ∈ K} p(k|c) log2 (1/p(k|c))

• As n increases, H(K | C^(n)) reduces…
  – This means that the uncertainty of the key reduces as the attacker observes more ciphertext
• The unicity distance is the value of n for which H(K | C^(n)) ≈ 0
  – This means the entire key can be determined in this case
Unicity Distance and Classical Ciphers

Cipher                         Unicity Distance (for English)
Caesar's Cipher                1.5 letters
Affine Cipher                  2.6 letters
Simple Substitution Cipher     27.6 letters
Permutation Cipher             0.12 (block size = 3), 0.66 (block size = 4),
                               1.32 (block size = 5), 2.05 (block size = 6)
Vigenère Cipher                1.47d (d is the key length)
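The table's entries are consistent with the standard Shannon estimate n0 ≈ H(K)/D, with D ≈ 3.2 bits/letter for English (taking r ≈ 1.5); the formula itself is not on the slide, so it is stated here as an assumption behind the numbers:

```python
from math import log2, factorial

R, r = 4.7, 1.5     # absolute rate and per-letter rate for English
D = R - r           # redundancy, ~3.2 bits/letter

def unicity(key_bits: float) -> float:
    """Estimate n0 ~ H(K)/D; keys uniform, so H(K) = log2 |K|."""
    return key_bits / D

assert round(unicity(log2(26)), 1) == 1.5              # Caesar: |K| = 26
assert round(unicity(log2(12 * 26)), 1) == 2.6         # Affine: |K| = 312
assert round(unicity(log2(factorial(26))), 1) == 27.6  # Substitution: |K| = 26!
# Vigenere: d key letters give H(K) = d*log2(26), hence ~1.47d
assert abs(unicity(log2(26)) - 1.47) < 0.01
```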
Product Ciphers

• Consider a cryptosystem where P = C (this is an endomorphic system)
  – Thus the ciphertext and the plaintext set is the same
• Combine two ciphering schemes to build a product cipher: the ciphertext of the first cipher is fed as input to the second cipher

Given two endomorphic crypto-systems

    S1 : x = d_K1(e_K1(x))
    S2 : x = d_K2(e_K2(x))

the resultant product cipher S1 × S2 is

    e_(K1,K2)(x) = e_K2(e_K1(x))
    d_(K1,K2)(y) = d_K1(d_K2(y))

with resultant key space K1 × K2.
In formal notation: given two endomorphic crypto-systems

    S1 : (P, P, K1, E1, D1)
    S2 : (P, P, K2, E2, D2)

the resultant product cipher is

    S1 × S2 : (P, P, K1 × K2, E, D)
Affine Cipher is a Product Cipher

• P = C = {0, 1, 2, …, 25}
• Multiplicative cipher M:
    Encryption: e_a(x) = ax mod 26
    Decryption: d_a(y) = a^(-1) y mod 26
• Shift cipher S:
    Encryption: e_b(x) = x + b mod 26
    Decryption: d_b(y) = y − b mod 26
• Affine cipher = M × S : y = ax + b mod 26
• Size of key space:
  – size of key space for the multiplicative cipher × size of key space for the shift cipher
  – 12 × 26 = 312
Is S × M the Same as the Affine Cipher?

• S × M : y = a(x + b) mod 26 = ax + ab mod 26
• The key is (b, a)
• ab mod 26 is some b′ such that b = a^(-1) b′ mod 26
• This can be represented as an affine cipher: y = ax + b′ mod 26

Thus these ciphers commute (i.e. S × M = M × S).

Exercise: create a non-commuting product cipher.
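A quick check of the commutation argument, exhaustive over all plaintexts for one illustrative key pair (a = 5, b = 7 are arbitrary choices; a must be invertible mod 26):

```python
def M(a, x): return (a * x) % 26        # multiplicative cipher
def S(b, x): return (x + b) % 26        # shift cipher

a, b = 5, 7                             # illustrative key; gcd(a, 26) = 1
b_prime = (a * b) % 26                  # b' = ab mod 26

for x in range(26):
    # S then M: a(x + b) = ax + ab, an affine cipher with key (a, b')
    assert M(a, S(b, x)) == (a * x + b_prime) % 26
    # M then S: ax + b, the affine cipher with key (a, b)
    assert S(b, M(a, x)) == (a * x + b) % 26
```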
Idempotent Ciphers

• If S1 : (P, P, K, E1, D1) is an endomorphic cipher,
• then it is possible to construct product ciphers of the form S1 × S1, denoted S1² : (P, P, K × K, E, D)
• If S1² = S1 then the cipher is called an idempotent cipher

Show that the simple substitution cipher is idempotent.

Does the security of the newly formed cipher increase? Not for an idempotent cipher; in a non-idempotent cipher, however, the security may increase.
Iterative Cipher

• An n-fold product S × S × … × S (n times) = S^n is an iterative cipher

All modern block ciphers like DES, 3-DES, AES, etc. are iterative, non-idempotent, product ciphers.

We will see more about these ciphers next!!