Monograph: Clark, John Andrew (orcid.org/0000-0002-9230-9739) and Jacob, Jeremy Lawrence (orcid.org/0000-0003-4806-7426) (1997) A survey of authentication protocol literature: Version 1.0. Report. Citeseer.
A Survey of Authentication Protocol
Literature: Version 1.0
John Clark and Jeremy Jacob
17 November 1997
Contents
1 Introduction 5
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 A Protocols Resource . . . . . . . . . . . . . . . . . . . . . . . 6
2 Cryptographic Prerequisites 7
2.1 General Principles . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Symmetric Key Cryptography . . . . . . . . . . . . . . . . . . 7
2.2.1 Classical Cryptography . . . . . . . . . . . . . . . . . 8
2.2.2 Modern-day Cryptography . . . . . . . . . . . . . 8
2.2.3 Modes of Block Cipher Usage . . . . . . . . . . . . . . 9
2.2.4 Stream Ciphers . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Public Key Cryptography . . . . . . . . . . . . . . . . . . . . 13
2.4 One-way Hash Algorithms . . . . . . . . . . . . . . . . . . . 15
2.5 Notational Conventions . . . . . . . . . . . . . . . . . . . . . 16
3 Protocol Types 17
3.1 Symmetric Key Without Trusted Third Party . . . . . . . . . 17
3.2 Symmetric Key With Trusted Third Party . . . . . . . . . . . 18
3.3 Public Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Hybrid Protocols . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Other Forms of Protocol . . . . . . . . . . . . . . . . . . . . . 22
3.6 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.2.2 Other Logics . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3 Expert Systems and Algebraic Rewriting Systems . . . . . . 42
6 A Library of Protocols 44
6.1 Symmetric Key Protocols Without Trusted Third Party . . . 44
6.1.1 ISO Symmetric Key One-Pass Unilateral Authentication Protocol . . . . 44
6.1.2 ISO Symmetric Key Two-Pass Unilateral Authentication Protocol . . . . 44
6.1.3 ISO Symmetric Key Two-Pass Mutual Authentication 44
6.1.4 ISO Symmetric Key Three-Pass Mutual Authentication 45
6.1.5 Using Non-Reversible Functions . . . . . . . . . . . . 45
6.1.6 Andrew Secure RPC Protocol . . . . . . . . . . . . . . 45
6.2 Authentication Using Cryptographic Check Functions . . . . 46
6.2.1 ISO One-Pass Unilateral Authentication with CCFs . 46
6.2.2 ISO Two-Pass Unilateral Authentication with CCFs . 46
6.2.3 ISO Two-Pass Mutual Authentication with CCFs . . . 46
6.2.4 ISO Three-Pass Mutual Authentication with CCFs . . 46
6.3 Symmetric Key Protocols Involving Trusted Third Parties . . 46
6.3.1 Needham Schroeder Protocol with Conventional Keys 46
6.3.2 Denning Sacco Protocol . . . . . . . . . . . . . . . . . 47
6.3.3 Otway-Rees Protocol . . . . . . . . . . . . . . . . . . . 47
6.3.4 Amended Needham Schroeder Protocol . . . . . . . . 48
6.3.5 Wide Mouthed Frog Protocol . . . . . . . . . . . . . . 48
6.3.6 Yahalom . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.3.7 Carlsen’s Secret Key Initiator Protocol . . . . . . . . . 50
6.3.8 ISO Four-Pass Authentication Protocol . . . . . . . . 50
6.3.9 ISO Five-Pass Authentication Protocol . . . . . . . . . 50
6.3.10 Woo and Lam Authentication Protocols . . . . . . . . 50
6.3.11 Woo and Lam Mutual Authentication protocol . . . . 53
6.4 Signatures with Conventional Key Encryption . . . . . . . . 54
6.4.1 Needham-Schroeder Signature Protocol . . . . . . . . 54
6.5 Symmetric Key Repeated Authentication protocols . . . . . . 55
6.5.1 Kerberos Version 5 . . . . . . . . . . . . . . . . . . . . 55
6.5.2 Neuman Stubblebine . . . . . . . . . . . . . . . . . . . 57
6.5.3 Kehne Langendorfer Schoenwalder . . . . . . . . . . 58
6.5.4 The Kao Chow Repeated Authentication Protocols . 58
6.6 Public Key Protocols without Trusted Third Party . . . . . . 59
6.6.1 ISO Public Key One-Pass Unilateral Authentication Protocol . . . . . . 59
6.6.2 ISO Public Key Two-Pass Unilateral Authentication Protocol . . . . . . 59
6.6.3 ISO Public Key Two-Pass Mutual Authentication Protocol . . . . . . . 59
6.6.4 ISO Three-Pass Mutual Authentication Protocol . . . 60
6.6.5 ISO Two Pass Parallel Mutual Authentication Protocol 60
6.6.6 Bilateral Key Exchange with Public Key . . . . . . . . 60
6.6.7 Diffie Hellman Exchange . . . . . . . . . . . . . . . . 60
6.7 Public Key Protocols with Trusted Third Party . . . . . . . . 61
6.7.1 Needham-Schroeder Public Key Protocol . . . . . . . 61
6.8 SPLICE/AS Authentication Protocol . . . . . . . . . . . . . . 61
6.8.1 Hwang and Chen’s Modified SPLICE/AS . . . . . . . 62
6.9 Denning Sacco Key Distribution with Public Key . . . . . . . 63
6.9.1 CCITT X.509 . . . . . . . . . . . . . . . . . . . . . . . . 63
6.10 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.10.1 Shamir Rivest Adelman Three Pass protocol . . . . . 64
6.10.2 Gong Mutual Authentication Protocol . . . . . . . . . 64
6.10.3 Encrypted Key Exchange – EKE . . . . . . . . . . . . 65
6.10.4 Davis Swick Private Key Certificates . . . . . . . . . . 66
7 Acknowledgements 67
1 Introduction
1.1 Background
The past two decades have seen an enormous increase in the development
and use of networked and distributed systems, providing increased func-
tionality to the user and more efficient use of resources. To obtain the ben-
efits of such systems parties will cooperate by exchanging messages over
networks. The parties may be users, hosts or processes; they are generally
referred to as principals in authentication literature.
Principals use the messages received, together with certain modelling
assumptions about the behaviour of other principals to make decisions on
how to act. These decisions depend crucially on what validity can be as-
sumed of messages that they receive. Loosely speaking, when we receive
a message we want to be sure that it has been created recently and in good
faith for a particular purpose by the principal who claims to have sent it.
We must be able to detect when a message has been created or modified
by a malicious principal or intruder with access to the network or when
a message was issued some time ago (or for a different purpose) and is
currently being replayed on the network.
An authentication protocol is a sequence of message exchanges be-
tween principals that either distributes secrets to some of those principals
or allows the use of some secret to be recognised [26]. At the end of the
protocol the principals involved may deduce certain properties about the
system; for example, that only certain principals have access to particular
secret information (typically cryptographic keys) or that a particular prin-
cipal is operational. They may then use this information to verify claims
about subsequent communication, for example, a received message en-
crypted with a newly distributed key must have been created after distri-
bution of that key and so is timely.
A considerable number of authentication protocols have been speci-
fied and implemented. The area is, however, remarkably subtle and many
protocols have been shown to be flawed a long time after they were pub-
lished. The Needham Schroeder Conventional Key Protocol was pub-
lished in 1978 [87] and became the basis for many similar protocols in
later years. In 1981, Denning and Sacco demonstrated that the protocol
was flawed and proposed an alternative protocol [42]. This set the gen-
eral trend for the field. The authors of both papers suggested other pro-
tocols based on public key cryptography (see section 2). In 1994 Martin
Abadi demonstrated that the public key protocol of Denning and Sacco
was flawed [1]. In 1995, Lowe demonstrated an attack on the public key
protocol of Needham and
Schroeder (seventeen years after its publication). In the intervening years
a whole host of protocols have been specified and found to be flawed (as
demonstrated in this report).
This report describes what sorts of protocols have been specified and
outlines what methods have been used to analyse them. In addition, it
provides a summary of the ways in which protocols have been found to
fail. There is a large amount of material in the field and the main body of
this document is intended as a concise introduction to and survey of the
field. Some types of protocol are given little detailed attention, particu-
larly those which rely on number-theoretic properties for their security. It
is envisaged that future editions of this report will provide a complete cov-
erage. An annotated bibliography is included to guide the reader. Since
authentication relies heavily on encryption and decryption to achieve its
goals we also provide a brief review of elements of cryptography.
jac@cs.york.ac.uk, jeremy@cs.york.ac.uk
2 Cryptographic Prerequisites
2.1 General Principles
Cryptographic mechanisms are fundamental to authentication protocols.
Suppose that we have some message text P which we wish to transmit
over the network. P is generally referred to as plaintext or a datagram. A
cryptographic algorithm converts P to a form that is unintelligible to any-
one monitoring the network. This conversion process is called encryption.
The unintelligible form is known as ciphertext or a cryptogram. The precise
form of the cryptogram C corresponding to a plaintext P depends on an
additional parameter K known as the key.
The intended receiver of a cryptogram C may wish to recover the origi-
nal plaintext P. To do this, a second key K⁻¹ is used to reverse the process.
(Figure: encryption of the plaintext under key K, and decryption of the
resulting cryptogram under key K⁻¹.)
2.2 Symmetric Key Cryptography
In a symmetric key cryptosystem the decryption key K⁻¹ is easily obtained
from the encryption key K; usually they are identical and we shall generally
assume that this is the case.
The key K is used by a pair of principals to encrypt and decrypt messages
to and from each other. Of course, anyone who holds the key can create
ciphertexts corresponding to arbitrary plaintexts and read the contents of
arbitrary ciphertext messages. To ensure security of communication this
key is kept secret between the communicating principals. Following estab-
lished convention we shall use the notation Kab to denote a key intended
for communication between principals A and B using a symmetric key
cryptosystem.
Other examples of block ciphers are MADRYGA (efficient for software
implementation and operates on 8-bit blocks), NEWDES (operates on 64-
bit blocks but with a 120-bit key), FEAL-N, RC2 and RC4 (by Ronald
Rivest) and IDEA (by Lai and Massey). Schneier has written a readable
account of the IDEA algorithm [98]. A very good overview of block ci-
phers (and others) can be found in Schneier’s general cryptography text
[99].
In Cipher Block Chaining (CBC) mode the encryption of a message
P1 P2 ... Pn under key K is written
E(K : P1 P2 ... Pn) = C0 C1 C2 ... Cn
where
C0 = I
Ci = e(K : (C(i-1) ⊕ Pi))   for all i > 0
(Figure 2: Cipher Block Chaining. Each plaintext block Pi is XORed with the
previous ciphertext block and the result encrypted under the key to form Ci.)
Here, e(K :) is the block encryption function used with key K. The en-
cryption process is shown in figure 2.
Successive ciphertext blocks are decrypted using the keyed block func-
tion d(K :) according to the rule
Pi = C(i-1) ⊕ d(K : Ci)   for all i > 0
Thus, for any successive pair of ciphertext blocks we can recover the plain-
text block corresponding to the second (provided we have the key).
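The two equations above can be illustrated with the following Python sketch; the XOR-based function e standing in for the block cipher is a toy chosen purely for illustration, and the helper names are ours rather than part of any standard.

    import os

    BLOCK = 8  # toy 8-byte blocks

    def e(key, block):
        # Toy "block cipher": XOR with the key (NOT secure, illustration only).
        return bytes(b ^ k for b, k in zip(block, key))

    def d(key, block):
        # XOR is its own inverse.
        return e(key, block)

    def cbc_encrypt(key, iv, plaintext_blocks):
        # C0 = I; Ci = e(K : C(i-1) XOR Pi)
        cipher = [iv]
        for p in plaintext_blocks:
            prev = cipher[-1]
            cipher.append(e(key, bytes(a ^ b for a, b in zip(prev, p))))
        return cipher

    def cbc_decrypt(key, cipher_blocks):
        # Pi = C(i-1) XOR d(K : Ci)
        plain = []
        for prev, c in zip(cipher_blocks, cipher_blocks[1:]):
            plain.append(bytes(a ^ b for a, b in zip(prev, d(key, c))))
        return plain

    key = os.urandom(BLOCK)
    iv = os.urandom(BLOCK)           # the initial block I
    blocks = [b"PAYMENT:", b"100 GBP."]
    ct = cbc_encrypt(key, iv, blocks)
    assert cbc_decrypt(key, ct) == blocks
    # Any successive pair C(i-1), Ci decrypts on its own to the block Pi:
    assert cbc_decrypt(key, ct[1:3]) == blocks[1:2]

The final assertion illustrates a property returned to in section 4: any successive pair of ciphertext blocks decrypts, on its own, to a valid plaintext block.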
If we choose a different initial block I in each case then even identical
plaintext messages will have different ciphertexts. It is widely acknowl-
edged that non-repeating initial blocks are essential for adequate preser-
vation of confidentiality (unless the first block in a message is always dif-
ferent in which case it is known as a confounding block). Authors differ as
to whether they should be passed between communicating parties in the
clear (which Schneier [99] thinks is fine) or encrypted ( as recommended
by Davies and Price [39]). Voydock and Kent [112] address many aspects of
initial block usage insisting that they should be pseudo-random for CBC.
The rationale given there and in various other texts is incomplete or sim-
ply wrong. For example Schneier states that an initial block can be a serial
number that increments after each message but does not repeat. Clark and
Jacob [36] have shown that such an approach is potentially disastrous; they
show how for the most celebrated authentication protocol of all, adoption
of this approach would allow a third party to create the ciphertext for an
arbitrary message without having access to the key!
In certain network applications it is useful to be able to transmit, re-
ceive and process data chunks of size less than the block size (e.g. the pro-
cessing of character-sized chunks from a terminal). In such cases Cipher
Feedback mode (CFB) might be used. Figure 3 is based on a figure by
Schneier [99] and shows an 8-bit CFB with a 64-bit block algorithm. Here
the contents of a shift register are initialised with some value. The contents
of the shift register are encrypted as a block, and the leftmost byte of the
result is XORed with the next plaintext byte to produce a ciphertext byte.
The contents of the register are now shifted left by 8 bits and the most re-
cently created ciphertext byte is placed in the rightmost byte of the register
and the procedure repeats. The decryption procedure is easily obtained.
(Figure 3: 8-bit Cipher Feedback mode. The shift register contents are
encrypted under the key; the leftmost byte of the result is XORed with the
plaintext byte Pi to give the ciphertext byte Ci, which is fed back into the
rightmost byte of the register.)
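The 8-bit CFB arrangement of figure 3 can be sketched as follows; again a toy keyed function stands in for the 64-bit block algorithm, purely for illustration.

    import os

    def block_encrypt(key, block):
        # Stand-in for a 64-bit block cipher keyed with `key` (illustration only);
        # a real system would use DES, IDEA or similar.
        out = bytes(b ^ k for b, k in zip(block, key))
        return out[1:] + out[:1]

    def cfb8_encrypt(key, iv, plaintext):
        register = bytearray(iv)          # 8-byte shift register, initialised to IV
        ciphertext = bytearray()
        for p in plaintext:
            keystream_byte = block_encrypt(key, bytes(register))[0]  # leftmost byte
            c = p ^ keystream_byte
            ciphertext.append(c)
            register[:] = register[1:] + bytes([c])   # shift left, feed ciphertext back
        return bytes(ciphertext)

    def cfb8_decrypt(key, iv, ciphertext):
        register = bytearray(iv)
        plaintext = bytearray()
        for c in ciphertext:
            keystream_byte = block_encrypt(key, bytes(register))[0]
            plaintext.append(c ^ keystream_byte)
            register[:] = register[1:] + bytes([c])   # feedback uses the ciphertext byte
        return bytes(plaintext)

    key, iv = os.urandom(8), os.urandom(8)
    msg = b"login: alice"
    assert cfb8_decrypt(key, iv, cfb8_encrypt(key, iv, msg)) == msg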
Schneier states that the initialisation vectors for CFB and OFB should
be different for each message encrypted, though there is no additional ben-
efit from sending them encrypted [99]. Voydock and Kent disagree [112].
The error propagation properties of the different modes of encryption
vary but are not detailed here. The reader is referred to Schneier [99] or
Davies and Price [39] for details.
Other modes are possible, e.g. Counter mode (like OFB but with the
contents of the register simply incremented each time, i.e. no feedback),
Block Chaining mode (where the input to the encryption is the XOR of
all previous ciphertext blocks and the current plaintext block) and Prop-
agating Cipher Block Chaining (where the input to the encryption is the
XOR of the current and the immediately previous plaintext blocks and all
previous ciphertext blocks). There are a variety of other modes which are
somewhat esoteric; we shall not describe them here.
(Figure 4: Output Feedback mode. The shift register contents are encrypted
under the key; the leftmost byte is XORed with Pi to give Ci, and it is this
output byte, rather than the ciphertext byte, that is fed back into the register.)
2.2.4 Stream Ciphers
A stream cipher generates a stream of key bits (a keystream) which is
combined, typically by XOR, with the bits of plaintext. Clearly we should
wish the bit-stream produced to be as
random as possible. Indeed, a vast amount of work into pseudo-random
stream generation has been carried out (see [99]). The streams produced
depend on a key in some way (if identical streams were produced each
time then cryptanalysis becomes easy). A keystream generator comprises
a finite state machine and an output function. Figure 5 shows two basic
keystream generator arrangements.
(Figure 5: keystream generators. A next-state function, parameterised by
the key, updates the internal state; an output function produces the
keystream Ki, which is XORed with the plaintext Pi to give the ciphertext Ci.)
2.3 Public Key Cryptography
In a public key cryptosystem each principal A is associated with some key
pair (Ka, Ka⁻¹). The public key Ka is made publicly available but the
principal A does not reveal the private key Ka⁻¹. Any principal can encrypt
a message M using Ka, but only A can decrypt the result (using Ka⁻¹); in
this way confidentiality of messages sent to A can be ensured.
Some public key algorithms allow the private key to be used to encrypt
plaintext with the public key being used to decrypt the corresponding
ciphertext. If a ciphertext C decrypts (using Ka) to a meaningful plaintext
message P then it is assumed that the ciphertext must have been created
by A using the key Ka⁻¹. This can be used to guarantee the authenticity
of the message. The most widely known public key algorithm that allows
such use was developed by Rivest, Shamir and Adleman [92] and is uni-
versally referred to as RSA. Such algorithms are often said to provide a
digital signature capability. Simply encrypting using a private key does not
constitute a signature. Various checks must also be made by the receiver
(see Gollman [49]).
The RSA algorithm [92] works as follows:
1. choose two large primes p and q and form the modulus n = pq
2. choose e relatively prime to φ(n) = (p - 1)(q - 1)
3. use Euclid's algorithm to generate a d such that ed = 1 mod φ(n)
4. make the pair (n, e) publicly available – this is the public key. The
private key is d.
Here encryption and decryption are the same operation (modular expo-
nentiation).
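The steps above can be traced with the following toy Python sketch using artificially small primes; real moduli are of course hundreds of digits long, and the sketch is purely illustrative.

    from math import gcd

    # Toy RSA key generation with artificially small primes (illustration only).
    p, q = 61, 53
    n = p * q                      # 1. modulus n = p*q
    phi = (p - 1) * (q - 1)        # 2. phi(n) = (p-1)(q-1)
    e = 17                         #    e chosen relatively prime to phi(n)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)            # 3. d such that e*d = 1 mod phi(n)
    public_key, private_key = (n, e), d   # 4. publish (n, e); keep d secret

    m = 42                               # a message encoded as a number < n
    c = pow(m, e, n)                     # encryption: modular exponentiation
    assert pow(c, d, n) == m             # decryption: same operation with d

    # Signing is the reverse use of the key pair: "encrypt" with d, verify with e.
    signature = pow(m, d, n)
    assert pow(signature, e, n) == m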
A sender A can communicate with B preserving secrecy and ensuring
authenticity by first signing a message using his own private key Ka⁻¹ and
then encrypting the result using B’s public key Kb. B uses his private key
to decrypt and then uses A’s public key to obtain the original message.
Public key algorithms tend to rely on the (supposed) computational
difficulty of solving certain problems (e.g. finding discrete logarithms for
the Diffie Hellman algorithm and finding prime factors for RSA). Again,
key length is an issue. Computing power is increasing rapidly and there
have been significant advances. For example, ten years ago 512 bit keys for
RSA were thought to be very secure; today 512 bits is considered a mini-
mum requirement (and 1024 bits is often recommended). Sheer processing
capability also affects the usability of public key encryption. Public key al-
gorithms are generally much slower than symmetric key algorithms.
Schneier [99] gives a good account of relative speeds of algorithms.
There are some very useful and informative papers that deal (at least
in part) with public key cryptography. Hellman provides an excellent in-
troduction to public key cryptography and the underlying mathematics
[58]. Willet provides a much higher level view [115]. Gordon [56] pro-
vides a good but simple introduction. Diffie provides an exciting account
of the first decade of public key cryptography [43] with a particularly good
account of the attacks on knapsacks. Brickell and Odlyzko provide an ac-
count of various attacks on public key systems (and others) [25]. Other
aspects are covered in Massey’s informative general paper on cryptology
[81].
2.4 One-way Hash Algorithms
We shall often require evidence that a message that has been sent has not
been subject to modification in any way. Typically this is carried out using
a hash function. A hash function H when applied to a message M yields
a value H(M) of specific length known as the hash value of that message.
H(M) is often referred to as a message digest. The mapping of messages to
digests is one-way; given M and H(M) it should be computationally in-
feasible to find M’ such that H(M’)=H(M). The digest is a form of reduced
message calculated using a publicly known technique. A receiver of a
message can check whether a message and a corresponding digest agree.
Hash functions are largely intended for use in conjunction with cryptog-
raphy to provide signatures.
If M is a message then A can provide evidence to B that he created
it, and that it has not been tampered with, by calculating E(Kab : H(M))
and sending the message M together with the newly calculated encrypted
hash value. On receiving the message, B can calculate H(M) and then
E(Kab : H(M)) and check whether the value agrees with the encrypted
hash value received. Since the amount of encryption is small this is a quite
efficient means to demonstrate authenticity (assuming A and B do not fake
messages from the other).
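A sketch of this encrypted-digest check is given below; the XOR-based stand-in for E(Kab : ...) is purely illustrative and not a recommendation of any particular mechanism.

    import hashlib, os

    def toy_encrypt(key, data):
        # Stand-in for E(Kab : ...): XOR with a keystream derived from the key
        # (illustration only; a real system would use a proper cipher).
        stream = hashlib.sha256(key).digest()
        return bytes(d ^ s for d, s in zip(data, stream))

    def protect(kab, message):
        digest = hashlib.sha256(message).digest()      # H(M)
        return message, toy_encrypt(kab, digest)       # send M together with E(Kab : H(M))

    def check(kab, message, encrypted_digest):
        # Receiver recomputes H(M), encrypts it and compares with the value received.
        return toy_encrypt(kab, hashlib.sha256(message).digest()) == encrypted_digest

    kab = os.urandom(16)
    msg, tag = protect(kab, b"transfer 100 pounds to B")
    assert check(kab, msg, tag)
    assert not check(kab, b"transfer 999 pounds to Z", tag)   # tampering is detected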
2.5 Notational Conventions
In this report we shall use the notation E(K : M) to denote the result of
encrypting message plaintext M with key K.
A protocol run consists of a sequence of messages between principals
and will be described using the standard notation. Principals are generally
denoted by capitals such as A, B and S (for a server). The sequence of
messages
(1) A → B : M1
(2) B → S : M2
(3) S → B : M3
denotes a protocol of three messages: in the first message A sends M1 to B,
in the second B sends M2 to the server S, and so on. Each message is written
as a list of components; a message such as A, E(Kab : Na) comprises a
principal identifier A together with an encrypted nonce E(Kab : Na).
3 Protocol Types
In this section we provide an overview of various forms of authentica-
tion protocol in use today. At the highest level we have categorised them
according to the principal cryptographic approach taken, i.e. symmetric
key or public key. We distinguish also between those that use (one or
more) trusted third parties to carry out some agreed function and those
that operate purely between two communicating principals that wish to
achieve some mode of authentication. There are further distinctions that
can be made: the number of messages involved in the protocols (e.g. one-
pass, two-pass, three-pass etc.) and whether one principal wishes to con-
vince the second of some matter (one-way or unilateral authentication) or
whether both parties wish to convince each other of something (two-way
or mutual authentication). These distinctions are also made by the ISO
entity authentication standards (see [61]).
3.1 Symmetric Key Without Trusted Third Party
A simple example is the ISO Symmetric Key One-Pass Unilateral
Authentication Protocol (see 6.1.1):
(1) A → B : Text2, E(Kab : [Ta | Na], B, Text1)
Here the text fields shown are optional; their use is implementation
specific (and we shall ignore them in this discussion). We can see that the
claimant A (i.e. the one who wishes to prove something) sends an en-
crypted message containing a nonce and the identifier of the verifier (i.e.
the principal to whom the claim is made). The nonce may be a timestamp
Ta or a sequence number Na depending on the capabilities of the envi-
ronment and the communicating principals. On receiving this message,
B, who believes that the key Kab is known only to himself and A, may
deduce that A has recently sent this message if the sequence number is
appropriate or if the timestamp has a recent value. Note here that if a ma-
licious principal has unfettered access to the network medium then use
of sequence numbers will be insufficient (since he can record message (1),
prevent B from receiving it, and replay it to B at a later time).
The best-known protocols that do not use a trusted third party are sim-
ple challenge-response mechanisms. One principal A issues data to a sec-
ond principal B. B then carries out some transformation and sends the
result to A who checks to see if the appropriate transformation has oc-
curred. Figure 6 shows a simple challenge-response protocol. In this case
the nonce Na should be random. If the nonce were a sequence number,
or were otherwise predictable, a malicious principal could issue the next
nonce value to B and record the response. When A genuinely issued the
same nonce value at a later date the intruder could replay B's earlier re-
sponse to complete the protocol. A could conclude only that the message
he receives was created at some time by B (but not necessarily in response
to his most recent challenge).
(Figure 6: a simple challenge-response protocol. Principal A sends the
nonce Na to principal B; B replies with E(Kab : Na).)
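A minimal sketch of the exchange of figure 6 follows; a keyed hash stands in for the encryption, purely for illustration.

    import os, hashlib

    def E(key, data):
        # Stand-in for E(Kab : ...): a keyed hash (illustration only).
        return hashlib.sha256(key + data).digest()

    kab = os.urandom(16)                 # key shared by A and B

    # A issues a challenge; it must be unpredictable (not a sequence number),
    # otherwise an intruder could obtain B's response to the next value in advance.
    na = os.urandom(16)

    # B demonstrates knowledge of Kab by transforming the challenge...
    response_from_b = E(kab, na)

    # ...and A checks that the expected transformation was applied.
    assert response_from_b == E(kab, na)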
3.2 Symmetric Key With Trusted Third Party
Symmetric key protocols that use a trusted third party (TTP) are by far the
most numerous in the literature. The most celebrated protocol of all time,
the Needham Schroeder Symmetric Key Authentication protocol [87] is
described below:
(1) A → S : A, B, Na
(2) S → A : E(Kas : Na, B, Kab, E(Kbs : Kab, A))
(3) A → B : E(Kbs : Kab, A)
(4) B → A : E(Kab : Nb)
(5) A → B : E(Kab : Nb - 1)
Principal A requests from the trusted server S a key for communication with
B, quoting a nonce Na. The server responds with message (2); A decrypts it,
checks his nonce Na, extracts the new key Kab, and forwards the component
E(Kbs : Kab, A) to B as message (3).
Principal B decrypts this message to discover the key Kab and that it
is to be used for communication with A. He then generates a nonce Nb,
encrypts it (using the newly obtained key), and sends the result to A as
message (4).
Principal A, who possesses the appropriate key Kab , decrypts it, forms
Nb - 1, encrypts it and sends the result back to B as message (5). B de-
crypts this and checks the result is correct. The purpose of this exchange
is to convince B that A is genuinely operational (and that message 3 was
not simply the replay of an old message).
At the end of a correct run of the protocol, both principals should be
in possession of the secret key Kab newly generated by the server S and
should believe that the other principal has the key. Rather, this is what the
protocol is intended to achieve. We shall show in section 4.1 that it is in
fact flawed.
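A correct run of the protocol can be sketched abstractly as follows; encryption is modelled simply as a box tagged with the key needed to open it, so the sketch says nothing about real cryptographic strength.

    import secrets

    def enc(key, *payload):
        # "Encryption" modelled abstractly as a box tagged with its key.
        return ("enc", key, payload)

    def dec(key, box):
        tag, k, payload = box
        assert tag == "enc" and k == key, "wrong key"
        return payload

    kas, kbs = secrets.randbits(64), secrets.randbits(64)   # keys shared with S
    na, nb = secrets.randbits(64), secrets.randbits(64)     # nonces
    kab = secrets.randbits(64)                              # session key made by S

    msg1 = ("A", "B", na)                                   # (1) A -> S : A, B, Na
    msg2 = enc(kas, na, "B", kab, enc(kbs, kab, "A"))       # (2) S -> A
    na_check, b_check, kab_at_a, ticket = dec(kas, msg2)
    assert (na_check, b_check) == (na, "B")                 # A checks Na and B
    msg3 = ticket                                           # (3) A -> B : E(Kbs : Kab, A)
    kab_at_b, a_name = dec(kbs, msg3)                       # B recovers Kab and A's name
    msg4 = enc(kab_at_b, nb)                                # (4) B -> A : E(Kab : Nb)
    (nb_at_a,) = dec(kab_at_a, msg4)
    msg5 = enc(kab_at_a, nb_at_a - 1)                       # (5) A -> B : E(Kab : Nb - 1)
    assert dec(kab_at_b, msg5) == (nb - 1,)                 # B checks the handshake

Note that nothing in the run ties message (3) to anything B knows to be fresh; this is precisely the flaw discussed in section 4.1.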
There have been many other protocols that have used a trusted third
party to generate and distribute keys in a similar way: the Amended
Needham-Schroeder Protocol [88] (see 6.3.4), the Yahalom Protocol (see
6.3.6), the Otway-Rees Protocol [91] (see also 6.3.3) which is essentially
the same as the Amended Needham-Schroeder Protocol. Woo and Lam
provide several authentication protocols [116, 117] (6.3.10). Other exam-
ples include Gong's mutual authentication protocol and Carlsen's secret key
initiator protocol (for mobile phone networks) (6.10.2 and 6.3.7) and the
ISO Four- and Five-Pass Mutual Authentication Protocols [62] (6.3.8 and 6.3.9).
Denning and Sacco suggested fixing problems in the Needham
Schroeder protocol using timestamps. The Denning Sacco Conventional
Key Protocol replaces the first three messages of the Needham Schroeder
protocol with:
(1) A → S : A, B
(2) S → A : E(Kas : B, Kab, Ts, E(Kbs : Kab, A, Ts))
(3) A → B : E(Kbs : Kab, A, Ts)
3.3 Public Key
Public key encryption and decryption are computationally expensive (a typical
hardware implementation of RSA has a throughput of 64 Kbaud). How-
ever, exchanging symmetric encryption keys using public key cryptogra-
phy provides an excellent use of the technology and several such distribu-
tion schemes have been created.
Needham and Schroeder proposed the following protocol in their clas-
sic work [87]:
(1) A → S : A, B
(2) S → A : E(Ks⁻¹ : Kb, B)
(3) A → B : E(Kb : Na, A)
(4) B → S : B, A
(5) S → B : E(Ks⁻¹ : Ka, A)
(6) B → A : E(Ka : Na, Nb)
(7) A → B : E(Kb : Nb)
private key. It can also be used to sign a hash value of a complete mes-
sage. The actual message can also be sent in the clear with the encrypted
hash value appended. A major alternative to the use of RSA developed,
amid some controversy, by the United States National Security Agency
(NSA) is the Digital Signature Algorithm. It is based on El Gamal encryp-
tion. Schneier provides a good account of the algorithm [99] and a good
journalistic account of the controversy can be found in the paper by Adam
[2]. Other digital signature schemes include ESIGN and McEliece (based on
algebraic coding theory). Akl provides a good tutorial guide to digital
signatures in general [3].
3.6 General
There are many applications of authentication technology that are not dis-
cussed above. Simmons provides an example of the need for authenticity
in the face of a very hostile enemy for the purposes of verifying nuclear
test ban treaties [100]. Anderson provides an indication of how electronic
payment systems work [9]. The same author discusses societal and legal
aspects of cryptographic technology [8], [7].
4 Attacking Authentication protocols
In this section we detail various ways in which protocols fail and give
examples.
Suppose that an intruder Z has recorded an old run of the Needham Schroeder
protocol of section 3.2 and has compromised the session key K'ab distributed
in that run. He can then replay the old message (3) to B:
(3) Z(A) → B : E(Kbs : K'ab, A)
(4) B → Z(A) : E(K'ab : Nb)
(5) Z(A) → B : E(K'ab : Nb - 1)
B believes he is following the correct protocol. Z is able to form the cor-
0
rect response in (5) because he knows the compromised key K ab. He can
now engage in communication with B using the compromised key and
masquerade as A. Denning and Sacco suggested that the problem could
be fixed by the use of timestamps [42]. The original authors suggested
an alternative fix to this problem by means of an extra handshake at the
beginning of the protocol [88].
key). The message is represented at the concrete level as a sequence of bits.
A type flaw arises when the recipient of a message accepts that message as
valid but imposes a different interpretation on the bit sequence than the
principal who created it.
For example, consider the Andrew Secure RPC Protocol (see 6.1.6):
(1) A → B : A, E(Kab : Na)
(2) B → A : E(Kab : Na + 1, Nb)
(3) A → B : E(Kab : Nb + 1)
(4) B → A : E(Kab : K'ab, N'b)
Here A and B already share the key Kab. A issues the nonce Na in message
(1); B returns it incremented, together with his own nonce Nb; A then returns
E(Kab : Nb + 1) to B. B now creates a session key K'ab and distributes
it (encrypted) together with a sequence number identifier N'b for future
communication.
However, if the nonces and keys are both represented as bit sequences
of the same length, say 64 bits, then an intruder could record message (2),
intercept message (3) and replay message (2) as message (4). Thus the
attack looks like:
(1) A → B : A, E(Kab : Na)
(2) B → A : E(Kab : Na + 1, Nb)
(3) A → Z(B) : E(Kab : Nb + 1)
(4) Z(B) → A : E(Kab : Na + 1, Nb)
(Figure: a type flaw. Under encryption and decryption with the same key the
bit sequence created as Na + 1, Nb by its originator is recovered unchanged,
but the recipient imposes a different interpretation upon it.)
A further example is provided by the Otway-Rees protocol [91] (see 6.3.3):
(1) A → B : M, A, B, E(Kas : Na, M, A, B)
(2) B → S : M, A, B, E(Kas : Na, M, A, B), E(Kbs : Nb, M, A, B)
(3) S → B : M, E(Kas : Na, Kab), E(Kbs : Nb, Kab)
(4) B → A : M, E(Kas : Na, Kab)
The above protocol causes a key Kab created by the trusted server S to
be distributed to principals A and B. M is a protocol run identifier.
After initiating the protocol A expects to receive a message back in (4)
that contains the nonce Na used in (1) together with a new session key Kab
created by S. If M is (say) 32 bits long, A and B each 16 bits long and Kab
is 64 bits then an intruder Z can simply replay the encrypted component
of message (1) as the encrypted component of message (4). Thus
(1) A → Z(B) : M, A, B, E(Kas : Na, M, A, B)
(4) Z(B) → A : M, E(Kas : Na, M, A, B)
On decrypting the replayed component, A finds the expected nonce
Na and accepts (M, A, B) as the new key. M, A and B are all publicly known
(since they were broadcast in the clear). Similarly, it is clear that an in-
truder can play the role of S in messages (3) and (4) simply replaying the
encrypted components of message (2) back to B. The attack is:
(1) A → B : M, A, B, E(Kas : Na, M, A, B)
(2) B → Z(S) : M, A, B, E(Kas : Na, M, A, B), E(Kbs : Nb, M, A, B)
(3) Z(S) → B : M, E(Kas : Na, M, A, B), E(Kbs : Nb, M, A, B)
(4) B → A : M, E(Kas : Na, M, A, B)
Further examples of type flaws are given by Syverson [109] and Hwang
et al [60].
Consider a protocol in which A issues the challenge E(Kab : Na) to B and
expects the response E(Kab : Na + 1). The intruder Z can attack it using
two interleaved runs:
(1.1) A → Z(B) : E(Kab : Na)
(2.1) Z(B) → A : E(Kab : Na)
(2.2) A → Z(B) : E(Kab : Na + 1)
(1.2) Z(B) → A : E(Kab : Na + 1)
Here A initiates the first protocol with message (1.1). Z now pretends
to be B and starts the second protocol run with message (2.1), which is
simply a replay of message (1.1). A now replies to this challenge with
message (2.2). But this is the precise value A expects to receive back in the
first protocol run. Z therefore replays this as message (1.2). At the very
least A believes that B is operational. In fact, B may no longer exist. The
attack is illustrated in figure 8. Solid arrows indicate messages of the first
protocol run, broken arrows indicate messages of the second protocol run.
(Figure 8: the attack. Solid arrows: A sends E(Kab:Na) to the intruder Z and
later receives E(Kab:Na+1); broken arrows: Z reflects E(Kab:Na) back to A
as a new challenge and receives E(Kab:Na+1) in reply.)
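The reflection can be sketched as follows; the XOR-based "encryption" is a toy used purely for illustration.

    import secrets

    def E(key, value):
        # Toy reversible "encryption": XOR with the key (illustration only).
        return value ^ key

    kab = secrets.randbits(64)            # shared by A and B, unknown to intruder Z

    na = secrets.randbits(64)
    msg_1_1 = E(kab, na)                  # (1.1) A -> Z(B) : E(Kab : Na)

    # Z cannot decrypt the challenge, but reflects it back as a "new" run
    msg_2_1 = msg_1_1                     # (2.1) Z(B) -> A : E(Kab : Na)

    # A decrypts the reflected challenge and answers it with the value plus one
    challenge = E(kab, msg_2_1)           # XOR is its own inverse
    msg_2_2 = E(kab, challenge + 1)       # (2.2) A -> Z(B) : E(Kab : Na + 1)

    # Z replays A's own answer to complete the first run
    msg_1_2 = msg_2_2                     # (1.2) Z(B) -> A : E(Kab : Na + 1)
    assert msg_1_2 == E(kab, na + 1)      # A accepts, although B never took part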
The Wide Mouthed Frog Protocol (see 6.3.5) is as follows:
(1) A → S : A, E(Kas : Ta, B, Kab)
(2) S → B : E(Kbs : Ts, A, Kab)
Here, each principal (A and B in the above) shares a key with the server
S. If A wishes to communicate with a principal B then he generates a key
Kab and a timestamp Ta and forms message (1) which is sent to S.
On receiving message (1) S checks whether the timestamp Ta is "timely"
and, if so, forwards the key to B with its own timestamp Ts. B checks
whether message (2) has a timestamp that is later than any other message
it has received from S (and so will detect a replay of this message).
The first way it can be attacked is by simply replaying the first mes-
sage within an appropriate time window - this will succeed since S will
produce a new second message with an updated timestamp. If S's no-
tion of timeliness is the same as B's (i.e. it accepts messages only if the
timestamp is later than that of any other message it has received from the
sender) then this attack will not work.
The second method of attack allows one protocol run to be recorded
and then the attacker continuously uses S as an oracle until he wants to
bring about re-authentication between A and B.
(1') Z(B) → S : B, E(Kbs : Ts, A, Kab)
(2') S → Z(A) : E(Kas : Ts', B, Kab)
(1'') Z(A) → S : A, E(Kas : Ts', B, Kab)
(2'') S → Z(B) : E(Kbs : Ts'', A, Kab)
The attacks described so far have not depended on the particular properties
of an encryption algorithm. The next section shows that the naïve use of
certain algorithms (that are generally considered strong) in the context of
specific protocols may produce insecure results.
Suppose a stream cipher is used for messages (4) and (5) of the Needham
Schroeder protocol. If Nb is odd then the final plaintext bit (assumed to be
the least significant bit) will be 1 and Nb - 1 will differ only in that final bit. On a bit by bit
encryption basis, the cipherstream for message (5) can be formed simply
by flipping the value of the final bit bn . On average the nonce will be odd
half of the time and so this form of attack has a half chance of succeeding.
This form of attack was originally described by Boyd [21]. It appears that
this form of attack is not limited to stream ciphers. Analysis reveals that
similar attacks can also be mounted against certain uses of cipher feedback
mode for block ciphers. Furthermore, if the element that is subject to bit
flipping represents a timestamp then the scope for mischief seems greater
(but seems unrecorded in the literature).
It is interesting to note that under the same set of assumptions a much
more virulent attack can be carried out by A. Message (3) of the protocol
is given below:
(3) A → B : E(Kbs : Kab, A)
Flipping the final bit of this message could turn the A into a C under
decryption. Since A knows the key Kab he could fool B into believing he
shared this key with C and effectively masquerade as C.
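The bit-flipping attack on messages (4) and (5) can be sketched as follows, assuming (as the attack requires) that the reply would be enciphered at the same keystream position as the challenge; the sketch is illustrative only.

    import secrets

    def keystream_xor(key_stream, data):
        # A stream cipher encrypts by XORing a keystream with the plaintext.
        return bytes(d ^ k for d, k in zip(data, key_stream))

    stream = secrets.token_bytes(8)                  # keystream known only to A and B
    nb = secrets.randbits(64) | 1                    # an odd nonce (final bit is 1)

    msg4 = keystream_xor(stream, nb.to_bytes(8, "big"))   # (4) B -> A : E(Kab : Nb)

    # The intruder never sees the key or keystream: he simply flips the final
    # ciphertext bit, which flips the final plaintext bit, turning Nb into Nb - 1.
    forged_msg5 = msg4[:-1] + bytes([msg4[-1] ^ 1])

    expected_msg5 = keystream_xor(stream, (nb - 1).to_bytes(8, "big"))
    assert forged_msg5 == expected_msg5              # B accepts the forged reply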
have the key). Suppose that E(K : P1 P2 P3) = I C1 C2 C3. Then C1 C2 C3 looks
like a ciphertext that begins with initialisation block C1, and decrypts to
P2 P3. Similarly C1 C2 decrypts to P2 (it uses C1 as an initialisation block)
and C2 C3 decrypts to P3.
Thus we can see that without appropriate additional protection valid
messages may be created if their contents are subsequences of generated
messages. To distinguish this form of attack from those that follow we
shall call this form of flaw a subsequence flaw.
Consider again message (2) of the Needham Schroeder protocol of sub-
section 4.1:
(2) S → A : E(Kas : Na, B, Kab, E(Kbs : Kab, A))
Suppose that this has ciphertext C0 C1 C2 C3 ... and that all components have
the length of a single block. The initial subsequence C0 C1 C2 then decrypts
under Kas to Na, B, which is of the form A might expect to receive in message
(3) when B has initiated the protocol. Thus, he can be fooled into accepting
the publicly known Na as a key. Thus use of CBC mode of encryption with
this protocol will not suffice.
Stubblebine and Gligor [106] have demonstrated attacks via cut and
paste methods where the ciphertexts of messages are split and conjoined
appropriately to form the ciphertexts of other messages (which should
only be formable by those in possession of the appropriate key). This is
illustrated in figure 9.
We see that the spliced ciphertext message decrypts to appropriate
plaintext except for the block immediately after the join. Denoted by X
in the figure, it is likely that it is random gibberish but in some cases that
may be precisely what is expected (e.g. if the block is expected to contain
a random number). Mao and Boyd have also highlighted the dangers of
CBC use [79], pointing out that in many cases it will be possible to deter-
mine precisely what value X takes if the intruder has knowledge of the
plaintext block corresponding to the ciphertext immediately after the ci-
phertext join. In the example shown in figure 9, we have
X = C3 ⊕ d(K : C2')
X = C3 ⊕ (C1' ⊕ P2')
and so if P2' is known then so is X, since the ciphertext blocks are publicly
broadcast.
It is dangerous to believe that attacks of the above form lose their power
if the plaintext block is not publicly known or guessable; such blocks will
generally be known to the parties communicating in a protocol who may
misuse their knowledge (see below).
(Figure 9: a cut and paste attack. Message 1 has plaintext blocks P1 ... P5 and
ciphertext C0 C1 ... C5; message 2 has plaintext blocks P1' ... P4' and ciphertext
C'0 C'1 ... C'4, both under key K. Splicing the two ciphertexts gives a message
whose block immediately after the join decrypts to X = C3 ⊕ d(K : C'2).)
Of particular note are initialisation attacks — attacks that involve modu-
lation of the initialisation vector C0 . Consider a ciphertext that starts with
C0 C1 and suppose that we know that the initial plaintext block was P1 .
Then
P1 = C0 ⊕ d(K : C1)
Now, for any block W of our choice,
W = W ⊕ P1 ⊕ P1
and so
W = C0' ⊕ d(K : C1)
where C0' = W ⊕ P1 ⊕ C0, and so C0' C1 is the ciphertext corresponding to
plaintext W. In this fashion we can replace the initial known plaintext
block P1 with our own choice W. This is potentially very disturbing since
the rest of the message is unaffected.
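The substitution C0' = W ⊕ P1 ⊕ C0 can be demonstrated with a toy CBC implementation (purely illustrative; the XOR "cipher" stands in for a real block algorithm).

    import os

    BLOCK = 8

    def e(key, block):
        # Toy block "cipher" (XOR with the key; illustration only).
        return bytes(b ^ k for b, k in zip(block, key))

    d = e   # XOR is its own inverse

    def cbc_encrypt(key, iv, blocks):
        out = [iv]
        for p in blocks:
            out.append(e(key, bytes(a ^ b for a, b in zip(out[-1], p))))
        return out

    def cbc_decrypt(key, blocks):
        return [bytes(a ^ b for a, b in zip(prev, d(key, c)))
                for prev, c in zip(blocks, blocks[1:])]

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    key = os.urandom(BLOCK)
    c0 = os.urandom(BLOCK)
    p1, p2 = b"NONCE-NA", b"KEY-DATA"            # known first block P1, rest untouched
    ct = cbc_encrypt(key, c0, [p1, p2])

    # The attacker knows P1 and wants the first block to decrypt to W instead:
    w = b"NONCE-NB"
    c0_new = xor(xor(w, p1), c0)                  # C0' = W xor P1 xor C0
    assert cbc_decrypt(key, [c0_new] + ct[1:]) == [w, p2]   # rest of message unaffected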
As an example of the danger of this attack, consider again message
(2) of the Needham Schroeder protocol. We can record message (2) of a
previous run of this protocol between A and B. In particular we can replay
the old message (2) after modifying the initial block, replacing the old (and
known) value of the nonce Na with the new one issued in the current run
of the protocol. Thus, we can impersonate the trusted server S. Now
consider the contents of message (3) of that protocol:
(3) A → B : E(Kbs : Kab, A)
Since A knows the key in message (3), he can create a new message (3)
whenever he likes for any key value he likes. One might argue that if A
wants to misbehave he can do so much more simply than this but this
misses the point: B works on the assumption that the contents of message
(3) were created by the trusted server S. This is clearly not the case.
We have illustrated these attacks using the Needham Schroeder pro-
tocol simply because it is the best known and simple to understand. The
above forms of attack present problems with a good number of protocols.
We have illustrated various forms of cryptoalgorithm dependent flaws.
The above description is by no means exhaustive. Indeed, other modes of
encryption have given rise to problems in implemented protocols. In par-
ticular, Propagating Cipher Block Chaining (PCBC) mode was shown to
be deficient and led to the Kerberos V.5 protocol adopting CBC mode (V.4
used PCBC). Criticisms of the Kerberos protocols were given by Bellovin
and Merritt [14]. Other aspects relating to Cipher Block Chaining can be
found in the recent paper by Bellare et al [13].
Consider the following messages of the SPLICE/AS authentication protocol
(see 6.8), in which the client C obtains from the authentication server AS the
public key Ks of the server S:
(1) C → AS : C, S, Nc
(2) AS → C : AS, E(Kas⁻¹ : AS, C, Nc, Ks)
The response is signed with the private key Kas⁻¹ of AS. The exchange can
be attacked as follows:
(1.1) C → Z(AS) : C, S, Nc
(2.1) Z(C) → AS : C, Z, Nc
(2.2) AS → Z(C) : AS, E(Kas⁻¹ : AS, C, Nc, Kz)
(1.2) Z(AS) → C : AS, E(Kas⁻¹ : AS, C, Nc, Kz)
Here the intruder Z intercepts the initial message from C to AS and sim-
ply replaces the identifier of the intended server S with his own identifier
Z and sends the result to AS as message (2.1). AS, believing that C has re-
quested Z's public key, replies with message (2.2). Z simply allows this to
be received by C as message (1.2). C performs appropriate decryptions and
checks and believes that he has received the public key of S. This attack
(and a similar one) was identified by Hwang and Chen [59]. They suggest
that this problem can be solved by explicitly including the identifier of the
requested server S in message (2). The protocol then becomes:
(1) C → AS : C, S, Nc
(2) AS → C : AS, E(Kas⁻¹ : AS, C, Nc, S, Ks)
Problems with signing after encryption arose some time ago with the
draft CCITT X.509 standard. L’Anson and Mitchell [12] showed certain
deficiencies in the protocols as did Burrows, Abadi and Needham [26] (see
6.9.1).
In this protocol all participants share keys with the trusted server T.
The server acts as intermediary. A accepts message (3) as proof that B sent
the message msg to him via T. The reversal of principal and message com-
ponents appears to be made to introduce asymmetry (and hence protect
against reflections). However, if msg begins with a principal identifier C
then message (3) may be passed off as a message (1) but originated by A
and intended for C. Since B chooses the contents of msg he can arrange
this. Can we protect against this by means of some integrity check? Gen-
erally the answer will be yes but this is not without its pitfalls. If messages
are of variable length then in many cases, it may be possible to embed a
whole message (including CRC check say) in the msg component. If CBC
mode of encryption is used then a perfectly formed encrypted message
could be extracted (this depends on how initial vectors are chosen).
Note that user data is very common, typically in the form of principal
identifiers or nonces and the like. Thus if a plaintext message P were 3
blocks long (including checks) and another message had a freely chosen
nonce N then if this nonce is 3 or more blocks in length then a CBC encap-
sulation attack becomes possible. Similar attacks will hold when a stream
cipher is used (but here it will generally have to be the initial segment of a
message).
4.8 Conclusions
Protocol construction might seem a simple task; protocols often comprise
only a few messages. This is, however, clearly deceptive and the examples
we have shown above indicate that the invention of secure protocols is a
remarkably subtle affair.
The current explosion in distributed system development and network
usage means that there is a pressing need for a framework and tool sup-
port for the rigorous development and analysis of new security protocols.
Although significant advances have been made in recent years, there is
clearly some way to go! As Lowe has shown [76] the same mistakes seem
to be made time after time.
There are, however, signs that the community is getting to grips with
the matter at hand. There is a gradual realisation that it is the whole sys-
tem that is important and that a considerable number of factors need to
be taken into account. Anderson emphasises the management aspects in
banks [5]. Abadi and Needham take a strong practical engineering ap-
proach providing ten useful rules of thumb in their excellent general de-
sign guide [1].
The subtlety of some attacks indicates that a systematic (and auto-
mated) approach to analysis is essential. The next section indicates some
of the methods and tools that have been used to date.
5 Formal Methods for Analysis
In this section we review the major approaches to the specification and
analysis of authentication protocols. Several methods have been tried,
each with their strengths and weaknesses. We address them as follows:
This is the classification used by Rubin and Honeyman [95] in their re-
view article. As indicated by Rubin and Honeyman, the above methods as
implemented are all independent of the cryptographic mechanism used.
This is of course a strength since in producing a protocol specification we
might not yet wish to specify a particular implementational mechanism.
However, it also highlights a gap in the formal support for protocol de-
velopment: tool support for the identification of cryptosystem dependent
insecurities.
language has been used in many areas of security outside of authentica-
tion. Within the UK it is often the language of choice for specification for
Governmental agencies. Details of these uses are omitted here. More re-
cently, the B notation has been used to specify authentication systems [17].
This method shows some promise as tool support emerges.
The use of such state-based techniques seems of limited use. There is
little or no attempt to model an attacker (Kemmerer models a passive in-
truder, Mao and Boyd model none). There is an implicit assumption that
the functionality specified is sufficient to maintain security. Without an ex-
plicit statement of what attacks are possible it is impossible to see whether
the specified operations actually do maintain security. Such methods are
primarily concerned with the preservation of correctness rather than secu-
rity.
Other formal specification techniques have been used for authentica-
tion protocols, e.g. LOTOS has been used to specify the X.509 directory
framework. Finite-state machines have also been used by several authors
for the specification and analysis of protocols. None of these uses provides
analytical support for security in the face of an active intruder. Rubin and
Honeyman [95] provides some details.
Recent work by Formal Systems (Europe) Ltd and the Programming Re-
search Group at Oxford [94] has used verification techniques for process
algebras to analyse security protocols. In particular work has been carried
out using CSP. Principals in the protocol are specified as CSP processes op-
erating in parallel. In addition, a general attacker is added that can carry
out actions that may reasonably be expected of an attack (listening, faking,
replaying etc.)
An authentic run of the protocol is specified (the protocol terminates
with success only if the message sequence is what the protocol intended).
The implementation of the protocol which comprises the various princi-
pals as agents must now be shown to satisfy the specification. The Fail-
ures Divergences Refinement (FDR) tool is used to check possible traces of
the implementation against the specification. Roscoe and Gardiner have
created a variety of heuristics to prune down the search space to make the
model checking feasible.
The results have been very promising (subtle and hitherto unknown
protocol flaws have been discovered using the approach). For example,
17 years after its publication a flaw was found in the Needham Schroeder
Public Key protocol by Lowe using the FDR tool [75]. See also [77]. Roscoe
and Gardiner provide an account of the initial results of their research in
[94]. The extension of the work to handle algebraic elements is also avail-
able [47] [48]. A particularly pleasing part of the work is the willingness to
investigate the operation of protocols under the relaxation of trust in prin-
cipals (or the weakening of assumptions). Further work on the use of CSP
to capture and verify security properties has been carried out by Schnei-
der [97]. An overview of the CSP approach to authentication protocols is
given by Ryan [96].
Traditionally, issues of trust have been dealt with using belief logics and
issues of security have been dealt with using knowledge logics. Syverson
[107] provides a good overview of how logics can be used for the analysis
of authentication protocols. He indicates that it is possible to reason about
both trust and security using either approach but that in practice he has
found that epistemic logics are more efficient. The greatest amount of ef-
fort has been expended in the use of belief logics and it is to this that we
turn our attention first.
Nessett criticised BAN logic using an example protocol in which a session
key is distributed encrypted under a private key and broadcast to the
network. Since the correspond-
ing public key is generally known, the message can be decrypted by all
to obtain the secret shared key. In their rejoinder [27] the BAN authors
point out that their logic dealt with trust and not confidentiality, stating
that the obvious publication of the shared key in the indicated manner
contradicted a belief in its suitability for use.
This would appear correct, but the situation is still rather unsatisfying.
Additional problems have been identified. Snekkenes [102] showed that
permutations of protocol steps left the results unaffected.
It is possible that a principal may decrypt some random text to obtain
some putative ”formula” using some key that he holds. For the decryp-
tion to succeed the result of decryption must be meaningful in some way.
Gong, Needham and Yahalom [55] introduce the notion of recognisability
in their logic (generally referred to as GNY) to cater for this. Also, the orig-
inal BAN logic assumes that there is sufficient redundancy in a message
for a principal to detect a message he himself originated (thus reflection
attacks are assumed to be catered for outside of BAN analysis). GNY logic
makes origination explicit. GNY allows preconditions to be attached to
rules to achieve different levels of belief. Thus, different levels of trust
are allowed by the logic. Most BAN work concentrates on the analysis of
protocols. When used for development, problems may arise because com-
pletely infeasible protocols may be specified that nevertheless achieve the
desired goals according to the protocol (e.g. by specifying that principals
send messages that contain information they simply do not have). This
is dealt with by Gong [52] whose extended logic requires that principals
make use only of information that is legitimately available to them.
Boyd and Mao [24] provide many criticisms of BAN logic (and other
descendants): the formalisation approach is somewhat vague; it allows
beliefs that may legitimately be regarded as nonsensical (e.g. belief in a
nonce) and the method of determining assumptions is ad hoc. Instead they
provide a language for describing protocols and a partially mechanised
approach to idealisation. As pointed out by Rubin and Honeyman [95]
there is still informal judgement at work in the idealisation process. The
reasoning process is backwards (rather than forwards as in BAN logic),
thus the reasoning proceeds from the desired conclusion to derive initial
beliefs.
There have been other belief-logic approaches. Boyd and Mao have in-
troduced a non-monotonic logic of belief (i.e. one which allows previously
held beliefs to be revoked) [24]. Campbell et al [30] introduce the notion
of uncertainty into BAN by assigning probabilities to assumptions and to
rules of inference. This allows conclusions drawn to be treated as uncer-
tain. Linear programming methods are used to determine the precise
bounds of probabilities.
Kessler and Wedel modify BAN to allow the incorporation of plaintext
messages [71]. This widens the scope of what can be analysed. They also
replace the nonce-verification rule of BAN with a ”has recently said ” rule.
A recipient of a message no longer believes that the sender of a message
believes the contents, rather he now just deduces that the sender sent it
recently. A rule is introduced to allow a principal to try keys that he has
(or can generate) without actually believing that the key is appropriate for
the message in question. Kessler and Wedel’s most important suggestion
is the incorporation of a passive eavesdropper into the system. By the
determination (by closure) of information available to such an intruder,
certain types of confidentiality breaches can be detected (e.g. the Nes-
sett flaw). The authors provide an example of BAN's inability to deal with
a parallel session attack. Recent work by Boyd and Mao has indicated that
care needs to be taken when cleartext is omitted [23], but van Oorschot disputes
the views they take [120].
Overall, BAN has proved of substantial use. It often seems like a marked
improvement on its successors, which have added conceptual apparatus to
deal with its perceived deficiencies at the expense of a considerable increase
in complexity. This is indeed the view of Roger Needham (commenting
on GNY logic). Kessler and Wedel note that BAN extensions tend to be
extensions to the original BAN logic, not to its successors. BAN logic has
unearthed many protocol flaws and provides a very cost-effective means
of detecting (some) flaws. In terms of value for money it has much to be
said for it. The rule would appear to be ”Try BAN first; it doesn’t cost a
great deal and it often produces results.” The method is clearly not with-
out its difficulties; it should be regarded as a useful tool. BAN logic deals
with the beliefs of trustworthy principals; it does not deal with confi-
dentiality. Since confidentiality is essential to maintaining authentication
other methods will need to be brought to bear for system security.
An important aspect of the BAN approach is that it forces the analyst
to be precise about what the goals and assumptions of a protocol actually
are. It is often very difficult to determine these from many specifications.
The work in [32] indicates how various deficiencies of the standard notation
can be overcome by providing rules for translating a standard protocol
specification into a CKT5 logic specification. Snekkenes has shown that the sort of analysis
carried out in the Bieber method is insufficient to detect multi-role flaws,
i.e. where a principal does not restrict himself to playing just one agent.
He also suggests how to extend Bieber’s approach to cope with the prob-
lem. Snekkenes notes in his doctoral dissertation that principal operation
is couched in rather complex formulae. Snekkenes has also carried out
significant work that uses the HOL (Higher Order Logic) specification lan-
guage and tool support to specify and prove properties about protocols
[104].
and fairly easy to follow. Principals possess beliefs and also know various
words which make up messages. Receipt of a message causes the state of
the system to change. Words and beliefs held by a principal occur as a
result of receiving messages. Various rewrite rules are specified as part of
the protocol (e.g. the result of encrypting and then decrypting some plain-
text with the same key produces the original plaintext). The tool attempts
to find scenarios to reach an insecure state. The tool looks technically ef-
fective but Rubin and Honeyman [95] report that these types of tools are
rather difficult to use by designers. Interestingly, the tool failed to find a
flaw in the TMN protocol due to the way in which the properties of the
RSA algorithm had been couched [70]. The analysis process is not entirely
automated; lemmas for the tool to prove must be generated by the user.
6 A Library of Protocols
6.1 Symmetric Key Protocols Without Trusted Third Party
6.1.1 ISO Symmetric Key One-Pass Unilateral Authentication Protocol
This protocol [62] consists of a single message from one principal A to a
second B. A secret key Kab is assumed to be shared between these two
principals.
(1) A → B : Text2, E(Kab : [Ta | Na], B, Text1)
The use of the text fields is application specific. There is a choice between a
sequence number Na and a timestamp Ta which ’depends on the technical
capabilities of the claimant and the verifier as well as the environment.’
(2) B → A : Text4, E(Kab : [Tb | Nb], A, Text3)
6.1.4 ISO Symmetric Key Three-Pass Mutual Authentication
(1) B → A : Rb, Text1
(2) A → B : Text3, E(Kab : Ra, Rb, B, Text2)
(3) B → A : Text5, E(Kab : Rb, Ra, Text4)
On receiving message (2) B checks for the presence of both B and Rb sent
in message (1). On receiving message (3) A checks both Rb and Ra are the
ones sent in message (1) and (2) respectively.
(1) B → A : B, Rb
(3) A → B : E(Kab : Nb + 1)
(4) B → A : E(Kab : K'ab, N'b)
The problem with this protocol is that there is nothing in message (4)
that A knows to be fresh. An intruder can simply replay this message at
a later date to get A to accept it as the final message of a protocol run (i.e.
replace the final message sent by B). There is also a parallel session attack.
6.2 Authentication Using Cryptographic Check Functions
All ISO protocols in this section can be found in Part 4 of the ISO 9798
Standard [64]. The keyed function f Kab (X) returns a hashed value for data
X in a manner determined by the key Kab.
(2) B → A : [Tb | Nb], Text4, fKab([Tb | Nb], A, Text3)
This protocol is two independent uses of the single pass unilateral authen-
tication protocol.
6.3 Symmetric Key Protocols Involving Trusted Third Parties
6.3.1 Needham Schroeder Protocol with Conventional Keys
(1) A → S : A, B, Na
(2) S → A : E(Kas : Na, B, Kab, E(Kbs : Kab, A))
(3) A → B : E(Kbs : Kab, A)
(4) B → A : E(Kab : Nb)
(5) A → B : E(Kab : Nb - 1)
The most famous attack is by Denning and Sacco [42]. There is another
potential weakness which depends on the nature of the assumptions made
about cryptographic support.
The main problem with this protocol is that B has no way of ensuring
that the message (3) is fresh. An intruder can compromise a key and then
replay the appropriate message (3) to B and then complete the protocol.
Also, A could (should he so wish) also spoof message (3) in the same way
(causing a stale key to be reaccepted by B).
If a stream cipher is used then the difference between the ciphertexts
in (4) and (5) is very small (one bit) and this allows a simple attack to be
launched. The reader is referred to [21]. See also section 4.4.1.
so, then it generates a key Kab and sends message (3) to B which forwards
part of the message to A. A and B will use the key Kab only if the message
components generated by the server S contain the correct nonces Na and
Nb respectively.
An attack on the protocol is given below:
(4) Z(B) → A : M, E(Kas : Na, M, A, B)
A decrypts the replayed component, finds the expected nonce Na and takes
the triple (M, A, B) to be in fact the new key. This triple is of course public knowledge. This is an
example of a type flaw. Of course, it is also possible to wait until message
(2) of the original protocol has been sent and then reflect appropriate com-
ponents back to both A and B and then monitor the conversation between
them.
(1) A → B : A
(2) B → A : E(Kbs : A, Nb')
The first way it can be attacked is by simply replaying the first mes-
sage within an appropriate time window - this will cause re-authentication
since S will produce a new second message with an updated timestamp.
The second method of attack allows one session to be recorded and then
the attacker continuously uses S as an oracle until he wants to bring about
re-authentication between A and B.
(1') Z(B) → S : B, E(Kbs : Ts, A, Kab)
(2') S → Z(A) : E(Kas : Ts', B, Kab)
(1'') Z(A) → S : A, E(Kas : Ts', B, Kab)
(2'') S → Z(B) : E(Kbs : Ts'', A, Kab)
(1) A → Z(S) : E(Kas : Ts', B, Kab)
(2) Z(S) → B : E(Kbs : Ts'', A, Kab)
6.3.6 Yahalom
(1) A → B : A, Na
(2) B → S : B, E(Kbs : A, Na, Nb)
(3) S → A : E(Kas : B, Kab, Na, Nb), E(Kbs : A, Kab)
(4) A → B : E(Kbs : A, Kab), E(Kab : Nb)
(1) Z(A) → B : A, Na
(2) B → Z(S) : B, E(Kbs : A, Na, Nb)
(3) Omitted
(4) Z(A) → B : E(Kbs : A, Na, Nb), E(Na, Nb : Nb)
6.3.7 Carlsen’s Secret Key Initiator Protocol
(1) A → B : A, Na
(2) B → S : A, Na, B, Nb
(4) B → A : E(Kas : Na, B, Kab), E(Kab : Na), N'b
(5) A → B : E(Kab : N'b)
(2) B → S : R'b, Ra, A, Text2
(3) S → B : Text5, E(Kbs : R'b, Kab, A, Text4), E(Kas : Ra, Kab, B, Text3)
The protocol Πf.
(1) A → B : A
(2) B → A : Nb
(3) A → B : E(Kas : A, B, Nb)
The protocol Π1.
(1) A → B : A
(2) B → A : Nb
(3) A → B : E(Kas : A, B, Nb)
The protocol Π2.
(1) A → B : A
(2) B → A : Nb
(3) A → B : E(Kas : A, Nb)
The protocol Π3.
(1) A → B : A
(2) B → A : Nb
(3) A → B : E(Kas : Nb)
(4) B → S : E(Kbs : A, E(Kas : Nb))
The protocol Π.
(1) A → B : A
(2) B → A : Nb
(3) A → B : E(Kas : Nb)
(4) B → S : E(Kbs : A, E(Kas : Nb))
51
The protocol Π can be attacked as follows:
(1) Z(A) → B : A
(2) B → Z(A) : Nb
(3) Z(A) → B : G
(4) B → Z(S) : E(Kbs : A, G)
(1') B → Z(R) : B
(2') Z(R) → B : Z, E(Kzs : Nb)
(3') B → Z(R) : E(Kbs : Z, E(Kzs : Nb))
(4') Z(B) → S : E(Kbs : Z, E(Kzs : Nb))
(5') S → Z(B) : E(Kbs : Nb)
(5) Z(S) → B : E(Kbs : Nb)
Here Z waits for B to start up a protocol run at (1’) with some principal
R to complete the attack.
Alternatively it may be attacked as follows:
(1) Z(A) → B : A
(1') Z → B : Z
(2) B → Z(A) : Na
(2') B → Z : Nz
(3) Z(A) → B : G
(3') Z → B : E(Kzs : Na)
(4) B → S : E(Kbs : A, G)
(4') B → S : E(Kbs : Z, E(Kzs : Na))
(5')
(5) S → B : E(Kbs : Na)
These protocol attacks are indeed given in [117]. However, the proto-
cols would appear to be subject to some straightforward replay attacks.
For example, in Π3
(1) Z(A) → B : A
(2) B → Z(A) : Nb
(3) Z(A) → B : Nb
(4) B → Z(S) : E(Kbs : A, Nb)
security of the protocol would still depend on the properties of the cryp-
tosystem used. Thus, Woo and Lam note that Π is not susceptible to the
above form of attack. This is not necessarily the case. If we assume that
the symmetric cipher is commutative then we can carry out the following
attack:
(1.1) B → Z : B
(2.1) Z(A) → B : A
(2.2) B → Z(A) : N
(1.2) Z → B : E(N : Kzs)
(1.3) B → Z : E(E(N : Kzs) : Kbs)
(2.5) Z(S) → B : E(N : Kbs)
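To see why commutativity matters, suppose E(k : E(k' : x)) = E(k' : E(k : x)), as holds for a simple XOR cipher. Then Z can strip its own key Kzs from the reply B sends at step (1.3) and is left with exactly the E(Kbs : N) needed for step (2.5), without ever learning Kbs. A toy Python sketch with XOR standing in for the commutative cipher:

# Toy demonstration: with a commutative cipher (plain XOR) the value
# E(E(N : Kzs) : Kbs) sent by B lets Z construct E(Kbs : N) without Kbs.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

E = xor                                   # E(key : data); XOR is commutative

kbs = secrets.token_bytes(16)             # B's key with S (unknown to Z)
kzs = secrets.token_bytes(16)             # Z's key with S (known to Z)
n   = secrets.token_bytes(16)             # the nonce B issued in run 2

msg_1_2 = E(n, kzs)                       # run 1, step 2: Z sends E(N : Kzs)
msg_1_3 = E(kbs, msg_1_2)                 # run 1, step 3: B returns Kbs XOR N XOR Kzs

forged_2_5 = xor(msg_1_3, kzs)            # Z strips Kzs, leaving Kbs XOR N
assert forged_2_5 == E(kbs, n)
print("Z forged S's reply E(Kbs : N) without knowing Kbs")

The same manipulation is possible for any cipher in which the two encryptions can be reordered, which is why Woo and Lam's claim depends on the cryptosystem used.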
Here is a protocol due to Woo and Lam [117] that combines mutual au-
thentication and key distribution.
(1) P → Q : P, N1
(2) Q → P : Q, N2
There is a novel attack on this protocol due to Clark, Jacob and Ryan
[37]. Effectively, the principal Q can launch a parallel session attack that
causes P to accept as new a previously issued key. The attack consists of
the following steps:
(1.1) P → Q : P, N1
(2.1) Q → P : Q, N1
(2.2) P → Q : P, N2
(1.2) Q → P : Q, N2
(1.3) P → Q : E(Kps : P, Q, N1, N2)
(1.7) P → Q : E(Kpq : N2)
(2.3) Q → P : E(Kqs : Q, P, N1, N2)
(2.7) Q → P : E(Kpq : N2)
digest of the message received and checks the result against the CS value
returned to him by S.
(1) C → A : U, G, L1, N1
where
In message (1) the client C informs the key distribution centre A that
he wishes to communicate on behalf of user U with the ticket granting
server G. A lifetime L and a nonce N1 are sent too. A generates a new
key Kcg for this purpose and encrypts it under the key Ku it shares (or
will share when U enters his password) with C.

[Figure: the Kerberos exchanges. Messages (1) and (2), between C and A, obtain a ticket for use with the ticket granting server; messages (3) and (4), between C and G, obtain a ticket for use with server S; messages (5) and (6), between C and S, provide mutual authentication of C and S.]

It also forms a 'ticket' Tcg that contains the user identity, the client identity, the identity of G, the new key Kcg together with timestamp information. The Tstart, Texpire limit the interval over which the ticket is considered as valid. This ticket
is encrypted using the key Kag shared between A and G. C uses the key
Ku to decrypt the third component of message (2) and obtains the key Kcg
which it can now use to communicate with G. This is carried out in the
second part of the protocol described below:
(3) C → G : S, L2, N2, Tcg, Acg
(4) G → C : U, Tcs, E(Kcg : S, Kcs, T'start, T'expire, N2)
where
Acg = E(Kcg : C, T)
Tcs = E(Kcg : U, C, S, Kcs, T'start, T'expire)
The result of the above is that G issues C with a ticket Tcs and a key Kcs to
communicate with S. The authenticator Acg ensures timeliness of message
(3).
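The timeliness checks involved here (the ticket window Tstart, Texpire and the authenticator timestamp T) come down to clock-window tests at the verifier. A minimal Python sketch; the 120-second skew bound is an arbitrary illustrative choice, not something fixed by the protocol.

# Sketch of the validity checks a Kerberos-style verifier might apply:
# the ticket must lie inside its [Tstart, Texpire] window and the
# authenticator's timestamp must be close to the verifier's own clock.
import time

MAX_SKEW = 120          # seconds of tolerated clock drift (illustrative)

def ticket_valid(t_start: float, t_expire: float, now: float) -> bool:
    return t_start <= now <= t_expire

def authenticator_fresh(t_auth: float, now: float) -> bool:
    return abs(now - t_auth) <= MAX_SKEW

now = time.time()
t_start, t_expire = now - 60, now + 3600     # ticket issued a minute ago
t_auth = now - 5                             # authenticator just built by C

print("ticket in window:   ", ticket_valid(t_start, t_expire, now))
print("authenticator fresh:", authenticator_fresh(t_auth, now))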
In the third part of the protocol C uses the newly obtained key Kcs and
ticket Tcs to obtain the services of S.
(5) C → S : Tcs, Acs
(6) S → C : E(Kcs : T')
where
Acs = E(Kcs : C, T')
He forms a further authenticator Acs, and sends it to S together with
the newly acquired encrypted ticket as message (5). S carries out decryp-
tion on the ticket to obtain the session key Kcs and then uses this key to
obtain the authentication information. If everything is in order, message
(6) is returned.
This protocol contains two parts: the first brings about the exchange of a ticket and the second is a protocol for multiple subsequent authentications. We call these protocols repeated authentication protocols. In the Neuman Stubblebine Protocol [90] given below, the first four messages are the initial protocol.
(1) A → B : A, Na

(1) A → B : N'a, E(Kbs : A, Kab, tb)
(2) B → A : N'b, E(Kab : N'a)
(3) A → B : E(Kab : N'b)

(1') Z(A) → B : A, Na
(2') B → Z(S) : B, E(Kbs : A, Na, tb), Nb
(3')
(4') Z(A) → B : E(Kbs : A, Na(= Kab), tb), E(Na : Nb)
(1') Z(A) → B : N'a, E(Kbs : A, Na(= Kab), tb)
(2') B → Z(A) : N'b, E(Kab : N'a)
(3') Z(A) → B : E(Kab : N'b)
The following parallel session attack can be used:
(1) Z(A) → B : N'a, E(Kbs : A, Kab, tb)
(2) B → Z(A) : N'b, E(Kab : N'a)
(1') Z(A) → B : N'b, E(Kbs : A, Kab, tb)
(2') B → Z(A) : N''b, E(Kab : N'b)
(3) Z(A) → B : E(Kab : N'b)
In this attack the initial ticket is recorded from a previous legitimate run
of the protocol.
(1) A → B : Na, A
(2) B → S : Na, A, Nb, B

(1') A → B : N'a, E(Kbb : tb, A, Kab)
(2') B → A : N'b, E(Kab : N'a)
(3') A → B : E(Kab : N'b)

(1) A → S : A, B, Na
This protocol suffers when a session key is compromised (as in the Den-
ning Sacco attack on the Needham Schroeder protocol). The authors there-
fore proposed (in the same paper) to use a different key purely for the
handshake. The protocol now becomes:
(1) A → S : A, B, Na
(3) B → A : E(Kas : A, B, Na, Kab)
(1) B → A : Rb, Text1
6.6.4 ISO Three-Pass Mutual Authentication Protocol
(1) B → A : Rb, Text1
(2') A → B : Ra, Rb, B, Text4, E(Ka⁻¹ : Ra, Rb, B, Text3)

(1) B → A : B, E(Ka : Nb, B)
(1) A → B : X = G^x mod N
(2) B → A : Y = G^y mod N
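Both parties end up with the common value G^(xy) mod N although neither exponent is ever transmitted. A small numeric illustration in Python (the parameters are deliberately tiny and quite unsuitable for real use):

# Worked Diffie-Hellman example with toy parameters.
N = 23          # public modulus (toy value; real N is a large prime)
G = 5           # public generator

x, y = 6, 15    # A's and B's private exponents

X = pow(G, x, N)            # message (1): A -> B : X = G^x mod N
Y = pow(G, y, N)            # message (2): B -> A : Y = G^y mod N

k_a = pow(Y, x, N)          # A computes Y^x = G^(xy) mod N
k_b = pow(X, y, N)          # B computes X^y = G^(xy) mod N
assert k_a == k_b
print("shared secret:", k_a)

Because neither party authenticates the other, the basic exchange is of course open to a man-in-the-middle who runs one exchange with each end.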
6.7 Public Key Protocols with Trusted Third Party
6.7.1 Needham-Schroeder Public Key Protocol
This protocol appears in the classic paper [87]. It has recently been shown
to contain a flaw by Gavin Lowe as part of the project research work.
(1) A → S : A, B
(2) S → A : E(Ks⁻¹ : Kb, B)
(3) A → B : E(Kb : Na, A)
(4) B → S : B, A
(5) S → B : E(Ks⁻¹ : Ka, A)
(3) A → Z : E(Kz : Na, A)
(3') Z(A) → B : E(Kb : Na, A)
(6') B → Z(A) : E(Ka : Na, Nb)
(1) C → AS : C, S, N1
(4) S → AS : S, C, N3
(6) S → C : S, C, E(KC : S, N2 + 1)
This protocol has been shown to be flawed (in different ways) by Hwang
and Chen [59] and also Gavin Lowe.
In the first attack it is possible to impersonate a client:
(1) Z → AS : Z, S, N1
(4) S → Z(AS) : S, C, N3
(4') Z(S) → AS : S, Z, N3
(1') Z(C) → AS : C, Z, N1
(2) AS → C : AS, E(KAS⁻¹ : AS, C, N1, KZ)
(4) Z → AS : Z, C, N3
(5) AS → Z : AS, E(KAS⁻¹ : AS, Z, N3, KC)
(6) Z(S) → C : S, C, E(KC : S, N2 + 1)
In the third attack (by Gavin Lowe) message (3) is replayed within the
possible time window to re-achieve authentication.
(4) S → AS : S, C, N3
(6) S → C : S, C, E(KC : S, N2 + 1)
For the purposes of the attack we need only consider messages (3) and
(6) and so we assume that all public keys are appropriately obtained or
possessed.
(3) C → Z(S) : C, S, E(KC⁻¹ : C, T, L, E(KS : N2))
(3') Z → S : Z, S, E(KZ⁻¹ : Z, T, L, E(KS : N2))
(6') S → Z : S, Z, E(KZ : S, N2 + 1)
(6) Z(S) → C : S, C, E(KC : S, N2 + 1)
The problem arises because the server S is fooled as to the origin of the
encrypted nonce in (3’). It is created by C in (3) but is used by Z in (3’)
who pretends to have created it himself.
(1) A → S : A, B

Attacks have been found by L'Anson and Mitchell [12] and by Burrows, Abadi and Needham [26]. The problem is that there is signing after encryption. If a signed message has a component that is itself encrypted under a public key then it cannot be deduced that the signer actually knows the contents of that component.
6.10 Miscellaneous
6.10.1 Shamir Rivest Adleman Three Pass Protocol
The following protocol differs in that the participants share no secrets. It
was suggested as a means of transmitting data over an insecure channel.
It assumes that encryption is commutative. It is known to be subject to a
variety of attacks.
(1) A → B : E(Ka : M)
(2) B → A : E(Kb : E(Ka : M))
(3) A → B : E(Kb : M)
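One way to see the danger of the commutativity assumption: if the cipher is instantiated with XOR-style encryption, a purely passive eavesdropper recovers M by combining the three ciphertexts. A Python sketch (XOR is just one convenient commutative instantiation, chosen to make the failure obvious):

# The three-pass protocol instantiated with XOR (a commutative "cipher").
# An eavesdropper who records all three messages recovers M directly.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m  = b"attack at dawn!!"                 # the secret message (16 bytes)
ka = secrets.token_bytes(len(m))         # A's private key
kb = secrets.token_bytes(len(m))         # B's private key

c1 = xor(m, ka)                          # (1) A -> B : E(Ka : M)
c2 = xor(c1, kb)                         # (2) B -> A : E(Kb : E(Ka : M))
c3 = xor(c2, ka)                         # (3) A -> B : E(Kb : M)

# Passive attack: c1 XOR c2 XOR c3 = M, since each key cancels out.
recovered = xor(xor(c1, c2), c3)
assert recovered == m
print("eavesdropper recovered:", recovered)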
(2) B → S : A, B, Na, Nb
(3) S → B : Ns, f(Ns, Nb, A, Pb) ⊕ (k, ha, hb), g(k, ha, hb, Pb)
(4) B → A : Ns, hb
(5) A → B : ha
In message (3), (k, ha, hb) = f(Ns, Na, B, Pa) is calculated by the server S. k
that tampering has not taken place. After receiving message (4), A computes f(Ns, Na, B, Pa) to get (k, ha, hb). If the value of hb matches the one sent by B then A replies with message (5). The literature surveyed has not indicated that the protocol is flawed.
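One reading of message (3) is that f acts as a password-derived one-time pad: S masks the triple (k, ha, hb) with f(Ns, Nb, A, Pb) and B strips the mask by recomputing the same value. The Python sketch below follows that reading, with SHA-256 standing in for the unspecified one-way functions f and g and with invented field encodings:

# Sketch of the masking step in message (3); SHA-256 stands in for f and g.
import hashlib, secrets

def f(*fields: bytes) -> bytes:
    return hashlib.sha256(b"f|" + b"|".join(fields)).digest()

def g(*fields: bytes) -> bytes:
    return hashlib.sha256(b"g|" + b"|".join(fields)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pb, ns, nb, a = b"B's password", b"Ns", b"Nb", b"A"

# S derives the triple and masks it with a pad only B (who knows Pb) can strip.
triple = secrets.token_bytes(32)                  # stands for k || ha || hb
pad    = f(ns, nb, a, pb)
masked = xor(triple, pad)
tag    = g(triple, pb)                            # integrity check on the triple

# B recomputes the pad from its password and unmasks the triple.
unmasked = xor(masked, f(ns, nb, a, pb))
assert unmasked == triple and g(unmasked, pb) == tag
print("B recovered the triple and the check value verifies")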
This is an unusual protocol due to Bellovin and Merritt [15] and has the
following steps:
(1) A → B : E(P : Ka)
(2) B → A : E(P : E(Ka : R))
(3) A → B : E(R : Na)
(4) B → A : E(R : Na, Nb)
(1.1) A → Z(B) : E(P : Ka)
(2.1) Z(B) → A : E(P : Ka)
(2.2) A → Z(B) : E(P : E(Ka : R))
(1.2) Z(B) → A : E(P : E(Ka : R))
(1.3) A → Z(B) : E(R : Na)
(2.3) Z(B) → A : E(R : Na)
(2.4) A → Z(B) : E(R : Na, Nb)
(1.5) A → B : E(R : Nb)
(2.5) Z(B) → A : E(R : Nb)
6.10.4 Davis Swick Private Key Certificates
The first protocol given by Davis and Swick [40] is for key translation via
a trusted translator T. The protocol is given below:
Here Kt is T's master key used to sign the key-containing tickets. La and Lb are lifetimes. E(Kt : Kat, A, La) is A's private key certificate created by T.
There is a key distribution protocol:
(2) T → B : E(Kt : K'bt, B, L'b), E(Kbt : K'bt, T, L'b, checksum)
7 Acknowledgements
This survey was carried out as part of a Strategic Research Plan managed
by Peter Ryan of the Defence Evaluation Research Agency. The authors
would like to express their thanks for the support of this work. In addi-
tion, we would like to thank Peter Ryan, Irfan Zakuiddin, Gavin Lowe
and Colin Runciman for their helpful comments. John Clark is Lecturer in
Critical Systems and Jeremy Jacob is Lecturer in Computer Science at the
University of York.
References
[1] Martin Abadi and Roger Needham. Prudent Engineering Practice
for Cryptographic Protocols. Technical report, SRC DIGITAL, June
1994.
This paper sets out several heuristic rules which enable people to write
good protocols, though the report says that they are neither sufficient nor
necessary. Rather, extant practice has shown their utility.
The paper is an excellent piece of work and will prove of considerable use
to protocol designers. The paper does not go into detail as to what the
“goals” of a protocol are or should be, how this should be stated or verified.
It is an engineering account of protocol development whose principles, if
applied judiciously, will lead to better protocols. The guidelines serve as a
“have you remembered this sort of attack” list for the designer.
There are two overarching principles.
of the message. On the other hand, it is proper to infer that the prin-
cipal that signs a message and then encrypts it for privacy knows the
content of that message.
Failures arising from ignoring this principle are given by reference to
the CCITT.509 one message protocol.
Principle 6 Be clear what you are assuming about nonces. What may do
for avoiding temporal succession may not do for ensuring associa-
tion, and perhaps association is best established by other means.
Principle 7 The use of a predictable quantity such as the value of a counter
can serve in guaranteeing newness, through a challenge response ex-
change. But if a predictable quantity is to be effective it should be pro-
tected so that an intruder cannot simulate a challenge and later replay
a response.
Principle 8 If timestamps are used as freshness guarantees by reference
to absolute time then the difference between local clocks at various
machines must be much less than the allowable age of a message
deemed to be valid. Furthermore the time maintenance mechanism
everywhere becomes part of the trusted computing base.
Principle 9 A key may have been used recently, for example to encrypt a
nonce, yet be quite old, and possibly compromised. Recent use does
not make the key look any better than it would otherwise.
Principle 10 If an encoding is used to present the meaning of a message,
then it should be possible to tell which encoding is being used. In the
common case where the encoding is protocol dependent, it should be
possible to deduce that the message belongs to this protocol, and in
fact to a particular run of the protocol, and to know its number in the
protocol.
[4] Ross Anderson. UEPS – A Second Generation Electronic Wallet. In
Y. Deswarte, G. Eizenberg, and J.-J. Quisquater, editors, Proceedings
ESORICS 92. Springer LNCS 648, 1992.
In this early paper, the author describes an electronic smartcard applica-
tion for the banking industry – the Universal Electronic Payments System
(UEPS). The approach uses key chaining. An outline is given of attempts
to apply BAN logic to the analysis of UEPS.
[8] Ross Anderson. Crypto in Europe – Markets, Pol-
icy and Law. A paper from RA’s WWW Page —
http://www.cl.cam.ac.uk/users/rja14/.
This presents an overview of policy on matters cryptographic in various
European states. It indicates how the cryptographic debate has become
hooked on confidentiality. Authenticity and integrity are much more im-
portant problems. There’s a good list of real-world applications of cryptog-
raphy (ranging from ATMs and utility tokens to lottery ticketing terminals
and postal franking machines). There are quite a few juicy examples of
dire failure of cryptosystems (of which the author has a very large col-
lection). The paper ends with an attack on the currently expressed views
about Government restrictions on cryptography. The author argues that
villains will in general not use cryptography, that the use of wiretaps varies
enormously between countries, that maintaining wiretaps in dig-
ital exchanges is very expensive and that the real threat to the individual
arises from unauthorised access to data and has little to do with listening
in on encrypted communications.
[11] Ross Anderson and Roger Needham. Robustness Principles
for Public Key Protocols. A paper from RA’s WWW Page —
http://www.cl.cam.ac.uk/users/rja14/, 1995.
This paper provides many principles or rules of thumb that should be
used when developing cryptographic protocols. The principles are gener-
ally motivated by some good wholesome attacks. There is a general Princi-
ple of Explicitness (that one must be explicit about any properties that may
be used to attack a public key primitive as well as typing freshness and any
other assumptions being made).
[12] Colin l’Anson and Chris Mitchell. Security Defects in the CCITT
Recommendation X.509 — The Directory Authentication Frame-
work. Computer Communication Review, 20(2):30–34, April 1990.
This paper presents several problems in the proposed CCITT X509 stan-
dard and offers some solutions. The problems indicated are:
A good read. Furthermore, the signing after encryption problem has reared
its head in other places (for example, Clark and Jacob have recently shown
[34] that the SPLICE/AS protocol and an improved version of it, both de-
scribed in [59], have this flaw).
[13] Mihir Bellare, Joe Kilian, and Phillip Rogaway. The Security of Ci-
pher Block Chaining. In Yvo G. Desmedt, editor, Advances in Cryp-
tology - Crypto 94, number 839 in LNCS, pages 341–358. Springer
Verlag, 1994.
This paper assesses the usefulness of using CBC approaches for message
authentication codes. One of the few papers on the subject. A good read.
See also [106].
[14] S. M. Bellovin and M. Merritt. Limitations of the Kerberos Authen-
tication System. Computer Communication Review, 20(5):119–132, Oc-
tober 1990.
In this paper the authors describe several weaknesses of the then cur-
rent Kerberos authentication system. The authors address potential replay
attacks, secure time service provision, password guessing attacks, login
spoofing, chosen plaintext attacks inter alia. Solutions are suggested and
assessed. Some interesting remarks are made concerning susceptibility to
ciphertext splicing type attacks. A very good read.
in Cryptology, number 576 in Lecture Notes in Computer Science,
pages 44–61. Springer Verlag, 1991.
This paper provides a loose heuristic method for searching for flaws in
cryptographic protocols. It describes the derivation and analysis of a two-
party protocol that is claimed to be as secure as the cipher block chaining
encryption that supports it. The protocol looks secure and compact.
namely that of the assumptions that protocol designers (implicitly) make
about implementations.
[22] Colin Boyd. A Class of Flexible and Efficient Key Management Pro-
tocols. In Proceedings 9th IEEE Computer Security Foundations Work-
shop, pages 2–8. IEEE Computer Society Press, 1996.
This paper gives a novel scheme for the distribution of session keys be-
tween communicating principals. A trusted server S is used to distribute a
key for use in a cryptographic hashing function. The principals exchange
nonces which are concatenated and then hashed. The resulting hashed
value is the session key. Key compromise is an issue and Boyd suggests
ways in which this can be handled. The scheme can be used to create very
efficient protocols.
[23] Colin Boyd and Wenbo Mao. On a Limitation of BAN Logic. In Tor
Helleseth, editor, Eurocrypt ’93, number 765 in LNCS, pages 240–247.
Springer Verlag, 1993.
Boyd and Mao identify some more limitations of the BAN logic type ap-
proach to analysis. The paper presents two examples of ’flawed’ protocols.
The first is of interest in that it presents a plausible (but faulty) implementa-
tion of the intent of the protocol. The authors conclude that post-hoc anal-
ysis is not the thing to do; rather correctness by design should be the aim.
The criticisms made in this paper were countered at the rump session at
the same conference by van Oorschot [120].
[24] Colin Boyd and Wenbo Mao. Designing Secure Key Exchange Pro-
tocols. In Dieter Gollmann, editor, Computer Security—ESORICS ’94,
pages 93–106. Springer-Verlag, 1994.
This paper takes the view that program derivation techniques ought to
be applied to security protocol derivation, i.e. start with an abstract model
and refine it. It introduces a notation for passing restricted sets of messages
between principals. A formal model is outlined in the formal specification
notation Z. The model aims to provide a moderately general notion of what
the effects of sending and receiving messages should be. In brief, there is an
abstract model of all (key, principal) pairs generated by trusted principals and each principal has his local view of such pairs known to him. When a new key is generated for a set of principals U then all pairs of the form (k, u), with u ∈ U, are added to the global store. The server may send a
The paper identifies issues that are not dealt with (such as key revocation,
the effect of untrustworthy behaviour of a principal and more complex
distribution protocols).
The formal model as it currently stands needs some adjustment. The prob-
lem lies with their definition of security for states of the system. Whilst it
would appear true that given a suitable initial state and the definitions of
the send and receive operations the resulting operation will be “secure”—
the notion of suitable definition of initial state must be addressed. It is per-
fectly feasible to set up an initial state (and in general such a state will not
be a pathological one, i.e. it will have some keys) that is useless. For ex-
ample, a state where the only pair in the whole system is (k1, Charles) and this pair is in the local store of both Alice and Bob. It is a relatively trivial
matter to alter the definition of the security criterion to rule out this sort of
possibility though. A good paper and one of the few to talk about deriving
protocols rather than post-hoc verifying them.
The authors are fairly limited in what they claim for the logic that follows:
Our goal, however, is not to provide a logic that would explain every
authentication method, but rather a logic that would explain most of
the central concepts in authentication.
If you’ve sent Joe a number that you have never used for this purpose
before and if you subsequently receive from Joe something that de-
pends on knowing that number then you ought to believe that Joe’s
message originated recently — in fact, after yours.
If you believe that only you and Joe know K then you ought to believe
that anything you receive encrypted with K as key comes originally
from Joe.
If you believe that K is Joe’s public key, then you should believe that
any message you can decrypt with K comes originally from Joe.
If you believe that only you and Joe know X then you ought to believe
that any encrypted message that you receive containing X comes
originally from Joe.
is to have enough machinery to carry out some realistic examples and to
explain the method”.
Some of the constructs introduced by the authors in this section have a
notion of trust involved. The nature of this trust is not made explicit. From
the examples, however, it can be seen that trusting a principal to know a
shared secret means that he will not release it himself; if this were not the
case then many of the later postulates do not make sense. The formalism
assumes that all sessions with a shared key are between two parties P and
Q. Multiple party sessions are of course a practical possibility.
In the description of the nonce verification rule (if I believe that X is fresh
and that you have uttered X, then I should believe that you believe X, be-
cause you must have uttered X recently and hence still believe it) the au-
thors suggest that they could introduce a “recently said” operator to over-
come the restriction that X must be cleartext. This idea will be taken up by
other authors.
The authors then present the notion of an idealised protocol. Standard de-
scription of protocols give a fairly concrete description of what bits go
where in a message. This is not particularly useful for logical manipula-
tion and so the authors transform each protocol message into a formula.
Parts of the formula which do not contribute to the beliefs are omitted;
thus there is no cleartext in BAN messages. Each protocol is a sequence
of encrypted formulæ. The authors claim that their idealised formulæ are
clearer and more complete than other traditional descriptions. They also
state that deriving an encoding from an idealised protocol is far less time
consuming and error prone than understanding the meaning of a particu-
lar informal encoding. Omitting cleartext gives rise to some problems, e.g.
the direct leakage of information.
Loosely speaking, a message m can be interpreted as a formula X if when-
ever the recipient gets m he may deduce that the sender must have believed
X when he sent the message. This process is fairly controversial. There
would appear to be an implicit assumption that we choose the strongest
feasible formula for X. Failure to do this may require the addition of initial
assumptions that would not be necessary under an alternative idealisation.
It seems that in addition to iteration of initial beliefs for the purposes of
proof, as suggested by the authors, one might well iterate over idealisation
too.
The protocol analysis takes the following steps
4. The logical postulates are applied to the assumptions and the asser-
tions, in order to discover beliefs held by the parties in the protocol.
Section 3 The authors state that there is room for debate as to what the
goals of an authentication protocol should be. Several plausible candidates
are suggested, the actual goals will of course be system specific.
Sections 4–11 These sections apply the BAN logic presented to several
protocols:
Sections 12–13 The remaining sections show how the logic can be ex-
tended to handle hashing and provide a more formal semantics for the
logic.
This paper is essential reading. Most security protocol papers reference it,
or one of its other forms, and several criticise it (some more fairly than
others). The paper is well written and provides the basis for numerous
extensions.
authors state that Nessett’s example “accurately points out an intended limitation of our logic” but indicate that the assumption by principal B that
the published key is in fact good is contradictory. Though this is allowed
by the formalism they claim “though not manifested by our formalism, it is
not beyond the wit of man to notice. From this absurd assumption, Nessett
derives an equally absurd conclusion”.
This rejoinder is pretty much to the point! Part of BAN logic folklore.
[28] Michael Burrows, Martin Abadi, and Roger M. Needham. The Scope
of a Logic of Authentication. Revised Research Report 39, Digital
Systems Research Center, 1994.
This is intended as an annex to the original BAN report [26].
Freshness flaws identified by the inability of one principal to detect
whether a message has been created recently or not. The Denning-
Sacco attack [42] on the conventional key Needham Schroeder proto-
col is a famous example. Burrows Abadi and Needham demonstrate
a freshness flaw in the Andrew Secure RPC protocol [26].
Oracle flaws a principal inadvertently acts as a decryption agent for a
penetrator. There are examples of single-role and multi-role oracle
attacks (i.e. when a principal is limited to one role or may take part
in several roles). The three-pass protocol of Rivest, Shamir and Adle-
man is the subject of these oracle attacks. The three-pass protocol has
the following steps:
(1) A → B : E(Ka : M)
(2) B → A : E(Kb : E(Ka : M))
(3) A → B : E(Kb : M)
After receiving the message at line 2 A decrypts with key ka and as-
suming commutativity sends the result back to B in line 3. The single
role oracle flaw is for the intruder simply to pretend to be B and return {M}Ka back to A in line 2 and hence obtain M in line 3. The au-
thor suggests that a “typing” check (to see whether the third message
is really an encrypted one) might solve this problem. As presented
there is a more obvious problem that shows that this will not work,
namely there is nothing to stop the intruder simply acting as B using
a key kc that she knows in place of kb. The protocol then works as
normal. Have I missed something?
Type flaws a subclass of oracle flaws where in addition to using a frag-
ment of the protocol as an oracle, the penetrator exploits the inability
of at least one principal to associate a word (or message) with a par-
ticular state of a particular protocol. Five different “types” of infor-
mation can be distinguished:
cryptographic protocol;
protocol run;
transmission step;
message (sub) components;
primitive types.
The paper then gives example type flaws exposed in the Neuman-
Stubblebine protocol and the Otway-Rees protocol.
Internal Flaws these are due to a principal failing to carry out the neces-
sary internal actions correctly (e.g. failing to make a check). The paper
states that a common feature of many formal and semi-formal crypto-
graphic protocol specification methods is their lack of stating internal
actions.
Cryptosystem related flaws these arise due to the interactions of proto-
cols and the particular cryptographic methods employed.
The paper is well-written and very useful. Of particular note is the discus-
sion of type flaws.
[34] John Clark and Jeremy Jacob. On The Security of Recent Protocols.
Information Processing Letters, 56(3):151–155, November 1995.
In this paper the authors describe some attacks on recently published pro-
tocols highlighting assumptions about cipher block chaining use but also
a flaw in a (corrected) version of the SPLICE authentication protocol (also
independently discovered by Lowe of the Programming Research Group
at Oxford).
[35] John Clark and Jeremy Jacob. Attacking authentication protocols.
High Integrity Systems, 1(5):465–474, August 1996.
This paper provides a summary of ways in which protocols fail and pro-
vides many examples of such flaws. Methods of attack include: freshness
attacks, type flaws, parallel session attacks, binding attacks and some im-
plementation dependent attacks (e.g. Boyd’s bit flipping with stream ci-
phers and cipher block chaining mishaps). Appraisal is probably best left
to the reader!
[37] John Clark and Jeremy Jacob. Freshness is Not Enough: Note on
Trusted Nonce Generation and Malicious Principals.
In this paper the authors demonstrate an unusual attack on a mutual au-
thentication protocol by Woo and Lam [117] described in section 6.3.11.
Malicious choice of a nonce by one principal can cause a previously issued
key to be accepted as fresh by the other principal.
A private key certificate is effectively a ticket published by a server to it-
self. The ticket contains a key, principal identifier and lifetime. The identi-
fied principal may supply the ticket and use the corresponding key until
the ticket expires. Various applications are suggested (key translation and
key distribution). On close analysis it would appear that two of the sug-
gested protocols can be attacked: the initial key translation protocol makes
assumptions about the content of the user supplied component of a mes-
sage (if it starts with a principal identifier then fraudulent messages can
be created using the translator as an oracle). The key distribution between
server domains using public key allows one of the servers to masquerade
as the other.
[43] Whitfield Diffie. The First Ten Years in Public Key Cryptography.
Proceedings of the IEEE, 76(5):560–577, May 1988.
An excellent survey of public key cryptography. The paper provides a tech-
nical introduction to the various advances in the area (exponential key ex-
changes, knapsacks, RSA, the rise-fall cycle of knapsacks etc). The paper
then addresses implementation issues and where public key cryptography
is going. A good read generally.
[47] Paul Gardiner, Dave Jackson, Jason Hulance, and Bill Roscoe. Se-
curity Modelling in CSP and FDR: Deliverable Bundle 2. Technical
report, Formal Systems (Europe) Ltd, April 1996.
This report indicates how algebraic techniques can be incorporated within
the CSP/FDR approach to security protocol analysis. Such algebraic ma-
nipulation is necessary if the approach is to discover attacks which utilise
for example particular properties of encryption (commutativity of encryp-
tions etc.). Implementation details (code) are given in this report. The re-
port describes how algebra may be modelled within an extended form of
CSP (that used by FDR2) with results of initial evaluation. Later sections
address how the approach can be applied to the analysis of some well-
known protocols. Implementation attacks arising due to particular modes
of encryption (e.g. CBC, CFB etc) are identified as highly troublesome; the
state space becomes enormous very quickly.
[48] Paul Gardiner, Dave Jackson, and Bill Roscoe. Security Modelling
in CSP and FDR: Deliverable Bundle 3. Technical report, Formal
Systems (Europe) Ltd, July 1996.
This represents a continuation and enhancement of the work reported in
[47]. The refined approach is used to detect a well-known flaw in the
CCITT protocol. One enhancement is the use of a ”lazy spy” — consid-
ering only those behaviours of an intruder which are reachable given the
specific history of values observed in a sequence of protocol runs (rather
than the whole behaviour space of the intruder).
[49] Dieter Gollman. What do we mean by Entity Authentication? In
Proceedings 1996 IEEE Symposium on Research in Security and Privacy,
pages 46–54. IEEE Computer Society, May 1996.
The goals of authentication provide the basis for much controversy. In this
paper Gollman identifies 4 notions of authentication. He points out that
some ”attacks” on protocols really depend on what you think the pro-
tocol is intended to achieve. Examples are given. He maintains that the
description of authentication in user-oriented anthropomorphic language
may at times be harmful. We need to be aware of the gap between the
user-oriented descriptions and the electronic message passing that actu-
ally occurs. The paper also addresses the misuse of encryption by protocol
specifiers.
only messages they can realistically expect to. This is done via the notion
of eligibility. The method ensures that before a message can be sent, the
sender must be in possession of the bit strings to be transmitted and it
must hold the beliefs implied by transmission of the message. Inference
rules to accommodate the changes are presented.
The paper addresses the use of timestamps, truly random numbers, coun-
ters, pseudorandom numbers, synchronised counters and pseudorandom
number generators and fresh encryption keys. The paper presents a table
indicating which mode of use is secure for a particular freshness approach
indicating whether the prover is to be trusted or not. A brief categorisation
of message replays is then given.
This paper presents an extension to the original BAN logic [26]. Interesting
extensions include the notion of recognisability (the use of typing would
prevent many identified protocol flaws), the notion of possessions (for-
mulæ that a principal can possess, because he has seen them, and so on).
In distinction to BAN-logic a principal does not have to believe in a for-
mula in order to include it in a message (he merely has to possess it). Also
included are explicit “not-originated here” indications for message compo-
nents allowing the principal to detect replays of messages he himself has
created. There are also message extensions by which preconditions for ac-
tually sending a message are attached to it. The derivation rules given are
much more numerous (over 40) than those given in BAN.
this). The authentication server will either store the “old” password or the
“new” password with the user depending on whether the previous at-
tempt succeeded or not. Decrypting an appropriate ticket ought to give
sufficient proof of identity. If the server already has carried out the update
then there is no change and a success response is given, otherwise the up-
date process goes ahead. Switching the order of the tickets (which would
allow an intruder to reverse the change) is protected against by the inclu-
sion of nonces (and functions thereof) to ensure that the ordering of the
messages is determinable (one is actually a timestamp).
It is hard to say how much of a problem this paper addresses. Intuitively
it would seem far from “crucial” as stated by the authors. The proposed
solution is quite neat though and certainly it seems efficient. Worth a read.
initial segment of the first message; this wouldn’t be the case if there was
some plaintext permutation before encryption.
The improvement for the subsequent authentication protocol also depends
on implementation for security. Examination shows that if cipher block
chaining is used then there is a problem with the improved solution. Thus,
the offered solution is implementation dependent.
[66] A Jiwa, J Seberry, and Y Zheng. Beacon Based Authentication. In
Dieter Gollmann, editor, Computer Security — ESORICS ’94, number
875 in Lecture Notes in Computer Science, pages 125–142. Springer,
1994.
The paper provides a good introduction to Beacon-based authentication.
Conventions in cryptography are explained and an outline of Rabin’s (the
originator) approach to beacon use is given. A beacon emits at regular in-
tervals a random integer in the range 1 to N where N is publicly known.
The use of this emitted token is given with respect to the contract signing
problem. (How do we solve the problem of one party receiving a commit-
ment from another and yet not being committed him/herself?)
Simplifying, the parties exchange preliminary contracts and commitments
to sign (conditional) followed by random integers I1 and I2. Let I = (I1 +
I2) mod N. They now exchange messages committing them to the contract
if the next beacon token is I. A party may not commit. If so, he has a 1 in
N chance of getting the other party at a disadvantage (and an N - 1 in N
chance of not getting away with it and having to explain his/her lack of
commitment).
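A short worked example of the arithmetic, with toy numbers: if N = 20 and the exchanged integers are I1 = 7 and I2 = 16, then I = (7 + 16) mod 20 = 3, and a party withholding commitment is betting that the next beacon value is not 3, a bet it loses with probability 1/20. The same calculation in Python:

# Toy beacon round: the contract binds only if the next beacon value equals I.
import random

N = 20                      # beacon emits values in 1..N (toy size)
i1, i2 = 7, 16              # the parties' exchanged random integers
i = (i1 + i2) % N           # the agreed trigger value

beacon = random.randint(1, N)
print("trigger:", i, "beacon:", beacon, "contract binds:", beacon == i)
# A party that withholds commitment wins only if beacon == i: chance 1 in N.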
A beaconised version of the Needham Schroeder protocol is then given.
The paper is well written and addresses an approach that has not been
given much attention.
[67] I Lung Kao and Randy Chow. An Efficient and Secure Authenti-
cation Protocol Using Uncertified Keys. Operating Systems Review,
29(3):14–21, July 1995.
This presents various repeated authentication protocols. The authors in-
dicate how the use of uncertified keys, i.e. keys whose validity is not ensured when they are first used, may bring performance benefits.
There are some problems with the protocol, as pointed out by Syverson
[109]. It might be argued that there is a flaw in the idealisation of the proto-
col. Indeed it is argued by Neuman and Stubblebine [90] that the fresh-
ness beliefs in the repeated authentication protocol are invalid. Syverson
disagrees with this view and indicates that it is perfectly possible to take
another interpretation of “run of the protocol”, namely that a run is a sin-
gle initial protocol and all subsequent runs of the repeated authentication
protocol. It would appear that BAN as it currently stands does not handle
repeated authentications using tickets.
For the authentication goals they set the authors argue that their initial
protocol is minimal with respect to the number of messages.
[89]). The paper concludes with discussion about the inability of the logic
to handle parallel runs.
[75] Gavin Lowe. Breaking and Fixing the Needham Schroeder Public-Key Protocol Using FDR. In Proceedings of TACAS, volume 1055, pages
147–166. Springer Verlag, 1996.
In this paper Lowe describes how the CSP refinement checker FDR was
used to identify a hole in the security of the well known Needham
Schroeder Public Key Protocol 6.7.1. He presents an account of how princi-
pals and intruder communications are modelled in CSP (with a restricted
number of principals) and presents an argument to show that the analysis
performed is sufficient to guarantee its correctness when more principals
are added to the system.
[76] Gavin Lowe. Some New Attacks upon Security Protocols. In Proceed-
ings of the Computer Security Foundations Workshop VIII. IEEE Com-
puter Society Press, 1996.
This paper records a number of attacks on protocols. The aim is largely to
show that the same mistakes in protocol design are being made time and
time again. The paper contains a more vicious attack on the Woo and Lam
Mutual Authentication Protocol than that identified by Clark and Jacob (a
public nonce is accepted as a key) (see 6.3.11). This new attack requires
a principal to accept messages he has created. Woo and Lam actually state
that reflections are detected by principals and so the protocol has no means
of enforcing this. Lowe believes that such functionality should be captured
by the protocol and not be left as an implementation dependency. Attacks
on the KSL protocol 6.5.3 and on the TMN protocol are given.
[77] Gavin Lowe. SPLICE-AS: A Case Study in Using CSP to Detect Er-
rors in Security Protocols. Technical report, Programming Research
Group, Oxford, 1996.
In this paper the author indicates how the CSP refinement checker FDR is
used to analyse a recently published protocol (which is actually a correc-
tion of a previous one).
[78] Wenbo Mao and Colin Boyd. Towards the Formal Analysis of Se-
curity Protocols. In Proceedings of the Computer Security Foundations
Workshop VI, pages 147–158. IEEE Computer Society Press, 1993.
In this paper the authors examine some of the defects of BAN logic. After not-
ing that BAN logic passes as secure some patently flawed and insecure
protocols they address some specific weakness. First the idealisation pro-
cess is examined (does not take into account context-specific information),
then elements of the nature of belief are examined (such as the senseless-
ness of believing in a nonce). The elicitation of assumptions is also a very
difficult area. The authors then go on to provide their own formalism in-
tended to cater for flaws identified. An element of preprocessing is car-
ried out to identify the implicit use in the protocol description of various
elements (e.g. nonces are identified as challenges, responses are identified
etc.). A set of BAN-like inference rules are given. Two protocols are then
analysed using the system.
This paper addresses two important points regarding authentication pro-
tocols. The first is that non-secret data is often encrypted by a principal in
order to be retrieved by the intended recipient through decryption. Boyd
and Mao argue convincingly that a more desirable way of proceeding is
to rely on the one-way service of cryptographic systems rather than the
secrecy service. Thus, hash functions can be used in order not to provide too much cryptanalytic information.
The second misconception relates to implementation — the choice of cryp-
tographic algorithm. The authors indicate that the use of cipher block
chaining for all cryptographic services in authentication protocols may be
dangerous and give an example of how a “cut and paste” attack can be
mounted on the Otway-Rees protocol.
The third point attacked by the authors is the misuse of redundancy, which
can lead to significant provision of cryptanalytic information.
The authors state that the use of a single notation for all cryptographic
services gives a lack of precision that has lead to many weaknesses and
provide a notation that distinguishes between encryption for confidential-
ity and one way services. Their method requires that only secret data be
subject to confidentiality encryption.
Overall, the paper is well-written, varied in its scope and very useful.
too. The way CBC works allows Ci and Pi ⊕ C(i-1) to be ciphertext plaintext pairs. Since the session key is different in each run, this merely ensures that different runs allow different plaintext ciphertext pairs to be created.
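The point being made is that CBC computes Ci = E(K : Pi ⊕ C(i-1)), so anyone who knows the plaintext of a run also learns input/output pairs for the underlying block cipher. A Python sketch, with a keyed hash standing in for the block cipher since only the forward direction is needed to exhibit the pair:

# CBC gives away block-cipher input/output pairs: Ci = E(K : Pi XOR Ci-1),
# so (Pi XOR Ci-1, Ci) is known to anyone who knows the plaintext.
import hashlib, secrets

def block_E(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher (not invertible, but that does not matter here).
    return hashlib.sha256(key + block).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)
iv  = secrets.token_bytes(16)
p   = [b"known plaintext1", b"known plaintext2"]      # 16-byte blocks

c, prev = [], iv
for block in p:
    prev = block_E(key, xor(block, prev))             # CBC encryption
    c.append(prev)

# An observer who knows p, iv and c now holds an (input, output) pair for E:
pair_input, pair_output = xor(p[1], c[0]), c[1]
assert block_E(key, pair_input) == pair_output        # confirmation only
print("known block-cipher pair:", pair_input.hex(), "->", pair_output.hex())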
A description of the KryptoKnight authentication system is then given.
This too is subject to an attack generating plaintext ciphertext pairs. Reme-
dies are provided. In one a nonce is exchanged encrypted which then forms
part of the MAC generation key. There is effectively a one-time channel
and so only one plaintext ciphertext pair is possible on each run. Another
scheme using exponentiation key exchange is given.
This paper gives a description of the Interrogator tool. The user guide [83]
is a better place to find detailed information.
A few example consequences of the axioms are then given. The text con-
cludes that there is a need for quantification to be included in the logic.
Future plans include also the use of nested unless predicates. Combining
the logic with a temporal logic is the final suggestion. Currently no tool
support is available for the logic (but the authors consider it essential).
The characterisation seems rather complex and as indicated above the ac-
tual axioms might usefully be examined. But there is a logic at work here
with a sound semantics. An important paper, and worth a read.
[87] Roger Needham and Michael Schroeder. Using Encryption for Au-
thentication in Large Networks of Computers. Communications of the
ACM, 21(12), December 1978.
One of the classic authentication papers. In this paper the authors ad-
dress issues of establishing interactive communication between principals,
authenticated one-way communication (for example, mail systems) and
signed communication.
Two protocols for establishing interactive communication are presented:
one using conventional symmetric key encryption and the other using
public key encryption. The former contains the classic replay flaw [42]. It
is usually referred to as “The Needham Schroeder Protocol” but this could
equally apply to any of the three protocols presented in this paper. The au-
thors were aware of many possible attacks and provide a rationale for their
protocols. The public key protocol has recently been shown to be suscepti-
ble to a parallel session attack by Gavin Lowe. The final protocol described
is a means of obtaining digital signatures via a third party using symmetric
key encryption.
The authors assume that keys are not readily discoverable by exhaustive search
or cryptanalysis. As we were later to find out, more restrictive assumptions
would be needed.
The paper ends with the well-known quote:
In this paper the authors revisit their famous conventional (symmetric) key
authentication protocol and show how an extra exchange between the two
authenticating principals can be used to overcome the freshness deficiency
identified by Denning and Sacco (who overcame the problem by the use
of timestamps) [42]. The extra exchange includes a nonce from the second
principal B to be provided to the authentication server. This is then in-
cluded by the authentication server in the authentication ticket passed to
B as part of the protocol, thereby ensuring freshness.
The paper criticises the application of BAN logic [26] to the KSL protocol
[68], stating that it violates the notion of freshness. Further modifications to
this protocol are suggested. The tradeoffs involved in adopting particular
approaches to authentication (e.g. timestamps or nonces) are examined.
The protocol has certain flaws as exposed by Hwang et al [60].
[94] Bill Roscoe and Paul Gardiner. Security Modelling in CSP and FDR:
Final Report. Technical report, Formal Systems Europe, oct 1995.
This report summarises how CSP can be used to model principals’ be-
haviour in security protocols. Authentication is couched as a refinement
problem and the refinement checker FDR is used to carry out a state space
exploration to determine whether a proposed ’implementation’ actually
satisfies the specification of authentication. This report is the initial out-
put of the Formal Systems work indicating that CSP/FDR could be a very
promising means of analysing security protocols. The work was carried
out under the Strategic Research Plan managed by the DERA.
[96] Peter Ryan. The Design and Verification of Security Protocols. Tech-
nical report, Defence Evaluation Research Agency, August 1996.
This report provides a readable summary of the modelling of authentica-
tion protocols using CSP and how the FDR tool can be used to verify that a
protocol is secure. The approach models principals, a general intruder and
the network medium as communicating CSP processes. The approach has
the merit that several themes of security can be addressed (confidentiality,
integrity, availability, etc). Examples of where the approach has discovered
new attacks are given. The considerable promise shown by the approach
is indicated but at the same time current limitations and possible develop-
ments are clearly addressed and discussed.
[98] B. Schneier. The IDEA Encryption Algorithm. Dr. Dobb’s Journal,
pages 50–56, December 1993.
This article provides a good, easily understood description of the IDEA
conventional key encryption algorithm. This is a 64-bit block algorithm
with a 128-bit key. The main diagram seems slightly out of kilter with the
text though.
Software implementations are about 1.5 to 2 times as fast as correspond-
ing DES implementations. The author cites a VLSI implementation that
encrypts at 177MBits/s when clocked at 35MHz.
[100] G.J. Simmons. How to Insure that Data Acquired to Verify Treaty
Compliance are Trustworthy. Proceedings of the IEEE, 76(5):621–627,
May 1988.
A cold war summary paper! Rather less flippantly, Simmons provides a
readable account on the work of treaty verification (for nuclear tests) car-
ried out at Sandia Laboratories. Descriptions are given of both symmetric
key and public key approaches.
[101] Miles E. Smid and Dennis K. Branstad. The Data Encryption Stan-
dard: Past and Future. Proceedings of the IEEE, 76(5):550–559, May
1988.
This is a good paper to read. It examines the history of DES, why it was
produced, who were the major stakeholders and how it was taken up by
various bodies. An overview of its applications is given. Not much techni-
cal information but a good overview of the state of play in 1988.
protocols. A class of protocols is introduced called terminating protocols and it is shown that Dan Nessett’s flawed protocol [89] belongs to this class.
The author is concerned with placing the use of logics for analysing secu-
rity protocols on a formal footing. In particular there is a worry that the
capabilities and limitations of particular logics are not really understood
and that some tool is needed to allow analysis of the logics. He proposes
possible worlds semantics as that tool.
Section 4: Semantics The author examines the role of semantics. One
of the major roles of a semantics is to give a means of evaluating log-
ics. Generally, we would want to show soundness and completeness. Al-
though soundness is often the principal concern, for security applications
completeness is seen as being of “utmost importance”. A formal semantics
provides a precise structure with respect to which such completeness and
soundness can be proven. The author argues that if a semantics takes its
structure directly from the logic, then no assurance is gained about the ad-
equacy of the logic from soundness or completeness theorems (indeed they
should be trivial). A further view on formal semantics is that it supplies an
alternative view (diversity).
The author introduces possible world semantics offering this as a means of
exposing the Nessett flaw.
Overall: The paper is well written and is well worth a read.
useful taxonomies indicates perhaps that protocol development and anal-
ysis has come of age?
[110] Paul Syverson and Catherine Meadows. A Logical Language for Specifying Cryptographic Protocol Requirements. In Proceedings of the
1993 IEEE Symposium on Research in Security and Privacy, pages 165–
177. IEEE Computer Society Press, May 1993.
As yet unread. Here for completeness.
This paper addresses in considerable detail a design for a pipelined key-
search machine for DES. A very good paper. Probably the most detailed
hardware paper to reach a conference ever!
[114] Michael Willet. Cryptography Old and New. Computers and Security,
Vol 1:177–186, 1982.
This is a simple introduction to cryptography assuming no maths whatso-
ever. It gives a good introductory account to the history of cryptography
and introduces various types of encryption algorithm. It is of course a lit-
tle dated. It takes the reader from Caesar ciphers to DES and public key
cryptography (but no details on the latter).
by application to a mutual authentication protocol. It would appear that
there is a problem with the description given in this paper. The transition
to insecurity occurs in the step before the one identified by the authors.
Effectively a parallel session attack can be mounted to enable a malicious
agent to start and complete an authentication exchange with a server with-
out the principal whose identity he claims knowing that such an authenti-
cation has taken place.
[119] Alec F Yasinsac and William A Wulf. A Formal Semantics for Evalu-
ating Cryptographic Protocols. The paper has been superseded by
a later version published in the IEEE Symposium on Security and
Privacy 1994., 1993.
This paper provides a good introduction to some relevant issues for au-
thentication protocols. It provides an overview of the historic development
of protocols. After supplying an appraisal (albeit brief) of the capabilities
of current approaches to protocol verification the authors go on to suggest
an approach based on the notion of weakest precondition calculus. They
say:
The general idea is that protocol steps are viewed in terms of their results,
i.e. they are effectively state transformers, with the sending and receiving
of messages modelled as memory accesses.
A language CPAL (Cryptographic Protocol Analysis Language) is given
in which the goals of authentication can be specified. Whereas BAN logic
models the evolution of principals’ beliefs, the aim of the current paper
is to model the actions a user can take. CPAL provides a language to de-
scribe those actions (send/receive messages on a network, encryption and
decryption, creating keys, timestamps and nonces, cox/different comput-
ing functions and making comparisons and simple decisions). Note that
the approach taken is that a protocol does not require a notion of looping
(effectively only assignment and alternation are needed).
With the simplified execution model the idea is that the goals of the proto-
col are stated as a postcondition and then wp-calculus is used to derive the
preconditions for success of the protocols (i.e. the initial assumptions).
The appendix to this paper gives a description of the CPAL language
and an indication of its use to specify various protocols (Needham and
Schroeder Private Key Protocol, Denning and Sacco Private Key Protocol,
and the Otway and Rees Private Key Protocol).
The ideas expressed in this paper are useful but the particulars seem a little
light at the moment.