The Consensus Number of A Cryptocurrency: (Extended Version) Rachid Guerraoui Petr Kuznetsov Matteo Monti

This document presents research on determining the "consensus number" of cryptocurrencies. It introduces the concept of consensus number from distributed computing theory and examines whether consensus is truly necessary to prevent double spending in cryptocurrency systems. Through modeling cryptocurrency transactions as a concurrent object, the researchers show that a single-user cryptocurrency system has a consensus number of 1, meaning consensus is not required. They also find that a multi-user cryptocurrency system has a consensus number equal to the number of users that can withdraw from the same account simultaneously. This has implications for the design of more efficient cryptocurrency systems that do not require consensus.

The Consensus Number of a Cryptocurrency∗
(Extended Version)

Rachid Guerraoui, EPFL, Lausanne, Switzerland (rachid.guerraoui@epfl.ch)
Petr Kuznetsov, LTCI, Télécom Paris, IP Paris, Paris, France (petr.kuznetsov@telecom-paristech.fr)
Matteo Monti, EPFL, Lausanne, Switzerland (matteo.monti@epfl.ch)
Matej Pavlovič, EPFL, Lausanne, Switzerland (matej.pavlovic@epfl.ch)
Dragos-Adrian Seredinschi†, EPFL, Lausanne, Switzerland (dragos-adrian.seredinschi@epfl.ch)

arXiv:1906.05574v1 [cs.DC] 13 Jun 2019
ABSTRACT

Many blockchain-based algorithms, such as Bitcoin, implement a decentralized asset transfer system, often referred to as a cryptocurrency. As stated in the original paper by Nakamoto, at the heart of these systems lies the problem of preventing double-spending; this is usually solved by achieving consensus on the order of transfers among the participants. In this paper, we treat the asset transfer problem as a concurrent object and determine its consensus number, showing that consensus is, in fact, not necessary to prevent double-spending.

We first consider the problem as defined by Nakamoto, where only a single process—the account owner—can withdraw from each account. Safety and liveness need to be ensured for correct account owners, whereas misbehaving account owners might be unable to perform transfers. We show that the consensus number of an asset transfer object is 1. We then consider a more general k-shared asset transfer object where up to k processes can atomically withdraw from the same account, and show that this object has consensus number k.

We establish our results in the context of shared memory with benign faults, allowing us to properly understand the level of difficulty of the asset transfer problem. We also translate these results in the message passing setting with Byzantine players, a model that is more relevant in practice. In this model, we describe an asynchronous Byzantine fault-tolerant asset transfer implementation that is both simpler and more efficient than state-of-the-art consensus-based solutions. Our results are applicable to both the permissioned (private) and permissionless (public) setting, as normally their differentiation is hidden by the abstractions on top of which our algorithms are based.

CCS CONCEPTS

• Theory of computation → Distributed algorithms.

KEYWORDS

distributed computing, distributed asset transfer, blockchain, consensus

∗ This is an extended version of a conference article, comprising an additional section (Section 6). The conference version of this article appears in the proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC'19), July 29–August 2, 2019, Toronto, ON, Canada, https://doi.org/10.1145/3293611.3331589.
† This work has been supported in part by the European ERC Grant 339539 - AOC.

1 INTRODUCTION

The Bitcoin protocol, introduced in 2008 by Satoshi Nakamoto, implements a cryptocurrency: an electronic decentralized asset transfer system [38]. Since then, many alternatives to Bitcoin came to prominence. These include major cryptocurrencies such as Ethereum [47] or Ripple [40], as well as systems sparked from research or industry efforts such as Bitcoin-NG [18], Algorand [22], ByzCoin [32], Stellar [37], Hyperledger [4], Corda [26], or Solida [2]. Each alternative brings novel approaches to implementing decentralized transfers, and sometimes offers a more general interface (known as smart contracts [43]) than the original protocol proposed by Nakamoto. They improve over Bitcoin in various aspects, such as performance, energy-efficiency, or security.

A common theme in these protocols, whether they are for basic transfers [33] or smart contracts [47], is that they seek to implement a blockchain—a distributed ledger where all the transfers in the system are totally ordered. Achieving total order among multiple inputs (e.g., transfers) is fundamentally a hard task, equivalent to solving consensus [25, 27]. Consensus [19], a central problem in distributed computing, is known for its notorious difficulty. It has no deterministic solution in asynchronous systems if just a single participant can fail [19]. Partially synchronous consensus algorithms are tricky to implement correctly [1, 12, 15] and face tough trade-offs between performance, security, and energy-efficiency [5, 8, 23, 46]. Not surprisingly, the consensus module is a major bottleneck in blockchain-based protocols [26, 42, 46].

A close look at Nakamoto's original paper reveals that the central issue in implementing a decentralized asset transfer system (i.e., a cryptocurrency) is preventing double-spending, i.e., spending the same money more than once [38]. Bitcoin and numerous follow-up systems typically assume that total order—and thus consensus—is vital to preventing double-spending [20]. There seems to be a common belief, indeed, that a consensus algorithm is essential for implementing decentralized asset transfers [9, 23, 31, 38].

As our main result in this paper, we show that this belief is false. We do so by casting the asset transfer problem as a sequential object type and determining that it has consensus number 1 in Herlihy's hierarchy [27].¹

¹ The consensus number of an object type is the maximal number of processes that can solve consensus using only read-write shared memory and arbitrarily many objects of this type.
The intuition behind this result is the following. An asset transfer object maintains a set of accounts. Each account is associated with an owner process that is the only one allowed to issue transfers withdrawing from this account. Every process can however read the balance of any account.

The main insight here is that relating accounts to unique owners obviates the need for consensus. It is the owner that decides on the order of transfers from its own account, without the need to agree with any other process—thus the consensus number 1. Other processes only validate the owner's decisions, ensuring that causal relations across accounts are respected. We describe a simple asset transfer implementation using atomic-snapshot memory [3]. A withdrawal from an account is validated by relating the withdrawn amount with the incoming transfers found in the memory snapshot. Intuitively, as at most one withdrawal can be active on a given account at a time, it is safe to declare the validated operation as successful and post it in the snapshot memory.

We also present a natural generalization of our result to the setting in which multiple processes are allowed to withdraw from the same account. A k-shared asset-transfer object allows up to k processes to execute outgoing transfers from the same account. We prove that such an object has consensus number k and thus allows for implementing state machine replication (now often referred to as smart contracts) among the k involved processes using k-consensus objects [30]. We show that k-shared asset transfer has consensus number k by reducing it to k-consensus (known to have consensus number k) and reducing k-consensus to asset transfer.

Having established the relative ease of the asset transfer problem using the shared memory model, we also present a practical solution to this problem in the setting of Byzantine fault-prone processes communicating via message passing. This setting matches realistic deployments of distributed systems. We describe an asset transfer implementation that does not resort to consensus. Instead, the implementation relies on a secure broadcast primitive that ensures uniform reliable delivery with only weak ordering guarantees [35, 36], circumventing hurdles imposed by consensus. In the k-shared case, our results imply that to execute some form of smart contract involving k users, consensus is only needed among these k nodes and not among all nodes in the system. In particular, should these k nodes be faulty, the rest of the accounts will not be affected.

To summarize, we argue that treating the asset transfer problem as a concurrent data structure and measuring its hardness through the lens of distributed computing helps understand it and devise better solutions to it.

The rest of this paper is organized as follows. We first give the formal definition of the shared memory model and the asset transfer object type (Section 2). Then, we show that this object type has consensus number 1 (Section 3). Next, we generalize our result by proving that a k-shared asset transfer object has consensus number k (Section 4). Finally, we describe the implications of our results in the message passing model with Byzantine faults (Sections 5 and 6) and discuss related work (Section 7).

2 SHARED MEMORY MODEL AND ASSET-TRANSFER OBJECT TYPE

We now present the shared memory model (Section 2.1) and precisely define the problem of asset-transfer as a sequential object type (Section 2.2).

2.1 Definitions

Processes. We assume a set Π of N asynchronous processes that communicate by invoking atomic operations on shared memory objects. Processes are sequential—we assume that a process never invokes a new operation before obtaining a response from a previous one.

Object types. A sequential object type is defined as a tuple T = (Q, q0, O, R, ∆), where Q is a set of states, q0 ∈ Q is an initial state, O is a set of operations, R is a set of responses and ∆ ⊆ Q × Π × O × Q × R is a relation that associates a state, a process identifier and an operation to a set of possible new states and corresponding responses. We assume that ∆ is total on the first three elements.

A history is a sequence of invocations and responses, each invocation or response associated with a process identifier. A sequential history is a history that starts with an invocation and in which every invocation is immediately followed with a response associated with the same process. A sequential history is legal if its invocations and responses respect the relation ∆ for some sequence of state assignments.

Implementations. An implementation of an object type T is a distributed algorithm that, for each process and invoked operation, prescribes the actions that the process needs to take to perform it. An execution of an implementation is a sequence of events: invocations and responses of operations or atomic accesses to shared abstractions. The sequence of events at every process must respect the algorithm assigned to it.

Failures. Processes are subject to crash failures (we consider more general Byzantine failures in the next section). A process may halt prematurely, in which case we say that the process is crashed. A process is called faulty if it crashes during the execution. A process is correct if it is not faulty. All algorithms we present in the shared memory model are wait-free—every correct process eventually returns from each operation it invokes, regardless of an arbitrary number of other processes crashing or concurrently invoking operations.

Linearizability. For each pattern of operation invocations, the execution produces a history, i.e., a sequence of distinct invocations and responses, labelled with process identifiers and unique sequence numbers.

A projection of a history H to process p, denoted H|p, is the subsequence of elements of H labelled with p. An invocation o by a process p is incomplete in H if it is not followed by a response in H|p. A history is complete if it has no incomplete invocations. A completion of H is a history H̄ that is identical to H except that every incomplete invocation in H is either removed or completed by inserting a matching response somewhere after it.
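To make these definitions concrete, the following sketch (ours, not the paper's) models a sequential object type by its transition relation and checks a sequential history for legality. It uses the asset-transfer type that Section 2.2 defines formally below; all function names and the history encoding are illustrative assumptions.

```python
# Sketch: a sequential object type as a transition function (a functional
# rendering of the relation Delta), plus a legality check for a sequential
# history, encoded as (process, operation, response) triples.

def transfer_type(owners, initial):
    """Asset-transfer type: a state maps each account to its balance."""
    def step(state, process, op):
        # Returns (new_state, response), mirroring (q, p, o, q', r) in Delta.
        name, *args = op
        if name == "read":
            (a,) = args
            return state, state[a]          # read leaves balances untouched
        a, b, x = args                      # name == "transfer"
        if process in owners.get(a, set()) and state[a] >= x:
            new = dict(state)
            new[a] -= x
            new[b] += x
            return new, True                # owner with sufficient balance
        return state, False                 # non-owner or insufficient funds
    return dict(initial), step

def is_legal(history, state, step):
    """A sequential history is legal if every response matches Delta."""
    for process, op, response in history:
        state, expected = step(state, process, op)
        if response != expected:
            return False
    return True

state, step = transfer_type({"a": {1}}, {"a": 5, "b": 0})
h = [(1, ("transfer", "a", "b", 3), True),   # owner, enough balance
     (2, ("read", "b"), 3),
     (1, ("transfer", "a", "b", 4), False)]  # only 2 units left on a
print(is_legal(h, state, step))  # True
```

The `step` closure never mutates a shared state in place, so replaying a history is side-effect free; this matches the purely sequential character of the type definition.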
An invocation o1 precedes an invocation o2 in H, denoted o1 ≺H o2, if o1 is complete and the corresponding response r1 precedes o2 in H. Note that ≺H stipulates a partial order on invocations in H. A linearizable implementation (also said an atomic object) of type T ensures that for every history H it produces, there exists a completion H̄ and a legal sequential history S such that (1) for all processes p, H̄|p = S|p and (2) ≺H ⊆ ≺S.

Consensus number. The problem of consensus consists for a set of processes to propose values and decide on the proposed values so that no two processes decide on different values and every correct process decides. The consensus number of a type T is the maximal number of processes that can solve consensus using atomic objects of type T and read-write registers.

2.2 The asset transfer object type

Let A be a set of accounts and µ : A → 2^Π be an "owner" map that associates each account with a set of processes that are, intuitively, allowed to debit the account. We define the asset-transfer object type associated with A and µ as a tuple (Q, q0, O, R, ∆), where:

• The set of states Q is the set of all possible maps q : A → ℕ. Intuitively, each state of the object assigns each account its balance.
• The initialization map q0 : A → ℕ assigns the initial balance to each account.
• Operations and responses of the type are defined as O = {transfer(a, b, x) : a, b ∈ A, x ∈ ℕ} ∪ {read(a) : a ∈ A} and R = {true, false} ∪ ℕ.
• ∆ is the set of valid state transitions. For a state q ∈ Q, a process p ∈ Π, an operation o ∈ O, a response r ∈ R and a new state q′ ∈ Q, the tuple (q, p, o, q′, r) ∈ ∆ if and only if one of the following conditions is satisfied:
  – o = transfer(a, b, x) ∧ p ∈ µ(a) ∧ q(a) ≥ x ∧ q′(a) = q(a) − x ∧ q′(b) = q(b) + x ∧ ∀c ∈ A \ {a, b} : q′(c) = q(c) (all other accounts unchanged) ∧ r = true;
  – o = transfer(a, b, x) ∧ (p ∉ µ(a) ∨ q(a) < x) ∧ q′ = q ∧ r = false;
  – o = read(a) ∧ q = q′ ∧ r = q(a).

In other words, operation transfer(a, b, x) invoked by process p succeeds if and only if p is the owner of the source account a and account a has enough balance, and if it does, x is transferred from a to the destination account b. A transfer(a, b, x) operation is called outgoing for a and incoming for b; respectively, the x units are called outgoing for a and incoming for b. A transfer is successful if its corresponding response is true and failed if its corresponding response is false. Operation read(a) simply returns the balance of a and leaves the account balances untouched.

As in Nakamoto's original paper [38], we assume for the moment that an asset-transfer object has at most one owner per account: ∀a ∈ A : |µ(a)| ≤ 1. Later we lift this assumption and consider more general k-shared asset-transfer objects with arbitrary owner maps µ (Section 4). For the sake of simplicity, we also restrict ourselves to transfers with a single source account and a single destination account. However, our definition (and implementation) of the asset-transfer object type can trivially be extended to support transfers with multiple source accounts (all owned by the same sequential process) and multiple destination accounts.

3 ASSET TRANSFER HAS CONSENSUS NUMBER 1

In this section, we show that the asset-transfer type can be wait-free implemented using only read-write registers in a shared memory system with crash failures. Thus, the type has consensus number 1 [27].

Consider an asset-transfer object associated with a set of accounts A and an ownership map µ where ∀a ∈ A, |µ(a)| ≤ 1. Our implementation is described in Figure 1. Every process p is associated with a distinct location in an atomic snapshot object [3] storing the set of all successful transfer operations executed by p so far. Since each account is owned by at most one process, all outgoing transfers for an account appear in a single location of the atomic snapshot (associated with the owner process). This principle bears a similarity to the implementation of a counter object.

Recall that the atomic snapshot (AS) memory is represented as a vector of N shared variables that can be accessed with two atomic operations: update and snapshot. An update operation modifies the value at a given position of the vector and a snapshot returns the state of the whole vector. We implement the read and transfer operations as follows.

• To read the balance of an account a, the process simply takes a snapshot S and returns the initial balance plus the sum of incoming amounts minus the sum of all outgoing amounts. We denote this number by balance(a, S). As we argue below, the result is guaranteed to be non-negative, i.e., the operation is correct with respect to the type specification.
• To perform transfer(a, b, x), a process p, the owner of a, takes a snapshot S and computes balance(a, S). If the amount to be transferred does not exceed balance(a, S), we add the transfer operation to the set of p's operations in the snapshot object via an update operation and return true. Otherwise, the operation returns false.

Shared variables:
    AS, atomic snapshot, initially {⊥}^N
Local variables:
    opsp ⊆ A × A × ℕ, initially ∅

Upon transfer(a, b, x):
1   S = AS.snapshot()
2   if p ∉ µ(a) ∨ balance(a, S) < x then
3       return false
4   opsp = opsp ∪ {(a, b, x)}
5   AS.update(opsp)
6   return true

Upon read(a):
7   S = AS.snapshot()
8   return balance(a, S)

Figure 1: Wait-free implementation of asset-transfer: code for process p
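As a companion to Figure 1, here is a sequential Python sketch of the same logic (our illustration, not the paper's code). The atomic snapshot is simulated with a plain list of sets, so only the validation logic is modeled, not wait-free concurrency; in a faithful implementation, operations would also carry sequence numbers so that repeated identical transfers remain distinct.

```python
# Sequential sketch of Figure 1: each process owns one slot of a
# simulated atomic-snapshot object holding its successful transfers.
# A real implementation would use a wait-free atomic snapshot; the
# list below is only a stand-in for it.

N = 3
AS = [set() for _ in range(N)]          # AS[p]: transfers posted by process p
owner = {"a": 0, "b": 1}                # at most one owner per account
init = {"a": 10, "b": 0}                # initial balances q0

def balance(account, snapshot):
    # Initial balance plus successful incoming minus successful outgoing.
    incoming = sum(x for slot in snapshot for (src, dst, x) in slot
                   if dst == account)
    outgoing = sum(x for slot in snapshot for (src, dst, x) in slot
                   if src == account)
    return init[account] + incoming - outgoing

def transfer(p, a, b, x):
    S = [set(slot) for slot in AS]      # line 1: take a snapshot
    if owner.get(a) != p or balance(a, S) < x:
        return False                    # lines 2-3: not owner / insufficient
    AS[p].add((a, b, x))                # lines 4-5: post in p's slot
    return True                         # line 6

def read(a):
    return balance(a, [set(slot) for slot in AS])

print(transfer(0, "a", "b", 7))   # True: owner with enough balance
print(transfer(0, "a", "b", 7))   # False: only 3 left, double spend fails
print(read("b"))                  # 7
```

Note how the second, conflicting withdrawal fails purely from the owner's own snapshot, without any agreement with other processes; this is the operational content of the consensus-number-1 claim.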
Theorem 1. The asset-transfer object type has a wait-free implementation in the read-write shared memory model.

Proof. Fix an execution E of the algorithm in Figure 1. Atomic snapshots can be wait-free implemented in the read-write shared memory model [3]. As every operation only involves a finite number of atomic snapshot accesses, every process completes each of the operations it invokes in a finite number of its own steps.

Let Ops be the set of:
• All invocations of transfer or read in E that returned, and
• All invocations of transfer in E that completed the update operation (line 5).

Let H be the history of E. We define a completion of H and, for each o ∈ Ops, we define a linearization point as follows:
• If o is a read operation, it linearizes at the linearization point of the snapshot operation in line 7.
• If o is a transfer operation that returns false, it linearizes at the linearization point of the snapshot operation in line 1.
• If o is a transfer operation that completed the update operation, it linearizes at the linearization point of the update operation in line 5. If o is incomplete in H, we complete it with response true.

Let H̄ be the resulting complete history and let L be the sequence of complete invocations of H̄ in the order of their linearization points in E. Note that, by the way we linearize invocations, the linearization of a prefix of E is a prefix of L.

Now we show that L is legal and, thus, H is linearizable. We proceed by induction, starting with the empty (trivially legal) prefix of L. Let Lℓ be the legal prefix of the first ℓ invocations and op be the (ℓ+1)st operation of L. Let op be invoked by process p. The following cases are possible:
• op is a read(a): the snapshot taken at the linearization point of op contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the resulting balance is non-negative.
• op is a failed transfer(a, b, x): the snapshot taken at the linearization point of op contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the resulting balance is non-negative.
• op is a successful transfer(a, b, x): by the algorithm, before the linearization point of op, process p took a snapshot. Let Lk, k ≤ ℓ, be the prefix of Lℓ that only contains operations linearized before the point in time when the snapshot was taken by p.
  We observe that Lk includes a subset of all incoming transfers on a and all outgoing transfers on a in Lℓ. Indeed, as p is the owner of a and only the owner of a can perform outgoing transfers on a, all outgoing transfers in Lℓ were linearized before the moment p took the snapshot within op. Thus, balance(a, Lk) ≤ balance(a, Lℓ).²
  By the algorithm, as op = transfer(a, b, x) succeeds, we have balance(a, Lk) ≥ x. Thus, balance(a, Lℓ) ≥ x and the resulting balance in Lℓ+1 is non-negative.

Thus, H is linearizable. □

² Analogously to balance(a, S) that computes the balance for account a based on the transfers contained in snapshot S, balance(a, L), if L is a sequence of operations, computes the balance of account a based on all transfers in L.

Shared variables:
    R[i], i ∈ {1, ..., k}, k registers, initially R[i] = ⊥, ∀i
    AT, k-shared asset-transfer object containing:
        – an account a with initial balance 2k, owned by processes 1, ..., k
        – some account s

Upon propose(v):
1   R[p].write(v)
2   AT.transfer(a, s, 2k − p)
3   return R[AT.read(a)].read()

Figure 2: Wait-free implementation of consensus among k processes using a k-shared asset-transfer object. Code for process p ∈ {1, ..., k}.

Corollary 1. The asset-transfer object type has consensus number 1.

4 k-SHARED ASSET TRANSFER HAS CONSENSUS NUMBER k

We now consider the case with an arbitrary owner map µ. We show that an asset-transfer object's consensus number is the maximal number of processes sharing an account. More precisely, the consensus number of an asset-transfer object is max_{a∈A} |µ(a)|.

We say that an asset-transfer object, defined on a set of accounts A with an ownership map µ, is k-shared iff max_{a∈A} |µ(a)| = k. In other words, the object is k-shared if µ allows at least one account to be owned by k processes, and no account is owned by more than k processes.

We show that the consensus number of any k-shared asset-transfer object is k, which generalizes our result in Corollary 1. We first show that such an object has consensus number at least k by implementing consensus for k processes using only registers and an instance of k-shared asset-transfer. We then show that k-shared asset-transfer has consensus number at most k by reducing it to k-consensus, an object known to have consensus number k [30].

Lemma 1. Consensus has a wait-free implementation for k processes in the read-write shared memory model equipped with a single k-shared asset-transfer object.

Proof. We now provide a wait-free algorithm that solves consensus among k processes using only registers and an instance of k-shared asset-transfer. The algorithm is described in Figure 2. Intuitively, k processes use one shared account a to elect one of them whose input value will be decided. Before a process p accesses the shared account, p announces its input in a register (line 1). Process p then tries to perform a transfer from account a to another account. The amount withdrawn this way from account a is chosen specifically such that:
(1) only one transfer operation can ever succeed, and
(2) if the transfer succeeds, the remaining balance on a will uniquely identify process p.

To satisfy the above conditions, we initialize the balance of account a to 2k and have each process p ∈ {1, ..., k} transfer 2k − p (line 2). Note that transfer operations invoked by distinct processes p, q ∈
{1, ..., k} have arguments 2k − p and 2k − q, and 2k − p + 2k − q ≥ 2k − k + 2k − (k − 1) = 2k + 1. The initial balance of a is only 2k and no incoming transfers are ever executed. Therefore, the first transfer operation to be applied to the object succeeds (no transfer tries to withdraw more than 2k) and the remaining operations will have to fail due to insufficient balance.

When p reaches line 3, at least one transfer must have succeeded:
(1) either p's transfer succeeded, or
(2) p's transfer failed due to insufficient balance, in which case some other process must have previously succeeded.

Let q be the process whose transfer succeeded. Thus, the balance of account a is 2k − (2k − q) = q. Since q performed a transfer operation, by the algorithm, q must have previously written its proposal to the register R[q]. Regardless of whether p = q or p ≠ q, reading the balance of account a returns q and p decides the value of R[q]. □

To prove that k-shared asset-transfer has consensus number at most k, we reduce k-shared asset-transfer to k-consensus. A k-consensus object exports a single operation propose that, the first k times it is invoked, returns the argument of the first invocation. All subsequent invocations return ⊥. Given that k-consensus is known to have consensus number exactly k [30], a wait-free algorithm implementing k-shared asset-transfer using only registers and k-consensus objects implies that the consensus number of k-shared asset-transfer is not more than k.

Shared variables:
    AS, atomic snapshot object
    for each a ∈ A:
        Ra[i], i ∈ Π, registers, initially [⊥, ..., ⊥]
        kCa[i], i ≥ 0, list of instances of k-consensus objects

Local variables:
    hist: a set of completed transfers, initially empty
    for each a ∈ A:
        committeda, initially ∅
        rounda, initially 0

Upon transfer(a, b, x):
1   if p ∉ µ(a) then
2       return false
3   tx = (a, b, x, p, rounda)
4   Ra[p].write(tx)
5   collected = collect(a) \ committeda
6   while tx ∈ collected do
7       req = the oldest transfer in collected
8       prop = proposal(req, AS.snapshot())
9       decision = kCa[rounda].propose(prop)
10      hist = hist ∪ {decision}
11      AS.update(hist)
12      committeda = committeda ∪ {t : decision = (t, ∗)}
13      collected = collected \ committeda
14      rounda = rounda + 1
15  if (tx, success) ∈ hist then
16      return true
17  else
18      return false

Upon read(a):
19  return balance(a, AS.snapshot())

collect(a):
20  collected = ∅
21  for all i ∈ Π do
22      if Ra[i].read() ≠ ⊥ then
23          collected = collected ∪ {Ra[i].read()}
24  return collected

proposal((a, b, x, q, r), snapshot):
25  if balance(a, snapshot) ≥ x then
26      prop = ((a, b, x, q, r), success)
27  else
28      prop = ((a, b, x, q, r), failure)
29  return prop

balance(a, snapshot):
30  incoming = {tx : tx = (∗, a, ∗, ∗, ∗) ∧ (tx, success) ∈ snapshot}
31  outgoing = {tx : tx = (a, ∗, ∗, ∗, ∗) ∧ (tx, success) ∈ snapshot}
32  return q0(a) + Σ_{(∗,a,x,∗,∗)∈incoming} x − Σ_{(a,∗,x,∗,∗)∈outgoing} x

Figure 3: Wait-free implementation of a k-shared asset-transfer object using k-consensus objects. Code for process p.

The algorithm reducing k-shared asset-transfer to k-consensus is given in Figure 3. Before presenting a formal correctness argument, we first informally explain the intuition of the algorithm. In our reduction, we associate a series of k-consensus objects with every account a. Up to k owners of a use the k-consensus objects to agree on the order of outgoing transfers for a.

We maintain the state of the implemented k-shared asset-transfer object using an atomic snapshot object AS. Every process p uses a distinct entry of AS to store a set hist. hist is a subset of all completed outgoing transfers from accounts that p owns (and thus is allowed to debit). For example, if p is the owner of accounts d and e, p's hist contains outgoing transfers from d and e. Each element in the hist set is represented as ((a, b, x, s, r), result), where a, b, and x are the respective source account, destination account, and the amount transferred, s is the originator of the transfer, and r is the round in which the transfer was invoked by the originator. The value of result ∈ {success, failure} indicates whether the transfer succeeds or fails. A transfer becomes "visible" when any process inserts it in its corresponding entry of AS.

To read the balance of account a, a process takes a snapshot of AS, and then sums the initial balance q0(a) and amounts of all successful incoming transfers, and subtracts the amounts of successful outgoing transfers found in AS. We say that a successful transfer tx is in a snapshot AS (denoted by (tx, success) ∈ AS) if there exists an entry e in AS such that (tx, success) ∈ AS[e].

To execute a transfer o outgoing from account a, a process p first announces o in a register Ra that can be written by p and read by any other process. This enables a "helping" mechanism needed to ensure wait-freedom to the owners of a [27].

Next, p collects the transfers proposed by other owners and tries to agree on the order of the collected transfers and their results
using a series of k-consensus objects. For each account, the agreement on the order of transfer-result pairs proceeds in rounds. Each round is associated with a k-consensus object which p invokes with a proposal chosen from the set of collected transfers. Since each process, in each round, only invokes the k-consensus object once, no k-consensus object is invoked more than k times and thus each invocation returns a value (and not ⊥).

A transfer-result pair as a proposal for the next instance of k-consensus is chosen as follows. Process p picks the "oldest" collected but not yet committed operation (based on the round number round_a attached to the transfer operation when a process announces it; ties are broken using process IDs). Then p takes a snapshot of AS and checks whether account a has sufficient balance according to the state represented by the snapshot, and equips the transfer with a corresponding success/failure flag. The resulting transfer-result pair constitutes p's proposal for the next instance of k-consensus. The currently executed transfer by process p returns as soon as it is decided by a k-consensus object, the flag of the decided value (success/failure) indicating the transfer's response (true/false).

Lemma 2. The k-shared asset-transfer object type has a wait-free implementation in the read-write shared memory model equipped with k-consensus objects.

Proof. We essentially follow the steps of the proof of Theorem 1. Fix an execution E of the algorithm in Figure 3. Let H be the history of E.

To perform a transfer o on an account a, p registers it in R_a[p] (line 4) and then proceeds through a series of k-consensus objects, each time collecting R_a to learn about the transfers concurrently proposed by other owners of a. Recall that each k-consensus object is wait-free. Suppose, by contradiction, that o is registered in R_a but is never decided by any instance of k-consensus. Eventually, however, o becomes the request with the lowest round number in R_a and, thus, some instance of k-consensus will only be accessed with o as a proposed value (line 9). By validity of k-consensus, this instance will return o and, thus, p will be able to complete o.

Let Ops be the set of all complete operations and all transfer operations o such that some process completed the update operation (line 11) in E with an argument including o (the atomic snapshot and k-consensus operations have been linearized). Intuitively, we include in Ops all operations that took effect, either by returning a response to the user or by affecting other operations. Recall that every such transfer operation was agreed upon in an instance of k-consensus; let it be kC_o. Therefore, for every such transfer operation o, we can identify the process q_o whose proposal has been decided in that instance.

We now determine a completion of H and, for each o ∈ Ops, we define a linearization point as follows:
• If o is a read operation, it linearizes at the linearization point of the snapshot operation (line 19).
• If o is a transfer operation that returns false, it linearizes at the linearization point of the snapshot operation (line 8) performed by q_o just before it invoked kC_o.propose().
• If o is a transfer operation that some process included in the update operation (line 11), it linearizes at the linearization point of the first update operation in H (line 11) that includes o. Furthermore, if o is incomplete in H, we complete it with response true.

Let H̄ be the resulting complete history and let L be the sequence of complete operations of H̄ in the order of their linearization points in E. Note that, by the way we linearize operations, the linearization of a prefix of E is a prefix of L. Also, by construction, the linearization point of an operation belongs to its interval.

Now we show that L is legal and, thus, H is linearizable. We proceed by induction, starting with the empty (trivially legal) prefix of L. Let L_ℓ be the legal prefix of the first ℓ operations and op be the (ℓ + 1)st operation of L. Let op be invoked by process p. The following cases are possible:
• op is a read(a): the snapshot taken at op's linearization point contains all successful transfers concerning a in L_ℓ. By the induction hypothesis, the resulting balance is non-negative.
• op is a failed transfer(a, b, x): the snapshot taken at the linearization point of op contains all successful transfers concerning a in L_ℓ. By the induction hypothesis, the balance corresponding to this snapshot is non-negative. By the algorithm, the balance is less than x.
• op is a successful transfer(a, b, x). Let L_s, s ≤ ℓ, be the prefix of L_ℓ that only contains operations linearized before the moment of time when q_o has taken the snapshot just before accessing kC_o. As, before accessing kC_o, q_o went through all preceding k-consensus objects associated with a and put the decided values in AS, L_s must include all outgoing transfer operations for a. Furthermore, L_s includes a subset of all incoming transfers on a. Thus, balance(a, L_s) ≤ balance(a, L_ℓ). By the algorithm, as op = transfer(a, b, x) succeeds, we have balance(a, L_s) ≥ x. Thus, balance(a, L_ℓ) ≥ x and the resulting balance in L_{ℓ+1} is non-negative.

Thus, H is linearizable. □

Theorem 2. A k-shared asset-transfer object has consensus number k.

Proof. It follows directly from Lemma 1 that k-shared asset-transfer has consensus number at least k. Moreover, it follows from Lemma 2 that k-shared asset-transfer has consensus number at most k. Thus, the consensus number of k-shared asset-transfer is exactly k. □

5 ASSET TRANSFER IN MESSAGE PASSING
We established our theoretical results in a shared memory system with crash failures, proving that consensus is not necessary for implementing an asset transfer system. Moreover, a natural generalization of such a system where up to k processes have access to atomic operations on the same account has consensus number k. These results help us understand the level of difficulty of certain problems in the domain of cryptocurrencies. To achieve a practical impact, however, we need an algorithm deployable as a distributed system in a realistic setting. Arguably, such a setting is one where
The Consensus Number of a Cryptocurrency

processes (some of which are potentially malicious) communicate by exchanging messages.

In this section we overview an extension of our results to the message passing system with Byzantine failures. Instead of consensus, we rely on a secure broadcast primitive that provides reliable delivery with weak (weaker than FIFO) ordering guarantees [36]. Using secure broadcast, processes announce their transfers to the rest of the system. We establish dependencies among these transfers that induce a partial order. Using a method similar to (a weak form of) vector clocks [29], we make sure that each process applies the transfers respecting this dependency-induced partial order. In a nutshell, a transfer only depends on all previous transfers outgoing from the same account, and on a subset of transfers incoming to that account. Each transfer operation corresponds to one invocation of secure broadcast by the corresponding account's owner. The message being broadcast carries, in addition to the transfer itself, references to the transfer's dependencies.

As secure broadcast only provides liveness if the sender is correct, faulty processes might not be able to perform any transfers. However, due to secure broadcast's delivery properties, the correct processes will always have a consistent view of the system state.

Every transfer operation only entails a single invocation of secure broadcast, and our algorithm does not send any additional messages. Our algorithm inherits its complexity from the underlying secure broadcast implementation, and there are plenty of such algorithms optimizing complexity metrics for various settings [10, 11, 21, 25, 35, 36, 45]. In practice, as shown by a preliminary deployment based on a naive quadratic secure broadcast implementation [10] in a medium-sized system (up to 100 processes), our solution outperforms a consensus-based one by 1.5x to 6x in throughput and by up to 2x in latency.

The implementation can be further extended to solve the k-shared asset transfer problem. As we showed in Section 4, agreement among a subset of the processes is necessary in such a case. We associate each account (owned by up to k processes) with a Byzantine fault-tolerant state machine replication (BFT) service executed by the owners [13] of that account. The BFT service assigns sequence numbers to transfers, which the processes then submit to an extended version of the above-mentioned transfer protocol. As long as the replicated state machine is safe and live, we guarantee that every invoked transfer operation eventually returns. If an account becomes compromised (i.e., the safety or liveness of the BFT is violated), only the corresponding account might lose liveness. In other words, outgoing transfers from the compromised account may not return, while safety and liveness of transfers from "healthy" accounts are always guaranteed. We describe this extension in more detail later (Section 6).

In the rest of this section, we give details on the Byzantine message passing model, adapt our asset-transfer object accordingly (Sec. 5.1) and present its broadcast-based implementation (Sec. 5.2).

5.1 Byzantine Message Passing Model
A process is Byzantine if it deviates from the algorithm it is assigned, either by halting prematurely, in which case we say that the process is crashed, or by performing actions that are not prescribed by its algorithm, in which case we say that the process is malicious. Malicious processes can perform arbitrary actions, except for ones that involve subverting cryptographic primitives (e.g., inverting secure hash functions). A process is called faulty if it is either crashed or malicious. A process is correct if it is not faulty, and benign if it is not malicious. Note that every correct process is benign, but not necessarily vice versa.

We only require that the transfer system behaves correctly towards benign processes, regardless of the behavior of Byzantine ones. Informally, we want to require that no benign process can be a victim of a double-spending attack, i.e., every execution appears to benign processes as a correct sequential execution, respecting the original execution's real-time ordering [27].

For the sake of efficiency, in our algorithm, we slightly relax the last requirement, while still preventing double-spending. We require that successful transfer operations invoked by benign processes constitute a legal sequential history that preserves the real-time order. A read or a failed transfer operation invoked by a benign process p can be "outdated": it can be based on a stale state of p's balance. Informally, one can view the system requirements as linearizability [28] for successful transfers and sequential consistency [6] for failed transfers and reads. One can argue that this relaxation incurs little impact on the system's utility, since all incoming transfers are eventually applied. As progress (liveness) guarantees, we require that every operation invoked by a correct process eventually completes.

Definition 1. Let E be any execution of an implementation and H be the corresponding history. Let ops(H) denote the set of operations in H that were executed by correct processes in E. An asset-transfer object in message passing guarantees that each invocation issued by a correct process is followed by a matching response in H, and that there exists H̄, a completion of H, such that:
(1) Let H̄_t denote the sub-history of successful transfers of H̄ performed by correct processes and ≺_t be the subset of ≺_H̄ restricted to operations in H̄_t. Then there exists a legal sequential history S such that (a) for every correct process p, H̄_t|p = S|p, and (b) ≺_t ⊆ ≺_S.
(2) For every correct process p, there exists a legal sequential history S_p such that:
• ops(H̄) ⊆ ops(S_p), and
• S_p|p = H̄|p.

Notice that property (2) implies that every update in H that affects the account of a correct process p is eventually included in p's "local" history and, therefore, will be reflected in reads and transfer operations subsequently performed by p.

5.2 Asset Transfer Implementation in Message Passing
Instead of consensus, we rely on a secure broadcast primitive that is strictly weaker than consensus and has a fully asynchronous implementation. It provides uniform reliable delivery despite Byzantine faults and so-called source order among delivered messages. The source order property, being even weaker than FIFO, guarantees that messages from the same source are delivered in the same order by all correct processes. More precisely, the secure broadcast

primitive we use in our implementation has the following properties [36]:
• Integrity: A benign process delivers a message m from a process p at most once and, if p is benign, only if p previously broadcast m.
• Agreement: If processes p and q are correct and p delivers m, then q delivers m.
• Validity: If a correct process p broadcasts m, then p delivers m.
• Source order: If p and q are benign and both deliver m from r and m′ from r, then they do so in the same order.

Operation. To perform a transfer tx, a process p securely broadcasts a message with the transfer details: the arguments of the transfer operation (see Section 2.2) and some metadata. The metadata includes a per-process sequence number of tx and references to the dependencies of tx. The dependencies are transfers incoming to p that must be known to any process before applying tx. These dependencies impose a causal relation between transfers that must be respected when transfers are being applied. For example, suppose that process p makes a transfer tx to process q and q, after observing tx, performs another transfer tx′ to process r. q's broadcast message will contain tx′, a local sequence number, and a reference to tx. Any process (not only r) will only evaluate the validity of tx′ after having applied tx. This approach is similar to using vector clocks for implementing causal order among events [29].

To ensure the authenticity of operations, so that no process is able to debit another process's account, we assume that processes sign all their messages before broadcasting them. In practice, similar to Bitcoin and other transfer systems, every process p possesses a public-private key pair that allows only p to securely initiate transfers from its corresponding account. For simplicity of presentation, we omit this mechanism in the algorithm pseudocode.

Figure 4 describes the full algorithm implementing asset transfer in a Byzantine-prone message passing system. Each process p maintains, for each process q, an integer seq[q] reflecting the number of transfers which process q initiated and which process p has validated and applied. Process p also maintains, for every process q, an integer rec[q] reflecting the number of transfers process q has initiated and process p has delivered (but not necessarily applied).

Additionally, there is also a list hist[q] of transfers which involve process q. We say that a transfer operation involves a process q if that transfer is either outgoing or incoming on the account of q. Each process p also maintains a local variable deps. This is the set of transfers incoming to p that p has applied since the last successful outgoing transfer. Finally, the set toValidate contains delivered transfers that are pending validation (i.e., have been delivered, but not yet validated).

To perform a transfer operation, process p first checks the balance of its own account and, if the balance is insufficient, returns false (line 3). Otherwise, process p broadcasts a message with this operation via the secure broadcast primitive (line 4). This message includes the three basic arguments of a transfer operation as well as seq[p]+1 and the dependencies deps. Each correct process in the system eventually delivers this message via secure broadcast (line 8). Note that, given the assumption of no process executing more than one concurrent transfer, every process waits for delivery of its own message before initiating another broadcast. This effectively turns the source order property of secure broadcast into FIFO order. Upon delivery, process p checks this message for well-formedness (lines 9 and 10), and then adds it to the set of messages pending validation. We explain the validation procedure later.

Once a transfer passes validation (the predicate in line 13 is satisfied), process p applies this transfer on the local state. Applying a transfer means that process p adds this transfer and its dependencies to the history of the outgoing (line 15) account. If the transfer is incoming for the local process p, it is also added to deps, the set of current dependencies for p (line 18). If the transfer is outgoing for p, i.e., it is the currently pending transfer operation invoked by p, then the response true is returned (line 20).

Local variables:
  seq[], initially seq[q] = 0, ∀q          {Number of validated transfers outgoing from q}
  rec[], initially rec[q] = 0, ∀q          {Number of delivered transfers from q}
  hist[], initially hist[q] = ∅, ∀q        {Set of validated transfers involving q}
  deps, initially ∅                        {Set of last incoming transfers for account of local process p}
  toValidate, initially ∅                  {Set of delivered (but not validated) transfers}

1  operation transfer(a, b, x) where µ(a) = {p}
2    if balance(a, hist[p] ∪ deps) < x then
3      return false
4    broadcast([(a, b, x, seq[p] + 1), deps])
5    deps := ∅

6  operation read(a)
7    return balance(a, hist[a] ∪ deps)

8  upon deliver(q, m)
9    let m be [(q, d, y, s), h]
10   if s = rec[q] + 1 then
11     rec[q] := rec[q] + 1
12     toValidate := toValidate ∪ {(q, m)}

13 upon (q, [t, h]) ∈ toValidate ∧ Valid(q, t, h)
14   let t be (q, d, y, s)
15   hist[q] := hist[q] ∪ h ∪ {t}
16   seq[q] := s
17   if d = p then
18     deps := deps ∪ {(q, d, y, s)}
19   if q = p then
20     return true

21 function Valid(q, t, h)
22   let t be (c, d, y, s)
23   return (q = c)
24     and (s = seq[q] + 1)
25     and (balance(c, hist[q]) ≥ y)
26     and (∀(a, b, x, r) ∈ h : (a, b, x, r) ∈ hist[a])

27 function balance(a, h)
28   return sum of incoming minus outgoing transfers for account a in h

Figure 4: Consensusless transfer system based on secure broadcast. Code for every process p.
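To make the algorithm of Figure 4 concrete, the following compact, single-threaded Python simulation mirrors its logic for benign processes. This is an illustrative sketch, not the paper's artifact: secure broadcast is replaced by a direct, in-order delivery loop over a toy network, signatures are omitted, initial balances are seeded as hypothetical "genesis" credits into both parties' histories, and the balance check of line 25 also counts the reported dependency set h (which line 15 merges into hist[q]).

```python
from collections import defaultdict

class Process:
    """One benign process running the Figure 4 logic (simplified sketch)."""

    def __init__(self, pid, network, genesis=()):
        self.pid, self.network = pid, network    # network: pid -> Process
        self.seq = defaultdict(int)              # validated transfers per issuer
        self.rec = defaultdict(int)              # delivered transfers per issuer
        self.hist = defaultdict(set)             # hist[q]: validated transfers involving q
        self.deps = set()                        # incoming since last outgoing transfer
        self.to_validate = []                    # delivered but not yet validated
        for t in genesis:                        # seed credits for both parties (sketch only)
            self.hist[t[0]].add(t)
            self.hist[t[1]].add(t)

    def balance(self, a, h):                     # lines 27-28
        return (sum(x for (_, d, x, _) in h if d == a)
                - sum(x for (c, _, x, _) in h if c == a))

    def transfer(self, b, x):                    # lines 1-5
        a = self.pid
        if self.balance(a, self.hist[a] | self.deps) < x:
            return False
        t, h = (a, b, x, self.seq[a] + 1), frozenset(self.deps)
        self.deps = set()
        for q in self.network.values():          # stand-in for secure broadcast
            q.deliver(a, (t, h))
        return True                              # delivery is synchronous here (line 20)

    def read(self, a):                           # lines 6-7
        return self.balance(a, self.hist[a] | self.deps)

    def deliver(self, q, msg):                   # lines 8-12
        if msg[0][3] == self.rec[q] + 1:
            self.rec[q] += 1
            self.to_validate.append((q, msg))
        self._apply_validated()

    def _valid(self, q, t, h):                   # lines 21-26; balance counts h,
        c, _, y, s = t                           # which line 15 merges into hist[q]
        return (q == c and s == self.seq[q] + 1
                and self.balance(c, self.hist[q] | h) >= y
                and all(dep in self.hist[dep[0]] for dep in h))

    def _apply_validated(self):                  # lines 13-20
        progress = True
        while progress:
            progress = False
            for q, (t, h) in list(self.to_validate):
                if self._valid(q, t, h):
                    self.to_validate.remove((q, (t, h)))
                    self.hist[q] |= set(h) | {t}
                    self.seq[q] = t[3]
                    if t[1] == self.pid:         # incoming for the local process
                        self.deps.add(t)
                    progress = True

# Three processes, each seeded with 10 units from a "bank" account 0.
net = {}
genesis = [(0, i, 10, i) for i in (1, 2, 3)]
for i in (1, 2, 3):
    net[i] = Process(i, net, genesis)
assert net[1].transfer(2, 4)                     # 1 pays 4 to 2
assert net[1].read(1) == 6 and net[2].read(2) == 14
assert net[2].transfer(3, 12)                    # spends the incoming 4 via deps
assert net[3].read(3) == 22
assert not net[1].transfer(2, 100)               # insufficient balance
```

The example also exercises the dependency mechanism: process 2's transfer of 12 units is only valid at other processes because its broadcast carries the incoming transfer from process 1 in h.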

To perform a read(a) operation for account a, process p simply computes the balance of this account based on the local history hist[a] (line 28).

Before applying a transfer op from some process q, process p validates op via the Valid function (lines 21–26). To be valid, op must satisfy four conditions. The first condition is that process q (the issuer of transfer op) must be the owner of the outgoing account for op (line 23). Second, any preceding transfers that process q issued must have been validated (line 24). Third, the balance of account q must not drop below zero (line 25). Finally, the reported dependencies of op (encoded in h of line 26) must have been validated and must exist in the corresponding histories (line 26).

Lemma 3. In any infinite execution of the algorithm (Figure 4), every operation performed by a correct process eventually completes.

Proof. A transfer operation that fails or a read operation invoked by a correct process returns immediately (lines 3 and 7, respectively).

Consider a transfer operation T invoked by a correct process p that succeeds (i.e., passes the check in line 2), so that p broadcasts a message with the transfer details using secure broadcast (line 4). By the validity property of secure broadcast, p eventually delivers the message (via the secure broadcast callback, line 8) and adds it to the toValidate set. By the algorithm, this message includes a set deps of operations (called h in line 9) that involve p's account. This set includes transfers that process p delivered and validated after issuing the prior successful outgoing transfer (or since system initialization if there is no such transfer) but before issuing T (lines 4 and 5).

As process p is correct, it operates on its own account, respects the sequence numbers, and issues a transfer only if it has enough balance on the account. Thus, when it is delivered by p, T must satisfy the first three conditions of the Valid predicate (lines 23–25). Moreover, by the algorithm, all dependencies (labeled h in function Valid) included in T are in the history hist[p] and, thus, the fourth validation condition (line 26) also holds.

Thus, p eventually validates T and completes the operation by returning true in line 20. □

Theorem 3. The algorithm in Figure 4 implements an asset-transfer object type.

Proof. Fix an execution E of the algorithm and let H be the corresponding history.

Let V denote the set of all messages that were delivered (line 8) and validated (line 23) at correct processes in E. Every message m = [(q, d, y, s), h] ∈ V is put in hist[q] (line 15). We define an order ≼ ⊆ V × V as follows. For m = [(q, d, y, s), h] ∈ V and m′ = [(r, d′, y′, s′), h′] ∈ V, we have m ≼ m′ if and only if one of the following conditions holds:
• q = r and s < s′,
• (r, d′, y′, s′) ∈ h, or
• there exists m″ ∈ V such that m ≼ m″ and m″ ≼ m′.

By the source order property of secure broadcast (see Section 5.2), correct processes p and r deliver messages from any process q in the same order. By the algorithm in Figure 4, a message from q with a sequence number i is added by a correct process to the toValidate set only if the previous message from q added to toValidate had sequence number i − 1 (line 10). Furthermore, a message m = [(q, d, y, s), h] is validated at a correct process only if all messages in h have been previously validated (line 26). Therefore, ≼ is acyclic and thus can be extended to a total order.

Let S be the sequential history constructed from any such total order on messages in V in which every message m = [(q, d, y, s), h] is replaced with the invocation-response pair transfer(q, d, y); true.

By construction, every operation transfer(q, d, y) in S is preceded by a sequence of transfers that ensures that the balance of q does not drop below y (line 25). In particular, S includes all outgoing transfers from the account of q performed previously by q itself. Additionally, S may order some incoming transfer to q that did not appear in hist[q] before the corresponding (q, d, y, s) was added to it. But these "unaccounted" operations may only increase the balance of q and, thus, it is indeed legal to return true.

By construction, for each correct process p, S respects the order of successful transfers issued by p. Thus, the subsequence of successful transfers in H "looks" linearizable to the correct processes: H, restricted to successful transfers witnessed by the correct processes, is consistent with a legal sequential history S.

Let p be a correct process in E. Now let V_p denote the set of all messages that were delivered (line 8) and validated (line 23) at p in E. Let ≼_p be the subset of ≼ restricted to the elements in V_p. Obviously, ≼_p is cycle-free and we can again extend it to a total order. Let S_p be the sequential history built in the same way as S above. Similarly, we can see that S_p is legal and, by construction, consistent with the local history of all operations of p (including reads and failed transfers).

By Lemma 3, every operation invoked by a correct process eventually completes. Thus, E indeed satisfies the properties of an asset-transfer object type. □

6 k-SHARED ASSET TRANSFER IN MESSAGE PASSING
Our message-passing asset-transfer implementation can be naturally extended to the k-shared case, when some accounts are owned by up to k processes. As we showed in Section 4, a purely asynchronous implementation of k-shared asset-transfer does not exist, even in the benign shared-memory environment.

k-shared BFT service. To circumvent this impossibility, we assume that every account is associated with a Byzantine fault-tolerant state-machine replication service (BFT [13]) that is used by the account's owners to order their outgoing transfers. More precisely, the transfers issued by the owners are assigned monotonically increasing sequence numbers.

The service can be implemented by the owners themselves, acting both as clients, submitting requests, and as replicas, reaching agreement on the order in which the requests must be served. As long as more than two thirds of the owners are correct, the service is safe; in particular, no sequence number is assigned to more than one transfer. Moreover, under the condition that the owners can eventually communicate within a bounded message delay, every request submitted by a correct owner is guaranteed to eventually be assigned a sequence number [13]. One can argue that it is much more likely that this assumption of eventual synchrony holds

for a bounded set of owners, rather than for the whole set of system participants. Furthermore, the communication complexity of such an implementation is polynomial in k and not in N, the number of processes.

Account order in secure broadcast. Consider now the case where the threshold of one third of Byzantine owners is exceeded, in which the account may become blocked or, even worse, compromised. In this case, different owners may be able to issue two different transfers associated with the same sequence number.

This issue can be mitigated by a slight modification of the classical secure broadcast algorithm [36]. In addition to the Integrity, Validity and Agreement properties of secure broadcast, the modified algorithm can implement the property of account order, generalizing the source order property (Section 5.2). Assume that each broadcast message is equipped with a sequence number (generated by the BFT service, as we will see below).
• Account order: If a benign process p delivers messages m (with sequence number s) and m′ (with sequence number s′) such that m and m′ are associated with the same account and s < s′, then p delivers m before m′.

Informally, the implementation works as follows. The sender sends the message (containing the account reference and the sequence number) it wants to broadcast to all processes and waits until it receives acknowledgements from a quorum of more than two thirds of the processes. A message with a sequence number s associated with an account a is only acknowledged by a benign process if the last message associated with a it delivered had sequence number s − 1. Once a quorum is collected, the sender sends the message equipped with the signed quorum to all processes and delivers the message. This way, the benign processes deliver the messages associated with the same account in the same order. If the owners of an account send conflicting messages for the same sequence number, the account may block. However, and most importantly, even a compromised account is always prevented from double spending. Liveness of operations on a compromised account is not guaranteed, but safety and liveness of other operations remain unaffected.

Putting it all together. The resulting k-shared asset transfer system is a composition of a collection of BFT services (one per account), the modified secure broadcast protocol (providing the account-order property), and a slightly modified version of the protocol in Figure 4.

To issue a transfer operation t on an account a it owns, a process p first submits t to the associated BFT service to get a sequence number. Assuming that the account is not compromised and the service is consistent, the transfer receives a unique sequence number s. Note that the decided tuple (a, t, s) should be signed by a quorum of owners: this will be used by the other processes in the system to ensure that the sequence number has indeed been agreed upon by the owners of a. The process then executes the protocol in Figure 4, with the only modification that the sequence number seq is now not computed locally but adopted from the BFT service.

Intuitively, as the transfers associated with a given account are processed by the benign processes in the same order, the resulting protocol ensures that the history of successful transfers is linearizable. On the liveness side, the protocol ensures that every transfer on a non-compromised account is guaranteed to complete.

7 RELATED WORK
Many systems address the problem of asset transfers, be they for a permissioned (private, with a trusted external access control mechanism) [4, 26, 31] or a permissionless (public, prone to Sybil attacks) setting [2, 16, 22, 33, 38, 44]. Decentralized systems for the public setting are open to the world. To prevent malicious parties from overtaking the system, these systems rely on Sybil-proof techniques, e.g., proof-of-work [38] or proof-of-stake [7]. The above-mentioned solutions, whether for the permissionless or the permissioned environment, seek to solve consensus. They must inevitably rely on synchrony assumptions or randomization. By sidestepping consensus, we can provide a deterministic and asynchronous implementation.

It is worth noting that many of those solutions allow for more than just transfers, and support richer operations on the system state, so-called smart contracts. Our paper focuses on the original asset transfer problem, as defined by Nakamoto [38], and we do not address smart contracts, for certain forms of which consensus is indeed necessary. However, our approach allows for arbitrary operations, if those operations affect groups of the participants that can solve consensus among themselves. Potential safety or liveness violations of those operations (in case this group gets compromised) are confined to the group and do not affect the rest of the system.

In the blockchain ecosystem, a lot of work has been devoted to avoiding a totally ordered chain of transfers. The idea is to replace the totally ordered linear structure of a blockchain with that of a directed acyclic graph (DAG) for structuring the transfers in the system. Notable systems in this spirit include Byteball [14], Vegvisir [31], Corda [26], Nano [34], and the GHOST protocol [41]. Even if these systems use a DAG to replace the classic blockchain, they still employ consensus.

We can also use a DAG to characterize the relation between transfers, but we do not resort to solving consensus to build the DAG, nor do we use the DAG to solve consensus. More precisely, we can regard each account as having an individual history. Each such history is managed by the corresponding account owner without depending on a global view of the system. Histories are loosely coupled through a causality relation established by dependencies among transfers.

The important insight that an asynchronous broadcast-style abstraction suffices for transfers appears in the literature as early as 2002, due to Pedone and Schiper [39]. Duan et al. [17] introduce efficient Byzantine fault-tolerant protocols for storage and also build on this insight. So does recent work by Gupta [24] on financial transfers, which seems closest to ours; the proposed algorithm is based on similar principles as some implementations of secure broadcast [35, 36]. To the best of our knowledge, however, we are the first to formally define the asset transfer problem as a shared object type, study its consensus number, and propose algorithms building on top of standard abstractions that are amenable to a real deployment in cryptocurrencies.
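As an illustration of the account-order mechanism described in Section 6, the per-account acknowledgment rule can be sketched in a few lines of Python. This is a hypothetical simplification: quorum collection over more than two thirds of the processes, signatures, and the actual message payloads are all omitted, and only the receiver-side sequence check is shown.

```python
class AccountOrderReceiver:
    """Receiver-side rule of the modified secure broadcast: a benign process
    acknowledges a message with sequence number s for account a only if the
    last message it delivered for a had sequence number s - 1."""

    def __init__(self):
        self.last_delivered = {}              # account -> highest delivered seq

    def acknowledge(self, account, s):
        """Echo phase: refuse to acknowledge out-of-order sequence numbers."""
        return s == self.last_delivered.get(account, 0) + 1

    def deliver(self, account, s):
        """Called once the sender has gathered a (signed) quorum of acks."""
        assert self.acknowledge(account, s)
        self.last_delivered[account] = s

r = AccountOrderReceiver()
assert r.acknowledge("a", 1)        # first message for account "a"
assert not r.acknowledge("a", 2)    # gap: message 1 not delivered yet
r.deliver("a", 1)
assert r.acknowledge("a", 2)        # now in order
```

Because conflicting messages for the same (account, s) pair can each gather at most one quorum's worth of acknowledgments from benign processes, a compromised account may block but cannot double spend.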


