
LECTURE NOTES IN COMBINATORIAL GAME THEORY,

IE619 2023

URBAN LARSSON

1. Lecture 1
Combinatorial games are games without chance and with no hidden in-
formation. The motivation of the field is traditional recreational rulesets
such as Chess, Go, Checkers, Tic Tac Toe and more. Games such as
Poker, Whist and Black Jack are disqualified because they involve
hidden cards, and, for example, Yahtzee, Pachisi and Monopoly are dis-
qualified because play depends on the outcome of dice rolls. We follow some
more axioms, as listed:
(1) There is a game board (a set of positions) and some ruleset that
determines how given pieces are played;
(2) there are two players, and one of them is the starting player;
(3) the players take turns moving;
(4) every game terminates;
(5) these are win/loss games and a player who cannot move loses.
Item (5) is usually called normal-play. This convention is based on the
goodness of movability: it is never bad to have more move options. The
axioms give us a means to predict who is winning a given game in perfect
play. Namely, we can use a method attributed to Ernst Zermelo [Z1913]
(who is also known for the axiomatization of set theory and the axiom of
choice) often called backward induction. This method will be reviewed in
Lecture 3.
The first combinatorial game that appears in the literature is Nim [B1902].
A finite number of beans are split into heaps. For example, a starting po-
sition could be four heaps of sizes 2, 3, 4 and 5 beans, respectively. Let us
denote this position by (2, 3, 4, 5). The current player chooses one of the
heaps and removes at least one bean, and at most the whole heap. This is
a normal-play game, so the player with the last move wins. Bouton discov-
ered a method to find a winning move if there is one. The tool is called
nim addition, and it is performed as follows. Write the heap sizes in binary,
and add without carry; that is, each column adds to 0 if and only if it con-
tains an even number of 1s. Let us compute the nim-sum of our sample game:


  1 0 1   (heap 5)
  1 0 0   (heap 4)
  0 1 1   (heap 3)
⊕ 0 1 0   (heap 2)
  0 0 0

The nim-sum is 0. The meaning of this in terms of Nim play is that the
player who does not start wins in optimal/perfect play. Every move is losing.
Let us say, for example, that the first player removed three beans from the
third heap. Then the new position is (2, 3, 1, 5). And, by using nim-addition
on that position, we obtain

  1 0 1   (heap 5)
  0 0 1   (heap 1)
  0 1 1   (heap 3)
⊕ 0 1 0   (heap 2)
  1 0 1

Since the nim-sum is non-zero, there is a Nim move to a position such
that the nim-sum becomes zero. That is the idea of Bouton’s theory. Here,
there is only one winning move, namely take all beans from the heap of size
five.
Bouton’s proof demonstrates that, given any starting position, and given
best play by both players, exactly one of the players is able to play to a
0-position in every move (until the game ends). Later, in Theorem 3, we
prove this in general.
It is easy to prove this in the case of two heaps, and it was discovered in
class: if the starting position is (m, n), with say m < n, then the winning
first move is to (m, m). The next position will be of the form (m, k), for some
0 ≤ k < m, which is of the ‘same form’ as the first position, namely, the
heaps are of different sizes. Exactly one of the players can, by every move,
give the two heaps the same size. Note that this implies that the nim-sum
is 0, so indeed it is a special case of the above more general idea.
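As a quick machine check of nim addition, note that adding binary numbers without carry is exactly bitwise XOR. Below is a short Python sketch (the function names are illustrative choices, not notation from these notes) applied to the sample positions above:

```python
from functools import reduce

def nim_sum(heaps):
    """Binary addition without carry is bitwise XOR."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

def winning_moves(heaps):
    """All moves (heap index, new size) that leave nim-sum zero."""
    x = nim_sum(heaps)
    # h ^ x flips exactly the bits where x has a 1; the move is
    # legal only when it reduces the heap.
    return [(i, h ^ x) for i, h in enumerate(heaps) if h ^ x < h]

print(nim_sum([2, 3, 4, 5]))        # 0: a P-position, every move loses
print(nim_sum([2, 3, 1, 5]))        # 5: an N-position
print(winning_moves([2, 3, 1, 5]))  # [(3, 0)]: empty the heap of size five
```

The single winning move found by the sketch agrees with the analysis above.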
Let N = {1, 2, . . .} denote the positive integers. Let N0 = N ∪ {0} denote
the non-negative integers.
Later, we will use the common ∗-notation for Nim heaps. That is, ∗ is
a Nim heap of size one, ∗2 is a Nim heap of size two, and in general, for
n ∈ N = {1, 2, . . .}, ∗n is a Nim heap of size n.
The second ruleset is Wythoff’s variation of Nim, which is called Wythoff
Nim [W1907] or sometimes Corner the Lady or Corner the Queen. It
is played on two heaps and the rules are as in Nim, or instead, a player may
remove the same number of beans from both heaps: at least one from each
heap, and in total at most twice the number of beans in the smaller heap.
game can equivalently be represented by a single Queen of Chess, which,
LECTURE NOTES IN COMBINATORIAL GAME THEORY, IE619 2023 3

by each move, must reduce its distance to the lower left square, denoted by
(0, 0).
Suppose that the Queen is placed on position (4, 5). The equivalent
Wythoff Nim position is two heaps, one of size 4 and the other of size
5. The set of options is {(x, 5), (4, y), (4 − t, 5 − t) | 0 ≤ x ≤ 3, 0 ≤ y ≤ 4,
1 ≤ t ≤ 4}. See Figure 1.

Figure 1. The figures illustrate typical move options of
Corner the Queen. The lower left corner represents the
terminal position (0, 0).

An elegant method of finding the so-called P-positions (P for Previous
player wins) is to recursively paint the N-positions (N for Next or CurreNt
player wins), and fill in the smallest un-colored cells with Ps. Clearly (0, 0)
is a P-position. Thus, each position of the form (x, 0), (0, x) and (x, x), for
positive integers x, will be N-colored. The method is displayed in Figure 2.

Figure 2. A geometric view of the losing positions of
Wythoff Nim. The N-positions are recursively painted in
red, given ‘smallest new’ P-positions. The terminal position
is to the lower left.

This method of painting reveals symmetric P-positions of the form (An, Bn)
and (Bn, An), with the first 8 entries as in Table 1. We note that the
classical so-called Fibonacci numbers (they appear long before Fibonacci’s
study in various Sanskrit works) appear in some of the entries, namely
(1, 2), (3, 5), (8, 13), . . ..¹ The Golden Section is the irrational number
φ = (1 + √5)/2 ≈ 1.618. It is well known that Fn/Fn−1 → φ, as n → ∞.
Moreover, we note a possible pattern: for all n, Bn − An = n. The most
famous solution of the game is even more elegant.
Theorem 1 ([W1907]). For all non-negative integers n,
(An, Bn) = (⌊nφ⌋, ⌊nφ²⌋),
where ⌊x⌋ denotes the largest integer smaller than or equal to x.
We will prove this in a later class. And we will include other appealing
theorem statements (see Lectures 10 and 11). The main tool will be the
so-called Wythoff Properties as defined in Lecture 3.

Table 1. The first 8 P-positions of Wythoff Nim (modulo symmetry).

n  An  Bn
0   0   0
1   1   2
2   3   5
3   4   7
4   6  10
5   8  13
6   9  15
7  11  18
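The painting method can be checked by brute force: a cell is a P-position exactly when none of its Queen-move options is one. A minimal Python sketch (the function names are this sketch’s choices, not the notes’ notation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_P(x, y):
    """(x, y) is a P-position iff no option is a P-position (normal play)."""
    options = ([(t, y) for t in range(x)]                          # reduce first heap
             + [(x, t) for t in range(y)]                          # reduce second heap
             + [(x - t, y - t) for t in range(1, min(x, y) + 1)])  # equal reduction
    return not any(is_P(*o) for o in options)

# Collect the symmetric P-positions (A_n, B_n) on a 20-by-20 board.
pairs = sorted({tuple(sorted((x, y)))
                for x in range(20) for y in range(20) if is_P(x, y)})
print(pairs[:8])  # reproduces Table 1
```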

2. Lecture 2
The two players of a combinatorial game are usually called Left (female
and positive) and Right (male and negative). A sum of the combinatorial
games G, H, K and M is defined as the composite game where, at each
stage of play, the current player picks one of the components G, H, K or
M and makes a move in that component (see Lecture 4 for a more formal
treatment). We write this sum of games as G + H + K + M. For example,
if player Left starts and plays in the H component, then the next position
is G + H^L + K + M. Next, player Right picks one of the components and
plays his move, for example to G + H^L + K^R + M, and so on. The game
continues until there is no move in any component. As usual, in normal
play, the player who cannot move loses.
The rulesets are at the core of combinatorial game theory. A ruleset
does not need to come with a starting position (and given a ruleset one can
1"The Fibonacci numbers were first described in Indian mathematics, as early as 200
BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from
syllables of two lengths." Wikipedia.

usually envision an infinite number of possible starting positions). When we
use the phrase “game position”, or just “position”, we usually mean a ruleset
together with a starting position. The word “game” can be used freely, and
the surrounding context explains its local meaning. Let us describe some
popular rulesets. A ruleset is impartial if, for every position in the ruleset,
the move options do not depend on who starts. A ruleset is partizan if there
exists a position in the ruleset for which the Left and Right options differ.
Usually, in partizan rulesets, most positions have differing Left and Right
options.
Here are four partizan rulesets and one impartial:
• Clobber: an n by m game board; Left plays black pieces and Right
plays white pieces. A neighboring stone of the opposing color can be
clobbered and removed from the game board, while your stone takes its
position. Starting position: a checkerboard pattern.
• Toads&Frogs: a 1 by n game board; Left plays Toads and Right
plays Frogs. Toads move to the left and Frogs move to the right.
A piece can slide to a neighboring empty cell, or jump over one of the
opponent’s pieces, landing on the empty cell beyond. Starting position:
t Toads to the right, and f Frogs to the left.
• Domineering: an n by m game board; Left places horizontal domino
tiles, and Right places vertical domino tiles.
• Toppling Dominoes: Left plays red dominoes and Right plays
blue dominoes. Both players can topple a green domino. Domino
pieces are placed in a sequence, and a domino may be toppled in either
direction.
• Non-attacking Queens: an n by m game board; players place
a Queen of Chess in a square, such that it does not attack any
previously placed Queen.
Tournament game: G + H + K + M, where G = Toads&Frogs on a 9 by
1 strip, t = f = 3, H = Domineering on a 5 by 3 board, K = Toppling
Dominoes red blue green red blue blue, and where M = Non-attacking
Queens on an 8 by 8 board (or a Nim position). Who wins if Left starts?
Who wins if Right starts?
Variation Partizan Non-attacking Queens: this game is played with
Black Queens for Left and White Queens for Right. A Black Queen must
be non-attacking with respect to the previously placed Black Queens, and
similarly for White; different colored Queens do not attack each other.

3. Lecture 3
This section is loaded with proofs. First, we prove Zermelo’s theorem in
our setting of normal-play games with no draws. This is the first fundamental
result of Combinatorial Game Theory (CGT), Theorem 2. Then we move on
to a proof of Bouton’s classical result on the game of Nim. And after that, in
Theorem 4, we prove that the ‘not so classical’ Wythoff Properties define the
P-positions of Wythoff Nim (the best reference I have is a Licentiate thesis
that I wrote when I was a Ph.D. student, but I will explain better here).
Theorem 2 (The First Fundamental Theorem of CGT). Consider any
normal-play combinatorial game G. Exactly one of the players can force
a win.
Proof. The proof is by induction. Fix a starting player, say Left. As a base
case, suppose the starting player cannot move. Then they lose. Suppose
next that the statement holds for all games of rank at most n, and suppose
G is of rank n + 1. If all Left options of G are winning for Right, then,
by induction, Left cannot force a win (and hence Right can force a win).
Otherwise, she chooses an option from which she, by induction, can force a
win. □
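Backward induction is short enough to write down directly. In the sketch below (the tuple encoding of games is an assumption of this sketch, not the notes’ notation), a game is a pair (left_options, right_options), and the player to move wins iff some option leaves the opponent, moving next, without a forced win:

```python
def wins_moving_first(g, player):
    """player 0 = Left, 1 = Right; True iff the player to move can force a win."""
    return any(not wins_moving_first(o, 1 - player) for o in g[player])

zero = ((), ())             # { | }: whoever is to move loses
star = ((zero,), (zero,))   # {0 | 0}: whoever is to move wins
one  = ((zero,), ())        # {0 | }: Left wins no matter who starts

print(wins_moving_first(zero, 0))                              # False
print(wins_moving_first(star, 0), wins_moving_first(star, 1))  # True True
print(wins_moving_first(one, 1))                               # False
```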
Observe that this result implies four well-defined outcome classes in com-
binatorial games. Let us denote them by L (Left wins independently of
who starts), N, P and R (Right wins independently of who starts). We
will return to these four outcome classes, but let us first make more explicit
how we prove results on the outcomes of impartial games (which have only
two outcome classes).
Suppose that we are given a candidate set of P-positions of an impartial
ruleset. To verify this set, we prove that there is no move from one candi-
date P-position to another candidate P-position. We abbreviate this
property as “P ↛ P”. Moreover, we must prove that each candidate N-
position has an option in the set of candidate P-positions. This we write as
“N → P”. This way of thinking is, of course, based on the idea of induction,
as is often the case in CGT.
Using this idea, we prove Bouton’s theorem on the game of Nim. Recall
the nim-sum definition “⊕” from Lecture 1.
Theorem 3 ([B1902]). Let hi denote the heap sizes of a game of Nim on n
heaps, written in binary expansion, where i ∈ {1, . . . , n}.
The outcome is a P-position if and only if ⊕ hi = 0.
Proof. For the property “P ↛ P”, suppose first that
(1) ⊕ hi = 0.
We must prove that every option has non-zero nim-sum. Observe that (1)
means that there is an even number of 1s in each column of the binary
addition table, where the rows are the heap sizes hi written in binary (as in
Lecture 1). If there is no option, we are done. Otherwise, a Nim option
reduces exactly one heap, corresponding to a row in our binary representation
of the heap sizes. Thus there must be a change of parity of the number of
1s in at least one column of (1). Thus the new nim-sum is non-zero.
For the property “N → P”, suppose next that
(2) ⊕ hi ≠ 0,
and let x = ⊕ hi be the result of the nim addition in (2). We must prove
that there is a heap hk that can be reduced such that the new nim-sum is
zero.
Write the nim-sum x in its binary representation, as x = Σi 2^i xi, where,
for all i, xi ∈ {0, 1}. By (2), there is a largest index j such that xj = 1.
Thus, there must be an odd number of heaps that contain the jth power of
2, 2^j. We claim that any one of these heaps, say heap hk, suffices.
Similar to the base 2 expansion of x, write hk = Σi 2^i hk,i. Then, by
definition of j and hk, hk,j = 1. For all i such that xi = 1, let h′k,i be the
binary complement of hk,i (that is, flip 0 to 1 and 1 to 0), and otherwise,
let h′k,i = hk,i. A winning move is to reduce hk to h′k = Σi 2^i h′k,i; in other
words, h′k = hk ⊕ x, which corrects the parity in exactly the columns with
an odd number of 1s. That this is a reduction of the heap size hk follows
from the fact that the highest bit that changes is the jth, which flips from
1 to 0, and, in base two expansion, for all j, 2^j > Σi<j 2^i.² □

Probably the greatest challenge in this proof is to understand the technical
part of the last paragraph. Of course, this part can be expanded by using
more sentences in English, similar to the earlier parts of the proof. However,
it is also important sometimes to practice reading more ‘pure logic’ parts of
proofs. Why is the “binary complement” introduced in this last paragraph?
Is it the best way to do it, or can you find a way to say the same thing
without that definition? (There is no ultimate answer to such a question;
this question is more meant as a challenge to think of how we write proofs,
and why we do what we do in a proof.)
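The construction in the last paragraph of the proof — flip the bits of hk in exactly the columns where x has a 1, that is, reduce hk to hk ⊕ x — can be sketched in a few lines (the function name is illustrative):

```python
from functools import reduce

def bouton_move(heaps):
    """Return (k, new_size) with new_size = h_k XOR x, or None on a P-position."""
    x = reduce(lambda a, b: a ^ b, heaps)
    if x == 0:
        return None                    # nim-sum zero: every move loses
    j = x.bit_length() - 1             # largest index j with x_j = 1
    # Any heap containing the bit 2^j works; h_k XOR x < h_k since bit j flips 1 -> 0.
    k = next(i for i, h in enumerate(heaps) if (h >> j) & 1)
    return k, heaps[k] ^ x

print(bouton_move([2, 3, 4, 5]))  # None: the starting player is losing
print(bouton_move([17, 18]))      # (1, 17): move to (17, 17), as in the two-heap case
```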
Next, we will prove that the following properties define the P-positions
of Wythoff Nim uniquely. Consider two sequences of integers (an ), (bn ),
n ∈ N0 . They satisfy the Wythoff Properties if:
(i) (a0 , b0 ) = (0, 0);
(ii) for all n, an+1 > an ;
(iii) for all n, bn − an = n;
(iv) for all n, m > 0, an ≠ bm;
(v) for all x ∈ N, there exists an n such that an = x or bn = x.
Property (ii) is called “increasing”. Property (iii) we may call a “shift”.
Together with (ii), the properties (iv) and (v) are usually called “comple-
mentarity”.3 The following result establishes that, in fact, these properties
define a unique pair of sequences.
Theorem 4. The set W = {(an , bn ), (bn , an ) | n ∈ N0 } given by the Wythoff
Properties is unique, and it is the set of P-positions of Wythoff Nim.
Proof. Observe that it suffices to prove that the set W is the set of P-posi-
tions of Wythoff Nim. It then follows that the properties define a unique
pair of sequences. Observe that by (ii) and (iii), both a = (an) and b = (bn)
²This property is of course true in any base-n expansion, for any integer n ≥ 2.
³Since we are assuming (ii), then (iv) and (v) suffice to define complementary sequences;
without (ii), in addition, we should require, for all n ≠ m, an ≠ am.

are strictly increasing sequences. Clearly (a0, b0) = (0, 0) is the terminal
P-position. We must prove that every candidate P-position has no P-
position as an option, and we must prove that every candidate N-position
has a P-position as an option.

“P ↛ P”: We prove that, for any n > 0, (an, bn) does not have an option
of the same form. We use (iii), (iv) and (v) to exhaust all possibilities. Sup-
pose that a player removes from a single heap to, say, (ai, bn). Then, since b
is increasing, bi ≠ bn. If they remove from a single heap to (bi, bn), then (iv)
contradicts that this be of the desired form. If they remove from a single
heap to, say, (an, bi), then again, since b is increasing, bi ≠ bn. If they remove
from a single heap to (an, ai), then by (iv) this option is not of the same form.
If they remove the same number m from both heaps, then by (iii) the posi-
tion cannot be of the form (ai, bi); namely, (bn − m) − (an − m) = n > i = bi − ai.

“N → P”: We prove that, if a position is not of the form (an, bn), then it
has an option of this form. Consider first (x, bn), with x > an. Then remove
x − an > 0 from the first heap. If x < an, then, by (v), there are two cases:
1. x = ai, for some i < n;
2. x = bi, for some i < n.
In case 1, since b is increasing, there is a move to (ai, bi). In case 2, there is
a move to (bi, ai), since, by (iii) and b increasing, ai < bi < bn.
Consider next a position of the form (an, x), x ≠ bn. If x > bn, then
(an, bn) is an option. Hence assume x < bn. Then, by (v), x = bi or x = ai,
for some i. In the first case, i < n, and so (ai, bi) is an option, by (ii). In
the second case, the position is (an, ai). We have three cases (or two if we
regard cases 2 and 3 as the same):
1. an < ai < bn ;
2. ai < an < bi ;
3. ai < bi < an .
For case 1, observe that 0 < ai − an < bn − an = n. Therefore there exists
j < n such that ai − an = bj − aj = j. Hence (aj , bj ) is an option. For case
2, observe that 0 < an − ai < bi − ai = i. Therefore there exists j < i such
that an − ai = bj − aj = j. Hence (aj , bj ) is an option. For case 3, (ai , bi ) is
an option. □
We will use this result in a later lecture to prove Theorem 1, together
with some other representations of Wythoff Nim’s P-positions. One more
component, called Beatty’s Theorem, will be required.
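The Wythoff Properties also give a direct greedy construction: by complementarity, an must be the least non-negative integer not yet used, and by the shift, bn = an + n. A sketch of this construction, also comparing the result with the formula of Theorem 1 (the function name is this sketch’s choice; floating-point φ is accurate at this scale):

```python
from itertools import count

def wythoff_pairs(n_max):
    """Build (a_n, b_n) for n = 0..n_max from the Wythoff Properties."""
    used, pairs = set(), []
    for n in range(n_max + 1):
        a = next(v for v in count() if v not in used)  # least unused value, (iv)-(v)
        b = a + n                                      # the shift property (iii)
        used |= {a, b}
        pairs.append((a, b))
    return pairs

pairs = wythoff_pairs(7)
print(pairs)  # reproduces Table 1

phi = (1 + 5 ** 0.5) / 2
print(all((int(n * phi), int(n * phi * phi)) == p
          for n, p in enumerate(pairs)))  # True: matches Theorem 1
```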

4. Lecture 4
Let us prove that normal-play games form a group structure, that is, every
game has an inverse. In the next lecture we will see that this is a main
tool for constructive game comparison. In this spirit, we will here define the
notion of game comparison in a non-constructive but exhaustive way. The
definition of game comparison takes into account game addition and the
inherited partial order of outcomes. Moreover, it uses a recursively defined
bracket notation of a game. We use it in parallel with a standard game tree
representation, where Left options are left-slanting edges and Right options
are right-slanting edges. The game G = {𝒢^L | 𝒢^R}, where 𝒢^L and 𝒢^R
represent the sets of Left and Right options of the game G, respectively. If
𝒢^L ≠ ∅, a typical Left option is G^L ∈ 𝒢^L, and similarly, a typical Right
option is G^R ∈ 𝒢^R. By the recursive definition, we would write, for example,
G^L = {𝒢^LL | 𝒢^LR}, and so on.
For example, a Nim heap of size 2 is the game {0, ∗ | 0, ∗}. The integer
games belong to the partizan theory, and they can be defined recursively
as 0 = { | }, 1 = {0 | } and n = {n − 1 | }, for n > 0. Similarly, for all
n ∈ N, −n = { | −n + 1}. Let us draw the game trees of the games ∗, ∗2, 1, 2,
{1 | −1} and {−1 | 1}.

The standard convention is the total order “Left” > “Right”, that is, Left
is the “maximizer” and Right is the “minimizer”. This induces the Outcome
Diamond

L
N   P
R

with L > P, N, R and R < P, N, L, but N <> P. Here ‘<>’
denotes ‘≯’ and ‘≮’ and ‘≠’. That is, the outcomes N and P are con-
fused, fuzzy or incomparable. All three of these words (and more) appear in
the literature. The outcomes do not suffice to understand how to play well
in a disjunctive sum of games. Table 2 illustrates that.
Suppose that we know the outcomes of the individual games G and H.
Now we wish to compute the outcome of the sum of G and H. If one
of the outcomes is a P-position, then we know the outcome of the sum;
if both outcomes are either L or R, then we know the outcome of the
sum. Otherwise we cannot yet know the outcome of the sum. The notion
of outcomes requires a refinement, where alternating play in the separate
components is not mandatory.

Table 2. Given the outcomes of G and H, when can we
know the outcome of G + H?

G\H  L  P  N  R
L    L  L  ?  ?
P    L  P  N  R
N    ?  N  ?  ?
R    ?  R  ?  R

If both outcomes are L then Left can obviously follow her winning strate-
gies in both components, individually and independently of who starts, and
analogously for Right. If one of the components is a P-position, then the
other component will determine the outcome of the sum, namely, if the first
player plays in the P-position, then the other player can respond there in a
manner that they will get the last move in that component. Hence the first
player can be forced to ‘open’ the other component. A P-position cannot
affect the outcome in a disjunctive sum.
The following example explains some question marks in the table.
Example 5. Suppose o(G) = o(H) = N. This holds if, for example, G =
H = ∗, a single Nim heap. Then o(G + H) = P. We could also have G = ∗
and H = ∗2; then o(G + H) = N. Hence the question mark is motivated
in this case. If G ∈ L and H ∈ N, we could have H = {0 | −100} and
G = 1, with G + H ∈ R. But we could also have H = ∗ and G = 1, which
would give G + H ∈ L. Suppose that G ∈ L and H ∈ R. We could
have G = 1 + ∗ and H = −1, which gives G + H ∈ N. On the other hand,
G = 1 and H = −1 gives G + H ∈ P. Similarly, G = 10 and H = −1
gives G + H ∈ L, while G = 1 and H = −10 gives G + H ∈ R. Hence, all
outcomes are possible. The other question marks are similar.
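The claims of Example 5 can be machine-checked with a small sketch built on the recursive sum (the tuple encoding of games and the function names are assumptions of this sketch):

```python
def wins(g, p):
    """Does the player to move (0 = Left, 1 = Right) win g?"""
    return any(not wins(o, 1 - p) for o in g[p])

def outcome(g):
    l, r = wins(g, 0), wins(g, 1)          # first-player results
    return {(True, True): 'N', (False, False): 'P',
            (True, False): 'L', (False, True): 'R'}[(l, r)]

def add(g, h):
    """Disjunctive sum: move in exactly one component."""
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

zero = ((), ())
one, neg_one = ((zero,), ()), ((), (zero,))
star = ((zero,), (zero,))
star2 = ((zero, star), (zero, star))

print(outcome(add(star, star)))     # P: o(* + *) = P
print(outcome(add(star, star2)))    # N: o(* + *2) = N
print(outcome(add(one, neg_one)))   # P: 1 + (-1) = 0
```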
So far, we have used the notion of a sum of games in an intuitive way.
Now we will present the standard formal way. The disjunctive sum of games
is defined in a recursive manner.
Definition 6 (Disjunctive Sum). Consider games G and H. Then G + H =
{G + ℋ^L, 𝒢^L + H | G + ℋ^R, 𝒢^R + H}, where 𝒳 + G = {X + G : X ∈ 𝒳},
if 𝒳 is a set of games.
Let us define the partial order of games.
Definition 7 (Partial Order). Consider games G and H. Then G ≥ H if,
for all games X, o(G + X) ≥ o(H + X). And G = H if G ≥ H and H ≥ G.⁴
⁴A partial order is a relation that satisfies 1. Reflexivity (aRa); 2. Antisymmetry (if
aRb and bRa then a = b); 3. Transitivity (if aRb and bRc then aRc). It is easy to prove
that our definition satisfies these axioms. Then it easily follows that ‘=’ is an equivalence
relation, i.e. it satisfies Reflexivity, Symmetry (aRb implies bRa) and Transitivity. One
has to check first that axioms 1-3 hold for the partial order of outcomes, but this is
also easy to check.

This is the desired refinement of the partial order of the outcomes. Namely,
it assures Left that the game G is no worse for her than the game H, if
played in any arbitrary disjunctive sum. However, it might appear that
almost all games would remain incomparable with such a strong notion of a
partial order. And moreover, the definition is non-constructive, so there is no
algorithm that could determine the relation between two games, unless one
can find another equivalent way of expressing the partial order. And indeed,
that this is possible will be our second fundamental theorem of combinatorial
games. The first major tool is that the games constitute a group structure,
and we will prove that in the next lecture. The negative of a game will be
the game where the players have swapped roles. Let us give the recursive
definition here, and prove its consistency in the next lecture.
Definition 8. Consider a game G. Then the negative of G is −G =
{−𝒢^R | −𝒢^L}.
Similar to Definition 6, if 𝒳 = {X1, . . . , Xn} is a set of games, then
−𝒳 = {−X1, . . . , −Xn}.

5. Lectures 5 and 6
Let us begin with an example of a negative of a game (Definition 8). In
terms of game trees, let

G=

Then

−G =

In terms of game forms, these games are G = {∗ | −1} and −G = {1 | ∗},
where, as before, ∗ = {0 | 0}, 1 = {0 | } and −1 = { | 0}. As an exercise, we
may add these two games, and expand this sum as one single game form:⁵
G + (−G) = {∗ − G, 1 + G | ∗ + G, −1 − G}
= {{−G, ∗ + 1 | −G, ∗ + ∗}, {G, 1 + ∗ | 1 − 1} | ·}
= {{−G, {1, ∗ | ∗} | −G, {∗ | ∗}}, {G, {∗, 1 | 1} | {−1 | 1}} | ·},
5This example is admittedly a bit complicated. But its purpose is also to illustrate
how easy it is to build complex game trees that are equal to the empty game 0. Take any
game G and add its inverse, and there you go, a zero-game! If you instead started with a
single complex game tree of ‘G − G’, it would usually be much harder to see that it equals
0. This is an exercise to do once, and then in a sense ‘never again’.

where we have omitted to expand the Right options since they are symmetric.
This game form can, with some patience, and as an exercise, be drawn as a
large game tree. But it should be equivalent to 0, as, starting with G+(−G),
the previous player can mimic the current player at each stage, until the
current player cannot move. This is covered by Theorem 10, which will be
our first application of Definition 7. It will establish that the set of all games
together with the disjunctive sum operator constitutes a (partially ordered)
group structure.
An abelian group, (G, +), satisfies five properties.
• Neutral Element: There exists an element, 0 ∈ G, such that for all
G ∈ G, 0 + G = G;
• Closure: for all G, H ∈ G, G + H ∈ G;
• Negative: for all G ∈ G, there exists an element ‘−G’ such that
G + (−G) = 0;
• Commutativity: for all G, H ∈ G, G + H = H + G;
• Associativity: for all G, H, K ∈ G, (G + H) + K = G + (H + K).
Suppose now that (G, +) is our set of games together with the disjunctive
sum operator. All properties, except “Negative” are easy exercises.
The following decomposition of the outcomes will be useful.
Definition 9. Let L = (L, L), P = (R, L), N = (L, R) and R =
(R, R). The first (second) coordinate declares who wins if Left (Right)
starts. Similarly, for a given game G, write o(G) = (oL (G), oR (G)), where
oL (G), oR (G) ∈ {L, R} denotes who wins in perfect play depending on who
starts.
Theorem 10. For any game G, G + (−G) = 0.
Proof. We have to demonstrate that, for any game G and all games X,
o(G − G + X) = o(X). The proof is by induction on G − G + X. If Left
cannot play in X, then oL(X) = R. In this case, also
oL(G − G + X) = R, because, if Left can play in G − G, then Right can
mimic, and so on, which ultimately leads to Right getting the last move in
G − G. The base case is analogous for the function oR.
Suppose that the statement holds for all options of G. For example, if G^R
is a Right option of G, then G^R − G^R = 0. If Left has an option in X, then
there are two cases to consider, namely oL(X) = L or oL(X) = R.
Suppose first that oL(X) = L, and consider the game G − G + X. Suppose
that Left’s winning move in X is to X^L. We claim that oR(G − G + X^L) = L.
Namely, if Right plays to G − G + X^LR, then Left wins by induction (Right
does not have a winning move in X^L). And if Right plays in the ‘G − G’ part,
then Left can mimic. This would result in G^R − G^R = 0 or G^L − G^L = 0,
by induction.
Suppose next that oL(X) = R, that is, Left does not have a winning option
in X played alone. In the game G − G + X, if Left plays her losing move in
the X-component, then Right can respond locally to G − G + X^LR and win

by induction. If Left starts by playing in the ‘G − G’ component, then Right
can mimic, and the argument is the same as in the previous paragraph.
The proofs for Right playing first are analogous (symmetric). □
The second fundamental theorem of combinatorial games is as follows.
We utilize that games form a group structure. That is, every game has an
inverse (Theorem 10).
Theorem 11 (The Second Fundamental Theorem). Consider games G and
H. Then G ≥ H if and only if Left wins the game G − H playing second,
that is, if and only if oR(G − H) = L.
Before the proof, we give two examples of how to use this result.
Let G = {∗ | −1} and let H = ∗2. We use Theorem 11 to show that
G ≱ H. That is, it suffices to show that Left does not win the game G − H
playing second. If Right plays to −1 + ∗2, then Left can only respond to −1 or
−1 + ∗, and loses either way. How about the reverse inequality? Is H ≥ G?
That is, can Left win if Right starts in the game H − G = ∗2 + {1 | ∗}?
If Right plays to ∗2 + ∗, then Left can respond to ∗ + ∗; if Right plays to
∗ + {1 | ∗} or to {1 | ∗}, then Left can respond to 1 or 1 + ∗ and wins in either
case. Altogether this proves that H ≥ G.
Let G = {0 | 1} and H = ∗. It is easy to check that oR(G − H) = L.
Hence G ≥ H. In addition, since oL(G − H) = L, in fact G > H.
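Both worked examples can be verified mechanically: by Theorem 11, G ≥ H exactly when Right, moving first, loses G − H. A sketch reusing the tuple encoding of games (an assumption of this sketch), with negatives as in Definition 8:

```python
def wins(g, p):                        # player to move: 0 = Left, 1 = Right
    return any(not wins(o, 1 - p) for o in g[p])

def add(g, h):                         # disjunctive sum (Definition 6)
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

def neg(g):                            # Definition 8: -G = {-G^R | -G^L}
    return (tuple(neg(o) for o in g[1]), tuple(neg(o) for o in g[0]))

def ge(g, h):                          # Theorem 11: G >= H iff o_R(G - H) = L
    return not wins(add(g, neg(h)), 1)

zero = ((), ())
one, neg_one = ((zero,), ()), ((), (zero,))
star = ((zero,), (zero,))
star2 = ((zero, star), (zero, star))
G = ((star,), (neg_one,))              # G = {* | -1}

print(ge(G, star2), ge(star2, G))      # False True: G is not >= *2, but *2 >= G
print(ge(((zero,), (one,)), star))     # True: {0 | 1} >= *
```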
Proof of Theorem 11. By Theorem 10, we may study the game G − H. We
must prove that G − H ≥ 0 is the same as Left wins G − H playing second.
By definition, G − H ≥ 0 means that, for all X, o(G − H + X) ≥ o(X).
Suppose first that, for all X, o(G − H + X) ≥ o(X). This holds in
particular for X = { | }. And then oR(X) = L implies oR(G − H + X) =
oR(G − H) = L. This proves this direction.
Next, suppose that Left wins G − H playing second. We must prove that,
for all X, o(G − H + X) ≥ o(X).
If oL(X) = R there is nothing to prove, so let us assume that oL(X) = L.
In particular, this means that Left has a winning move in X played alone, to
say X^L. She can play this move in the game G − H + X. If Right responds
in the ‘G − H’ part, then, by assumption, Left has a winning response inside
that part. And if he plays to G − H + X^LR, then Left wins by induction
(since oL(X^LR) = L). Hence, oL(G − H + X) = L.
Similarly, assume that oR(X) = L, and we must prove that oR(G − H +
X) = L, under the assumption that oR(G − H) = L. If Right starts in
the ‘G − H’ part, then Left has a winning response, by assumption, and
otherwise, if Right starts in the ‘X’ part, then, by oR(X) = L, Left can
respond locally and win, since by induction oR(X^RL) = L. □
It is convenient to be explicit about all relations in the inherited partial
order.
Corollary 12. Consider games G and H. Then
• G = H if and only if G − H ∈ P;
• G > H if and only if G − H ∈ L;
• G < H if and only if G − H ∈ R;
• G <> H if and only if G − H ∈ N.
Proof. For the first item, apply Theorem 11 for G ≥ H and H ≥ G. The
other items are similar, namely apply Theorem 11 for G ≥ H and H ≱ G;
G ≱ H and H ≥ G; and G ≱ H and H ≱ G, respectively. □
In Corollary 12, it is instructive to revisit the outcome diamond, and in-
stead of outcomes put games born by day 1 as representatives of the outcome
classes.

1
∗   0
−1

A game’s birthday is defined recursively. We will do this after Theorem 15.

6. Lecture 6
Here we will insert the approved projects from this workshop-type lecture.
Some of the lectures in this course are of the form workshop/tournament etc.
Does anyone have a LaTeX version of their rulesets with some analysis? Nov
11 2024: we still plan to do this, but it had to be after your project presenta-
tions are finished, and preferably (but not necessarily) before the final exam.
Please prepare any LaTeX code and submit online, if you have the time!

7. Lecture 7
Here we discuss the two reduction theorems on combinatorial games; they
concern domination and reversibility. We will show that together they imply,
for any game G, the existence of a unique reduced form, usually referred to
as the canonical form, the game value, or just the value of G. We state the
results in terms of Left options; the symmetric statements in terms of Right
options have analogous proofs. These two results are nice applications of the
Second Fundamental Theorem and its corollary.
Let us start with some examples. If G = {1, 2, 3 | ∗}, then Left should
ignore the Left options 1 and 2, and hence G = {3 | ∗}. This guess can
be verified by using Corollary 12 as follows. The second player wins
{1, 2, 3 | ∗} + {∗ | −3} by the mimic strategy, unless Left starts and plays in
the first component to 1 + {∗ | −3} or 2 + {∗ | −3}. Then Right responds to 1 − 3 or
2 − 3 and wins. This is all good, but note that we found a simpler form of G
by guesswork. Domination is better in that it achieves the same result
without guessing a simpler equivalent form.
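These reductions can be checked mechanically. The sketch below is our own illustration (not part of the notes, all names ours): a game is encoded as a pair of tuples of Left and Right options, outcomes are computed by backward induction, and equality is tested via Corollary 12, i.e., G = H iff G − H ∈ P. It confirms the guess {1, 2, 3 | ∗} = {3 | ∗}.

```python
from functools import cache

# A (short) game is a pair (left_options, right_options) of tuples of games.
ZERO = ((), ())
STAR = ((ZERO,), (ZERO,))   # * = {0 | 0}
ONE = ((ZERO,), ())         # 1 = {0 | }
TWO = ((ONE,), ())          # 2 = {1 | }
THREE = ((TWO,), ())        # 3 = {2 | }

@cache
def neg(g):
    """-G: swap the players' options, recursively."""
    left, right = g
    return (tuple(neg(r) for r in right), tuple(neg(l) for l in left))

@cache
def add(g, h):
    """Disjunctive sum G + H: move in exactly one component."""
    left = tuple(add(x, h) for x in g[0]) + tuple(add(g, x) for x in h[0])
    right = tuple(add(x, h) for x in g[1]) + tuple(add(g, x) for x in h[1])
    return (left, right)

@cache
def left_first_wins(g):
    # Left wins moving first iff some Left option defeats Right moving first in it.
    return any(not right_first_wins(gl) for gl in g[0])

@cache
def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def is_P(g):
    """G is a P-position: the player to move (either one) loses."""
    return not left_first_wins(g) and not right_first_wins(g)

def equal(g, h):
    """G = H if and only if G - H is a P-position (Corollary 12)."""
    return is_P(add(g, neg(h)))

G = ((ONE, TWO, THREE), (STAR,))   # {1, 2, 3 | *}
H = ((THREE,), (STAR,))            # {3 | *}
```

Here equal(G, H) returns True, confirming that the options 1 and 2 are dominated by 3, while for instance is_P(add(STAR, STAR)) confirms ∗ + ∗ = 0.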
Theorem 13 (Domination). Consider any game G. If there are Left options
A and B, such that A ≤ B, then G = {GL \ {A} | GR}.
Proof. Let H = {GL \ {A} | GR}. By Corollary 12, it suffices to prove that
G − H ∈ P. Observe that the Left options of −H are the negations of the
Right options of G. Hence any play in those options can be mimicked, and
then GR − GR = 0 ∈ P settles those cases.
Similarly, if Left plays to GL − H, where GL ≠ A, then Right can respond
to GL − GL = 0 ∈ P. The only remaining case is if Left is the starting
player and she plays to A − H. But then Right can respond to A − B ≤ 0,
and he wins (playing second in A − B). 
By this result, we see immediately that G = {1, 2, 3 | ∗} = {3 | ∗}, because
3 > 2 > 1.
The next result concerns the reduction reversibility. We have seen already
that the game G = {∗ | ∗} = 0, and one can argue directly that it is true
because the game is a P-position. The game’s simplest reduced form is 0,
but it cannot reduce via domination, because there is only one option. We
obviously need another tool. Luckily, one can argue by using ‘reversibility’,
that this holds true: if Left plays to ∗, then Right has an option that is
no worse than the original game {∗ | ∗}, that is, it reverses Left’s move.
Therefore Left’s move is meaningless and should be reduced to whatever
remains after Right’s ‘automatic’ response. We prove that this idea holds in
general, and give some more examples after the result.
Theorem 14 (Reversibility). Consider any game G. If there is a Left option
GL with a Right option GLR ≤ G, then G = {(GL \ {GL}) ∪ GLRL | GR},
where GLRL denotes the set of Left options of GLR.
Proof. Let H = {(GL \ {GL}) ∪ GLRL | GR}. We prove that o(G − H) = P.
Similar to the proof of Theorem 13, moves that can be mimicked cannot
disprove the result. Hence it suffices to analyze two cases:
• Left starts by playing to GL − H;
• Right starts by playing to G − GLRL , for some GLRL ∈ GLRL.
Assume first that Left starts by playing to GL − H. Then Right can play
to GLR − H. By the assumption GLR ≤ G, we have oL (GLR − G) = R. Since
−H and −G have the same Left options (the negations of GR ), Right wins
if Left plays in the −H component, to some GLR − H R . Thus assume she
plays to GLRL − H. But then GLRL is, by the definition of H, a Left option
of H, so Right can respond to GLRL − GLRL = 0.
For the second item, assume that Right plays to G − GLRL . The assumption
GLR ≤ G gives G − GLR ≥ 0, that is, oR (G − GLR ) = L; since G − GLRL is
one of Right’s options from G − GLR , it follows that oL (G − GLRL ) = L. 
Let us give two examples of ‘similar looking’ games, one of which reduces
by reversibility while the other does not reduce. The games are G = {0, ∗ | 0}
and H = {0, ∗ | ∗}. For both games, the only Left option with a Right
option is GL = H L = ∗, and H LR = 0 ≤ H but GLR = 0 ≰ G; by
Theorem 11, this is because Left wins H, but not G, playing second. Observe
that H LRL = ∅ (the game 0 has no Left options). Hence H = {0 | ∗}.
We can prove directly, by using Theorem 11, that G ≠ {0 | 0}; hence the
Left option ∗ cannot be reversible through 0. Namely, G − ∗ = G + ∗ is not
a P-position: if Left starts, she plays to ∗ + ∗ = 0 and wins.

Theorem 15 (Canonical Form Game Value). Suppose that domination and
reversibility have been applied to a game G until any further reduction results
in the same literal form game. If two literal form games G′ and G″ are the
end results of such reductions, then they are identical.

Proof. Suppose, by induction, that this holds true for all game trees of rank
smaller than n, and let us prove that it then holds for a game G of rank n.
By the induction hypothesis, we may assume that every option of G′ and G″
is in its unique reduced form.

Claim: We can pair the options of G′ and G″, such that for each option G′L
there is a G″L such that G′L = G″L (and similarly for Right).

Proof of Claim: The proof is by way of contradiction. Suppose that there
is an option G′L with no equal option in G″. We show that this contradicts
G″ − G′ ∈ P. Suppose that Right plays to G″ − G′L . Left’s possible winning
responses are to some G″L − G′L , or to some G″ − G′LR . In the second case,
since the option G′L is not reversible, G′LR ≰ G′ = G″, so she cannot win
playing second from G″ − G′LR . Thus she could win playing second if and
only if some G″L is such that G″L ≥ G′L .
Suppose next that Left starts and plays to G″L − G′ , for that same G″L .
Since G″ and G′ are equal, Right must have a winning move, and by the
symmetric reasoning it must be of the first form: G′ has a Left option G′L1
with G′L1 ≥ G″L . Thus we obtain the inequalities G′L1 ≥ G″L ≥ G′L . If
G′L1 = G′L , then all inequalities are equalities and G″L = G′L , contradicting
our assumption; otherwise G′L is dominated by G′L1 , contradicting that G′
does not have any dominated options. This proves the Claim.

But then, since the paired options are in reduced form, by induction, since
they are equal they must be identical. Hence, G′ and G″ must also be
identical. 

Definition 16 (Birthday). A game is born by day 0 if its canonical form
game value has no options. For n ≥ 1, a game is born by day n if every
option of its canonical form game value is born by day n − 1. A game is
born at day n if it is born by day n but not by day n − 1.

For example, the game 0 is born by day 0, while ∗, −1 and 1 are born at day
1. Together with 0, they form the same partial order as the above Outcome
Diamond. There are 22 games born by day 2 (see Figure 3), and there are
1474 games born by day 3. The number of games born by day 4 is huge,
recently estimated between 10^28 and 10^185 by Koki Suetsugu. The number
of games born by day 5 is unknown.
The 22 games born by day 2, listed in the ranks of their partial order, from
top to bottom:

2
1, 1∗
1/2, {1 | ∗}, {1 | 0}
↑, ↑∗, {1 | 0, ∗}
0, ∗, ∗2, ±1
↓, ↓∗, {0, ∗ | −1}
−1/2, {∗ | −1}, {0 | −1}
−1, −1∗
−2

Figure 3. There are 22 games born by day 2. The picture
shows the partial order of these 22 games, where an edge
represents ‘upper node ≥ lower node’. The structure is a
lattice: every pair of nodes has a least upper bound
(a join) and a greatest lower bound (a meet).

8. Lecture 8
Chomp is an impartial game played with an m by n chocolate bar (see
Figure 4). The lower left piece is poisoned, and the player who chomps it
loses (it is a normal play game: think that the poisoned piece is not present).
The game is played as follows: point at a remaining piece and chomp off
everything above and to the right of that piece. A classical strategy stealing
argument shows that the first player has a winning strategy for Chomp
played on a rectangular grid. However, nobody fully understands optimal
play, unless the grid is a square.
Theorem 17. Chomp on a rectangular grid (with more than one piece) is a
first player win.
Proof. If the grid is a square, then point at position (2, 2), leaving an L-shape,
and thereafter mimic the opponent’s moves in the opposite arm of the L.
Otherwise, suppose that the second player has a winning strategy. Consider
the first player’s move that takes only the upper right piece. If that is a
winning move, we are done. Otherwise, the supposed winning strategy gives
a winning response, leading to some position X. But the first player could
have played to X directly with her first move, since any position reachable
after the upper right piece is gone is also reachable in one move from the
full board. Hence she has a winning strategy, a contradiction. 
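The strategy-stealing proof is non-constructive, but small boards can be solved outright by backward induction. The following sketch is our own code (not from the notes): a position is a non-increasing tuple of row lengths, with the poisoned square at (0, 0).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pos):
    """True if the player to move wins Chomp from `pos`, a non-increasing
    tuple of row lengths (row 0 at the bottom); (0, 0) is poisoned."""
    for i, row_len in enumerate(pos):
        for j in range(row_len):
            if (i, j) == (0, 0):
                continue  # the poisoned piece may not be chomped
            # Chomping (i, j) truncates every row at height >= i to j pieces.
            new = tuple(min(pos[k], j) if k >= i else pos[k]
                        for k in range(len(pos)))
            new = tuple(r for r in new if r > 0)
            if not wins(new):
                return True  # a move to a P-position wins
    return False  # only the poisoned piece is left, or all moves lose

def rectangle(m, n):
    """An m-by-n chocolate bar."""
    return tuple([n] * m)
```

Here wins(rectangle(1, 1)) is False, and every larger rectangle up to 4 × 4 is an N-position, in line with Theorem 17; on the 2 × 2 board the winning move is the diagonal chomp, leaving the P-position L-shape (2, 1).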

Figure 4. Two Chomp positions. The red piece in the lower
left is poisoned and cannot be eaten. The first player pointed
at (3, 2) and chomped off four pieces.

Pingala Nim is played on one heap of pebbles. The first player can
remove any positive number of pebbles, except the whole heap. Any other
move is restricted by taking at most twice the number of pebbles that the
previous player removed. The Pingala (Fibonacci) numbers are defined by
F0 = 0, F1 = 1, and, for n ≥ 0, Fn+2 = Fn+1 + Fn . Thus, the sequence is
0, 1, 1, 2, 3, 5, 8, 13, 21, . . ..
Theorem 18. The first player loses Pingala Nim if the heap size is a
Pingala (Fibonacci) number.
The proof of that result uses a classical result on Pingala numbers, namely:
every positive integer decomposes uniquely as a sum of non-consecutive
Pingala numbers. For example, 11 = 8 + 3, 23 = 21 + 2, 30 = 21 + 8 + 1. We
call such a decomposition the ZOL-decomposition of a natural number (the
result was independently discovered by Zeckendorf, Ostrowski and Lekkerkerker).
The result has a more detailed formulation, which will be useful in
the proof.
One basic property of ZOL-numeration is that adding 1 to a word of
alternating 1s and 0s of length k results in the word of length k + 1 of the form
10 . . . 0, and vice versa. For example, 101 + 1 = 1000 and 1010 + 1 = 10000.
That is, in this numeration, we have 10^k − 1 = (10)^{k/2} if k is even, and
10^k − 1 = (10)^{(k−1)/2}1 if k is odd. Let us call this property ZOL-carry.
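The ZOL-decomposition is computed by the greedy algorithm: repeatedly subtract the largest Pingala number that fits. A minimal sketch (our own helper names, not from the notes):

```python
def pingala_upto(n):
    """The Pingala (Fibonacci) numbers 1, 2, 3, 5, 8, ... not exceeding n."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    return [f for f in fibs if f <= n]

def zol(n):
    """Greedy ZOL-decomposition: n as a sum of non-consecutive Pingala
    numbers, largest part first."""
    parts = []
    for f in reversed(pingala_upto(n)):
        if f <= n:
            parts.append(f)
            n -= f
    return parts
```

For example, zol(30) yields [21, 8, 1], matching 30 = 21 + 8 + 1; the greedy choice automatically never uses two consecutive Pingala numbers.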
Theorem 19. The first player wins if and only if they can remove the small-
est number in the ZOL-decomposition of the heap size.
Notice that this statement includes the previous one, since the starting
player is not allowed to remove the whole heap. In the example 11 = 8 + 3,
the first player removes 3, and then the second player has to move from the
Fibonacci number 8. They can remove r pebbles, 1 ≤ r ≤ 6. If they remove
3 or more, they lose on the next move, since the opponent may then take the
rest. Otherwise the next player can play to 5, again a Fibonacci number. In
their next move they can either win directly, or, if the other player removed
1, they can take 1 from 4 and reach 3. Now, because the other player can
only reach 1 or 2, they win in their next move. There
is a better notation. We write (x, r) for a position with a heap of size x in
which the next player is allowed to remove at most r pebbles. Thus, the
starting position on a heap of size x is (x, x − 1), and a move removing t
pebbles, 1 ≤ t ≤ min(r, x), leads to the position (x − t, 2t); in particular,
every later removal bound is even. We postpone the proof until a later lecture.
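Theorems 18 and 19 can be sanity-checked on small heaps by brute force over positions (x, r). A sketch in our own notation (taking the last pebble wins, per normal play):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(x, r):
    """True if the player to move wins the position (x, r): a heap of x
    pebbles of which at most r may be removed."""
    return any(t == x or not wins(x - t, 2 * t)
               for t in range(1, min(r, x) + 1))

def first_player_wins(x):
    """On the opening move, taking the whole heap is forbidden."""
    return wins(x, x - 1)
```

For instance, first_player_wins(11) holds, and the winning move removes 3, the smallest part of the ZOL-decomposition 11 = 8 + 3: the resulting position (8, 6) is a loss for the opponent, as the theorem predicts.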

9. Lecture 9
We review the famous Sprague-Grundy Theory. It says that, for any im-
partial normal play game G, there is a nim-heap h such that played together
G + h ∈ P. Moreover, the proof provides a constructive way to find that
nim-heap. The minimum excludant function, abbreviated ‘mex’, finds the
nim-value of a given game. The mex-function is defined as follows. Let X ⊂ N0
be a strict subset of the non-negative integers. Then mex X = min(N0 \ X),
the least non-negative integer not in X. For example, mex{0, 1, 3, 5, 17} = 2
and mex{1} = 0. As a motivation, before the proof, let us draw a game tree
and compute its nim-value via the mex-algorithm: recursively, it computes
the nim-values of every subposition, via the mex-rule, until it reaches the
root and assigns its nimber. This is an impartial game, so the directions of
the slopes do not matter.

(Game-tree figure: the nim-values are computed bottom-up by the mex-rule.
Each leaf is assigned value 0, and each internal node is assigned the mex of
its children’s values; in the drawn example, the values 0, 1, 2 and 3 appear.)
Definition 20 (Subgame). Consider a game G. Then H is a subgame of G
if there is a sequence of moves from G to H, not necessarily alternating, and
perhaps empty.
In the literature, a subgame is often called a “subposition” or a “follower”,
with exactly the same meaning.
Theorem 21. Every impartial normal play game is equivalent to a nimber.
Proof. Consider any impartial normal play game G. The statement holds if
G does not have any options (then G = 0 = ∗0), so assume that G has
options, and suppose, by induction, that the statement is true for all games
of smaller birthday. In particular, each option of G, say Gi , equals a
nim-heap, say ∗hi . We will demonstrate that G equals the nim-heap ∗mex{hi },
where i ranges over the options of G. To this purpose we play the game
G + ∗mex{hi }, and demonstrate that it is a P-position.
Suppose that the first player plays in the nim-heap component, to G + ∗hj
for some hj < mex{hi } (every move in a nim-heap is to a strictly smaller
nim-heap). By the definition of mex, hj < mex{hi } implies hj ∈ {hi }, so G
has an option Gj = ∗hj , and the second player can respond to Gj + ∗hj =
∗hj + ∗hj ∈ P. Suppose next that the first player plays in the G component,
to Gj + ∗mex{hi }. There are two possibilities (hj = mex{hi } is excluded by
the definition of mex):
(i) hj > mex{hi };
(ii) hj < mex{hi }.
In case (i), the second player can respond inside Gj to an option equal to
∗mex{hi }, reaching a position equal to ∗mex{hi } + ∗mex{hi } ∈ P; such an
option exists because any game equal to ∗hj has, for each m < hj , an option
equal to ∗m. In case (ii), the second player can respond in the nim-heap to
Gj + ∗hj = ∗hj + ∗hj ∈ P. 
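The proof turns directly into an algorithm: the nim-value of a position is the mex of its options' nim-values. Below is our own sketch (not from the notes), applied to Nim positions themselves, encoded as tuples of heap sizes; it recovers, e.g., that the Lecture 1 starting position with heaps 2, 3, 4, 5 is a P-position.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the least non-negative integer not in `values`."""
    s = set(values)
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(heaps):
    """Nim-value of a Nim position (a tuple of heap sizes), computed purely
    by the mex recursion of Theorem 21 -- no appeal to the xor formula."""
    heaps = tuple(sorted(h for h in heaps if h > 0))
    options = set()
    for i, h in enumerate(heaps):
        for smaller in range(h):  # reduce heap i to any smaller size
            options.add(grundy(heaps[:i] + (smaller,) + heaps[i + 1:]))
    return mex(options)
```

A position is P exactly when its nim-value is 0; the recursion reproduces Bouton's binary xor rule, and in particular the position with heaps 2, 3, 4, 5 has nim-value 2 ⊕ 3 ⊕ 4 ⊕ 5 = 0.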

10. Lecture 10
Subtraction is a generalization of Nim, defined by a subtraction set S ⊂
N. The move options from a heap of size x form the set {x − s | s ∈ S, x − s ≥ 0}.
For example, if S = {1, 3, 4, 7}, then the first few outcomes and nim-values are
as follows:

x 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
o(x) P N P N N N N N P N P N N N N N
nim(x) 0 1 0 1 2 3 2 3 0 1 0 1 2 3 2 3

We conclude that both the outcomes and the nimbers are periodic, with
period length 8, and starting at the heap of size 0. How do we know this? It
follows by observing a repetition of the content of 7 consecutive squares in
either row. Why this ‘window’ size of 7? This is because this is the maximum
size of a subtraction. This idea assists us in the proof that any subtraction
game on a finite subtraction set has an eventually periodic outcome.6
6 Consider a sequence s = (si ) indexed by the non-negative integers. Then s is ‘even-
tually periodic’ if one can decompose the sequence into a finite part (si )i≤k , the preperiod,
and an infinite part (si )i>k , the periodic part, for which there is a p such that for all
i > k, si = si+p . If k = 0 we say that the sequence is purely periodic, or just periodic.
Theorem 22. Every subtraction game S has an eventually periodic outcome
sequence.
Proof. There are at most 2^{max S} possible outcome ‘windows’
(o(x + 1), . . . , o(x + max S)) of size max S. Since this is a finite number,
there must exist a smallest heap size x′ and some y′ > x′ for which the
windows starting at x′ and at y′ coincide. Since o(x) is determined by the
outcomes o(x − s), s ∈ S, all of which lie among the preceding max S values,
equal windows propagate forever: the outcome sequence is periodic with
period y′ − x′ from x′ + 1 onwards. 

Similarly, we can prove that the nim-sequence is eventually periodic. The
argument is the same; just replace the 2 possible outcomes by the |S| + 1
possible nim-values.
Corollary 23. Every subtraction game S has an eventually periodic nim-sequence.
Proof. By the mex rule, since a position in the ruleset S has at most |S|
options, the largest nim-value that can occur is |S|, so at most |S| + 1 values
occur. The rest of the proof is analogous to that of Theorem 22. 

Many ‘small’ rulesets have period length equal to the sum of two of the
possible subtractions. In our example: 7 + 1 = 8. But there are excep-
tions. For example, the ruleset S = {2, 5, 7} has period length 22. Moreover
many games in Subtraction have a preperiod before the eventual behavior
settles. An early example is S = {2, 4, 7}. What is its preperiod and period?
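Both sequences are easy to tabulate, and a repeated window certifies the period, exactly as in the proofs above. A sketch in our own code (not from the notes); applied to nim_sequence({2, 4, 7}, 300) it answers the question just posed.

```python
def nim_sequence(S, n):
    """Nim-values of heaps 0, ..., n-1 in the subtraction game on the set S."""
    g = []
    for x in range(n):
        seen = {g[x - s] for s in S if x - s >= 0}
        m = 0
        while m in seen:  # inline mex
            m += 1
        g.append(m)
    return g

def find_period(seq, window):
    """Smallest (start, p) such that `window` consecutive values starting at
    `start` reappear at `start + p`; for window >= max S this certifies
    period p with preperiod `start`."""
    for start in range(len(seq)):
        for p in range(1, len(seq) - start - window):
            if seq[start:start + window] == seq[start + p:start + p + window]:
                return start, p
    return None
```

For S = {1, 3, 4, 7}, this reproduces the table above and certifies the purely periodic pattern with period 8.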
Let us return to Wythoff Nim. We start with a recap that includes a to
do list. There are several representations of the P-positions of Wythoff
Nim. Let us list a few.
(1) Geometric Approach: A recursive painting of N -positions illumi-
nates smallest missing P-positions. (done)
(2) A mex-Algorithm: This was briefly mentioned in the classes, but not
yet included in these notes. (will do)
(3) Golden Section: We have stated the result in Lecture 1, but not yet
proved it. (will do)
(4) Wythoff Properties: We listed five properties that uniquely define the
Wythoff sequences. Proof included in Lecture ?, but not yet done in
class.
(5) ZOL-numeration: The P-positions have a nice interpretation in the
ZOL-numeration mentioned in the discussion of Fibonacci Nim
(will do here)
(6) A Morphism on Words: Lecture 11.
Let us do (5). First we construct a small table of the first positive integers
in ZOL-numeration. We use the binary notation with F2 = 1 as the small-
est ‘digit’, so for example 7 = 5 + 2 = F5 + F3 = 1010 and 13 = F7 = 100000.
n ZOL(n)
1 1
2 10
3 100
4 101
5 1000
6 1001
7 1010
8 10000
9 10001
10 10010

The numbers n = 1, 3, 4, 6, 8, 9, . . . are those whose representation ends in
an even number of 0s. We note that these coincide with the A-sequence
of Wythoff Nim’s P-positions. The B-sequence is obtained by adjoining
a ‘0’ to the right of the numbers in the A-sequence. And indeed, this is our
next theorem.

Theorem 24. In ZOL-numeration An (Bn ) is the nth number that ends with
an even (odd) number of 0s, and, in this numeration, for all n, Bn = An 0.

Proof. Recall the Wythoff properties:


(i) (a0 , b0 ) = (0, 0);
(ii) for all n, an+1 > an ;
(iii) for all n, bn − an = n;
(iv) for all n, m > 0, an ≠ bm ;
(v) for all x ∈ N, there exists an n such that an = x or bn = x.
By Theorem 4, it suffices to justify each item. But all items except (iii)
are immediate by definition. It remains to prove that for all n, in ZOL-
numeration, An 0 = An + n. Let us describe An as a function of n.

Claim: if ZOL(n) ends in an odd number of 0s, then ZOL(An ) = ZOL(n)0,


and otherwise, ZOL(An ) = ZOL(n − 1)1.

In these examples, instead of writing ZOL, we use quotation marks. For
n = 5 (first case): 5 = “1000”, A5 = 8 = “10000”, B5 = “100000”. For n = 4
(second case): ZOL(3) = “100”, A4 = 6 = “1001”, B4 = 10 = “10010”.
Before we prove this claim, let us see that it suffices to prove the result.
In the first case, by the Pingala recurrence (appending a 0 maps each Fi to
Fi+1 , and Fi + Fi+1 = Fi+2 ),

ZOL(Bn ) = ZOL(An ) + ZOL(n)
         = ZOL(n)0 + ZOL(n)
         = ZOL(n)00
         = ZOL(An )0.
And notice that ZOL(Bn ) ends in an odd number of 0s, because ZOL(n)
does so. The second case is similar, but it requires a small trick, namely
ZOL(Bn ) = ZOL(An ) + ZOL(n)
= ZOL(n − 1)1 + ZOL(n)
= ZOL(n − 1)1 + ZOL(n − 1) + 1
= ZOL(n − 1)0 + ZOL(n − 1) + 2
= ZOL(n − 1)00 + 2
= ZOL(n − 1)10
= ZOL(An )0.
Here it is important to observe that each line is a valid ZOL-representation.
In particular, ZOL(n − 1)10 is valid, by the following implication: if ZOL(n)
ends in an even number of 0s, then ZOL(n−1) does not end in a “1”. Namely,
if ZOL(n) ends in a “1” then remove it to obtain ZOL(n − 1). Otherwise
argue by ZOL-carry as explained in this footnote.7

Proof of Claim. It is clear that all positive integers that end in an even
number of 0s in the ZOL-representation are represented. Therefore it suffices
to establish that going from n to n + 1 makes the claimed ZOL-representation
of An increase. If ZOL(n) and ZOL(n + 1) end in 0-runs of the same parity,
there is nothing to prove. Thus there are two cases to check:
(i) ZOL(n) odd and ZOL(n + 1) even;
(ii) ZOL(n) even and ZOL(n + 1) odd.
For item (i), the claimed representations give An+1 − An = 1, which is
immediate. For item (ii), going from even to odd as n → n + 1, it must be
the case that ZOL(n) ends in a “1”, so the rules of ZOL-carry apply. Similar
to the second case above, one sees that in this case An+1 − An = 2.
The corresponding table begins like this:

n ZOL(n) An ZOL(An )
1 1 1 1
2 10 3 100
3 100 4 101
4 101 6 1001
5 1000 8 10000
6 1001 9 10001

7 One basic property of ZOL-numeration is that adding 1 to an alternating word of
1s and 0s of length k results in the word of length k + 1 of the form 100 . . . 0, and vice
versa. For example 101 + 1 = 1000 and 1010 + 1 = 10000. That is, in this numeration, we
have 10^k − 1 = (10)^{k/2} , if k is even, and 10^k − 1 = (10)^{(k−1)/2}1 , if k is odd. Let us call this
property ZOL-carry.
11. Lecture 11
In this lecture we will start talking about game temperature.
But let us first complete the Wythoff story with items (2) and (6) in
Lecture 10.
Theorem 25. Let A and B be the increasing sequences that define
Wythoff Nim’s P-positions. For all n ∈ N0 , let
an = mex{ai , bi | 0 ≤ i < n}, and
bn = an + n.
Then, for all n, an = An and bn = Bn .
Proof. The Wythoff Properties are all satisfied by definition: (i) is immediate,
and that the a-sequence is increasing, as in (ii), follows from the definition of
mex. Item (iii) is obvious, and (v) holds by the definition of mex, since the
smallest missing value cannot be skipped forever. Item (iv) can be verified
by an inductive argument. Namely, suppose that bn−1 is the largest element
in {ai , bi | 0 ≤ i < n}. Then bn = an + n > bn−1 , by using also (ii). Hence
there can be no collision. 
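The mex-recurrence of Theorem 25 can be cross-checked against the golden-section formula of Theorem 1, using the exact identity ⌊n√5⌋ = isqrt(5n²) to avoid floating point. A sketch in our own code (not from the notes):

```python
from math import isqrt

def wythoff_mex(n_max):
    """The pairs (a_n, b_n), 0 <= n <= n_max, via the mex-recurrence."""
    a, b, used = [0], [0], {0}
    for n in range(1, n_max + 1):
        m = 0
        while m in used:  # mex of all previous a_i and b_i
            m += 1
        a.append(m)
        b.append(m + n)
        used.update({m, m + n})
    return a, b

def floor_phi(n):
    """Exact floor(n*phi), since floor(n*sqrt(5)) = isqrt(5*n*n)."""
    return (n + isqrt(5 * n * n)) // 2
```

The two descriptions agree: a_n = ⌊nφ⌋ and b_n = ⌊nφ²⌋ = ⌊nφ⌋ + n.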
The Fibonacci Morphism ϕ : {0, 1}∗ → {0, 1}∗ is defined by8
ϕ(0) = 01;
ϕ(1) = 0.
If the initial seed is 0, then ϕ generates the infinite Fibonacci word, ω.
This word is generated recursively as follows:
ϕ(0) = 01;
ϕ(01) = 010;
ϕ(010) = 01001;
ϕ(01001) = 01001010,
and so on. Note that ϕ(010) = ϕ(0)ϕ(01), and so on. That is ω =
limn→∞ ϕn (0), where, for all n > 0, for all x ∈ {0, 1}∗ , ϕn (x) = ϕn−1 (ϕ(x)),
where ϕ0 (x) = x. We index the letters in ω by N. We get ω1 = 0, ω2 =
1, ω3 = 0, ω4 = 0, ω5 = 1, and so on. The morphism ϕ has many interest-
ing properties. For example, for all n ∈ N0 , the lengths of the words ϕn (0)
correspond to the Fibonacci numbers Fn+2 .
The following result relates the occurrences of the 0s and 1s in this word
with the P-positions of Wythoff Nim.
Theorem 26. For all n ∈ N, An equals the index of the nth “0” in ω, and
for all n, Bn equals the index of the nth “1” in ω.
8The set of possible finite words on a finite set of letters X is denoted by X ∗ . A function
f : X ∗ → X ∗ is a morphism, under concatenation, if for all v, w ∈ X ∗ , f (vw) = f (v)f (w).
Proof. Set A0 = B0 = 0 to satisfy (i) in the Wythoff Properties. Clearly A
is increasing, so (ii) holds. Similarly complementarity, that is, (iv) and (v),
holds by definition. It remains to justify (iii), the shift property. Let i(·) be
the index function; we must prove that i(nth 1) = i(nth 0) + n. Since
ϕ(ω) = ω, the 1s of ω are exactly the second letters of the blocks ϕ(0), so
the nth 1 is produced by the nth 0 of ω. If k = i(nth 0), then the prefix
ω1 · · · ωk−1 contains exactly n − 1 zeros, so ϕ maps it to a prefix of length
(k − 1) + (n − 1); the block ϕ(ωk ) = 01 follows, whence i(nth 1) =
(k − 1) + (n − 1) + 2 = k + n. 
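The morphism is immediate to iterate in code. Our own sketch below (names hypothetical) checks the length claim |ϕⁿ(0)| = F_{n+2}, the statement of Theorem 26, and the shift property, on a long prefix of ω.

```python
def phi_morphism(word):
    """The Fibonacci morphism: 0 -> 01, 1 -> 0."""
    return "".join("01" if c == "0" else "0" for c in word)

def fib_word(n_iter):
    """The prefix phi^n_iter(0) of the infinite Fibonacci word omega."""
    w = "0"
    for _ in range(n_iter):
        w = phi_morphism(w)
    return w
```

The 1-indexed positions of the 0s begin 1, 3, 4, 6, 8, 9, . . . (the A-sequence), and those of the 1s begin 2, 5, 7, 10, 13, 15, . . . (the B-sequence).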
Let us prove that the sequences from Wythoff Nim, (⌊nφ⌋) and (⌊nφ²⌋),
are complementary. That is, every positive integer appears in exactly one of
these sequences. This can be done in full generality, by instead proving that
the sequences (⌊nα⌋) and (⌊nβ⌋) are complementary whenever α and β are
irrationals with 1/α + 1/β = 1.
Theorem 27 (Beatty’s/lord Rayleigh’s Theorem). Let α and β be positive
irrational numbers with 1/α + 1/β = 1. Then (⌊nα⌋)n∈N and (⌊nβ⌋)n∈N are
complementary.
Proof. Collision: Since α and β are irrational, if ⌊nα⌋ = ⌊mβ⌋ = x, then
x < nα < x + 1 and x < mβ < x + 1. Thus x/α + x/β < n + m <
(x + 1)/α + (x + 1)/β. But then x < n + m < x + 1, where all expressions
are integers, which is impossible.

Anticollision: Since α and β are irrational, if ⌊nα⌋ < x < ⌊(n + 1)α⌋ and
⌊mβ⌋ < x < ⌊(m + 1)β⌋, for an integer x, we get nα < x < (n + 1)α − 1 and
mβ < x < (m + 1)β − 1. Thus, n + m < x/α + x/β < n + m + 2 − 1/α − 1/β.
That is, n + m < x < n + m + 1, where all entries are integers, which is
impossible. 
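Beatty's theorem can be sanity-checked for α = φ, β = φ² with exact integer arithmetic; one can use ⌊nφ⌋ = (n + ⌊n√5⌋)/2 with ⌊n√5⌋ = isqrt(5n²), so no floating point is involved. A sketch in our own code:

```python
from math import isqrt

def beatty_phi(n):
    """Exact floor(n*phi), via floor(n*sqrt(5)) = isqrt(5*n*n)."""
    return (n + isqrt(5 * n * n)) // 2

N = 2000
A = {beatty_phi(n) for n in range(1, N + 1)}
B = {beatty_phi(n) + n for n in range(1, N + 1)}  # floor(n*phi^2) = floor(n*phi) + n
limit = beatty_phi(N)  # every integer up to here is decided by A and B above
```

Every integer in 1, . . . , limit then lies in exactly one of A and B, as the theorem asserts.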
Let us prove Theorem 1 by using the Wythoff Properties from page 7. Let
us recall them here:
(i) (a0 , b0 ) = (0, 0);
(ii) for all n, an+1 > an ;
(iii) for all n, bn − an = n;
(iv) for all n, m > 0, an ≠ bm ;
(v) for all x ∈ N, there exists an n such that an = x or bn = x.
Proof of Theorem 1. By Theorem 4, it suffices to justify that, for all
non-negative integers n, (an , bn ) = (⌊nφ⌋, ⌊nφ²⌋) satisfies the Wythoff Properties.
Item (i) is immediate by setting n = 0. Item (ii) is immediate, because
1 < φ. And (iii) follows from φ² = φ + 1. Items (iv) and (v) follow from
Theorem 27. 
Similarly the mex-description in Theorem 25 easily follows from the Wythoff
Properties (check this!). For Theorems 24 and 26 the interesting Wythoff
Property is the shift property (iii).
Game temperature. There are cold games, there are tepid games and there
are hot games. Hackenbush is an example of a ruleset for which all games
are cold. The game is played with red or blue pieces stacked upon each other
in various directions. Left can remove blue pieces and Right can remove red
pieces. Any piece that ceases to have a connection to the ground falls off
and is no longer part of the game. Let us list a few examples together with
their game values (they will be studied later in this section).

Figure 5. Some Hackenbush positions and their values:
1/2, 1/4, 3/4, 1/2, 1 and −1.

Clobber is an example of a ruleset with only tepid positions (in fact


the games are so-called all-small). Toppling Dominoes is a ruleset with
a variety of positions, in particular, we can build arbitrarily hot positions;
imagine a stretch of pieces throughout the room, with same colored pieces
to the left and right of the middle respectively. Such positions can be made
arbitrarily hot, that is, the first player can gain a huge number of ‘free moves’.
On the other hand, we can show that Hackenbush does not even have any
N -positions. (This problem might be a quiz at some point, so you may
want to start thinking about it.)
The rightmost picture in Figure 5 reduces to 1/2. This can be proved in
a similar fashion to what we usually do (that is, by using domination and/or
reversibility), or by justifying the guess directly, by showing that G − 1/2 ∈ P.
But here we will discover an easier way that applies only to games that are
numbers.

12. Lecture 12
Numbers. Let us start by defining cold games; they are all numbers. We
use the word ‘numbers’ interchangeably in two ways, either as usual, or as
defined here, where ‘numbers’ are a special type of games. This abuse (or
double use) of language is justified because games that are numbers inherit
their basic properties from our standard total order and arithmetic.
Definition 28 (Number). The game G is a number, if for all options GL
and GR , GL < GR , and all options are numbers.
Theorem 29. Consider a number G. Then, for all options, GL < G < GR .
Proof. Note that oL (G − GL ) = L, because she can play to GL − GL (this is
true for all games, not just numbers). We must prove that oR (G − GL ) = L,
if G is a number. Right can play to some GR − GL > 0 (a Left win), by
assumption, so assume instead that he plays to some G − GLL . Then, since
GL is a number, induction gives oR (GL − GLL ) = L. 
Consider two games G and H. Then G is simpler than H, if its canonical
form has smaller birthday than that of the canonical form of H.
We will show that the non-zero canonical form numbers are in one-to-one
correspondence with the non-zero dyadic rationals, that is, the rationals of
the form m/2^k with m ∈ Z and k ∈ N0 (for k ≥ 1 we may take m odd).
That is, later on we will be able to identify the games that are numbers with
the dyadic rationals and 0. The dyadic game forms are defined as follows.
Definition 30 (Dyadic Game). For all k ∈ N0 , let the game 1/2^k =
{0 | 1/2^{k−1}}. The game m/2^k is m copies of 1/2^k in a disjunctive sum.
If k = 0, then this definition provides all integer games (note that
{0 | 1/2^{−1}} = {0 | 2} = 1). If k ≥ 1, then this says that 1/2 = {0 | 1},
1/4 = {0 | 1/2}, and so on.
We can prove a couple of nice properties of the dyadic games.
Theorem 31. For all k ∈ N,
(a) 1/2^k > 0;
(b) 1/2^k > 1/2^ℓ , if ℓ > k;
(c) 1/2^k + 1/2^k = 1/2^{k−1} ;
(d) for all m, m/2^k = {(m − 1)/2^k | (m + 1)/2^k}, and for odd m this game
is in canonical form.
Proof. For (a) we prove that Left wins playing first or second. Playing first,
she wins by moving to 0. Playing second, Right moves to 1/2^{k−1} and she
wins by induction; the base case is the game 1, when k = 0.
For (b), we prove that Left wins 1/2^k − 1/2^ℓ if ℓ > k. Left, playing first,
moves to 1/2^k − 1/2^{ℓ−1} and wins by induction, or by mimicking if ℓ − 1 = k.
If Right starts, he loses by (a) if he plays to 1/2^k , and he loses by induction
if he plays to 1/2^{k−1} − 1/2^ℓ .
For (c), we prove that 1/2^k + 1/2^k − 1/2^{k−1} ∈ P. If Right starts, he plays
either to 1/2^k + 1/2^k > 0, by (a), or to 1/2^k + 1/2^{k−1} − 1/2^{k−1} = 1/2^k > 0,
by (a). If Left starts, she plays either to 1/2^k − 1/2^{k−1} < 0, by (b), or (when
k ≥ 2) to 1/2^k + 1/2^k − 1/2^{k−2} = (1/2^k − 1/2^{k−1}) + (1/2^k − 1/2^{k−1}) < 0,
by induction on (c) together with (b).
For (d), we prove that
{(m − 1)/2^k | (m + 1)/2^k} − m/2^k
is a P-position. If Left starts by playing to (m − 1)/2^k − m/2^k , then Right
can respond to (m − 1)/2^k − (m − 1)/2^k = 0. If Left starts by playing to
{(m − 1)/2^k | (m + 1)/2^k} − (m − 1)/2^k − 1/2^{k−1} , then Right can play to
(m + 1)/2^k − (m − 1)/2^k − 1/2^{k−1} = 2/2^k − 1/2^{k−1} = 0. Similarly, if Right
starts by playing to (m + 1)/2^k − m/2^k , then Left can respond to
m/2^k − m/2^k = 0. And if Right plays in the second sum-component, then Left
responds to (m − 1)/2^k − (m − 1)/2^k = 0.
Next we prove that, for odd m, this is the canonical form. There is no
domination, since each player has a single option. The Left option
(m − 1)/2^k is reversible only through equality: its Right option
(m − 2)/2^k + 1/2^{k−1} = m/2^k , and when we replace the Left option by the
Left options of this Right option, induction shows that we get back the same
option (m − 1)/2^k . Hence no further reduction is possible. 
Let us define the nonnegative integers and dyadic rationals recursively via
their birthdays as in Figure 6 (the negative ones are analogously defined).
0
1
2, 1/2
3, 3/2, 3/4, 1/4
4, 5/2, 7/4, 5/4, 7/8, 5/8, 3/8, 1/8

Figure 6. The birth of integers and dyadic rationals coin-
cides with the birth of games that are numbers; the first row is
the birthday 0 game, each later row lists the new numbers born
on the next day, and so on. In both cases, the negatives are
symmetrically obtained.

First think of them in the usual way as we know from high school. We will
establish that this construction follows the birthdays of the corresponding
games that are numbers. We start with D0 = {0}, D1+ = {0, 1}, D2+ =
{0, 1/2, 1, 2}, D3+ = {0, 1/4, 1/2, 3/4, 1, 3/2, 2, 3},
D4+ = D3+ ∪ {1/8, 3/8, 5/8, 7/8, 5/4, 7/4, 5/2, 4}
= {0, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 7/8, 1, 5/4, 3/2, 7/4, 2, 5/2, 3, 4}.
In general, Dn+ = Dn−1+ ∪ {n} ∪ {(di + di+1)/2 | di , di+1 consecutive
elements of Dn−1+}. For all n, let Dn− = {−x : x ∈ Dn+} and Dn = Dn− ∪ Dn+.
If we let n → ∞, then this construction gives all integers and dyadic
rationals. Let D = ∪n Dn .
Observe that, by construction, if x, y ∈ D with x < y, then among all z ∈ D
with x < z < y there is a unique one that belongs to Di for the smallest
possible index i. This is the simplest dyadic between x and y.
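The search for the simplest dyadic can be coded directly from the birth order: integers first (the one of least birthday in the interval), then repeated bisection. This is our own sketch (names hypothetical), assuming lo < hi:

```python
from fractions import Fraction
from math import floor

def simplest_between(lo, hi):
    """The simplest dyadic x with lo < x < hi (the simplicity rule)."""
    lo, hi = Fraction(lo), Fraction(hi)
    assert lo < hi
    if lo < 0 < hi:
        return Fraction(0)       # 0 is the simplest number of all
    if hi <= 0:                  # mirror the negative case
        return -simplest_between(-hi, -lo)
    # Now 0 <= lo < hi: if any integer is inside, the smallest one is simplest.
    n = floor(lo) + 1
    if n < hi:
        return Fraction(n)
    # No integer inside: bisect within the unit interval (k, k + 1).
    k = floor(lo)
    a, b = Fraction(0), Fraction(1)
    while True:
        m = (a + b) / 2
        if m <= lo - k:
            a = m
        elif m >= hi - k:
            b = m
        else:
            return k + m
```

For example, the simplest dyadic strictly between 1/2 and 3 is the integer 1, and between 1/4 and 1/2 it is 3/8; this is exactly the x picked out by Theorem 32.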
Theorem 32 (Number Simplicity). Consider a number game G. Then G
equals the simplest dyadic x such that, for all options GL and GR , GL < x <
GR . And x is the canonical form of G.
Proof. By induction, we may assume that all options of G are dyadic games
in simplest form. Then we may use domination to single out one Left and
one Right option, such that G = {GL | GR}. Since G is a number, we know
that GL < G < GR .
Suppose next that x is the simplest dyadic such that
(3) GL < x < GR .
We demonstrate that G − x ∈ P.
We begin by proving that oL (G − x) = R. By assumption GL − x < 0, so
Left loses by moving in the G component. Hence, suppose instead that Left
starts by playing to G − xR . Since xR is simpler than x, by (3) it cannot
satisfy GL < xR < GR ; and since all the games involved are numbers, no
two of them are fuzzy, so either
(a) xR ≤ GL , or
(b) xR ≥ GR .
If (a), then, because x is a number, x < xR ≤ GL < x, which is impossible.
If (b), then Right can respond to GR − xR ≤ 0, and win.
Secondly, let us prove that oR (G − x) = L. If Right plays to GR − x > 0,
he loses. Suppose therefore that Right plays to G − xL . Since xL is simpler
than x, by (3) (and since no two numbers are fuzzy), either
(a) xL ≤ GL < x, or
(b) xL ≥ GR > x.
In case (a), Left can respond to GL − xL ≥ 0, and win. Case (b)
contradicts that x is a number, since xL < x.
For the last part, when x is not an integer, we may write x = m/2^k =
{(m − 1)/2^k | (m + 1)/2^k}, with m odd, and write the Left option in lowest
terms: xL = (m − 1)/2^k = n/2^j , with n odd and j < k (if xL = 0 it has no
options and cannot be reversible). By induction, xL = {(n − 1)/2^j | (n + 1)/2^j},
so xLR = (n + 1)/2^j , and xLR − x = 1/2^j − 1/2^k > 0. Hence xLR > x, and
the Left option does not reverse. The Right option is treated symmetrically,
there is no domination since each player has a single option, and the integer
case is similar (with canonical forms such as n = {n − 1 | } for n > 0). Hence
x is the canonical form of G. 

13. Lecture 13
Number avoidance means that, in a disjunctive sum of games, each player
avoids playing in a number component, unless there is nothing else to do.
There is a slick symbol for “Left wins playing first”, that is, “Right does not
win playing second”: we write 0 ⧏ G (“0 is less than or confused with G”),
which has the same meaning as 0 ≱ G. The following are general lemmas.
Lemma 33. Let G = {GL | GR}, and let H = {GL , A | GR}, with A ⧏ G.
Then G = H.
Proof. A mimic strategy suffices to prove that G − H ∈ P, unless Right
plays to G − A. But then the result follows from G − A ⧐ 0: Left wins
playing first. 
Lemma 34. For any game G and all Left options GL, GL ◁ G.
Proof. The game GL − G ◁ 0, since Right wins playing first to GL − GL. □
We are aiming to prove a “translation property” for numbers. This is also
a strong version of “number avoidance”: you do not play in a number unless
there is nothing else to do.
Theorem 35 (Weak Number Avoidance). If Left can win G + x, where x is
a number and G is not a number, then she has a winning move of the form
GL + x.
Proof. Quiz. □
This result is a consequence of the following result, but it turns out that
we need Theorem 35 to prove an important lemma. We must be careful to
not run into cycles.
Theorem (Number Translation - Number Avoidance). Suppose that x is
a number game, and G is a game that is not a number. Then G + x =
{GL + x | GR + x}.
It is not possible to prove this result by using only what we learnt so
far. (We tried it out unsuccessfully last lecture.) Let us develop some more
theory to prove this result, and we will restate it as Theorem 41.
14. Lecture 14
Combinatorial game theory has min/max type functions that are also
common in classic game theory. Here they are called the Left- and Right
stops respectively:
Definition 36. The Left and Right stops are
Ls(G) = x, if G = x is a number; otherwise Ls(G) = max Rs(GL), taken
over all Left options GL;
Rs(G) = x, if G = x is a number; otherwise Rs(G) = min Ls(GR), taken
over all Right options GR.
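The recursion of Definition 36 is straightforward to run on explicit game trees. Below is a minimal sketch (our own, not from the notes): a number is given as a Fraction, and any other game as a pair (list of Left options, list of Right options).

```python
from fractions import Fraction

def Ls(g):
    """Left stop: g itself if g is a number, else max of Rs over Left options."""
    if isinstance(g, (int, Fraction)):
        return Fraction(g)
    left, _ = g
    return max(Rs(gl) for gl in left)

def Rs(g):
    """Right stop: g itself if g is a number, else min of Ls over Right options."""
    if isinstance(g, (int, Fraction)):
        return Fraction(g)
    _, right = g
    return min(Ls(gr) for gr in right)

# G = {{2 | 1} | -1} is hot: Ls(G) = 1 > Rs(G) = -1
G = ([([2], [1])], [-1])
print(Ls(G), Rs(G))  # 1 -1
```

Note that this sketch assumes number positions are handed to it as Fractions; it does not itself detect that a general tree happens to equal a number.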
These functions are useful in many ways to analyze our games. In particular,
we get a very slick proof of the Number Translation Theorem. It will
depend on some more lemmas though.
Lemma 37. For any game G, Ls(G) ≥ Rs(G).
Proof. The proof is by way of contradiction. Suppose Ls(G) < Rs(G). Then
there is a dyadic number x such that Ls(G) < x < Rs(G). Here we can use
weak number avoidance. If Left starts in the game G − x, then, if she has
a winning move it is to some GL − x. Similarly, unless GL is a number, if
Right has a winning move it must be of the form GLR − x. And so on. But
then, since Ls(G) − x < 0 by assumption, Right wins when Left starts G − x.
Similarly, by using 0 < Rs(G)−x we obtain that Left wins when Right starts
G − x. Therefore G = x, a number, which contradicts Ls(G) < Rs(G). □
Lemma 38. For any games G and H, Rs(G + H) ≥ Rs(G) + Rs(H).
Proof. In the game G + H, if Right starts by playing to GR + H, then,
unless GR is a number, Left can respond to GRL + H; and if Right starts by
playing to G + HR, then Left can respond to G + HRL. Then use induction.
If GR = x is a number, then Ls(GR + H) = x + Ls(H) ≥ Rs(G) + Rs(H),
by Lemma 37 and since Rs(G) ≤ x. □
Lemma 39. For any game G that is not a number, there is a Left option GL
such that Rs(GL − G) ≥ 0.
Proof. We get
(4) Rs(GL − G) ≥ Rs(GL) + Rs(−G)
(5) = Rs(GL) − Ls(G)
(6) = Ls(G) − Ls(G) = 0,
where (4) is by Lemma 38, and (6) is by choosing GL such that Rs(GL) = Ls(G). □
Lemma 40. Suppose that G is such that Rs(G) ≥ 0. Then Left wins G + x,
if x is a positive number.
Proof. Clearly this holds if G is a number. Suppose Right starts by playing
in the G component. Then, unless GR is a number, Left has a response GRL
such that Rs(GRL) ≥ 0. Use induction. If GR is a number, then this number
is no smaller than 0, and so Left wins GR + x. If Right starts by playing in
the x component, to say G + xR, then we can use induction, since xR > x
(or use weak avoidance). If Left starts, observe that Lemma 37 implies that
Ls(G) ≥ Rs(G) ≥ 0. That is, she has a move such that Rs(GL) ≥ 0. Now
use induction. □
Let us restate the result we are aiming at.
Theorem 41 (Number Translation - Number Avoidance). Suppose that x
is a number game, and G is a game that is not a number. Then G + x =
{GL + x | GR + x}.
Proof. By definition of disjunctive sum,
G + x = {GL + x, G + xL | GR + x, G + xR}.
Thus, by domination, it suffices to prove that there exists a GL such that
GL + x ≥ G + xL, for any xL (and symmetrically for Right). We combine
Lemma 39 with Lemma 40; together they establish that there exists GL
such that GL − G + x − xL > 0. □
15. Lecture 15 - In a very small world
If x is a positive number (which means that Left wins playing first or
second in x), then x > ∗.
Lemma 42. If x > 0 is a number, then x > ∗.
Proof. Consider the game x + ∗ = x − ∗, where x is a number. Left playing
first to x wins. If Right starts by playing to x, then Left wins. If Right plays
to xR + ∗, then Left wins by induction, since xR > x is a number. □
Note that ∗ ◁▷ 0. There are also positive games so small that, however
many copies you add, you will not reach one move for Left, the game 1.
One such game is ↑ = {0 | ∗} > 0. A related game is the fuzzy game
↑∗ = ↑ + ∗ = {0, ∗ | 0}. Fuzzy means that it is an N-position, that is,
incomparable with 0.
Definition 43. The game g is an infinitesimal, with respect to numbers, if
for all positive numbers x, −x < g < x.
Mostly we omit the part “with respect to numbers”, but as we will see later
there is an infinite hierarchy of games that are infinitesimals with respect to
each other. We have already observed that the game ∗ is an infinitesimal.
However, ∗ is confused with 0. The following two results are perhaps more
remarkable.
Theorem 44. There are positive infinitesimals, with respect to numbers.
Proof. The game ↑ is positive. We will demonstrate that, for all n ∈ N,
n · ↑ < 1, i.e., that Left wins the game 1 + n · ↓. Suppose that Left starts in
the game 1 + n · ↓. She plays to 1 + ∗ + (n − 1) · ↓. If Right responds to
1 + (n − 1) · ↓, then she wins by induction, and if he plays to
1 + ∗ + (n − 2) · ↓, then Left can play to 1 + (n − 2) · ↓, and win by
induction. Suppose next that Right starts in the game 1 + n · ↓. Then he
plays to the game 1 + (n − 1) · ↓, and Left wins by induction. □
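The claim n · ↑ < 1 can be machine-checked for small n by Zermelo-style backward induction (Lecture 1). The following is our own minimal, unoptimized sketch, not part of the notes; a game is a pair of frozensets of options.

```python
from functools import lru_cache

# a game is (frozenset of Left options, frozenset of Right options)
ZERO = (frozenset(), frozenset())
STAR = (frozenset({ZERO}), frozenset({ZERO}))   # * = {0 | 0}
DOWN = (frozenset({STAR}), frozenset({ZERO}))   # down = {* | 0}
ONE  = (frozenset({ZERO}), frozenset())         # 1 = {0 | }

@lru_cache(maxsize=None)
def add(g, h):
    """Disjunctive sum: a move is made in exactly one component."""
    left  = frozenset({add(x, h) for x in g[0]} | {add(g, y) for y in h[0]})
    right = frozenset({add(x, h) for x in g[1]} | {add(g, y) for y in h[1]})
    return (left, right)

@lru_cache(maxsize=None)
def left_wins_first(g):
    return any(not right_wins_first(x) for x in g[0])

@lru_cache(maxsize=None)
def right_wins_first(g):
    return any(not left_wins_first(x) for x in g[1])

for n in range(5):
    g = ONE
    for _ in range(n):
        g = add(g, DOWN)
    # 1 + n copies of down is a Left win no matter who starts, i.e. n.up < 1
    assert left_wins_first(g) and not right_wins_first(g)
```

The memoized mutual recursion is exactly backward induction on the (finite) game tree, so it terminates by axiom (4) of Lecture 1.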

In one lecture we mentioned briefly another choice, the class of so-called


uptimals, a sequence of the form ↑ > ↑2 > ↑3 > . . . > 0. They build
infinite hierarchies of yet smaller positive games. By definition, the game
↑n = 0 | ∗ − ↑ − ↑2 − · · · − ↑n−1 . For example ↑2 = {0 | ↓∗}. Moreover,
the uptimals are infinitely small with respect to each other.

Theorem 45. Fix a positive integer n. Then, for all positive integers m,
↑n > m · ↑n+1.
Proof. Exercise! □
This result motivates a special notation for uptimals. We use a standard
positional numeration system, where the digit in position n denotes the
number of copies of the uptimal ↑n. For example, 0.102̄3 = ↑ + 2 · ↓3 + 3 · ↑4
(here we use x̄ to denote the negative of a positive digit x).
The next result concerns even smaller games, called Tiny,
⧾1 = {0 | {0 | −1}},
and Miny,
⧿1 = {{1 | 0} | 0}.
Theorem 46. There are positive infinitesimals with respect to any uptimal ↑n.
Proof. Exercise! (Try with Miny and Tiny.) □
This is the content of the next lecture. There is some overlap.
15.1. More on Infinitesimals - a toppling dominoes approach, by Anjali.
Definition 47. A short game G is infinitesimal if x > G > −x for every
positive number x.
Consider the combinatorial game of toppling dominoes. There are
three types of dominoes in the game; red which can be toppled by Left, blue
which can be toppled by Right and green which can be toppled by both.
Take the following game with two red dominoes as in Figure 7. This game
has the value of 2.
Now, let us add a blue domino in between the two red dominoes, as in
Figure 8. Let this game be G1. It has the value
G1 = {0, ∗ | 1}.
The second Left option is reversible, and hence
G1 = {0 | 1} = 1/2.
Adding one more blue domino in the middle gives us the game in Figure 9.
Let this game be G2. This game has the value
G2 = {0, {0 | −1} | 1, ∗}.
The Left option {0 | −1} is reversible, and the Right option 1 is dominated
by ∗, since ∗ < 1. Thus,
G2 = {0 | ∗} = ↑.
Figure 7. G = 2.
Figure 8. G1 = 1/2.
Figure 9. G2 = ↑.
We can compare this game with any number, say 1. We find that 1 > ↑. We
want to find out how many ↑s it might take for the value to be more than
1, if possible at all. Actually, by induction, we can prove that, for all n ≥ 1,
1 > n · ↑.
The game ↑ is an infinitesimal. It has a very small game value, infinites-
imally small with respect to the dyadic rationals. Consider the sequence
1, 1/2, 1/4, 1/8, . . .. It rapidly tends to 0, right? Well, in the amazing world
of combinatorial games, there is some space between this infinite sequence
that ‘converges to 0’, and the game 0. And now we will demonstrate what
this means.
Now, let us add one more blue domino in the middle, so that there are
three blue dominoes in between the two red dominoes. See Figure 10. Let
this game be G3.
G3 = {0, {0 | −2} | {0 | −1}, ∗}.
The second Left option is reversible, and the second Right option, ∗, is
dominated by {0 | −1}. Thus,
G3 = {0 | {0 | −1}} = ⧾1.
Games of this type are called Tiny, and they are infinitesimals. This
particular game is Tiny(1), written ⧾1.
Definition 48. For all G > 0, the games ⧾G (Tiny) and ⧿G (Miny) are defined by
⧾G = {0 | {0 | −G}} and ⧿G = −(⧾G) = {{G | 0} | 0}.
We can prove that ↑ > ⧾1. Moreover, we have the following theorem.
Theorem 49. ⧾1 is infinitesimally small with respect to ↑. That is, for all n ≥ 1,
↑ > n · ⧾1.
Proof. We want to find the outcome of ↑ + n · ⧿1, where ⧿1 = {{1 | 0} | 0}.
Case 1: Left starts. If Left moves in ↑, to 0 + n · ⧿1, then Right would
win the game, since ⧿1 < 0. Hence, Left moves in one of the ⧿1, to go to
Figure 10. G3 = ⧾1.
↑ + {1 | 0} + (n − 1) · ⧿1. Then, Right moves to ∗ + {1 | 0} + (n − 1) · ⧿1.
By induction, we prove that ↑ + n · ⧿1 is won by Left when Left starts.
Case 2: Right starts. Right has an advantage in ⧿1, as ⧿1 < 0, so Right
moves in ↑, to ∗ + n · ⧿1. This is an N-position, since Right can undo all
Left moves in the ⧿1 components. Left will not move in ∗ unless necessary,
and wins the game. If at a certain point Right moves in ∗, then Left gains
an advantage by going to 1 + m · ⧿1, where m ≤ n, and Left wins similarly.
Therefore, the game ↑ + n · ⧿1 is an L-position, which implies
↑ > n · ⧾1, for all n ≥ 1. □
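The same kind of backward-induction check as for Theorem 44 (our own sketch, not part of the notes) confirms that ↑ plus n copies of the miny game {{1 | 0} | 0} is an L-position for small n:

```python
from functools import lru_cache

# a game is (frozenset of Left options, frozenset of Right options)
ZERO = (frozenset(), frozenset())
STAR = (frozenset({ZERO}), frozenset({ZERO}))      # * = {0 | 0}
UP = (frozenset({ZERO}), frozenset({STAR}))        # up = {0 | *}
ONE = (frozenset({ZERO}), frozenset())             # 1 = {0 | }
ONE_ZERO = (frozenset({ONE}), frozenset({ZERO}))   # {1 | 0}
MINY = (frozenset({ONE_ZERO}), frozenset({ZERO}))  # {{1 | 0} | 0}

@lru_cache(maxsize=None)
def add(g, h):
    """Disjunctive sum: a move is made in exactly one component."""
    left = frozenset({add(x, h) for x in g[0]} | {add(g, y) for y in h[0]})
    right = frozenset({add(x, h) for x in g[1]} | {add(g, y) for y in h[1]})
    return (left, right)

@lru_cache(maxsize=None)
def left_wins_first(g):
    return any(not right_wins_first(x) for x in g[0])

@lru_cache(maxsize=None)
def right_wins_first(g):
    return any(not left_wins_first(x) for x in g[1])

for n in range(1, 5):
    g = UP
    for _ in range(n):
        g = add(g, MINY)
    # up + n copies of miny is a Left win no matter who starts
    assert left_wins_first(g) and not right_wins_first(g)
```

Such a check only covers finitely many n, of course; the induction in the proof is what settles the general statement.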
We can now easily make the game ⧾2, by adding one more blue domino in
the middle, making it 4 instead of 3 blue dominoes in Figure 10, and so on.
We have the following result.
Lemma 50. The game ⧾2 is infinitesimally small with respect to ⧾1. That
is, for all n ≥ 1, ⧾1 > n · ⧾2.
Proof. To prove this, we first show that ⧾1 + ⧿2 > 0, which implies ⧾1 > ⧾2.
We have
⧾1 = {0 || 0 | −1},
⧾2 = {0 || 0 | −2},
⧿2 = {2 | 0 || 0}.
Case 1: Left moves first. Left moves in ⧿2, to the game ⧾1 + {2 | 0}, since
⧾1 is a positive game and this is an advantage for Left. Next, Right's best
move is to replicate Left's move, in ⧾1, and the game becomes
{0 | −1} + {2 | 0}. Now, Left can move to the game {0 | −1} + 2, which
gives Left the advantage of free moves. This game has only one move for
Right, so the game becomes −1 + 2 = 1, and thus Left wins.
Case 2: Right moves first. Right's best move is to disrupt the positive part
of the game, by moving to {0 | −1} + ⧿2. Next, Left replicates the last move
by Right, and the game becomes {0 | −1} + {2 | 0}. Now, Right's best move,
to gain a free move, is to go to −1 + {2 | 0}. Left responds in {2 | 0}, in
order to force Right to let go of the free move, and the game becomes
−1 + 2 = 1 > 0. Thus, Left wins.
Therefore, we see that Left can force a win regardless of who moves first.
Hence, we have ⧾1 + ⧿2 > 0, that is, ⧾1 > ⧾2. The claim ⧾1 > n · ⧾2 then
follows through induction. □

This lemma gives the following theorem.


Theorem 51. Let G and H be two game such that G > H > 0. Then, H
is infinitesimally small with respect to G i.e.
G > n · H , ∀n > 1.
Proof. The proof follows through induction using the previous lemma. 

16. Lecture 16 - An overview of atomic weight theory
In this section we do not prove any result, but you will get to know “the
raison d’être” for atomic weight theory.
This is an introduction to the atomic weights of all-small combinatorial
games. In an all-small combinatorial game, for all subgames, either both
players can move or neither can. Let us begin with a ruleset example:
a disjunctive sum of a flower garden with a single strip of bipass. The
ruleset flower garden is a subset of hackenbush; it has a green stalk
of integer length, and on top a flower, with blue and red petal leaves. See
Figure 13 to the right.
A bi-collective of one-directional micro organisms, consisting of a red tribe
and a blue tribe, live in close proximity, and they take turns moving. The red
tribe moves by letting one of its members crawl rightwards across a number of
blue amoebae, while settling in the spot of a blue amoeba, and thus pushing
each bypassed amoeba one step to the left, whereas the blue tribe moves by
letting one of its members crawl leftwards across a bunch of red amoebae,
while shifting the position of each bypassed amoeba one step to the right.
Amoebae cannot bypass their own kind. When an amoeba reaches the end of
the line, it cannot be played, and thus dies (of boredom). See Figure 11. The
exception is if no more moves are possible in the full collective; in this case
the last moving tribe wins, and is rewarded eternal life, as in Figure 12. This
ruleset is called bipass.
Who wins the game in Figure 13, and why? To understand this, let us
dwell a bit on the theory of atomic weights (from the books).
Figure 11. The middle amoeba crawled to the left end. When an amoeba
does not face any opponent, even at a far distance, it gets removed, because
it cannot be used in the game by either player.
Figure 12. By moving, the single white amoeba bypassed all remaining
amoebae, and will be celebrated as a hero by its resurrected tribe.
Figure 13. A Bipass strip in disjunctive sum with a Flower Garden.
Definition 52 (Far Star). The far star, denoted ✶, is an arbitrarily large
Nim heap; that is, both players can move to any Nim heap from ✶. It has
the additional property that ✶ + ✶ = ✶.
Equivalence modulo ✶ is obtained as follows.
Definition 53 (Equivalence Modulo ✶). Let G, H be normal play games.
Then G ≥✶ H, if, for all games X, o(G + X + ✶) ≥ o(H + X + ✶), and
G =✶ H if G ≥✶ H and H ≥✶ G.
The ‘game’ ✶ may be treated as a standard combinatorial game since, for
each game G, for any sufficiently large n, o(G + ∗n) is constant. We may
take n greater than the birthday of G.
Theorem 54 (Constructive ✶-equivalence). Let G, H be normal play games.
Then G ≥✶ H if and only if G − H > ↓ + ✶, and G =✶ H if and only if
↓ + ✶ < G − H < ↑ + ✶.
For example, we have G =✶ 0 if and only if Left wins G + ↑ + ∗n and
Right wins G + ↓ + ∗n for some sufficiently large n, and n is sufficiently
large if ∗n does not equal any follower of G + ↑ or G + ↓. This motivates
the name constructive ✶-equivalence. This method is useful also in proofs:
whenever we have a good guess of one of the games G or H, we can use this
result to verify whether our candidate games are equal. See Theorem 59 for
how it applies to atomic weight theory.
Let X be a set of games and y a game. Then X + y = {x + y : x ∈ X}. If X
is a set of all-small games, let aw(X) = {aw(x) : x ∈ X}.
The product of a game G and ↑ is defined by: 0 · ↑ = 0, and
n · ↑ = ↑ + (n − 1) · ↑, in case G = n is a positive integer (with
n · ↑ = −((−n) · ↑) for negative integers). Otherwise
G · ↑ = {GL · ↑ + ⇑∗ | GR · ↑ + ⇓∗}.
Lemma 55. Consider any normal play game G.
• If G · ↑ > 0 then G > 0.
• If g is all-small then there is a unique G such that g = G · ↑.
In a proof of this lemma, the first item implies the second, and we use the
uniqueness to define atomic weight.
Definition 56 (Atomic Weight). The atomic weight of an all-small game g
is the unique game G = aw(g) such that G · ↑ = g.
Example 57. Let g = ∗n. By Theorem 54, aw(∗n) = 0. Namely, we claim
that Left wins ∗n + ↑ + ✶ (and symmetrically Right wins ∗n + ↓ + ✶). If
Left starts, she can play ✶ to ∗n, leaving ↑ > 0. If Right starts by playing
to ∗m + ↑ + ✶, then Left wins (by induction). If he plays ↑ to ∗, leaving
∗n + ∗ + ✶ = ∗m + ✶, say, then Left can respond to ∗m + ∗m = 0. If he
plays ✶ to ∗m, leaving ∗n + ↑ + ∗m ▷ 0, then Left wins playing first (Right
could win playing first if and only if ∗n + ∗m = ∗).
Example 58. Let g = ↑ and let h = ↑∗. By Theorem 54, aw(g) = aw(h) = 1.
Namely, ✶ < ↑ and ✶ < ↑∗. By using also symmetry, the verification is
similar to the one in Example 57.
In this example we had to guess the atomic weight 1 and then verify.
There is a constructive/recursive method for computing atomic weights.
Theorem 59 (Constructive Atomic Weight). Let g be an all-small game,
and let
G = {aw(gL) − 2 | aw(gR) + 2}.
Then aw(g) = G, unless G is an integer. In this case, compare g with the
far star. If
• g ◁▷ ✶, then aw(g) = 0;
• g < ✶, then aw(g) = min{n ∈ Z : n ▷ GL for all GL};
• g > ✶, then aw(g) = max{n ∈ Z : n ◁ GR for all GR}.
Theorem 60 (Atomic Weight Properties). Let g and h be all-small games.
Then
(i) aw(g + h) = aw(g) + aw(h);
(ii) aw(−g) = −aw(g);
(iii) if aw(g) ≥ 1, then g ▷ 0 (Left wins playing first);
(iv) if aw(g) ≥ 2, then g > 0 (Left wins).
As usual, ‘▷’ denotes greater than or confused with. In particular, (iv) is
the raison d’être for atomic weight, and it is popularly called “the two-ahead
rule”. We will have plenty of use for it.
For example, we argue that the game in Figure 13 is an L-position.
Namely, the chosen rulesets satisfy very elegant properties with respect to
atomic weight theory. If a flower in the garden has more red (Left plays
red) than blue petal leaves, then this flower has atomic weight one. And
since atomic weights are additive, we can simply compute the number of
red flowers minus the number of blue flowers to get the atomic weight of a
Flower Garden (see for example [BCG1982] or [S2013]). Bipass has the
inverse property in a sense: the more pieces of the opponent the better. Let
∆(g) be the number of blue amoebae minus the number of red amoebae in
a Bipass strip. Then aw(g) = ∆(g). See [LN2023] for a proof of this result.
By these two results, we can compute the atomic weight of the disjunctive
sum game in Figure 13, and indeed, it is two, so the two-ahead rule applies:
Left is two atomic units ahead of Right, and so she will win independently
of who starts. (How?)
16.1. Quiz: Euclid’s Game. The ruleset Euclid is played on two non-
empty heaps of pebbles. A player must remove a multiple of the size of
the smaller heap from the larger heap. We represent a position by a pair
of positive integers (x, y), where say x ≤ y. Note that if x = y, then the
position is terminal. Example: (2, 7) → (2, 3) → (1, 2) → (1, 1). Since
we put the requirement that both heaps remain non-empty, no more moves
are possible from (1, 1). Note that the losing moves are forced.
Optimal play reduces to minimizing the relative distance of the heaps.
Recall the golden section, φ = (1 + √5)/2.
Theorem 61 ([CD1969]). A player wins Euclid if and only if they can
remove a multiple of the smaller heap such that the ratio of the resulting
heap sizes (x, y) satisfies 1 ≤ y/x < φ.
Proof. Suppose that the current player is in a position of the form (a, b),
with b/a > φ. We must prove that they can find a move to a position (c, d)
of the form
(7) 1 ≤ d/c < φ.
We claim that there is a positive integer k such that either
• (c, d) = (b − ka, a), or
• (c, d) = (a, b − (k − 1)a)
satisfies the desired inequality (7). If 1 ≤ a/(b − ka) < φ, we are done, so
suppose that
(8) a/(b − ka) > φ.
Then (b − ka)/a < 1/φ. And so
(b − (k − 1)a)/a = (b − ka)/a + 1 < 1/φ + 1 = φ.
Suppose next that the current player is in a position of the form (a, b),
with 1 ≤ b/a < φ. Then there is only one move option, namely (b − a, a),
and it follows that
(b − a)/a = b/a − 1 < φ − 1 = 1/φ.
And hence, a/(b − a) > φ. □
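A brute-force backward-induction check of this result for small heaps can be sketched as follows (our own illustration; the function name first_player_wins is hypothetical):

```python
from functools import lru_cache

PHI = (1 + 5 ** 0.5) / 2  # the golden section

@lru_cache(maxsize=None)
def first_player_wins(x, y):
    """Euclid position (x, y): remove k*x from y, both heaps stay >= 1."""
    if x > y:
        x, y = y, x
    moves = [(x, y - k * x) for k in range(1, (y - 1) // x + 1)]
    if not moves:  # x == y: terminal, so the player to move loses
        return False
    return any(not first_player_wins(a, b) for a, b in moves)

# N-positions are exactly those whose ratio exceeds the golden section
for x in range(1, 15):
    for y in range(x, 40):
        assert first_player_wins(x, y) == (y / x > PHI)
```

Since φ is irrational, the ratio y/x is never exactly φ, so the floating-point comparison is safe at these heap sizes.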

17. Lecture 17 - An overview of reduced canonical form
Recall the definitions of the Left and Right stops, Ls and Rs respectively, in
Definition 36. A game G is hot if Ls(G) > Rs(G), it is tepid if Ls(G) =
Rs(G), and it is an infinitesimal if Ls(G) = Rs(G) = 0.⁹
Example 62. For example, G = {{2 | 1} | −1} is hot, because 1 = Ls(G) >
Rs(G) = −1. But H = {{2 | 1} | 1} = 1 + {{1 | 0} | 0} = 1 + ⧿1 is
tepid. On the other hand, g = {{1 | ↑} | ↓} is infinitesimal, because Ls(g) =
Rs({1 | ↑}) = Ls(↑) = 0 and Rs(g) = Ls(↓) = 0.
In a sense, the “one move for Left”, which is hidden to the left of the Left
option g L , will mostly be irrelevant in play. Similar to Miny and Tiny though,
the option does not reverse out. The hinge here of course is the word “mostly”.
In an environment with similar infinitesimals, there are situations where the
hidden “one move for Left” can become alive. But those situations are rare,
and the concept of a Reduced Canonical Form removes the appearance of
infinitesimals for all sub-games. It becomes an important tool to analyze
games that otherwise were intractable.
In fact, there are two approximation techniques related to these concepts,
namely Temperature Theory, which outputs a temperature and a mean value,
and Reduced Canonical Form (rcf), which outputs a coarsening of the usual
canonical form (value) of a game. By taking the reduced canonical form
approximation of a game, the temperature and mean value remain the same.
Hence, one may view Temperature Theory as a last resort, if both the game
⁹A special case of infinitesimal games is the class of all-small games. But there are many
infinitesimals that are not all-small. For example, we have already seen ⧾1 and ⧿1.
value and the rcf are impenetrable to a human eye. As usual, we are looking
for information on how to play well in a disjunctive sum of games. There is a
classical strategy called hotstrat, which says: play in the hottest component.
This is often the best strategy, but not always.¹⁰ A brief introduction to
Temperature Theory is the topic of the next section.
The reduced canonical forms for the games in Example 62 are rcf(G) =
{{2 | 1} | −1}, rcf(H) = 1, and rcf(g) = 0. We will study the meaning of
these statements in what comes, and we will define all concepts accordingly.
First we define the equivalence classes modulo inf, and state a main theorem
with tools for efficient computation of equivalent games. Then, we define the
reductions modulo inf, and at last we state a result on the uniqueness of the
end result of these reductions. By the way, there are many hot games for which
rcf(G) ≠ G. For example, G′ = {{2 | 1∗} | −1} is hot and in canonical form,
but rcf(G′) = {{2 | 1} | −1}.
The equivalence relation modulo infinitesimals is defined as follows.
Definition 63 (Equivalence Mod Inf). Consider games G and H. Then
G =inf H, if, for all positive numbers x, −x < G − H < x.
The games G and H are ‘infinitesimally close’ if G =inf H, and this is
also often called “equivalent modulo infinitesimals”. Similarly, for games G
and H, G ≥inf H if G − H > −x for all positive numbers x. Note that the
relation ≥inf is a partial order; this is the partial order modulo infinitesimals.
The first result simplifies game comparison modulo infinitesimals, since
showing G ≥inf H is equivalent to showing G − H ≥inf 0.
Theorem 64 (Constructive Rcf). The following are equivalent.
(1) G ≥inf 0;
(2) Rs(G) ≥ 0;
(3) G ≥ ε for some infinitesimal ε.
The first item is Definition 63; the second and third items are efficient
constructive tools, and they often appear in proofs and examples.
Definition 65 (Inf-reduction). Let G be a game.
(1) Suppose A, B ∈ GL. Then A inf-dominates B, if A ≥inf B.
(2) The Left option GL is inf-reversible (through GLR), if G ≥inf GLR
for some Right option GLR.
Theorem 66 (Domination Mod Inf). Assume that G is not equal to a number,
and suppose that G′ is obtained by removing some inf-dominated option
(either Left or Right). Then G =inf G′.
Example: the game {1, 1∗ | 0} =inf {1 | 0}, since 1 inf-dominates 1∗.
¹⁰Good examples of when it fails use the concept of overheating (a type of ‘inverse’
of cooling), but we do not have the time to cover that topic in this course. Please see
instead [S2013].
Theorem 67 (Reversibility Mod Inf). Let G be a game and suppose that
GL is inf-reversible through GLR. Let
G′ = {GL \ {GL}, GLRL | GR}.
If G′ is not a number, then G =inf G′.
It is important that G′ in Theorem 67 is not a number. For example, with
H = {{2 | 1} | 1}, as in Example 62, we get that HLR = 1 ≤inf H, by noting
that H − 1 = ⧿1 is an infinitesimal. And so, if the result would apply to
numbers, then H′ would be {0 | 1} = 1/2 ≠inf 1.
Luckily, we have other tools, and the Number Translation Theorem (Theorem 41)
tells us that H = 1 + ⧿1 =inf 1, by using also Theorem 64 (iii). Or,
we can use directly Theorem 64 (ii), by noting that Ls(H) = Rs(H) = 1.
For an example when we can use the reversibility theorem, take g =
{{1 | ↑} | ↓}, as in Example 62, and we prove that g =inf 0. Again, we
can apply either Theorem 64 (ii) or (iii). If we use (ii), we must prove that
Ls(g) = Rs(g) = 0. But this is easy to see (why?). If we use (iii), we
must find infinitesimals ε1 and ε2 such that g ≥ ε1 and g ≤ ε2. Take ε1 = ⇓
and ε2 = ⇑. Please verify these choices!
Definition 68. A game G is in reduced canonical form if, for every subgame
H of G, either H is a number in canonical form, or H is hot and contains
no inf-dominated or inf-reversible options.
In particular, if a game is tepid, then its reduced canonical form is a
number.
Theorem 69. For every game G, there exists a unique game H in reduced
canonical form such that G =inf H.
Definition 70. The reduced canonical form of a game G, denoted rcf(G),
is the unique game H in reduced canonical form such that G =inf H.
18. Lecture 18 - Intuitive Temperature theory
We have already seen examples of hot games: games with some urgency to
play first. Typically one would think of the switches, games of the form ±x =
{x | −x}, for some positive number x. For example G = ±10 = {10 | −10}
is hot, and it has temperature T (G) = 10 (see Figure 16 for its so-called
thermograph). We find the temperature of a hot game, by cooling it, and
observing when the cooled game becomes a tepid game. In our example,
the smallest number t > −1, for which Gt := {10 − t | −10 + t} is tepid, is
indeed t = 10. And this defines T(G). Indeed, G10 = ∗, an infinitesimal, and
infinitesimals are all tepid. We are thinking of ‘cooling’ as if each player is paying a
penalty of t. As long as they can keep paying the penalty and benefiting in
‘score = number of moves’ playing first, the game can still be cooled further.
Thus, the hot game G = {3 | 1} can be cooled by at most 1 until it freezes
and becomes the tepid game 2∗ = 2 + ∗, where no player benefits by paying
further penalty in exchange for playing first.
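For the special case of a switch {a | b} with numbers a > b, the cooling just described can be written down directly (our own sketch; the helper name is hypothetical): the game freezes at t = (a − b)/2 into the tepid game (a + b)/2 + ∗.

```python
from fractions import Fraction

def switch_temperature_and_mean(a, b):
    """Temperature and mast (= mean) value of the switch {a | b}, a > b:
    G_t = {a - t | b + t} becomes tepid exactly when a - t = b + t."""
    a, b = Fraction(a), Fraction(b)
    assert a > b
    return Fraction(a - b, 2), Fraction(a + b, 2)

assert switch_temperature_and_mean(10, -10) == (10, 0)  # T = 10, mean 0
assert switch_temperature_and_mean(3, 1) == (1, 2)      # {3 | 1} freezes to 2*
```

This one-liner covers only switches; general games need the full thermograph construction of this lecture.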
To begin with, by using a naïve understanding of heat, how should one
play optimally in the disjunctive sum
{10 | −10} + {3 | 1} + {1 | {0 | −100}}?
The hottest game component is {10 | −10}, and the first player has a winning
move there. It does not need to be the unique winning move though. There
is another winning move for one of the players.
The general definition of temperature is recursive in t, starting at the two
stops. It includes the possibility of cold games with negative temperature;
the coldest game has defined temperature −1, and that holds for any game
that is an integer (see Figure 14 for its thermograph). The integers cannot
be further cooled.
Suppose that we wish to ‘cool’ the game 1/2 = {0 | 1} to find its
temperature. We have to ‘pay’ a negative penalty (because the game is
already cold). We get (1/2)−1/2 = {0 + 1/2 | 1 − 1/2} = 1/2 + ∗, but if
t < −1/2, then (1/2)t is not tepid, but in fact hot. The same idea as in the
first paragraph works for any non-integer number game, and by using that
m/2^k = {(m − 1)/2^k | (m + 1)/2^k}, we get that the temperature of
m/2^k is −1/2^k.
The standard definition of temperature found in books is non-constructive,
but it seems a bit difficult to avoid this, since we must cool a game G
everywhere all at once, by starting the cooling at its Left and Right stops,
and continuing until Ls(Gt) = Rs(Gt).
definition (see [S2013] for a rigorous treatment of temperature). An intuitive
description suffices for practical purposes, namely, to find the temperatures
and the mean values of some hot games from your rulesets.
Let us explain this concept with an example. Consider the game {3 | −2}.
Is this game ‘better’ for Left or Right? Well, anyone can see that it depends
on who starts, and it seems also that Left has a definite advantage in that
she can earn one more move than Right could, if she gets to start. The mean
value of a game G is defined as the number m(G) such that the difference
n · G − n · m(G) is bounded by a constant, independent of the size of the
positive integer n. The mean value theorem states that such a number m(G)
exists, and that it suffices to take either of the stops to ‘compute’ it.
Theorem 71 ([S2013]). For any given game G, the mean value m(G) exists
and equals lim_{n→∞} Ls(n · G)/n = lim_{n→∞} Rs(n · G)/n.
A standard tool to find both the temperature and the mean of a game is via
its thermograph. The thermograph of a (hot) game G is drawn by starting
at the Left and Right stops and gradually cooling (by increasing the penalty
t), watching carefully that every line drawn follows the Left and Right
stops of the current Gt. This procedure is most easily understood by drawing
some example games. Let us explain via Figures 14 to 18. The first
picture represents a cold game, the second a tepid game, and all others are
hot games. Take a look at Figures 14 and 15. They look superficially similar,
but there is an important difference. The games have different temperatures,
so their thermographs should look different, right? And they do, the first
game has temperature −1, and the mast continues down below the picture,
whereas the second picture has temperature 0, because it is a tepid game,
and that is illustrated by the fact that the bottom of the mast rests at the
horizontal line, at the top of a trivial thermograph. It is easy to see that the
location of the mast is the mean value of these two games.
Figure 14. The thermograph of the number game 1.
Figure 15. The thermograph of the tepid game {1 | 1} = 1∗.
The next three thermographs are more interesting. Of course, Figure 16
is the thermograph of the game in the first paragraph of this section, and it
illustrates nicely the idea of cooling that game until we reach mast value 0
and temperature 10. Again, the mast value is the same as the mean value of
the game. Check this by playing a large sum of games, where each component
is of the form {10 | −10}. The first player’s advantage is quickly diminished
by the fact that the second player can, at each response, cancel the first
player’s advantage. For switches, it is easy to see that Theorem 71 holds,
and it is also easy to see that the mast value and the mean value are the
same. That this holds in general is another theorem, proved in [S2013].
Theorem 72 ([S2013]). For any game, the mast value and the mean value
are the same.
Figure 16. The thermograph of the switch {10 | −10}.
In the next picture, we study games of the form G = {a | ±b}, where
a > b are positive numbers. There is a geometric approach to arrive at the
vertical border to the right in Figure 17. Namely, use the thermograph of
the switch ±b, and turn it 45 degrees to the left, by fixing it at the point of
the Right stop. The leftmost wall of the thermograph of the Right option
becomes the rightmost wall of the game G. The slope of the leftmost wall
remains the same as in the previous picture (but both the mean value and
the temperature change).
Also for this type of game, we can justify Theorems 71 and 72 directly,
by inspection. Try this!
Figure 17. The thermograph of the game {10 | {5 | −5}}.
We will leave the justification of the thermograph in Figure 18 as an
exercise. The idea is to raise the horizontal bottom level as one cools the
game, and carefully check which option leads to the Right stop at each phase
of the cooling. As in the previous picture, the Right options have to be turned
45 degrees to the left, by fixing the Right stop of the cooled game.

Figure 18. The thermograph of the game {12 | {5 | −5} , {3 | 2}}.



19. Bidding Combinatorial games, by Prem


Consider Left and Right playing a game G. Instead of the conventional
alternating play, here the move order is determined through a discrete Richman
bidding. The total budget TB available to them is fixed: Left’s budget
is p and Right’s is q, such that p + q = TB. A player who wins the bid moves
in G and shifts the winning bid to the other player. Additionally, there is
a tiebreaking marker, which is initially held by one of the players and can be
included in a bid. The tiebreaking marker has no monetary value, but in case of a
tie, the player who is currently holding the tiebreaking marker is the
winner of the bid, and the winning bid together with the tiebreaking marker
is shifted to the other player. There is a final auction at the empty game.
The player who moves last wins the game. Moreover, as in alternating
play, in this bidding setup “last move wins” is the same as “cannot move loses”.
Formally, given a total budget TB, let us define B = {0, . . . , TB, 0̂, . . . , TB̂},
the set of all feasible player budgets. Here the word “budget” includes the
information of the marker holder: p̂ denotes the budget p together with the
marker. A game is a triple (TB, G, p̃), where we take note of Left’s part of
the budget, p̃ ∈ B. If TB is understood, we write (G, p̃).

For example, (2, ∗, 1̂) means that the game is ∗ = {0 | 0} with total budget 2,
in which the Left player has budget 1 together with the tiebreaking marker.
In (2, ∗, 1̂), Left can bid 0 or 0̂ or 1 or 1̂, whereas Right can bid 0 or 1.
Suppose that Left bids the amount 1̂. Then, for both choices of Right, Left
wins the bid and makes a move in the game ∗. The current bidding game
becomes (2, 0, 0). Now Left bids 0, and for all choices of Right, Right wins
the bid and has to make a move in the game 0. Since Right cannot move,
Left wins the game.

(2, ∗, 1̂)  [Left bids 1̂; Right bids 0 or 1]  →  (2, 0, 0)  [Left bids 0; Right wins the bid]  →  Left wins.

The following result ensures that, with the introduction of bidding, there
is no need for mixed strategies.

Theorem 73 (First Fundamental Theorem, [KLRU2022]). Consider the bid-
ding convention where the tie-breaking marker may be included in a bid. For
any game (TB, G, p̃) there is a pure strategy subgame perfect equilibrium,
computed by standard backward induction.
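Theorem 73 says that such games can be solved by backward induction. The sketch below is my own code, not taken from [KLRU2022], and all names in it are invented; it implements one reading of the convention described above: bids are amounts of money, the marker holder may attach the marker to a bid, monetary ties always go to the marker holder, and the marker is transferred whenever it was part of the winning bid or the bid was a tie. Games are nested pairs (Left options, Right options).

```python
from functools import lru_cache

ZERO = ((), ())               # the game 0 = { | }: no options at all
STAR = ((ZERO,), (ZERO,))     # the game * = {0 | 0}

def bids(budget, has_marker):
    """All feasible bids: a money amount, optionally with the marker."""
    flags = (False, True) if has_marker else (False,)
    return [(amount, m) for amount in range(budget + 1) for m in flags]

@lru_cache(maxsize=None)
def left_wins(g, p, tb, lm):
    """True iff Left wins the bidding game (tb, g, p~); lm: Left holds marker."""
    # Left wins iff some Left bid wins against every Right bid (by
    # Theorem 73, one of the players has such a pure strategy).
    return any(
        all(after(g, p, tb, lm, bl, br) for br in bids(tb - p, not lm))
        for bl in bids(p, lm)
    )

def after(g, p, tb, lm, bl, br):
    """Outcome (True = Left wins) once both bids are on the table."""
    (a, ml), (b, mr) = bl, br
    lefts, rights = g
    if a > b or (a == b and lm):
        # Left wins the bid; the marker leaves Left if it was included
        # in the bid, or if the bid was a tie.
        new_lm = lm and not (ml or a == b)
        # Left pays a and must move; with no Left option, Left loses.
        return any(left_wins(gl, p - a, tb, new_lm) for gl in lefts)
    # Right wins the bid and pays b to Left; symmetric marker rule.
    new_lm = lm or mr or a == b
    # Right must move; Left wins iff every Right option is losing for
    # Right, in particular if Right has no move at all.
    return all(left_wins(gr, p + b, tb, new_lm) for gr in rights)

print(left_wins(STAR, 1, 2, True))    # the worked example (2, *, 1^): Left wins
```

Under this reading, left_wins(STAR, 1, 2, True) reproduces the worked example above, and with TB = 0 the solver agrees with Theorem 74: the marker holder is exactly the player to move in alternating play.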

Observe that in case of a tie, the marker is transferred. Therefore, by this
automatic rule, the special case TB = 0 corresponds to alternating normal
play rules.
Theorem 74. Consider TB = 0. Then bidding play is identical to alternat-
ing play. The current player is the player who holds the marker.
Next, we define the outcome of a bidding game.
Definition 75 (Outcome of the game). The outcome of the game (TB, G)
is o(G), defined via the 2(TB + 1)-tuple of partial outcomes as
o(G) = (o(G, TB̂), . . . , o(G, 0̂), o(G, TB), . . . , o(G, 0)).
Here the first half of the outcome corresponds to when Left holds the marker,
and the rest corresponds to when Right holds the marker. The length of the
outcome is 2(TB + 1).
Since this notation can be quite lengthy, we instead adopt word notation.
For example, instead of (R, R, L, L) we simply write RRLL.
Definition 76 (Outcome Relation). Consider a fixed TB and the set of all
budgets B. Then, for any games G and H, o(G) ≥ o(H) if, for all p̃ ∈ B, o(G, p̃) ≥
o(H, p̃).11
Feasibility of outcomes. A careful observation shows that, for TB = 1,
an outcome such as RLRL would be strange, since Right wins without either
money or marker, but loses if he is given a dollar. Next, we state (the proof is
straightforward) that such outcomes are impossible; outcomes are monotone.
Theorem 77 (Outcome Monotonicity). Consider p̃ ∈ B, with p < TB.
Then o(G, p̃) ≤ o(G, p̃ + 1), where p̃ + 1 denotes the budget p + 1 with the
same marker holder in both games.
Another careful observation shows that, for TB = 1, an outcome such as
LLRR is monotone, but for this outcome Left loses with a dollar budget,
yet wins with the marker alone. This is also impossible. The next result
shows that the marker cannot be worth more than a dollar.
Theorem 78 (Marker Worth). Consider TB ∈ N. Then o(G, p̂) ≤ o(G, p +
1).
We can view an outcome as a string of L’s and R’s. From Outcome
Monotonicity and Marker Worth (Theorems 77 and 78), we see that not all
such strings can appear as the outcome of a game. Thus, let us define the
notion of a feasible outcome.
Definition 79 (Feasible Outcome). An outcome is feasible if it satisfies
Outcome Monotonicity (Theorem 77) and Marker Worth (Theorem 78). For
a given TB, the set of all feasible outcomes is O = OTB .
11It is easy to check that the outcome relation is reflexive, antisymmetric and transitive.
Hence the set of all outcomes together with this relation is a poset.

The next result shows that every feasible outcome is realized by some
bidding game.
Theorem 80 (Main Theorem). Consider any total budget TB ∈ N0 . An
outcome, say ω, is feasible if and only if there is a game G such that o(G) =
ω.
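Definition 79 can be checked mechanically. The sketch below is my own code, not from [KLRU2022]; it enumerates all outcome words for a given TB and keeps those satisfying the two conditions of Theorems 77 and 78 (reading L ≥ R, from Left’s point of view). For TB = 1 the infeasible words RLRL and LLRR from the discussion above are indeed filtered out.

```python
from itertools import product

def feasible_outcomes(tb):
    """Outcome words over {L, R} satisfying Outcome Monotonicity
    (Theorem 77) and Marker Worth (Theorem 78), in the entry order
    (o(G, TB^), ..., o(G, 0^), o(G, TB), ..., o(G, 0))."""
    n = tb + 1
    result = []
    for word in product("LR", repeat=2 * n):
        v = [1 if ch == "L" else 0 for ch in word]   # L = 1 beats R = 0
        hat, plain = v[:n], v[n:]                    # Left / Right holds marker
        # within each half, a larger budget is at least as good for Left
        mono = all(h[i] >= h[i + 1] for h in (hat, plain)
                   for i in range(n - 1))
        # o(G, p^) <= o(G, p + 1): index n-1-p holds budget p
        worth = all(hat[n - 1 - p] <= plain[n - 2 - p] for p in range(tb))
        if mono and worth:
            result.append("".join(word))
    return result

print(len(feasible_outcomes(1)))   # how many of the 16 length-4 words survive
```

By Theorem 80, each word this function returns is the outcome of an actual bidding game for that total budget.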
For more details see [KLRU2022] and [KLRU2023].

References
[ANW2007] M. H. Albert, R. J. Nowakowski, D. Wolfe, Lessons in Play, A K Peters,
Ltd., 2007.
[BCG1982] E. R. Berlekamp, J. H. Conway, R. K. Guy, Winning Ways, 1-2, Academic
Press, London (1982). Second edition, 1-4, A K Peters, Wellesley/MA
(2001/03/03/04).
[B1902] C. L. Bouton, Nim, a game with a complete mathematical theory, Ann. of
Math. (2) 3 (1901/02), no. 1-4, 35–39.
[CD1969] A. J. Cole, A. J. T. Davie, A game based on the Euclidean algorithm
and a winning strategy for it, The Mathematical Gazette 53 (1969), 354–357.
[C1976] J. H. Conway, On Numbers and Games, Academic Press, 1976.
[KLRU2022] P. Kant, U. Larsson, R. K. Rai, A. V. Upasany, Bidding
combinatorial games, preprint, arXiv:2207.08073.
[KLRU2023] P. Kant, U. Larsson, R. K. Rai, A. V. Upasany, Constructive
comparison in bidding combinatorial games, preprint, arXiv:2207.11596.
[LN2023] U. Larsson, R. J. Nowakowski, Atomic weights and the combinatorial game of
bipass, Discrete Math. 346 (2023).
[S2013] A. N. Siegel, Combinatorial Game Theory, American Mathematical Society, 2013.
[W1907] W. A. Wythoff, A modification of the game of Nim, Nieuw Arch. Wisk. 7 (1907),
199–202.
[Z1913] E. Zermelo, Über eine Anwendung der Mengenlehre auf die Theorie des
Schachspiels, in Proceedings of the Fifth International Congress of Mathematicians,
volume 2, pages 501–504, Cambridge Univ. Press (1913).
