
A feebly secure trapdoor function

Edward A. Hirsch and Sergey I. Nikolenko


Steklov Institute of Mathematics at St. Petersburg, Russia
Abstract. In 1992, A. Hiltgen [1] provided the first constructions of provably (slightly) secure cryptographic primitives, namely feebly one-way functions. These functions are provably harder to invert than to compute, but the complexity (viewed as circuit complexity over circuits with arbitrary binary gates) is amplified by a constant factor only (with the factor approaching 2).
In traditional cryptography, one-way functions are the basic primitive of private-key and digital signature schemes, while public-key cryptosystems are constructed with trapdoor functions. We continue Hiltgen's work by providing an example of a feebly trapdoor function where the adversary is guaranteed to spend more time than every honest participant by a constant factor of $\frac{25}{22}$.
1 Introduction
Numerous public-key cryptographic protocols have been devised over the years, and
yet there exists not a single proof of their security: neither an unconditional proof (which would necessarily imply $P \ne NP$) nor a proof based on standard (not problem-specific) structural complexity assumptions. While universal primitives for one-way functions [2, 3] and public-key cryptosystems [4] are known (see also [5]), they give no connection to the core assumptions of traditional structural complexity theory; moreover, the asymptotic nature of polynomial-time reductions leaves no hope for confidence that a particular scheme cannot be broken at a particular key length. In general, it appears that there is still a very long way to go before cryptography can claim any provably secure public-key construction.
If we are unable to prove a superpolynomial gap between the complexities of honest
parties and adversaries, can we prove at least some gap? In 1992, Alain Hiltgen [1]
managed to present a function that is twice harder to invert than to compute. His
example is a linear function over GF(2) with a matrix that has few non-zero entries
while the inverse matrix has many non-zero entries; the complexity gap follows by a
simple argument of Lamagna and Savage [6, 7]: every bit of the output depends non-idly
on many variables and all these bits correspond to different functions, hence a lower
bound on the complexity of computing them all together. The model of computation
here is the most general one: the number of gates in a Boolean circuit that uses arbitrary
binary Boolean gates. Note that little more could be expected for this model at present,
since known lower bounds here are linear in the number of inputs [8, 9].
In this work, we construct another feebly secure cryptographic primitive: namely,
a feebly trapdoor function. Of course, in order to obtain the result, we have to prove a

This research was partially supported by the Russian Foundation for Basic Research (RFBR), grants 08-01-00640-a, 08-01-00649, 09-01-00784-a, the Dynasty Foundation, the President of RF grant for leading scientific schools support NSh-4392.2008.1, and the Fundamental research program of the Russian Academy of Sciences.
lower bound on the circuit complexity of a certain function; we use the gate elimination technique from the circuit complexity of the eighties. More formally, the complexity of inverting (decryption) without the use of trapdoor information in our construction is at least $\frac{25}{22}$ times greater than the complexities of honest encryption, decryption, and key generation. We also provide hardness amplification results that give exponential guarantees on the success probability for weaker attackers. In Section 2.1, we give basic definitions. Section 2.2 does preparatory work, establishing some combinatorial properties of the matrices representing hard function candidates. Section 2.3 reviews the basic method of gate elimination, which we use to prove lower bounds on complexity, and applies it to the feeble security setting. Finally, Sections 3.1 and 3.2 present the construction of a feebly secure trapdoor function, Section 3.3 proves hardness amplification results, and Section 4 lists possible directions for further research.
2 Preliminaries
2.1 Basic definitions
Let us denote by $B_{n,m}$ the set of all $2^{m2^n}$ functions $f : B^n \to B^m$, where $B = \{0,1\}$ is the field of two elements. Our computational model is Boolean circuits with arbitrary binary gates. A circuit is a directed acyclic graph with nodes of in-degree zero (these nodes are called circuit inputs, or variables) and two (these nodes are called gates). Every gate is labelled by a binary Boolean function (any of the 16 functions in $B_{2,1}$). Some gates are marked as outputs (to avoid extra negation gates, we may also define an output as the negation of the value obtained in a gate; to avoid identity gates, we also allow to define an output as the value of a variable or its negation). A circuit with $n$ inputs and $m$ outputs naturally computes a Boolean function in $B_{n,m}$. We denote the size of a circuit $c$ by $C(c)$. The circuit complexity (or simply complexity) of a function $f$, denoted (slightly abusing notation) by $C(f)$, is the smallest number of gates in a circuit computing $f$ (such a circuit is called an optimal circuit for $f$): $C(f) = \min_{c:\,\forall x\; c(x)=f(x)} C(c)$. We may safely assume that every gate of this circuit depends on both inputs, i.e., there are no constant functions and no unary functions Id and NOT, because such gates can be easily eliminated from the circuit without increasing the number of gates.
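As a concrete toy illustration of this model (ours, not taken from the paper), a circuit with arbitrary binary gates can be represented as a topologically ordered list of gates, each labelled by one of the 16 functions of $B_{2,1}$ given as a four-entry truth table; the size $C(c)$ is then simply the number of gates:

```python
# Toy model of a Boolean circuit with arbitrary binary gates.
# Each gate is (truth_table, in1, in2); inputs are indexed 0..n-1,
# and gate i gets index n+i.  The truth table is indexed by 2*a + b.

def eval_circuit(n_inputs, gates, outputs, x):
    """Evaluate a circuit on input bits x; its size C(c) is len(gates)."""
    values = list(x)
    for table, i, j in gates:
        values.append(table[2 * values[i] + values[j]])
    return [values[k] for k in outputs]

# Example: two gates computing (x0 XOR x1, x0 AND x1).
XOR = (0, 1, 1, 0)
AND = (0, 0, 0, 1)
gates = [(XOR, 0, 1), (AND, 0, 1)]
for x0 in (0, 1):
    for x1 in (0, 1):
        s, c = eval_circuit(2, gates, [2, 3], (x0, x1))
        assert s == x0 ^ x1 and c == x0 & x1
print("circuit size:", len(gates))  # C(c) = 2
```

Each of the 16 possible truth tables is allowed as a gate label, which is exactly the generality of the model used throughout the paper.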
Following Hiltgen, for every injective $f_n \in B_{n,m}$ we can define its measure of one-wayness $M_F(f_n) = \frac{C(f_n^{-1})}{C(f_n)}$. Hiltgen's work was to find sequences of functions $f = \{f_n\}_{n=1}^{\infty}$ with a large asymptotic constant $\liminf_{n\to\infty} M_F(f_n)$, which Hiltgen calls $f$'s order of one-wayness. We will discuss his results in more detail in Section 2.3. We now extend this definition in order to capture feebly secure trapdoor functions. Since we are interested in constants here, we must pay attention to all the details.
Definition 1. A feebly trapdoor candidate is a sequence of triples of circuits $\mathcal{C} = \{(\mathrm{Key}_n, \mathrm{Eval}_n, \mathrm{Inv}_n)\}_{n=1}^{\infty}$ where:
– $\{\mathrm{Key}_n\}_{n=1}^{\infty}$ is a family of sampling circuits $\mathrm{Key}_n : B^n \to B^{pi(n)} \times B^{ti(n)}$,
– $\{\mathrm{Eval}_n\}_{n=1}^{\infty}$ is a family of evaluation circuits $\mathrm{Eval}_n : B^{pi(n)} \times B^{m(n)} \to B^{c(n)}$, and
– $\{\mathrm{Inv}_n\}_{n=1}^{\infty}$ is a family of inversion circuits $\mathrm{Inv}_n : B^{ti(n)} \times B^{c(n)} \to B^{m(n)}$
such that for every security parameter $n$, every seed $s \in B^n$, and every input $m \in B^{m(n)}$,
$$\mathrm{Inv}_n(\mathrm{Key}_{n,2}(s), \mathrm{Eval}_n(\mathrm{Key}_{n,1}(s), m)) = m,$$
where $\mathrm{Key}_{n,1}(s)$ and $\mathrm{Key}_{n,2}(s)$ are the first $pi(n)$ bits (public information) and the last $ti(n)$ bits (trapdoor information) of $\mathrm{Key}_n(s)$, respectively.
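To make the bookkeeping of this definition concrete, here is a deliberately insecure toy candidate (our illustration, not a construction from the paper): the seed serves as both public and trapdoor information, and evaluation is a one-time pad. It satisfies the correctness equation yet offers no security whatsoever, which is exactly why the definition speaks only of a "candidate":

```python
import random

n = 8  # security parameter; here pi(n) = ti(n) = m(n) = c(n) = n

def key(s):            # Key_n: B^n -> B^pi(n) x B^ti(n)
    return s, s        # (public information, trapdoor information)

def eval_(pi, m):      # Eval_n: B^pi(n) x B^m(n) -> B^c(n)
    return [p ^ b for p, b in zip(pi, m)]

def inv(ti, c):        # Inv_n: B^ti(n) x B^c(n) -> B^m(n)
    return [t ^ b for t, b in zip(ti, c)]

# Correctness: Inv_n(Key_{n,2}(s), Eval_n(Key_{n,1}(s), m)) = m for all s, m.
for _ in range(100):
    s = [random.randint(0, 1) for _ in range(n)]
    m = [random.randint(0, 1) for _ in range(n)]
    pi, ti = key(s)
    assert inv(ti, eval_(pi, m)) == m
print("toy candidate is correct (but trivially breakable: pi reveals ti)")
```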
We call this function a candidate because the definition does not imply any security; it merely sets up the dimensions and provides correct inversion. In our constructions, $m(n) = c(n)$ and $pi(n) = ti(n)$. To find how secure a function is, we introduce the notion of a break. Informally, an adversary should invert the function without knowing the trapdoor information. We introduce break as inversion with probability greater than $\frac{3}{4}$. (We later investigate security against adversaries having smaller success probabilities, too.) We denote by $C_{3/4}(f)$ the minimal size of a circuit that correctly computes a function $f \in B_{n,m}$ on more than $\frac{3}{4}$ of its inputs (of length $n$). Obviously, $C_{3/4}(f) \le C(f)$ for all $f$.
Definition 2. A circuit $N$ breaks a feebly trapdoor candidate $\mathcal{C} = \{(\mathrm{Key}_n, \mathrm{Eval}_n, \mathrm{Inv}_n)\}$ on seed length $n$ if for uniformly chosen seeds $s \in B^n$ and inputs $m \in B^{m(n)}$
$$\Pr_{(s,m) \in U}\left[ N(\mathrm{Key}_{n,1}(s), \mathrm{Eval}_n(\mathrm{Key}_{n,1}(s), m)) = m \right] > \frac{3}{4}.$$
Remark 1. In fact, in what follows we prove a stronger result: no circuit (of a certain size) can break our candidate even for a fixed seed $s$; that is, for every seed $s$, every adversary fails with probability at least $\frac{1}{4}$.
For a trapdoor function to be secure, circuits that break the function should be
larger than the circuits computing it.
Definition 3. A feebly trapdoor candidate $\mathcal{C} = \{(\mathrm{Key}_n, \mathrm{Eval}_n, \mathrm{Inv}_n)\}$ has order of security $k$ if for every sequence of circuits $\{N_n\}_{n=1}^{\infty}$ that break $\mathcal{C}$ on every input length $n$,
$$\liminf_{n\to\infty} \min\left\{ \frac{C(N_n)}{C(\mathrm{Key}_n)},\; \frac{C(N_n)}{C(\mathrm{Eval}_n)},\; \frac{C(N_n)}{C(\mathrm{Inv}_n)} \right\} \ge k.$$
Remark 2. One could consider key generation as a separate process and omit its complexity from the definition of the order of security. However, we prove our results for the definition stated above, as it makes them stronger.
Remark 3. Let us note explicitly that we are talking about one-time security. An adversary can amortize his circuit complexity over inverting a feebly trapdoor candidate for the second time with the same seed, for example, by computing the trapdoor information and successfully reusing it. Thus, in our setting one has to pick a new seed for every input.
In the next sections, we develop our construction of a feebly trapdoor function (that is, a feebly trapdoor candidate with a nontrivial order of security).
2.2 Hard matrices
All our constructions are based on a linear function $f : B^n \to B^n$ shown by A. Hiltgen [1] to be feebly one-way of order $3/2$. We restrict ourselves to the case $n \equiv 0 \pmod 4$. Note that Definition 3 carries through this restriction: for $n \not\equiv 0 \pmod 4$ one can simply consider the circuit with input size equal to the smallest multiple of 4 greater than $n$.
In what follows, all computations are done over $\mathbb{F}_2$. We use standard matrix notation: $e_k$ is the identity $k \times k$ matrix, $0_k$ is the zero $k \times k$ matrix, $e_{ij}$ is the matrix with a single non-zero element in position $(i,j)$, $e_i$ is the matrix whose $i$th row consists of 1s while all other elements are zero, $1_k$ is the $k \times k$ matrix filled with 1s, $u_k$ is the upper triangular $k \times k$ matrix ($u_{ij} = 1 \iff i < j$), $l_k$ is the lower triangular $k \times k$ matrix ($l_{ij} = 1 \iff i > j$), and $m_\sigma$ is the permutation matrix of $\sigma$ ($(m_\sigma)_{ij} = 1 \iff j = \sigma(i)$). By $e$, $0$, $1$, $u$, and $l$ without subscripts we denote the corresponding matrices of dimension $\frac{n}{2} \times \frac{n}{2}$. We also set $\sigma_n$ to be the cyclic permutation $(1\;2\;\ldots\;n)$.
In this notation, the matrix of Hiltgen's function $f$ is $A = e_n + m_{\sigma_n} + e_{n,\frac{n}{2}+1}$. We are also interested in the $n \times 2n$ matrix $\tilde{A}$ consisting of $A^{-2}$ and $A^{-1}$ stacked together: $\tilde{A} = \left( A^{-2}\; A^{-1} \right)$.
The following lemma can be easily veried by direct calculation.
Lemma 1. Let $n = 4k$ for some $k \in \mathbb{N}$. Then
$$A^{-1} = 1_n + \begin{pmatrix} e+u & 0 \\ 0 & l \end{pmatrix}, \qquad A^{-2} = 1_n u_n + u_n 1_n + \begin{pmatrix} e+u^2 & 0 \\ 0 & l^2 \end{pmatrix}.$$
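Lemma 1 can indeed be verified by direct calculation. The following sketch (ours) builds $A = e_n + m_{\sigma_n} + e_{n,\frac{n}{2}+1}$ and the claimed $A^{-1}$ over $\mathbb{F}_2$, reading $u$ and $l$ as the strictly triangular matrices defined above, and multiplies them out:

```python
import numpy as np

def hiltgen_A(n):
    """A = e_n + m_sigma + e_{n, n/2+1} over F_2, sigma = (1 2 ... n)."""
    A = np.eye(n, dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] ^= 1          # permutation matrix of sigma
    A[n - 1, n // 2] ^= 1               # extra entry in position (n, n/2 + 1)
    return A % 2

def claimed_inverse(n):
    """A^{-1} = 1_n + blockdiag(e + u, l) from Lemma 1."""
    k = n // 2
    eu = np.triu(np.ones((k, k), dtype=int))       # e + u: ones for i <= j
    l = np.tril(np.ones((k, k), dtype=int), -1)    # l: ones for i > j
    B = np.ones((n, n), dtype=int)                 # the all-ones matrix 1_n
    B[:k, :k] ^= eu
    B[k:, k:] ^= l
    return B % 2

for n in (4, 8, 12, 16):
    A, Ainv = hiltgen_A(n), claimed_inverse(n)
    assert ((A @ Ainv) % 2 == np.eye(n, dtype=int)).all()
    # row n/2 + 1 of A^{-1} is all ones, as noted in the proof of Lemma 2
    assert Ainv[n // 2].sum() == n
print("Lemma 1 inverse identity verified for n = 4, 8, 12, 16")
```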
Lemma 2. Let $n = 4k$ for some $k \in \mathbb{N}$. The following statements hold.
1. All columns of $\tilde{A}$ (and, hence, of $A^{-1}$) are different.
2. Each row of $A^{-1}$ (resp., $\tilde{A}$) contains at least $\frac{n}{2}$ (resp., $\frac{5n}{4}$) nonzero entries.
3. After eliminating all but two (resp., all but five) columns of $A^{-1}$ (resp., $\tilde{A}$) there remains at least one row with two nonzero entries.
Proof. Let us first interpret the results of Lemma 1. Each row of $A$ contains two ones (on the diagonal and to the right of it) except for the last row, which has three ones, in positions $(n,1)$, $(n,\frac{n}{2}+1)$, and $(n,n)$. Each row of $A^{-1}$ has at least $\frac{n}{2}$ non-zero elements (ones), and the $(\frac{n}{2}+1)$th row does not contain a single zero.
The matrix $A^{-2}$ also has lots of ones: $1_n u_n + u_n 1_n$ is an $n \times n$ matrix filled with zeroes and ones chequered, since
$$(1_n u_n)_{ij} = 1 \iff j \not\equiv 1 \pmod 2, \qquad (u_n 1_n)_{ij} = 1 \iff i \not\equiv 0 \pmod 2.$$
Moreover,
$$(e+u^2)_{ij} = 1 \iff j \ge i \text{ and } i+j \equiv 0 \pmod 2, \qquad (l^2)_{ij} = 1 \iff i > j \text{ and } i+j \equiv 0 \pmod 2,$$
and thus $A^{-2}$ has two triangular blocks filled with ones: for $1 \le i \le j \le \frac{n}{2}$ and for $\frac{n}{2}+1 < j < i \le n$. Thus, each row of $A^{-2}$ contains at least $\frac{n}{2}$ ones; moreover, its triangular blocks consisting of ones coincide with the triangular blocks of $A^{-1}$ filled with zeroes, and the rest is covered with zeroes and ones chequered.
The first statement is obvious.
The $i$th row of $A^{-1}$ contains $\frac{n}{2}+i$ non-zero elements for $i \le \frac{n}{2}$ and $\frac{n}{2}+n-i$ non-zero elements for $i \ge \frac{n}{2}$. Thus, the second statement holds for $A^{-1}$. At the same time, the $i$th row of $A^{-2}$ contains at least $\frac{3n}{4}-\frac{i}{2}$ non-zero elements for $i \le \frac{n}{2}$ and at least $\frac{n}{2}+\frac{1}{2}\left(i-\frac{n}{2}+1\right)$ non-zero elements for $i > \frac{n}{2}$. Therefore, the $i$th row of $\tilde{A}$ contains at least $\frac{n}{2}+i+\frac{3n}{4}-\frac{i}{2} = \frac{5n}{4}+\frac{i}{2}$ nonzero entries for $i \le \frac{n}{2}$ and at least $\frac{n}{2}+n-i+\frac{n}{2}+\frac{1}{2}\left(i-\frac{n}{2}+1\right) = \frac{7n}{4}-\frac{1}{2}(i-1) \ge \frac{5n}{4}$ nonzero entries for $i \ge \frac{n}{2}$.
Let us now prove the third claim. Since $A^{-1}$ has a row that contains only nonzero entries, all but one of the columns of this matrix would have to be eliminated to leave at most one nonzero entry in that row. The same holds for the left part $A^{-2}$ of the matrix $\tilde{A}$ (see its first row), and for the right part $A^{-1}$ without its last column (see the last row).
2.3 Gate elimination
In this section, we first briefly recall Hiltgen's methods and then introduce gate elimination as the primary technique for proving our bounds. Hiltgen proved all his bounds with the following very simple argument due to Lamagna and Savage.
Proposition 1 ([6, 7]; [1, Theorems 3 and 4]).
1. Suppose that $f : B^n \to B$ depends non-idly on each of its $n$ variables, that is, for every $i$ there exist values $a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n \in B$ such that $f(a_1, \ldots, a_{i-1}, 0, a_{i+1}, \ldots, a_n) \ne f(a_1, \ldots, a_{i-1}, 1, a_{i+1}, \ldots, a_n)$. Then $C(f) \ge n-1$.
2. Let $f = (f^{(1)}, \ldots, f^{(m)}) : B^n \to B^m$, where $f^{(k)}$ is the $k$th component of $f$. If the $m$ component functions $f^{(i)}$ and their negations¹ are pairwise different and each of them satisfies $C(f^{(i)}) \ge c \ge 1$, then $C(f) \ge c+m-1$.
Hiltgen counted the minimal complexity of computing one bit of the input (e.g., since each row of $A^{-1}$ has at least $\frac{n}{2}$ nonzero entries, the minimal complexity of each component of $A^{-1}y$ is at least $\frac{n}{2}-1$) and thus produced lower bounds on the complexity of inverting the function (e.g., the complexity of computing $A^{-1}y$ is at least $\frac{n}{2}+n-2 = \frac{3n}{2}-2$).
Besides, in cryptography it is generally desirable to prove not only worst-case bounds, but also that an adversary is unable to invert the function on a substantial fraction of inputs. Indeed, for each of the matrices constructed here, any circuit using less than the minimal necessary number of gates inverts it on less than $\frac{3}{4}$ of the inputs. In Hiltgen's works, this fact followed from a very simple observation (which was not even explicitly stated).
Lemma 3. Consider the function $f = \bigoplus_{i=1}^{n} x_i$. For any $g$ that depends on only $m < n$ of these variables, $\Pr_{x_1,\ldots,x_n}[f(x_1, \ldots, x_n) = g(x_{i_1}, \ldots, x_{i_m})] = \frac{1}{2}$.
Proof. Since $m < n$, there exists an index $j \in \{1, \ldots, n\}$ such that $g$ does not depend on $x_j$. This means that for every assignment of the other variables, whatever the value of $g$ is, $f$ agrees with $g$ for one of the values of $x_j$ and differs from $g$ on the other. This means that $f$ differs from $g$ on precisely $\frac{1}{2}$ of the inputs.
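Lemma 3 is easy to confirm by brute force. The check below (ours) pits the parity of four variables against one arbitrary function of only three of them; by the lemma, the agreement must be exactly half of the $2^4$ assignments:

```python
from itertools import product

n = 4
# g: an arbitrary function of x1, x2, x3 only (here majority, chosen freely)
g = lambda x: (x[0] + x[1] + x[2]) >= 2

agree = sum(
    (x[0] ^ x[1] ^ x[2] ^ x[3]) == g(x)
    for x in product((0, 1), repeat=n)
)
# the parity f agrees with g on exactly half of the 2^n assignments
assert agree == 2 ** n // 2
print(agree, "of", 2 ** n)  # 8 of 16
```

Replacing `g` by any other function of at most three of the variables leaves the count unchanged, which is the content of the lemma.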
This argument suffices for Hiltgen's feebly one-wayness result for the square matrix $A^{-1}$: first we apply the first part of Proposition 1 and see that every output has complexity at least $\frac{n}{2}-1$, and then the second part of Proposition 1 yields the necessary bound of $\frac{3n}{2}-2$. Moreover, if a circuit has less than the necessary number of gates, one of its outputs inevitably depends on less than the necessary number of input variables, which, by Lemma 3, gives the necessary $\frac{1}{2}$ error rate.
¹ Note that we had to add the requirement $f^{(i)} \ne \neg f^{(j)}$ to the original statement of Lamagna and Savage because we do not count the negation of an output as a separate gate.
However, in this paper we also use non-square matrices, and it turns out that a similar simple argument does not provide sufficient bounds for our matrices. Therefore, we use a different way of proving lower bounds, namely gate elimination, which has been previously used for virtually every lower bound in "regular" circuit complexity [9]. The basic idea of this method is the following inductive argument. Consider a function $f$ and substitute some value $c$ for some variable $x$, thus obtaining a circuit for the function $f|_{x=c}$. The circuit can be simplified, because the gates that had this variable as an input become either unary (recall that negations can be embedded into subsequent gates) or constant (in this case we can proceed to eliminate subsequent gates as well). The important case is when the gate is non-linear, such as an AND or an OR gate: in this case it is always possible to choose a value for one of its inputs so that the gate becomes constant. One then proceeds by induction as long as it is possible to find a suitable variable that eliminates sufficiently many gates. Evidently, the number of eliminated gates is a lower bound on the complexity of $f$.
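A minimal worked instance of this idea (our illustration): in a two-gate circuit for $(x \wedge y) \oplus z$, substituting $x = 0$ makes the AND gate constant, which in turn makes the XOR gate unary, so both gates disappear and only the literal $z$ remains:

```python
# Gate elimination on f(x, y, z) = (x AND y) XOR z.
# Setting x = 0 makes the AND gate the constant 0, and a XOR gate with a
# constant-0 input computes the identity on its other input, so it is
# absorbed as well: two gates eliminated by one substitution.

def f(x, y, z):
    return (x & y) ^ z

# Restriction f|_{x=0} as a table over (y, z):
restricted = [[f(0, y, z) for z in (0, 1)] for y in (0, 1)]
# It no longer depends on y and equals z, i.e. it needs 0 gates.
assert all(restricted[y][z] == z for y in (0, 1) for z in (0, 1))
print("f|_{x=0} = z: both gates eliminated")
```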
First, we prove a prerequisite to the master lemma.
Lemma 4. Let $t \ge 0$. Assume that $\chi : B^{v(n)} \to B^n$ is a linear function with matrix $X$ over $GF(2)$. Assume also that all columns of $X$ are different, there are no zero rows in $X$, and after removing any $t$ columns of $X$, the matrix still has at least one row containing at least two nonzero entries. Then $C(\chi) \ge t+1$ and, moreover, no circuit with less than $t+1$ gates can compute $\chi$ on more than $\frac{1}{2}$ of the inputs.
Proof. We argue by induction on $t$. For $t = 0$ the statement is obvious: the value of a circuit with no gates² cannot agree on more than $\frac{1}{2}$ of the input assignments with a linear function essentially depending on two variables.
Let now $t \ge 1$ and consider an optimal circuit of size less than $t+1$ implementing $\chi$ on more than $\frac{1}{2}$ of the inputs. Let us fix a topological order on its nodes. We denote the actual function this circuit implements by $h$ (it does not need to be linear, but it does have to agree with $\chi$ on more than $\frac{1}{2}$ of the inputs).
Consider the topmost gate $g$ in this order. Since $g$ is topmost, its incoming edges come from the inputs of the circuit; denote them by $x$ and $y$. To eliminate a gate, we simply substitute a value for $x$; substituting a value for one variable is equivalent to removing a column from the matrix, and it reduces $t$ by at most 1.
To invoke the induction hypothesis, it remains to note that if $h$ agrees with $\chi$ on more than $\frac{1}{2}$ of the inputs, then either $h|_{x=0}$ or $h|_{x=1}$ agrees with the corresponding restriction of $\chi$ on more than $\frac{1}{2}$ of the remaining inputs. Thus, substituting this value of $x$ into $\chi$ yields a function of $n-1$ inputs that satisfies the conditions of the theorem with parameter $t-1$ and is computed, on more than $\frac{1}{2}$ of the inputs, by a circuit of size less than $t$, which contradicts the induction hypothesis.
² Recall that it can still compute unary functions.
The following is the master lemma that we will apply to our matrices.
Lemma 5. Let $t \ge u \ge 1$. Assume that $\chi : B^{v(n)} \to B^n$ is a linear function with matrix $X$ over $GF(2)$. Assume also that all columns of $X$ are different, every row of $X$ has at least $u$ nonzero entries, and after removing any $t$ columns of $X$, the matrix still has at least one row containing at least two nonzero entries. Then $C(\chi) \ge u+t$ and, moreover, $C_{3/4}(\chi) \ge u+t$.
Proof. This time, we argue by induction on $u$. The induction base ($u = 1$) follows from Lemma 4.
Let now $u \ge 2$ and consider an optimal circuit of size less than $u+t$ implementing $\chi$ on more than $\frac{3}{4}$ of the inputs, and fix a topological order on its nodes. We denote the actual function this circuit implements by $h$ (it does not need to be linear, but it does have to agree with $\chi$ on more than $\frac{3}{4}$ of the inputs).
Consider the topmost gate $g$ in this order. Since $g$ is topmost, its incoming edges come from the inputs of the circuit; denote them by $x$ and $y$. By Lemma 3, neither of its input variables can be marked as an output, because for $u \ge 2$ each row has at least two variables.
The following possibilities exhaust all cases.
1. One of the input variables of $g$, say $x$, enters some other gate. In this case, by setting $x$ to a constant we can eliminate at least 2 gates. To invoke the induction hypothesis, it remains to note that if $h$ agrees with $\chi$ on more than $\frac{3}{4}$ of the inputs, then either $h|_{x=0}$ or $h|_{x=1}$ agrees with the corresponding restriction of $\chi$ on more than $\frac{3}{4}$ of the remaining inputs. Thus, substituting this value of $x$ into $\chi$ yields a function of $n-1$ inputs satisfying the conditions of the theorem with parameters $u-1$ and $t-1$ that is computed, on more than $\frac{3}{4}$ of the inputs, by a circuit of size less than $u+t-2$, which contradicts the induction hypothesis.
2. Neither $x$ nor $y$ enters any other gate. In this case, $h$ is a function not of $x$ and $y$ themselves but only of $g(x,y)$ (if of anything at all); we show that this is impossible. Note that $\chi$ depends on $x$ and $y$ separately; in particular, for one of these variables, say $x$, there exists an output³ $\chi_i$ that depends on $x$ but not on $y$: $\chi_i = x \oplus \bigoplus_{z \in Z} z$, where $y \notin Z$. Since every gate of an optimal circuit essentially depends on both its arguments, there exist values $a$ and $b$ such that $g(0,a) = g(1,b)$. Thus, for every assignment of the remaining variables, $h_i \ne \chi_i$ either for $x=0$, $y=a$ or for $x=1$, $y=b$, which means that $\chi$ and $h$ disagree on at least $\frac{1}{4}$ of all assignments.
In what follows we will also use block-diagonal matrices. Intuition suggests that the joint computation of two functions on disjoint inputs should be as hard as computing them separately (so the lower bound should be the sum of the respective lower bounds). However, for certain functions this is not the case, as seen in [9, Section 10.2]. We show it for our particular case.
Lemma 6. Assume that a linear function $\chi$ is determined by a block-diagonal matrix:
$$\chi\left(x^{(1)}, x^{(2)}, \ldots, x^{(m)}\right) = \begin{pmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X_m \end{pmatrix} \begin{pmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ x^{(m)} \end{pmatrix},$$
and the matrices $X_j$ satisfy the requirements of Lemma 5 with $u_j(n)$ and $t_j(n)$, respectively (the matrices may be non-square and of different dimensions). Then $C(\chi) \ge \sum_{j=1}^{m}(u_j(n)+t_j(n))$ and, moreover, $C_{3/4}(\chi) \ge \sum_{j=1}^{m}(u_j(n)+t_j(n))$.
³ Recall that we distinguish outputs from output gates: an output can be the negation of a function computed in an output gate.
Proof. We proceed similarly to Lemma 5. Note that when we substitute a variable from $x^{(1)}$, it does not change anything in $X_2$, and vice versa. Thus we substitute "good" variables (those that eliminate two gates) as long as we have them, and then substitute "bad" variables (eliminating one gate per step) when we do not have good ones, separately for each matrix. If one of the matrices runs out of rows that contain at least two nonzero entries (which may happen after eliminating $u_i(n)-1$ good and then $t_i(n)-u_i(n)+2$ other variables from it), we substitute the remaining variables corresponding to this matrix and forget about this part of the block-diagonal matrix.
It can happen, however, that one of the inputs (variables) of the topmost gate is from $x^{(1)}$ and the other one is from $x^{(2)}$. Both cases from the proof of Lemma 5 go through smoothly in this situation: in the first case we substitute a value for the good variable, and the second case is impossible for the same reasons.
Thus, eliminating all columns of $X_i$ leads to eliminating at least
$$2(u_i - 1) + (t_i - u_i + 2) = t_i + u_i$$
gates, and we obtain the overall bound
$$C_{3/4}(\chi) \ge \sum_{j=1}^{m} (u_j + t_j).$$
We now formulate the direct consequences of these lemmas and note upper bounds
for our specific matrices.
Lemma 7. Let $n, n' \equiv 0 \pmod 4$,
$$\lambda(x) = A^{-1}x, \qquad \lambda_2(x) = \begin{pmatrix} A^{-1} & A^{-2} \end{pmatrix} x, \qquad \lambda'(x) = \begin{pmatrix} A^{-1} & A^{-2} & 0 \\ 0 & 0 & A'^{-1} \end{pmatrix} x,$$
where $A'^{-1}$ denotes a matrix with the same structure as $A^{-1}$, but of dimension $n'$ instead of $n$. Then $C_{3/4}(\lambda) \ge \frac{3n}{2}-2$, $C_{3/4}(\lambda_2) \ge \frac{13n}{4}-5$, and $C_{3/4}(\lambda') \ge \frac{3n'}{2}+\frac{13n}{4}-7$.
Proof. Follows from Lemmas 5 and 6, by substituting the respective bounds $u(n)$ and $t(n)$ from Lemma 2 (in particular, $t(n) = n-2$ for the matrix $A^{-1}$ and $t(n) = 2n-5$ for $\tilde{A}$).
Lemma 8.
1. There exists a circuit of size $\frac{3n}{2}-1$ that implements the linear function $\lambda : B^n \to B^n$ with matrix $A^{-1}$.
2. There exists a circuit of size $\frac{7n}{2}$ that implements the linear function $\lambda : B^{2n} \to B^n$ with matrix $\left(A^{-1}\; A\right)$.
3. There exists a circuit of size $\frac{5n}{2}-1$ that implements the linear function $\lambda : B^{2n} \to B^n$ with matrix $\left(A^{-1}\; A^{-1}\right)$.
Proof.
1. First construct the sum $\bigoplus_{i=1}^{n/2} x_i$ ($\frac{n}{2}-1$ gates). Then, adding one by one each of the inputs $x_i$, $i = \frac{n}{2}, \ldots, n$, compute all outputs $y_i$, $i = \frac{n}{2}, \ldots, n$, and, along the way, the sum of all inputs $\bigoplus_{i=1}^{n} x_i$ (this takes another $\frac{n}{2}$ gates). Finally, the first $\frac{n}{2}$ outputs are computed by subtracting the first $\frac{n}{2}$ inputs from the sum of all inputs one by one (another $\frac{n}{2}$ gates).
2. To implement the left part of this matrix, we need $\frac{3n}{2}-1$ gates. Afterwards we add to each output the two bits from the right part of the matrix (three bits in the case of the last row); we add $2n+1$ gates in this way.
3. Note that in this case $\lambda(a, b) = \left(A^{-1}\; A^{-1}\right)\binom{a}{b} = A^{-1}(a \oplus b)$ for any $a, b \in B^n$. Thus, we first compute $a \oplus b$ ($n$ gates) and then implement $A^{-1}$ (another $\frac{3n}{2}-1$ gates).
3 The feebly secure trapdoor function
3.1 Two constructions
We are almost ready to present the construction of our feebly trapdoor function (recall Definition 1). In this section, we consider two different constructions, neither of which works alone; however, we will merge them into one in the subsequent section, and the resulting mixture will be feebly secure.
In our first construction, inversion with trapdoor is faster than inversion without trapdoor, but, unfortunately, evaluating the function is even harder. In terms of Definition 1, we now present a feebly trapdoor candidate with identical lengths of seed, public information, trapdoor, input, and output: $c(n) = m(n) = pi(n) = ti(n) = n$.
Given a random seed, the sampler produces a pair of public and trapdoor information $(pi, ti)$, where $ti$ is the random seed itself and $pi = A(ti)$ (thus, the sampler can be implemented using $n+1$ gates). In this construction, evaluation produces the output $c$ for an input $m$ as follows:
$$\mathrm{Eval}_n(pi, m) = A^{-1}(pi) \oplus A(m).$$
An upper bound on the evaluation circuit complexity follows from Lemma 8; one can evaluate this function with a circuit of size $\frac{7n}{2}$. Inversion with trapdoor goes as follows:
$$\mathrm{Inv}_n(ti, c) = A^{-1}(A^{-1}(pi) \oplus c) = A^{-1}(ti \oplus c).$$
Due to this nice linearity (note that the bounds proven in the previous sections do not apply here, because the inversion matrix has a lot of identical columns), this circuit can be implemented with $\frac{5n}{2}-1$ gates: first one computes $ti \oplus c$ using $n$ gates, then one applies $A^{-1}$ using another $\frac{3n}{2}-1$ gates (see Lemma 8).
Finally, an adversary has to invert Bob's message the hard way:
$$m = A^{-1}(A^{-1}(pi) \oplus c) = \tilde{A}\binom{pi}{c}.$$
By Lemma 7, the complexity of this function is at least $\frac{13n}{4}-5$ gates, and any adversary with less than $\frac{13n}{4}-5$ gates fails on at least $\frac{1}{4}$ of the inputs.
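This first construction can be sanity-checked end to end. The sketch below (ours) instantiates the sampler, evaluation, and inversion with trapdoor using the matrices of Lemma 1, and confirms the correctness equation of Definition 1; the cancellation $A^{-2}(pi) = A^{-1}(ti)$ for $pi = A(ti)$ does all the work:

```python
import numpy as np

def hiltgen_A(n):
    A = np.eye(n, dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] ^= 1
    A[n - 1, n // 2] ^= 1
    return A % 2

def ainv(n):  # closed form of A^{-1} from Lemma 1
    k = n // 2
    B = np.ones((n, n), dtype=int)
    B[:k, :k] ^= np.triu(np.ones((k, k), dtype=int))
    B[k:, k:] ^= np.tril(np.ones((k, k), dtype=int), -1)
    return B % 2

n = 8
A, Ai = hiltgen_A(n), ainv(n)
rng = np.random.default_rng(1)
for _ in range(50):
    ti = rng.integers(0, 2, n)          # trapdoor information = seed
    pi = A @ ti % 2                     # public information, n + 1 gates
    m = rng.integers(0, 2, n)
    c = (Ai @ pi + A @ m) % 2           # Eval_n(pi, m) = A^{-1}(pi) + A(m)
    assert ((Ai @ ((ti + c) % 2)) % 2 == m).all()   # Inv_n(ti, c) = m
print("Inv_n(ti, Eval_n(pi, m)) = m verified")
```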
In this construction, evaluation is harder than inversion without trapdoor. In order to fix this problem, we also use a different construction, also a feebly trapdoor candidate, now with $c(n) = m(n) = n$ and $pi(n) = ti(n) = 0$. Our second construction is just Hiltgen's feebly one-way function. Thus, the public and trapdoor information is not used at all, and the evaluation and inversion functions are as follows: $\mathrm{Eval}_n(m) = A(m)$, $\mathrm{Inv}_n(c) = A^{-1}(c)$, $\mathrm{Adv}_n(c) = A^{-1}(c)$.
This construction, of course, is not a trapdoor function at all, because inversion is implemented with no regard for the trapdoor. For a message $m$ of length $|m| = n$ the evaluation circuit has $n+1$ gates, while inversion, by Lemma 7, cannot be performed by circuits of fewer than $\frac{3n}{2}-2$ gates. Thus, in this construction evaluation is easy, while inversion is hard, with or without the trapdoor.
3.2 The combined construction
We now combine the two functions presented in Section 3.1. The resulting function will make both inversion with trapdoor and evaluation easier than the adversary's task. We split the input into two parts: the first part $m_1$, of length $n$, will be subject to our first (less trivial) construction, while the second part $m_2$, of length $\alpha n$, will be subject to the second construction. We will choose $\alpha$ later to maximize the relative hardness for the adversary.
Now each participant has a block-diagonal matrix:
$$\mathrm{Eval}_n(pi, m) = \begin{pmatrix} A^{-1} & A & 0 \\ 0 & 0 & A' \end{pmatrix}\begin{pmatrix} pi \\ m_1 \\ m_2 \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}, \qquad \mathrm{Inv}_n(ti, c) = \begin{pmatrix} A^{-1} & A^{-1} & 0 \\ 0 & 0 & A'^{-1} \end{pmatrix}\begin{pmatrix} ti \\ c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} m_1 \\ m_2 \end{pmatrix},$$
$$\mathrm{Adv}_n(pi, c) = \begin{pmatrix} A^{-2} & A^{-1} & 0 \\ 0 & 0 & A'^{-1} \end{pmatrix}\begin{pmatrix} pi \\ c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} m_1 \\ m_2 \end{pmatrix},$$
where $A'$ denotes the matrix with the same structure as $A$, but of dimension $\alpha n$ instead of $n$. Thus, in terms of Definition 1, we get a feebly trapdoor candidate whose inputs and outputs are longer than the seed and the public and trapdoor information: $pi(n) = ti(n) = n$, $c(n) = m(n) = (1+\alpha)n$.
Lemma 8 yields upper bounds for evaluation and inversion with trapdoor, and Lemma 7 yields a lower bound for the adversary: $C(\mathrm{Eval}_n) \le \frac{7n}{2}+\alpha n+1$, $C(\mathrm{Inv}_n) \le \frac{5n}{2}+\frac{3\alpha n}{2}-2$, $C_{3/4}(\mathrm{Adv}_n) \ge \frac{13n}{4}+\frac{3\alpha n}{2}-7$. Thus, to get a feebly trapdoor function we simply need to choose $\alpha$ such that $\frac{13}{4}+\frac{3\alpha}{2} > \frac{7}{2}+\alpha$ and $\frac{13}{4}+\frac{3\alpha}{2} > \frac{5}{2}+\frac{3\alpha}{2}$. The second inequality is trivial, and the first one yields $\alpha > \frac{1}{2}$.
We would like to maximize the order of security of this trapdoor function (Definition 3); since sampling is always strictly faster than evaluation and inversion with trapdoor, we are maximizing
$$\min\left\{ \lim_{n\to\infty}\frac{C_{3/4}(\mathrm{Adv}_n)}{C(\mathrm{Inv}_n)},\; \lim_{n\to\infty}\frac{C_{3/4}(\mathrm{Adv}_n)}{C(\mathrm{Eval}_n)} \right\} = \min\left\{ \frac{\frac{13}{4}+\frac{3\alpha}{2}}{\frac{5}{2}+\frac{3\alpha}{2}},\; \frac{\frac{13}{4}+\frac{3\alpha}{2}}{\frac{7}{2}+\alpha} \right\}.$$
This expression reaches its maximum at $\alpha = 2$, where the order of security reaches $\frac{25}{22}$. We summarize this in the following theorem.
Theorem 1. There exists a feebly trapdoor function with seed, public information, and trapdoor information lengths $pi(n) = ti(n) = n$, input and output lengths $c(n) = m(n) = 3n$, and order of security $\frac{25}{22}$.
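The optimization over $\alpha$ behind Theorem 1 is elementary arithmetic and can be checked numerically (our sketch): the first ratio decreases in $\alpha$ while the second increases, so their minimum peaks where the two denominators coincide, at $\alpha = 2$:

```python
from fractions import Fraction

def order(alpha):
    """min of the two adversary/honest complexity ratios for a given alpha."""
    num = Fraction(13, 4) + Fraction(3, 2) * alpha
    return min(num / (Fraction(5, 2) + Fraction(3, 2) * alpha),
               num / (Fraction(7, 2) + alpha))

# scan a grid of alpha values; the maximum is attained exactly at alpha = 2
best = max(order(Fraction(k, 100)) for k in range(1, 1001))
assert order(2) == Fraction(25, 22)
assert best == Fraction(25, 22)
print(order(2))  # 25/22
```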
3.3 Hardness amplification
In the previous sections, we gave a construction of a linear feebly trapdoor function which guarantees that any circuit with less than precisely the necessary number of gates cannot invert this function on more than $\frac{3}{4}$ of its inputs. In this section, we use this construction to design a function with a superpolynomial bound on the probability of success (namely, $2^{-c\sqrt{n}+o(\sqrt{n})}$). Let us denote by $h$ the function an adversary had to compute in the previous section, and by $X$ its matrix. Consider the linear function $H$ defined by the block-diagonal matrix
$$H\left(x^{(1)}, x^{(2)}, \ldots, x^{(m)}\right) = \begin{pmatrix} X & 0 & \cdots & 0 \\ 0 & X & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X \end{pmatrix}\begin{pmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ x^{(m)} \end{pmatrix}.$$
By Lemma 6, $H$ has complexity at least $mC(h)$. The dimensions of $X$ are $(1+\alpha)n \times (2+\alpha)n$; we denote $n' = (1+\alpha)n$.
Lemma 9. If a circuit computes $H$ on more than a $\frac{1}{p(m)}$ fraction of its inputs, and for each block of $H$:
– all columns of $X$ are different;
– every row of $X$ has at least $u(n)$ nonzero entries;
– after removing any $t(n)$ columns of $X$, the matrix still has at least one row containing at least two nonzero entries,
then the complexity of this circuit is at least $(u(n)+t(n))\left(m - \log_{4/3} p(m)\right)$.
Proof. First, recall that $H$ consists of $m$ separate blocks with disjoint sets of variables $X_i$; let us denote $h_i = H|_{X_i}$. Since the $X_i$ are disjoint, mistakes in computing the $h_i$ are independent: if a circuit $C$ computes $h_i$ on a $\delta_i$ fraction of its inputs and $h_j$ on a $\delta_j$ fraction of its inputs, it cannot compute $H$ on more than a $\delta_i \delta_j$ fraction of its inputs. Thus, there are at most $\log_{4/3} p(m)$ blocks where a circuit $C$ violating our claim can afford to make mistakes on more than $\frac{1}{4}$ of the inputs. During this proof we will call them terrible blocks.
We proceed by the same gate elimination induction we have already used several times. Consider the topmost gate and the two variables that enter it. In the proof of Lemma 6, we marked variables as "good" or "bad" depending on whether they fall into a block where all good variables have been eliminated. This time, we do the same thing, but mark all variables in terrible blocks as "bad" in advance. As in the previous proofs, whenever the topmost gate has at least one bad variable as input, we set this variable, thus eliminating only one gate from the circuit. Whenever the topmost gate has two good variables as inputs, we should always be able to eliminate two gates from the circuit. There are still the same basic possibilities as in Lemma 5, and we also have to always choose the part of the input assignment space where the circuit errs on less than $\frac{1}{4}$ of the remaining inputs (since the initial circuit errs on less than $\frac{1}{4}$ of these inputs, such a subcircuit must exist).
The rest of the proof is the same as in Lemma 5. We proceed by induction, eliminating two gates whenever two good variables enter a topmost gate, and eliminating one bad variable whenever it enters a topmost gate. Thus, the overall complexity is at least twice the number of good variables plus the number of remaining bad variables. We have to discard terrible blocks completely: after all, their complexity may actually be equal to zero. Thus, we get a bound of $(t + u)(m - \log_{4/3} p(m))$.
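The counting step at the heart of this proof can be checked numerically (an illustrative sketch of the independence argument, not a simulation of actual circuits): if per-block success probabilities multiply and the total success exceeds $1/p(m)$, then at most $\log_{4/3} p(m)$ blocks can succeed with probability below $3/4$.

```python
import math
import random

def max_terrible_blocks(p_m):
    # If each "terrible" block is computed correctly on less than 3/4 of its
    # inputs, then k terrible blocks bound the total success by (3/4)^k.
    # Total success > 1/p(m) therefore forces (4/3)^k < p(m),
    # i.e., k <= log_{4/3} p(m).
    return math.log(p_m, 4 / 3)

random.seed(1)
for _ in range(1000):
    m, p_m = random.randint(1, 30), random.uniform(1, 100)
    # Random per-block success probabilities; blocks have disjoint variables,
    # so the total success probability is the product.
    probs = [random.uniform(0, 1) for _ in range(m)]
    if math.prod(probs) > 1 / p_m:
        terrible = sum(1 for q in probs if q < 3 / 4)
        assert terrible <= max_terrible_blocks(p_m)
```

The assertion never fires, because $k$ terrible blocks already cap the product at $(3/4)^k$, regardless of how well the circuit does on the remaining blocks.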
Note also that stacking the matrices up in a large block diagonal matrix does
not change the parameters of a feebly trapdoor function. Thus, we have obtained the
following theorem.
Theorem 2. There exists a feebly trapdoor function candidate $C = \{\mathrm{Key}_n, \mathrm{Eval}_n, \mathrm{Inv}_n\}$ with the seed length $pi(n) = ti(n) = n$, the length of inputs and outputs $c(n) = m(n) = 3n$, with complexities $C(\mathrm{Inv}_n) \le \frac{11n}{2} + O(1)$, $C(\mathrm{Eval}_n) \le \frac{11n}{2} + O(1)$, $C(\mathrm{Key}_n) = n + 1$, and the order of security $\frac{25}{22}$. Moreover, no adversary with less than $\frac{25}{4}n - \frac{25}{4}n^{(a+1)/2}$ gates is able to invert this feebly trapdoor function on more than a $(4/3)^{-n^{a/2}+o(\sqrt{n})}$ fraction of the inputs for any constants $\epsilon > 0$, $1 > a > 0$.
4 Conclusion and further work
In this work, we have presented the first known construction of a provably secure trapdoor function, although security should be understood in the very restricted sense of Definition 3.
Here are natural directions for further research. First, it would be interesting to devise a more natural construction. Both the second (keyless) construction and the merge of two matrices are counter-intuitive. Second, it would be great to substantially improve the order of security. While a certain improvement to the constant $\frac{25}{22}$ may be straightforward, further development will soon hit the general obstacle of our present inability to prove lower bounds greater than $4n - o(1)$ for $B^n \to B^n$ functions. Moreover, constructions based on linear functions will never overcome a bound of $n - 1$ gates per one bit of output; thus, nonlinear constructions are necessary. Finally, a natural extension of this work would be to devise other feebly secure cryptographic primitives.
Acknowledgements
The authors are grateful to Olga Melanich, who noticed a flaw in the initial version of the proof at the very last moment, and to the anonymous referees for useful comments.
References
1. Hiltgen, A.P.: Constructions of feebly-one-way families of permutations. In: Proc. of AsiaCrypt '92 (1992) 422–434
2. Levin, L.A.: One-way functions and pseudorandom generators. Combinatorica 7(4) (1987) 357–363
3. Goldreich, O.: Foundations of Cryptography. Cambridge University Press (2001)
4. Harnik, D., Kilian, J., Naor, M., Reingold, O., Rosen, A.: On robust combiners for oblivious transfers and other primitives. In: Proc. of EuroCrypt '05, LNCS, Volume 3494 (2005) 96–113
5. Grigoriev, D., Hirsch, E.A., Pervyshev, K.: A complete public-key cryptosystem. Groups, Complexity, Cryptology 1(1) (2008) 1–12
6. Lamagna, E.A., Savage, J.E.: On the logical complexity of symmetric switching functions in monotone and complete bases. Technical report, Brown University, Rhode Island (July 1973)
7. Savage, J.E.: The Complexity of Computing. Wiley, New York (1976)
8. Blum, N.: A boolean function requiring 3n network size. Theoretical Computer Science 28 (1984) 337–345
9. Wegener, I.: The Complexity of Boolean Functions. B. G. Teubner, and John Wiley & Sons (1987)