
CS620 Mid-semester Exam (Spring 2021)

Max marks: 45 Duration: 2 hours

• Be brief, complete and stick to what has been asked.

• Untidy presentation of answers and random ramblings will be penalized with negative marks.

• Unless asked for explicitly, you may cite results/proofs covered in class without reproducing them.

• If you need to make any assumptions, state them clearly.

• Do not copy solutions from others. Penalty for offenders: FR grade.

• Expected time to solve: ≤ 120 mins.

• You will have an additional 30 mins to revise, scan your answer papers and upload them on Moodle.

1. Consider an image classification DNN N with an associated input-output transformer ν. Suppose the
input domain, denoted Im, consists of 32 pixel × 32 pixel gray-scale images of animals, where each pixel
is a real number (hence, each pixel can take infinitely many values). Suppose the output domain is the
set of labels {Cat, Dog, Other}.
Let ∆ : Im × Im → R≥0 be a carefully designed image similarity metric that maps a pair of input
images to a non-negative real number, also called their similarity score. Let ε > 0 be a small positive
real number, called similarity threshold. You are told that for every pair of input images I1 , I2 ∈ Im,
the following hold: (i) ∆(I1, I2) = 0 iff I1 = I2, and (ii) ∆(I1, I2) = ∆(I2, I1). Additionally, for
every finite set of images {I1, I2, . . . , Ik} ⊆ Im, there exists at least one image Ik+1 ∈ Im such that
⋀_{j=1}^{k} (∆(Ik+1, Ij) > M) holds, where M = max_{i,j ∈ {1,...,k}} ∆(Ii, Ij).
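For illustration only (not part of the exam), here is a minimal Python sketch that assumes a toy pixel-wise L2 distance as ∆. This choice is purely an assumption for the sketch; it satisfies (i) and (ii), and, because pixel values range over all reals, it also satisfies the "escape" property above.

```python
import numpy as np

# Toy similarity metric (an assumption for this sketch, not the exam's Delta):
# pixel-wise Euclidean (L2) distance between 32x32 gray-scale images.
# It satisfies (i) delta(I1, I2) == 0 iff I1 == I2, and (ii) symmetry.
def delta(i1: np.ndarray, i2: np.ndarray) -> float:
    assert i1.shape == i2.shape == (32, 32)
    return float(np.linalg.norm(i1 - i2))

# The "escape" property: for any finite set of images, shifting a fresh image
# far enough away pushes its score past the maximum pairwise score M. This is
# possible precisely because each pixel ranges over all of R.
imgs = [np.random.rand(32, 32) for _ in range(4)]
M = max(delta(a, b) for a in imgs for b in imgs)
far = imgs[0] + 10 * (M + 1)   # shift every pixel by far more than M
assert all(delta(far, i) > M for i in imgs)
```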

(a) [10 marks] We want a network like N to not classify similar looking images differently. A convenient
way of expressing this is via the Hoare triple HT1 :

{∆(I1, I2) < ε}  ℓ1 = ν(I1); ℓ2 = ν(I2)  {ℓ1 = ℓ2}.
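Read operationally, HT1 is a robustness requirement: any two images whose similarity score is below ε must receive the same label. As a minimal illustration (not part of the exam), the sketch below phrases HT1 as an executable check; nu, delta, and the sampled image pairs are hypothetical placeholders.

```python
# Sketch of HT1 as an executable check (illustrative; nu and delta are
# hypothetical placeholders for the network's input-output function and
# the similarity metric).
def check_ht1(nu, delta, image_pairs, eps):
    """Return the pairs that witness a violation of HT1."""
    violations = []
    for i1, i2 in image_pairs:
        if delta(i1, i2) < eps:        # pre-condition of HT1 holds
            l1, l2 = nu(i1), nu(i2)    # "l1 = nu(I1); l2 = nu(I2)"
            if l1 != l2:               # post-condition of HT1 fails
                violations.append((i1, i2))
    return violations
```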

However, we have seen in the lectures that HT1 may (rather unexpectedly) require N to classify
everything with the same label, rendering HT1 meaningless: label equality propagates along chains of
pairwise ε-similar images, and such chains can connect very dissimilar images.
It turns out, however, that under certain conditions on the input image space, the triple HT1
can indeed be meaningful and can express our intuitive ask, i.e. similar looking images should be
classified the same.
Give a first order logic sentence (i.e. formula with no free variables), say α, over elements of Im
such that if Im satisfies α, then indeed HT1 specifies the desired (or intended) property of N . Give
as weak a sentence α as you can, so that the restriction on Im is as mild as possible. Explain your
answer clearly.
(b) [10 marks] A student has written the following Hoare triple HT2 for N :

{∆(I1, I2) > ε}  ℓ1 = ν(I1); ℓ2 = ν(I2)  {ℓ1 ≠ ℓ2}

Intuitively, the student wishes to express the property that if two input images are hugely dissimilar,
then they should not be classified the same.
Prove that if HT2 holds, then for every pair of images I1, I2 ∈ Im, we must have ∆(I1, I2) ≤ ε. In
other words, the only way for HT2 to hold is vacuously, i.e., the pre-condition itself is unsatisfiable.
2. A saturating ReLU is a function mapping an input z to an output y as follows:

y = max(0, min(z, c)),

where c > 0 is a given saturation value.
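As a quick illustration (not part of the exam), a saturating ReLU in Python, next to the standard ReLU it generalizes:

```python
# Minimal sketch: a saturating ReLU with saturation value c, alongside the
# standard ReLU. Piecewise: 0 for z <= 0, z on [0, c], and c for z >= c.
def sat_relu(z: float, c: float = 10.0) -> float:
    return max(0.0, min(z, c))

def relu(z: float) -> float:
    return max(0.0, z)

assert sat_relu(-3.0) == 0.0    # negative inputs are clipped to 0
assert sat_relu(4.0) == 4.0     # identity on the linear region [0, c]
assert sat_relu(25.0) == 10.0   # saturates at c = 10
```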


Consider the DNN shown in Fig. 1 below. Assume that each node in the hidden layer uses a saturating
ReLU with c = 10, and each node in the output layer uses a standard ReLU.

[Figure 1: A simple DNN with inputs in1 and in2, hidden-layer nodes n1,1 and n1,2, and output nodes
out1 and out2; the edge weights shown in the original figure are not reliably recoverable from the
extracted text.]

• [7.5 marks] Write the node constraint for n1,2 in the above DNN as a boolean combination of
linear constraints. You must not use max or min functions in writing the node constraint.
• [7.5 marks] Give the best over-approximation of the above constraint (for node n1,2) that you can,
as a conjunction of linear constraints (i.e., linear equalities and inequalities).
• [10 marks] A student wants to use Reluplex to prove properties of the DNN shown in Fig. 1.
However, since Reluplex does not handle saturating ReLUs natively, it cannot be used directly.
You are required to help the student by constructing a new DNN that mimics the input-output
behaviour of the DNN shown in Fig. 1 but uses only standard ReLUs as its non-linear activation
functions. This will allow the student to use Reluplex directly to reason about the network in Fig. 1.
Effectively, you are required to mimic the behaviour of a saturating ReLU using one or more standard
ReLUs and appropriate interconnections between them. You may also use constant-valued inputs to
the new DNN and additional hidden layers/nodes, if needed.
Draw the new DNN that uses only standard ReLUs (not saturating ReLUs) and clearly explain how
the behaviour of each saturating ReLU in the original network is mimicked in the new network.
