Course: Machine Learning - Foundations
Mock
1. (2 points)
Answer: 0.166
x        y    f(x) = sign(x₁ − 3x₂)
[1, 0]   +1   +1
[2, 2]   +1   −1
[3, 1]   +1   +1
[4, 5]   −1   −1
[5, 4]   −1   −1
[6, 5]   −1   −1

Loss = (1/6) Σ_{i=1}^{6} 1(f(x^i) ≠ y^i) = 1/6 = 0.1666
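A quick numerical check of this 0-1 loss, written as a NumPy sketch from the table values (sign(0) is taken as +1, matching the table entry for [3, 1]):

import numpy as np

# 0-1 loss for f(x) = sign(x1 - 3*x2) on the six labelled points
X = np.array([[1, 0], [2, 2], [3, 1], [4, 5], [5, 4], [6, 5]])
y = np.array([+1, +1, +1, -1, -1, -1])
pred = np.sign(X[:, 0] - 3 * X[:, 1])
pred[pred == 0] = 1          # convention: treat sign(0) as +1, as the table does
print(np.mean(pred != y))    # 1/6 = 0.1666...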
2. (2 points)
Answer: 3.4
x   y   f(x) = (x² − 2)/5
1   2   −0.2
3   4   1.4
4   3   2.8
5   6   4.6

Loss = (1/4) Σ_{i=1}^{4} (f(x^i) − y^i)² = (1/4)(4.84 + 6.76 + 0.04 + 1.96) = 3.4
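The same squared-error computation as a short NumPy sketch:

import numpy as np

# Mean squared error for f(x) = (x^2 - 2) / 5 on the four points
x = np.array([1.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 3.0, 6.0])
f = (x ** 2 - 2) / 5             # predictions: -0.2, 1.4, 2.8, 4.6
print(np.mean((f - y) ** 2))     # (4.84 + 6.76 + 0.04 + 1.96) / 4 = 3.4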
3. (2 points)
Answer: 0.6989
x      P(x)
2.5    1/5
1      1/5
3      1/5
4.5    1/5
4.95   1/5

Loss = (1/5) Σ_{i=1}^{5} −log₁₀ P(x^i) = log₁₀ 5 = 0.6989
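A sketch of the same computation; note that reproducing 0.6989 requires the base-10 logarithm (log₁₀ 5 = 0.69897), with P(x) = 1/5 for every point:

import numpy as np

# Average negative log-probability with P(x) = 1/5 for all five points
P = np.full(5, 1/5)
print(np.mean(-np.log10(P)))   # log10(5) = 0.6989...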
4. (3 points)
Answer: 5.148
x = [x₁, x₂, x₃]   f(x)   g(f(x))               ||g(f(x)) − x||²
[-1, 1, -1]        -2.5   [-0.75, 2.5, -2.5]    4.56
[1, 2, 1]          -0.5   [-0.15, 0.5, -0.5]    5.82
[1, 1, -1]         -0.5   [-0.15, 0.5, -0.5]    1.82
[1, -1, 2]          3     [0.9, -3, 3]          5.01
[0, 1, 3]           0.5   [0.15, -0.5, 0.5]     8.52

Loss = (1/5) Σ_{i=1}^{5} ||g(f(x^i)) − x^i||² = (1/5)(4.56 + 5.82 + 1.82 + 5.01 + 8.52) ≈ 5.148
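The reconstruction error can be recomputed from the table values (a NumPy sketch; using the unrounded squared errors gives the stated 5.148):

import numpy as np

# Average reconstruction error ||g(f(x)) - x||^2 over the five points
X = np.array([[-1, 1, -1], [1, 2, 1], [1, 1, -1], [1, -1, 2], [0, 1, 3]], dtype=float)
G = np.array([[-0.75, 2.5, -2.5], [-0.15, 0.5, -0.5], [-0.15, 0.5, -0.5],
              [0.9, -3.0, 3.0], [0.15, -0.5, 0.5]])   # g(f(x)) for each row
errors = np.sum((G - X) ** 2, axis=1)   # 4.5625, 5.8225, 1.8225, 5.01, 8.5225
print(np.mean(errors))                  # 5.148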
5. (2 points)
Answer: 0.116
Linear approximation: L(x) = g(0.5) + g′(0.5)(x − 0.5)
For g(x) = 2.5 e^(−x² + 0.2x + 2), we get
g(0.5) = 15.899
g′(x) = 2.5 e^(−x² + 0.2x + 2) (−2x + 0.2)
g′(0.5) = −12.719
L(x) = 22.259 − 12.719x, hence L(1.5) = 3.18
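A sketch of the tangent-line (first-order Taylor) approximation, evaluated at x = 1.5:

import numpy as np

# Linear approximation of g(x) = 2.5*exp(-x^2 + 0.2x + 2) around a = 0.5
g  = lambda x: 2.5 * np.exp(-x**2 + 0.2 * x + 2)
dg = lambda x: g(x) * (-2 * x + 0.2)       # chain rule
a = 0.5
L = lambda x: g(a) + dg(a) * (x - a)       # tangent line at a
print(g(a), dg(a), L(1.5))                 # 15.899..., -12.719..., 3.18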
6. (2 points)
Answer: -3.535
For f(x, y) = xy² + 3x², the directional derivative is
D_w f(x, y) = ∇f · ŵ
where ∇f = [∂f/∂x, ∂f/∂y] and ŵ = w/||w|| is the unit vector along w.
D_w f(x, y) = [y² + 6x, 2xy] · [−1/√2, 1/√2]
D_w f(1, 1) = [7, 2] · [−1/√2, 1/√2] = −5/√2 = −3.535
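A numerical check, assuming the direction vector is w = (−1, 1) (inferred from the unit vector [−1/√2, 1/√2] used above):

import numpy as np

# Directional derivative of f(x, y) = x*y^2 + 3*x^2 at (1, 1) along w = (-1, 1)
x, y = 1.0, 1.0
grad = np.array([y**2 + 6*x, 2*x*y])   # gradient [7, 2]
w = np.array([-1.0, 1.0])              # assumed direction, not stated explicitly above
w_hat = w / np.linalg.norm(w)          # unit vector [-1/sqrt(2), 1/sqrt(2)]
print(grad @ w_hat)                    # -5/sqrt(2) = -3.535...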
7. (1 point)
Answer: (1.511)
For g(x, y) = √(2x² − y³), the gradient is ∇g = [∂g/∂x, ∂g/∂y].
∇g = [2x/√(2x² − y³), −3y²/(2√(2x² − y³))]
At x = 2 and y = 1,
∇g = [4/√7, −3/(2√7)] = [1.511, −0.566]
8. (1 point)
Answer: (-0.566)
For g(x, y) = √(2x² − y³), the gradient is ∇g = [∂g/∂x, ∂g/∂y].
∇g = [2x/√(2x² − y³), −3y²/(2√(2x² − y³))]
At x = 2 and y = 1,
∇g = [4/√7, −3/(2√7)] = [1.511, −0.566]
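Both components of this gradient (questions 7 and 8) can be checked with a few lines:

import numpy as np

# Gradient of g(x, y) = sqrt(2x^2 - y^3) at (2, 1)
x, y = 2.0, 1.0
root = np.sqrt(2 * x**2 - y**3)        # sqrt(7)
dg_dx = 2 * x / root                   # 1.511  (question 7)
dg_dy = -3 * y**2 / (2 * root)         # -0.566 (question 8)
print(dg_dx, dg_dy)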
9. (2 points)
Answer: -41.2
Linear approximation: L(x, y) = f(2, −2) + ∂f(2, −2)/∂x · (x − 2) + ∂f(2, −2)/∂y · (y + 2)
For f(x, y) = x² + 2xy³, we get
f(2, −2) = −28
∂f(2, −2)/∂x = 2x + 2y³ evaluated at (2, −2) = −12
∂f(2, −2)/∂y = 6xy² evaluated at (2, −2) = 48
L(x, y) = −28 − 12(x − 2) + 48(y + 2) = −12x + 48y + 92
Therefore, L(2.3, −2.2) = −41.2
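A sketch of the two-variable linear approximation, evaluated at (2.3, −2.2):

# First-order approximation of f(x, y) = x^2 + 2*x*y^3 around (2, -2)
f  = lambda x, y: x**2 + 2 * x * y**3
fx = lambda x, y: 2 * x + 2 * y**3          # partial derivative w.r.t. x
fy = lambda x, y: 6 * x * y**2              # partial derivative w.r.t. y
a, b = 2.0, -2.0
L = lambda x, y: f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)
print(f(a, b), fx(a, b), fy(a, b))          # -28, -12, 48
print(L(2.3, -2.2))                         # -41.2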
10. (1 point)
Answer: 3
The characteristic polynomial for A = [[2, 0, 4], [0, 3, 0], [0, 1, 2]] is −λ³ + 7λ² − 16λ + 12.
Solving −λ³ + 7λ² − 16λ + 12 = 0, i.e. (3 − λ)(2 − λ)² = 0, we get λ₁ = 3 and λ₂ = 2 (repeated). Dominant eigenvalue = 3.
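The eigenvalues can be checked numerically (a quick sketch):

import numpy as np

# Eigenvalues of A; the dominant (largest-magnitude) one should be 3
A = np.array([[2, 0, 4],
              [0, 3, 0],
              [0, 1, 2]])
vals = np.linalg.eigvals(A)
print(vals)                       # 2, 2, 3 (in some order)
print(max(vals, key=abs))         # 3.0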
11. (2 points)
Answer: (-1.928,2.143,-0.785)
Projection of u = [−1, 4, 2]ᵀ onto v = [1, 2, 3]ᵀ is expressed as p = (vᵀu / vᵀv) v.
vᵀu = (1)(−1) + (2)(4) + (3)(2) = 13
vᵀv = 1² + 2² + 3² = 14
p = (13/14) [1, 2, 3]ᵀ = [0.928, 1.857, 2.785]ᵀ
Error, e = u − p = [−1, 4, 2]ᵀ − (13/14)[1, 2, 3]ᵀ = [−1.928, 2.143, −0.785]ᵀ
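A short check of the projection and the error vector:

import numpy as np

# Projection of u onto v and the residual e = u - p
u = np.array([-1.0, 4.0, 2.0])
v = np.array([1.0, 2.0, 3.0])
p = (v @ u) / (v @ v) * v       # (13/14) * v
e = u - p
print(p)                        # [0.928..., 1.857..., 2.785...]
print(e)                        # [-1.928,  2.143,  -0.785]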
12. (1 point)
Answer: 0.666
With x = (a, b), r = (2, −1), and s = (−1, 2), the conditions x · r = 1 and x · s = 0 give
2a − b = 1
−a + 2b = 0
Solving these two equations, we get a = 0.666, b = 0.333.
13. (1 point)
Answer: 0.333
With x = (a, b), r = (2, −1), and s = (−1, 2), the conditions x · r = 1 and x · s = 0 give
2a − b = 1
−a + 2b = 0
Solving these two equations, we get a = 0.666, b = 0.333.
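The 2×2 system for questions 12 and 13 can be solved directly:

import numpy as np

# Solve 2a - b = 1 and -a + 2b = 0 for x = (a, b)
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
rhs = np.array([1.0, 0.0])
a, b = np.linalg.solve(A, rhs)
print(a, b)                     # 0.666..., 0.333...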
14. (3 points)
Answer: 6
Plot of points and line is given below.
Sum of squared residuals = Σ_{i=1}^{4} (2 − y_i)² = 6
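As a generic sketch of the computation (the actual y_i come from the plot, which is not reproduced here; the values below are placeholders only):

import numpy as np

# Sum of squared residuals for a constant prediction y_hat = 2
y = np.array([0.0, 1.0, 3.0, 5.0])   # placeholder labels, not the plotted points
print(np.sum((2 - y) ** 2))          # with the actual plotted points this equals 6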
15. (1 point)
Answer: 3
Triangular echelon form: U = [[1, 2, 2, 4, 6], [0, 0, 1, 2, 3], [0, 0, 0, 0, 0]]
Free variables: x₂, x₄, x₅ (the non-pivot columns), giving 3 free variables.
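The count of free variables equals the number of columns minus the rank, which is easy to verify:

import numpy as np

# Free variables = number of columns - rank of the echelon form
U = np.array([[1, 2, 2, 4, 6],
              [0, 0, 1, 2, 3],
              [0, 0, 0, 0, 0]])
rank = np.linalg.matrix_rank(U)      # 2 pivot rows (columns 1 and 3)
print(U.shape[1] - rank)             # 3 free variables: x2, x4, x5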