
MA 324 Mid-semester Examination
Statistical Inference and Multivariate Analysis
IIT Guwahati, March 01, 2022, 14:00–16:00 IST

Model Answers of Mid-semester Examination

1. (2 points) Let X1, X2 be i.i.d. Poi(λ) random variables, where λ > 0 is an unknown parameter. Is the family of distributions induced by the statistic T = (X1, X2) complete?

Solution: Note that E(X1 − X2) = 0 for all λ > 0, so X1 − X2 is an unbiased estimator of zero. Now,

P(X1 ≠ X2) ≥ P(X1 = 0, X2 = 1) = λ e^{−2λ} > 0  ⟹  P(X1 − X2 = 0) = P(X1 = X2) < 1,

so X1 − X2 is not almost surely equal to zero. Thus, the family of distributions induced by the statistic (X1, X2) is not complete. [2 points]
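As an illustration only (not part of the model answer), the following Python sketch, assuming NumPy is available and using an arbitrary value λ = 2, checks numerically that X1 − X2 has mean zero while P(X1 = X2) < 1:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_sim = 2.0, 100_000          # illustrative lambda and number of replications

# Independent Poi(lambda) draws for X1 and X2.
x1 = rng.poisson(lam, n_sim)
x2 = rng.poisson(lam, n_sim)

# E(X1 - X2) = 0 for every lambda, yet X1 - X2 is not identically zero,
# which is exactly why the induced family fails to be complete.
print("mean of X1 - X2:", (x1 - x2).mean())   # close to 0
print("P(X1 == X2):   ", (x1 == x2).mean())   # strictly less than 1
```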

2. (5 points) Let X1, X2, . . . , X9 be a random sample of size 9 from a population having the U(θ1, θ2) distribution, where both θ1 and θ2 are unknown and −∞ < θ1 < θ2 < ∞. Derive the estimators of θ1 and θ2 using the method of moments.

Solution: Here E(X1) = (θ1 + θ2)/2 and E(X1^2) = (θ1^2 + θ1 θ2 + θ2^2)/3. [1 point]
Let M1 = (1/9) Σ_{i=1}^{9} Xi, M2 = (1/9) Σ_{i=1}^{9} Xi^2 and S^2 = (1/9) Σ_{i=1}^{9} (Xi − M1)^2. The method-of-moments estimators are found by solving the equations

θ1 + θ2 = 2 m1   and   θ1^2 + θ1 θ2 + θ2^2 = 3 m2. [1 point]


The solutions for (θ1, θ2) are (m1 − √3 s, m1 + √3 s) and (m1 + √3 s, m1 − √3 s), where s is the observed value of S, the positive square root of S^2. As θ1 < θ2, the estimators of θ1 and θ2 are θ̂1 = M1 − √3 S and θ̂2 = M1 + √3 S, respectively. [3 points]
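A minimal numerical sketch (not in the original answer), assuming NumPy and using illustrative true values θ1 = 2 and θ2 = 5, computes these moment estimators from a sample of size 9:

```python
import numpy as np

rng = np.random.default_rng(1)
theta1, theta2 = 2.0, 5.0                 # illustrative true parameters
x = rng.uniform(theta1, theta2, size=9)   # random sample of size 9

m1 = x.mean()                             # realized first sample moment m1
s = np.sqrt(np.mean((x - m1) ** 2))       # s, positive square root of the realized S^2

# Method-of-moments estimates; ordering the roots enforces theta1 < theta2.
theta1_hat = m1 - np.sqrt(3) * s
theta2_hat = m1 + np.sqrt(3) * s
print(theta1_hat, theta2_hat)
```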

3. Let X1, X2, . . . , Xn be a random sample of size n (≥ 2) from a population having probability density function

f(x, \theta) = \begin{cases} \frac{2}{\theta}\, x \exp\left[-\frac{x^2}{\theta}\right] & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}

where θ > 0 is an unknown parameter. Consider the problem of estimation of τ(θ) = 1/√θ.
(a) (5 points) Derive the minimum variance unbiased estimator of τ(θ).
Solution: Note that f(·, θ) belongs to a full-rank exponential family. Thus, using properties of exponential families of distributions, T = Σ_{i=1}^{n} Xi^2 is a complete and sufficient statistic for θ. [2 points]
Notice that Xi^2 follows an exponential distribution with mean θ. Therefore,

T \sim \text{Gamma}\left(n, \frac{1}{\theta}\right). [1 point]

Thus, for k > −n,

E(T^k) = \frac{1}{\theta^n \Gamma(n)} \int_0^{\infty} t^{k+n-1} e^{-t/\theta}\, dt = \frac{\Gamma(n+k)}{\Gamma(n)}\, \theta^k \implies E\left[\frac{\Gamma(n)}{\Gamma(n+k)}\, T^k\right] = \theta^k.


Therefore, using the Lehmann–Scheffé theorem and taking k = −1/2, we have that

\hat{\tau} = \frac{\Gamma(n)}{\Gamma(n - \frac{1}{2})} \left(\sum_{i=1}^{n} X_i^2\right)^{-1/2}

is the UMVUE of τ(θ) = θ^{−1/2}. [2 points]
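For illustration only (not part of the model answer), a short simulation in Python, assuming NumPy and SciPy and arbitrary values θ = 4 and n = 20, checks that the average of τ̂ over many replications is close to τ(θ) = θ^{−1/2}; since Xi^2 ~ Exp(mean θ), it suffices to simulate T directly as a sum of exponentials:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(2)
theta, n, n_rep = 4.0, 20, 50_000          # illustrative parameter, sample size, replications

# X_i^2 ~ Exp(mean theta), so T = sum of n such exponentials, i.e. Gamma(n, scale theta).
t = rng.exponential(theta, size=(n_rep, n)).sum(axis=1)

tau_hat = gamma(n) / gamma(n - 0.5) * t ** (-0.5)
print("average of tau_hat:", tau_hat.mean())   # approximately theta**(-0.5) = 0.5
print("tau(theta)        :", theta ** (-0.5))
```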

(b) (3 points) Show that the estimator that you obtained in (a) is consistent. You may use Stirling's approximation for Γ(n): Γ(n) ∼ √(2π) (n − 1)^{n − 1/2} e^{−n + 1}.
Solution: Here E(X1^2) = θ. Therefore, using the WLLN,

\frac{1}{n}\sum_{i=1}^{n} X_i^2 \longrightarrow \theta

in probability. [1 point]
Now, applying Stirling's approximation to Γ(n) and Γ(n − 1/2),

\hat{\tau}_n = \frac{\Gamma(n)}{\Gamma(n - \frac{1}{2})} \left(\sum_{i=1}^{n} X_i^2\right)^{-1/2}
            = \frac{\Gamma(n)}{\sqrt{n}\, \Gamma(n - \frac{1}{2})} \left(\frac{1}{n}\sum_{i=1}^{n} X_i^2\right)^{-1/2}
            \sim e^{-1/2} \left(\frac{n-1}{n}\right)^{1/2} \left(\frac{1}{1 - \frac{1}{2(n-1)}}\right)^{n-1} \left(\frac{1}{n}\sum_{i=1}^{n} X_i^2\right)^{-1/2}
            \longrightarrow \tau(\theta)

in probability, since ((n − 1)/n)^{1/2} → 1 and (1 − 1/(2(n − 1)))^{−(n−1)} → e^{1/2}, so the deterministic factor tends to 1, while (1/n) Σ_{i=1}^{n} Xi^2 → θ in probability and x ↦ x^{−1/2} is continuous at θ. Therefore τ̂_n is a consistent estimator of τ(θ). [2 points]
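As a quick numerical check (not part of the model answer, assuming NumPy and SciPy and an arbitrary θ = 4), the estimator concentrates around τ(θ) as n grows; log-gamma is used to avoid overflow of Γ(n) for large n:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
theta = 4.0
tau_true = theta ** (-0.5)

# tau_hat_n should concentrate around tau(theta) = 0.5 as n grows.
for n in (10, 100, 1000):
    const = np.exp(gammaln(n) - gammaln(n - 0.5))        # Gamma(n)/Gamma(n - 1/2) via log-gamma
    t = rng.exponential(theta, size=(20_000, n)).sum(axis=1)
    tau_hat = const * t ** (-0.5)
    print(n, np.mean(np.abs(tau_hat - tau_true)))        # mean absolute error shrinks with n
```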

(c) (3 points) Compute the Cramér–Rao lower bound for an unbiased estimator of τ(θ).

Solution: Here τ′(θ) = −(1/2) θ^{−3/2}. The Fisher information contained in X1 is

I_{X_1}(\theta) = -E\left[\frac{d^2}{d\theta^2} \ln f(X_1, \theta)\right]
                = -E\left[\frac{d^2}{d\theta^2} \ln\left(\frac{2X_1}{\theta}\, e^{-X_1^2/\theta}\right)\right]
                = -E\left[\frac{d^2}{d\theta^2}\left(\ln 2 - \ln\theta + \ln X_1 - \frac{X_1^2}{\theta}\right)\right]
                = -E\left[\frac{1}{\theta^2} - \frac{2X_1^2}{\theta^3}\right]
                = \frac{1}{\theta^2}. [1 point]

Therefore, the CRLB is

\frac{(\tau'(\theta))^2}{n\, I_{X_1}(\theta)} = \frac{1}{4n\theta}. [2 points]
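For illustration only (assuming NumPy and SciPy, with arbitrary θ = 4 and n = 30), the simulated variance of the UMVUE from part (a) can be compared with this bound; it sits slightly above 1/(4nθ), as the CRLB is only a lower bound:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(4)
theta, n = 4.0, 30                                   # illustrative parameter and sample size

const = np.exp(gammaln(n) - gammaln(n - 0.5))        # Gamma(n)/Gamma(n - 1/2)
t = rng.exponential(theta, size=(200_000, n)).sum(axis=1)
tau_hat = const * t ** (-0.5)

print("simulated Var(tau_hat):", tau_hat.var())
print("CRLB 1/(4*n*theta)    :", 1 / (4 * n * theta))
```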


4. (5 points) Let X1, X2, . . . , Xn be a random sample from a Bernoulli distribution with success probability p = 1/(1 + e^θ), where θ ∈ R. Find the maximum likelihood estimator of θ. [Hint: Investigate the existence and non-existence of the maximum likelihood estimator.]

Solution: The likelihood function of θ is

L(\theta) = \left(\frac{1}{1 + e^{\theta}}\right)^{m} \left(1 - \frac{1}{1 + e^{\theta}}\right)^{n-m},

where m is the realized value of Σ_{i=1}^{n} Xi. [1 point]
Now, consider the following cases.
Case I: m = 0. In this case, the likelihood function of θ is

L(\theta) = \left(1 - \frac{1}{1 + e^{\theta}}\right)^{n},

which is strictly increasing in θ ∈ R and approaches its supremum only as θ → ∞. Therefore, in this case the MLE of θ does not exist. [1 point]
Case II: m = n. In this case, the likelihood function of θ is

L(\theta) = \left(\frac{1}{1 + e^{\theta}}\right)^{n},

which is strictly decreasing in θ ∈ R and approaches its supremum only as θ → −∞. Thus, the MLE of θ does not exist in this case either. [1 point]
Case III: m ∈ {1, 2, . . . , n − 1}. In this case, the log-likelihood function is

l(\theta) = -n \ln\left(1 + e^{\theta}\right) + (n - m)\theta.

Taking the first derivative with respect to θ and equating it to zero, we obtain

-\frac{n e^{\theta}}{1 + e^{\theta}} + n - m = 0 \implies \theta = \ln\left(\frac{n}{m} - 1\right).

Moreover,

\frac{d^2}{d\theta^2} l(\theta) = -\frac{n e^{\theta}}{(1 + e^{\theta})^2} < 0

for all θ ∈ R. Therefore, l(θ) attains its maximum at θ = ln(n/m − 1). Thus, the MLE of θ exists in this case and is given by

\hat{\theta} = \ln\left(\frac{n}{\sum_{i=1}^{n} X_i} - 1\right). [2 points]
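A minimal computational sketch (not part of the model answer, assuming NumPy and an arbitrary true value θ = 0.7) that evaluates the closed-form MLE and respects the non-existence cases:

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = 0.7                               # illustrative true parameter
p = 1 / (1 + np.exp(theta_true))               # success probability p = 1/(1 + e^theta)
x = rng.binomial(1, p, size=50)                # Bernoulli sample

m = x.sum()
if m == 0 or m == x.size:
    print("MLE does not exist (Case I or Case II)")
else:
    theta_hat = np.log(x.size / m - 1)         # Case III: theta_hat = ln(n/sum(x) - 1)
    print("theta_hat =", theta_hat)
```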

5. (7 points) Let X1, X2, . . . , Xn be a random sample of size n (≥ 2) from a population with probability density function

f(x; \theta) = \begin{cases} \frac{1}{\theta}\, e^{-x/\theta} & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}

where θ > 0 is unknown. With preassigned α ∈ (0, 1), derive a level-α likelihood ratio test for H0 : θ = θ0 (> 0) against H1 : θ ≠ θ0.


Solution: Here Θ0 = {θ0} and Θ1 = (0, ∞) \ Θ0. The likelihood function is

L(\theta) = \frac{1}{\theta^n} \exp\left[-\frac{1}{\theta}\sum_{i=1}^{n} x_i\right]. [1 point]

Therefore,

\sup_{\theta \in \Theta_0} L(\theta) = L(\theta_0) = \frac{1}{\theta_0^n} \exp\left[-\frac{1}{\theta_0}\sum_{i=1}^{n} x_i\right].

To find \sup_{\theta \in \Theta_0 \cup \Theta_1} L(\theta), we need the MLE of θ over θ > 0. A standard calculation shows that the MLE of θ is \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i. Therefore,

\sup_{\theta \in \Theta_0 \cup \Theta_1} L(\theta) = \sup_{\theta \in (0, \infty)} L(\theta) = L(\bar{x}) = \frac{1}{\bar{x}^n} \exp\left[-\frac{1}{\bar{x}}\sum_{i=1}^{n} x_i\right] = \frac{1}{\bar{x}^n}\, e^{-n}. [1 point]

The likelihood ratio test statistic is

\Lambda = \left(\frac{\bar{x}}{\theta_0}\right)^{n} \exp\left[-n\left(\frac{\bar{x}}{\theta_0} - 1\right)\right]. [1 point]

The LRT rejects the null hypothesis if

\Lambda < k [1 point]
\iff \left(\frac{\bar{x}}{\theta_0}\right)^{n} \exp\left(-\frac{n\bar{x}}{\theta_0}\right) < k.

Here k is used as a generic constant. Now consider the function

f(y) = y^n e^{−ny} for y > 0.

It is easy to see that f has a unique maximum at y = 1, is strictly increasing for 0 < y < 1, and is strictly decreasing for y > 1. Moreover, f(0) = 0 and lim_{y→∞} f(y) = 0. Therefore,

\Lambda < k \iff \left(\frac{\bar{x}}{\theta_0}\right)^{n} \exp\left(-\frac{n\bar{x}}{\theta_0}\right) < k \iff \frac{n\bar{x}}{\theta_0} < k_1 \text{ or } \frac{n\bar{x}}{\theta_0} > k_2, [2 points]

for some constants k_1 < k_2. Now, under the null hypothesis,

\frac{n\bar{X}}{\theta_0} \sim \text{Gamma}(n, 1).
Thus, the test function of the level-α LRT is

\psi(x) = \begin{cases} 1 & \text{if } \frac{n\bar{x}}{\theta_0} < G_{1-\alpha/2} \text{ or } \frac{n\bar{x}}{\theta_0} > G_{\alpha/2}, \\ 0 & \text{otherwise,} \end{cases}

where G_α is the upper α-point of the Gamma(n, 1) distribution. [1 point]
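For illustration only (not part of the model answer), the test can be carried out in Python, assuming NumPy and SciPy; the helper name lrt_exp_mean and the data-generating values below are hypothetical. The upper α-points of Gamma(n, 1) are obtained from gamma.ppf:

```python
import numpy as np
from scipy.stats import gamma

def lrt_exp_mean(x, theta0, alpha=0.05):
    """Level-alpha LRT of H0: theta = theta0 for Exp(mean theta) data (equal tails, as in the solution)."""
    n = len(x)
    stat = n * np.mean(x) / theta0              # under H0, n*xbar/theta0 ~ Gamma(n, 1)
    g_lower = gamma.ppf(alpha / 2, a=n)         # G_{1-alpha/2}: upper (1 - alpha/2)-point
    g_upper = gamma.ppf(1 - alpha / 2, a=n)     # G_{alpha/2}: upper alpha/2-point
    return bool(stat < g_lower or stat > g_upper)   # psi = 1 means reject H0

rng = np.random.default_rng(6)
x = rng.exponential(2.0, size=25)               # sample generated under theta = 2
print("reject H0: theta = 2?", lrt_exp_mean(x, theta0=2.0))   # rejects with probability about alpha
```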
