CS 6313: Mini Project #3
Name: Group Trio 14
Name of Group Members:
Anusha Gupta (axg230026)
Avtans Kumar (axk220317)
Md Shahir Zaoad (mxz230002)
Contribution of Each Group Member:
All the group members contributed equally. The group arrived at the solutions through group
discussion, and the coding and documentation were also done in close collaboration.
Answer to Question No. 1
1(a)
To calculate the Mean Squared Error (MSE) of an estimator using the Monte Carlo method, we use the following
steps:
Let θ̂ be the estimator of the parameter θ whose MSE we want to compute.
We measure the deviation of each estimate by the squared difference (θ̂ − θ)².
Using Monte Carlo simulation, we generate a large number of samples, and for each sample we
compute the estimator and the corresponding squared difference.
Finally, we average all the squared differences to approximate E[(θ̂ − θ)²], which is the MSE.
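The averaging step above can be written compactly as a Monte Carlo approximation of the expectation:

```latex
\mathrm{MSE}(\hat{\theta}) \;=\; E\!\left[(\hat{\theta}-\theta)^2\right]
\;\approx\; \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\theta}_i-\theta\right)^2,
```

where θ̂ᵢ is the estimate computed from the i-th simulated sample and N is the number of Monte Carlo replications.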
1(b)
In this part, we calculate the Mean Squared Error (MSE) for n = 5, θ = 5, and N = 1000 replications. For this purpose
the following code has been used. Short description of the code:
myMse is the main function, which takes three parameters (n, θ, and nsim, the number of replications).
In each of the 1000 runs we call the myEstim function to compute theta_1 (the Maximum
Likelihood Estimator, MLE) and theta_2 (the Method of Moments Estimator, MOME) for the given n and θ.
After that, for each of the 1000 values of the MLE and the MOME we calculate the squared difference
between θ̂ and θ.
Finally, we take the mean of the squared differences to get the Mean Squared Error.
Following is the result obtained for n = 5 and θ = 5.
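The report's actual R code is not reproduced in this chunk. As an illustrative sketch only (not the authors' code), the same procedure can be written in Python, assuming the sample is drawn from a Uniform(0, θ) population, for which the MLE is the sample maximum and the MOME is twice the sample mean; the names myMse, myEstim, theta_1, and theta_2 follow the description above.

```python
import numpy as np

def myEstim(n, theta, rng):
    """Draw one sample of size n from Uniform(0, theta) and return
    (theta_1, theta_2): the MLE (sample maximum) and the MOME (2 * mean)."""
    x = rng.uniform(0.0, theta, size=n)
    theta_1 = x.max()         # MLE of theta for Uniform(0, theta)
    theta_2 = 2.0 * x.mean()  # method-of-moments estimator
    return theta_1, theta_2

def myMse(n, theta, nsim, seed=0):
    """Monte Carlo estimate of the MSE of both estimators over nsim runs."""
    rng = np.random.default_rng(seed)
    est = np.array([myEstim(n, theta, rng) for _ in range(nsim)])
    sq_err = (est - theta) ** 2  # squared difference for each replication
    return sq_err.mean(axis=0)   # (MSE of MLE, MSE of MOME)

mse_mle, mse_mome = myMse(n=5, theta=5, nsim=1000)
print(mse_mle, mse_mome)
```

Under the Uniform(0, θ) assumption the theoretical values for n = 5, θ = 5 are 2θ²/((n+1)(n+2)) ≈ 1.19 for the MLE and θ²/(3n) ≈ 1.67 for the MOME, so the simulated MSEs should land near those numbers.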
1(c)
Values of the Mean Squared Error (MSE) for all combinations of (n, θ) are summarized in the following four
figures. Here,
the black line represents the Method of Moments Estimator (MOME), and
the red line represents the Maximum Likelihood Estimator (MLE).
Following R code has been used to generate the graphs:
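The original R plotting code is not included in this chunk. As a hedged Python sketch of the same computation (again assuming Uniform(0, θ); the particular n and θ grids below are hypothetical, and plotting is omitted), the loop over all (n, θ) combinations might look like:

```python
import numpy as np

def mse_pair(n, theta, nsim, rng):
    """Monte Carlo MSE of the MLE (max) and MOME (2*mean) for Uniform(0, theta)."""
    x = rng.uniform(0.0, theta, size=(nsim, n))  # nsim samples of size n at once
    mle = x.max(axis=1)
    mome = 2.0 * x.mean(axis=1)
    return ((mle - theta) ** 2).mean(), ((mome - theta) ** 2).mean()

rng = np.random.default_rng(42)
ns = [1, 2, 3, 5, 10, 30]   # sample sizes on the x-axis (illustrative grid)
thetas = [1, 5, 50, 100]    # one figure per theta (illustrative grid)
results = {th: [mse_pair(n, th, 10000, rng) for n in ns] for th in thetas}
```

Each entry of `results[θ]` holds the (MLE, MOME) MSE pair for one n, i.e. one point on the black and red curves of the corresponding figure.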
1(d)
From the graphs in 1(c) we can deduce the following:
For both the MLE and the MOME, the value of the Mean Squared Error decreases as the sample size n
increases. This is logical: with a larger sample, our estimated value gets closer to the
actual parameter value, and hence the error becomes insignificant.
Between the two estimators, the Maximum Likelihood Estimator (MLE) is better, as from the
graphs we can see that when n >= 5 the MSE of the MLE is smaller in all four cases
(graphs). However, when n = 1 or n = 2 the MSEs of the MLE and the MOME are almost identical. The most
likely reason is that when n is very small the error is large irrespective of the method,
so the two become nearly indistinguishable. Therefore, when n >= 5 the MLE is better than the MOME, and
when n = 1 or n = 2 the two are essentially equivalent in terms of quality.