ECON3310 ASSIGNMENT 2
Due Date: 29 October
(Overall Weight 20%)
Q1. (10 Marks)
Consider the linear model
    y_i = x_i β + ε_i,   i = 1, 2, ..., n
where x_i = 1 for all i and the ε_i are independent random variables with mean 0 and variance σ_i².
(a) Write the model in matrix algebra notation. Include the covariance matrix of the error vector.
(b) Show that the least squares estimator for β is ȳ and that its variance is (1/n²) Σ_{i=1}^n σ_i².
(c) Show that the generalized least squares estimator for β is
    ( Σ_{i=1}^n y_i/σ_i² ) / ( Σ_{i=1}^n 1/σ_i² )
and that its variance is ( Σ_{i=1}^n 1/σ_i² )^{−1}.
(d) Why is the least squares estimator inefficient in this case?
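As a numerical cross-check (not part of the assignment; the data below are simulated), with x_i = 1 the matrix-form GLS estimator (X′Ω⁻¹X)⁻¹X′Ω⁻¹y collapses to the precision-weighted mean in part (c), and its variance is never larger than the OLS variance from part (b):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
sigma2 = rng.uniform(0.5, 4.0, n)             # heteroskedastic variances σ_i²
beta = 2.0
y = beta + rng.normal(0.0, np.sqrt(sigma2))   # y_i = β + ε_i with x_i = 1

X = np.ones((n, 1))
Omega_inv = np.diag(1.0 / sigma2)

# Matrix-form GLS estimator and its variance
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y).item()
var_gls = np.linalg.inv(X.T @ Omega_inv @ X).item()

# Scalar (precision-weighted mean) form from part (c)
beta_wm = np.sum(y / sigma2) / np.sum(1.0 / sigma2)
var_wm = 1.0 / np.sum(1.0 / sigma2)

# OLS estimator ȳ and its variance (1/n²) Σ σ_i² from part (b)
beta_ols = y.mean()
var_ols = sigma2.sum() / n**2

print(beta_gls, beta_wm)   # the two GLS forms agree
print(var_ols >= var_wm)   # OLS variance is never smaller
```

The inequality var_ols ≥ var_wm is the Cauchy-Schwarz bound (Σσ_i²)(Σ1/σ_i²) ≥ n², which is one way to motivate the answer to (d).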
Q2. (5 Marks)
Consider the linear model
    y = Xβ + ε    (1)
where y is T × 1, X is T × K of rank K and fixed, β is a K-vector of unknown parameters and ε is a T × 1 random vector.
The t-th observation of ε is given by
    ε_t = ρ ε_{t−1} + u_t    (2)
with E(u_t) = 0 and E(u_t²) = σ_u².
(a) Find cov[ε_t, ε_{t−s}] for s = 1, 2, ...
(b) Suppose that ρ is unknown. Briefly detail the steps required to estimate the model in (1) and (2) using a Feasible Generalised Least Squares estimator.
(c) Show that when the Prais-Winsten transformation is used to construct a GLS estimate, the transformed error of the first observation (ε*_1) is uncorrelated with the remaining transformed errors (ε*_t), t = 2, ..., T.
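A minimal sketch of the FGLS recipe asked for in (b), using the Prais-Winsten transformation from (c). The data are simulated for illustration only; the four numbered steps are the usual ones: OLS residuals, estimate ρ from the residual autoregression, transform (keeping the first observation scaled by √(1−ρ̂²)), then OLS on the transformed data.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho, beta = 200, 0.6, np.array([1.0, 2.0])
X = np.column_stack([np.ones(T), rng.normal(size=T)])

# AR(1) errors as in (2), with a stationary first draw
u = rng.normal(size=T)
eps = np.zeros(T)
eps[0] = u[0] / np.sqrt(1 - rho**2)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + u[t]
y = X @ beta + eps

# Step 1: OLS on (1), keep residuals
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b_ols

# Step 2: estimate rho by regressing e_t on e_{t-1}
rho_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Step 3: Prais-Winsten transform -- first row scaled, rest quasi-differenced
c = np.sqrt(1 - rho_hat**2)
Xs = np.vstack([c * X[:1], X[1:] - rho_hat * X[:-1]])
ys = np.concatenate([c * y[:1], y[1:] - rho_hat * y[:-1]])

# Step 4: FGLS = OLS on the transformed data
b_fgls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(rho_hat, b_fgls)
```

Unlike Cochrane-Orcutt, step 3 retains observation 1, which is why part (c) asks you to verify that the scaled first error is uncorrelated with the quasi-differenced ones.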
Q3. (15 Marks)
Consider the model
    y_i = θ x_i^γ + ε_i
where the ε_i are independent N(0, σ²).
(a) Write the probability density function for y_i, f(y_i).
(b) Find the log-likelihood function for a single observation, L_i = ln f(y_i).
(c) Find the first derivatives with respect to the unknown parameters.
(d) One possible estimator for the covariance matrix of the maximum likelihood estimator of θ and γ is
    V(β̃) = [ Σ_{i=1}^n (∂L_i/∂β)(∂L_i/∂β′) ]^{−1}
Let β = (θ, γ)′; find V(β̃).
(e) A Wald statistic for the hypothesis H0: γ = 1 against H1: γ ≠ 1 is given by
    W = (γ̂ − 1)² / var̂(γ̂)
Given the following numerical results and your result from (c), what is the computed value of W?
    Σ x_i^{2γ̂} (y_i − θ̂ x_i^{γ̂})² = 4710.915;   n = 100
    Σ x_i^{2γ̂} (y_i − θ̂ x_i^{γ̂})² ln(x_i) = 11309.91
    Σ x_i^{2γ̂} (y_i − θ̂ x_i^{γ̂})² [ln(x_i)]² = 27462.01
    θ̂ = 0.84612;   γ̂ = 0.56641;   σ̂² = 3.2622
    Σ x_i^{2γ̂} = 1278.79;   Σ x_i^{2γ̂} ln(x_i) = 2952.532
    Σ y_i x_i^{γ̂} ln(x_i) = 2498.197;   Σ x_i y_i = 2966.976
    Σ x_i^{2γ̂} [ln(x_i)]² = 6934.875;   Σ x_i² = 9771.47
    Σ y_i x_i^{γ̂} [ln(x_i)]² = 5860.275;   Σ x_i² ln(x_i) = 23232.78
    Σ x_i y_i ln(x_i) = 6952.256
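One possible reading of how the pieces in (e) combine (a sketch, not the official solution). Under L_i = −(1/2)ln(2πσ²) − (y_i − θx_i^γ)²/(2σ²), the scores are ∂L_i/∂θ = (y_i − θx_i^γ)x_i^γ/σ² and ∂L_i/∂γ = θ(y_i − θx_i^γ)x_i^γ ln(x_i)/σ², so the outer-product matrix in (d) is built entirely from the three weighted residual sums given above:

```python
theta, gamma, sig2 = 0.84612, 0.56641, 3.2622
S0, S1, S2 = 4710.915, 11309.91, 27462.01   # Σe²x^{2γ̂}, ·ln x, ·(ln x)²

s4 = sig2**2
# G = Σ (∂L_i/∂β)(∂L_i/∂β)', with β = (θ, γ)'
G = [[S0 / s4,         theta * S1 / s4],
     [theta * S1 / s4, theta**2 * S2 / s4]]

# var̂(γ̂) is the (2,2) element of G^{-1} for a 2x2 matrix
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
var_gamma = G[0][0] / det

W = (gamma - 1.0)**2 / var_gamma
print(var_gamma, W)
```

Note that only the three residual-weighted sums enter the OPG matrix; the remaining sums in the list are needed for the score equations in (c), not for W.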