
Question

Thank you.

Let $X_1, \ldots, X_n$ be a random sample of size $n > 1$ from the $N(\mu, \sigma^2)$ distribution, where $\mu \in (-\infty, +\infty)$ and $\sigma^2 > 0$ are two unknown parameters. Since the value of $\sigma^2$ depends on the value of $\mu$, we cannot estimate $\sigma^2$ without estimating $\mu$, and we cannot expect to estimate two unknown parameters unless we have at least two data points; hence the requirement that $n > 1$.

Given a data sample $(x_1, \ldots, x_n)$, the likelihood function of $(\mu, \sigma^2)$ is defined as

$$L(\mu, \sigma^2) = \left(2\pi\sigma^2\right)^{-n/2} \exp\!\left(-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2\right).$$

You can freely use the fact that the random variable

$$W = \frac{1}{\sigma^2} \sum_{i=1}^{n} (X_i - \bar{X})^2$$

has the $\chi^2$ distribution with $(n - 1)$ degrees of freedom.

(a) Without using any tools from multivariable calculus, show that $(\bar{x}, \hat{\sigma}^2)$, where

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \quad \text{and} \quad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2,$$

is the global maximizer of the likelihood function $L$.

(b) Calculate $E(S^2)$, where

$$S^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (X_i - \bar{X})^2 \quad \text{with} \quad \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i.$$
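As a quick sanity check on the two facts above (not part of the original problem), a small Monte Carlo simulation can confirm that $W$ behaves like a $\chi^2(n-1)$ variable and that $S^2$ is unbiased for $\sigma^2$. This is an illustrative sketch only; the use of NumPy and the values $\mu = 2$, $\sigma^2 = 3$, $n = 10$ are my own assumptions, not part of the problem.

```python
# Monte Carlo sanity check of the chi-square fact and the unbiasedness of S^2.
# mu = 2.0, sigma2 = 3.0, n = 10 are arbitrary choices, not from the problem.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 2.0, 3.0, 10, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))          # reps samples of size n
q = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # Q = sum (X_i - Xbar)^2
w = q / sigma2                                               # W = Q / sigma^2

print(w.mean(), "vs", n - 1)               # mean of chi^2(n-1) is n-1
print(w.var(), "vs", 2 * (n - 1))          # variance of chi^2(n-1) is 2(n-1)
print((q / (n - 1)).mean(), "vs", sigma2)  # E(S^2) should match sigma^2
```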
Before moving on to the remaining parts of the problem, let us set up some notation. Consider the class of estimators of $\sigma^2$ of the form $cQ$, where $c > 0$ is a constant and

$$Q = \sum_{i=1}^{n} (X_i - \bar{X})^2.$$

We know from part (a) that $c = 1/n$ leads to the maximum likelihood estimator $\hat{\sigma}^2$ of $\sigma^2$, and we know from part (b) that $c = 1/(n-1)$ leads to the unbiased estimator

$$S^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (X_i - \bar{X})^2$$

of $\sigma^2$, which is actually the UMVUE of $\sigma^2$.
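Since $Q = \sigma^2 W$ and the problem lets us use $W \sim \chi^2(n-1)$, whose mean is $n - 1$ and whose variance is $2(n - 1)$, the moments of $Q$ follow at once. This short derivation is my addition, not part of the problem text, but it is useful for parts (c), (e), and (f):

$$E(Q) = \sigma^2 E(W) = (n-1)\sigma^2, \qquad \mathrm{Var}(Q) = \sigma^4 \,\mathrm{Var}(W) = 2(n-1)\sigma^4,$$

so that for any estimator $cQ$ in the class, $E(cQ) = c(n-1)\sigma^2$ and $\mathrm{Var}(cQ) = 2c^2(n-1)\sigma^4$.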
(c) Calculate the MSE of $S^2$; remember that $S^2$ is an unbiased estimator of $\sigma^2$, so the MSE of $S^2$ equals the variance of $S^2$.
(d) Recall that given any estimator $\hat{\theta}$ of a parameter $\theta$, we define

$$\mathrm{Bias}_\theta(\hat{\theta}) = E_\theta(\hat{\theta}) - \theta.$$

Show that

$$\mathrm{MSE}_\theta(\hat{\theta}) = \mathrm{Var}_\theta(\hat{\theta}) + \left[\mathrm{Bias}_\theta(\hat{\theta})\right]^2.$$
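One standard route to the identity in (d), sketched here for reference, is the add-and-subtract expansion around $m = E_\theta(\hat{\theta})$:

$$\mathrm{MSE}_\theta(\hat{\theta}) = E_\theta\!\left[(\hat{\theta} - m + m - \theta)^2\right] = E_\theta\!\left[(\hat{\theta} - m)^2\right] + 2(m - \theta)\,E_\theta(\hat{\theta} - m) + (m - \theta)^2,$$

where the middle term vanishes because $E_\theta(\hat{\theta} - m) = 0$, leaving $\mathrm{Var}_\theta(\hat{\theta}) + [\mathrm{Bias}_\theta(\hat{\theta})]^2$.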
(e) Calculate the MSE of $\hat{\sigma}^2$ and show that the MSE of $\hat{\sigma}^2$ is smaller than the MSE of $S^2$.
(f) Let $f(c)$ denote the MSE of the estimator $cQ$. Of course the value of $f(c)$ depends on $\sigma^2$, but in any given problem $\sigma^2$ is fixed (though unknown), so $f$ is a function defined on $(0, \infty)$. Find $c^* > 0$ such that $c^*$ is the global minimizer of $f$.
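To get a feel for part (f) before doing any calculus, $f(c)$ can be estimated by simulation over a grid of $c$ values; the empirical curve shows a single interior minimum. A minimal sketch, assuming NumPy and arbitrary parameter choices $\mu = 0$, $\sigma^2 = 1$, $n = 10$:

```python
# Empirical MSE of the estimators cQ as a function of c (illustrative sketch).
# mu = 0.0, sigma2 = 1.0, n = 10 are arbitrary choices, not from the problem.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, reps = 0.0, 1.0, 10, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
q = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # Q for each replicate

c_grid = np.linspace(0.05, 0.20, 16)
mse = [np.mean((c * q - sigma2) ** 2) for c in c_grid]      # empirical f(c)
for c, m in zip(c_grid, mse):
    print(f"c = {c:.3f}   empirical MSE = {m:.4f}")
print("grid minimizer:", c_grid[int(np.argmin(mse))])
```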