$$L(\theta) = f(x_1, x_2, \dots, x_n \mid \theta), \qquad \theta \in \Theta,$$
where $\Theta$ is the parameter space. Observe that the likelihood is a function of $\theta$ rather than of the $x_i$. If the distribution is discrete, which means $f$ is a frequency function, the likelihood function gives the probability of observing the given data as a function of the parameter $\theta$. The maximum likelihood estimate of $\theta$ is the value of $\theta$ maximizing the likelihood, i.e. making the observed data "most probable" or "most likely" (Rice [year]).
The likelihood function for $n$ independent, identically distributed random variables $(X_1, \dots, X_n)$ is given by
$$L(\theta) = \prod_{i=1}^{n} f(X_i \mid \theta), \qquad \theta \in \Theta,$$
where $f$ is the marginal density function of $X_i$.
The above likelihood function does not have to have a maximizer. Even if it does have a maximizer, it need not be unique. When the function has more than one maximizer, the global maximizer is usually taken as the estimated parameter; in general, however, a local maximizer may give better results than the global one. Therefore an optimization algorithm that is initialized at a good starting point and aims to find a local maximizer yields the MLE parameters, provided it converges to a solution. A good starting point, theoretically, obeys the square root law, that is, its estimation error goes to 0 at a rate $c/n^{1/2}$, where $c$ is a constant. One may use the sample parameters as the starting point of the algorithm, as those are the best one can get from the sample (Maximum Likelihood in R, Charles J. Geyer [2003]).
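As an illustration of such a starting point, the sample (method-of-moments) parameters for a Gamma$(k, \lambda)$ model can be computed directly from the data. A minimal sketch, where the function name and data are illustrative, not from the source:

```python
import statistics

def gamma_moment_start(x):
    """Method-of-moments starting values for a Gamma(k, lambda) sample.

    Matching mean = k * lam and variance = k * lam**2 gives
    k0 = mean**2 / var and lam0 = var / mean.
    """
    m = statistics.fmean(x)
    v = statistics.variance(x)  # sample variance, n-1 denominator
    return m * m / v, v / m

# Illustrative data, not from the source.
x = [1.2, 0.7, 2.5, 1.9, 0.4, 1.1, 3.0, 0.9]
k0, lam0 = gamma_moment_start(x)
```

These values can then serve as the initial point of the local optimizer described above.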
To get rid of divergence problems one may apply a suitable transformation to the likelihood function. Because the parameter that maximizes $L$ is equal to the parameter that maximizes $\log(L)$, as maxima are not affected by monotone transformations, one calculates $\log(L)$, that is, the log-likelihood function:
$$l(\theta) = \sum_{i=1}^{n} \log f(X_i \mid \theta).$$
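A minimal numerical sketch of evaluating such a log-likelihood, using an exponential density $f(x \mid \lambda) = \lambda^{-1} e^{-x/\lambda}$ as a stand-in for a generic $f(X_i \mid \theta)$ (the data are illustrative, not from the source):

```python
import math

def log_likelihood(x, log_density, theta):
    """Sum of log f(x_i | theta) over the sample."""
    return sum(log_density(xi, theta) for xi in x)

def exp_log_density(x, lam):
    """log of the exponential density with scale lam."""
    return -math.log(lam) - x / lam

x = [0.5, 1.2, 0.3, 2.1]
# For the exponential, the MLE of the scale is the sample mean, so the
# log-likelihood at lam = mean(x) is at least as large as at nearby values.
mean = sum(x) / len(x)
l_at_mle = log_likelihood(x, exp_log_density, mean)
```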
A.3.1 Maximum Likelihood Estimator for Gamma Distribution
If $X$ follows a Gamma distribution with $\theta = (k, \lambda)$, where $k$ is the shape and $\lambda$ is the scale parameter, then
$$l(\theta) = (k-1)\sum_{i=1}^{n}\log(x_i) - \frac{1}{\lambda}\sum_{i=1}^{n} x_i - nk\log(\lambda) - n\log(\Gamma(k)) \tag{A.1}$$
given $X_i = x_i$.
Now it is straightforward to maximize the log-likelihood function A.1 with respect to $k$ and $\lambda$ by taking the derivatives of $l(\theta)$ and setting them equal to 0.
First, taking the derivative with respect to $\lambda$ gives:
$$\frac{dl(k,\lambda)}{d\lambda} = \frac{\sum_{i=1}^{n} x_i}{\lambda^2} - \frac{nk}{\lambda}.$$
Setting the above equation equal to 0 and solving for $\lambda$ yields the maximum likelihood estimator of this parameter:
$$\hat{\lambda} = \frac{1}{kn}\sum_{i=1}^{n} x_i.$$
Substituting the estimator of $\lambda$ into equation A.1 gives:
$$\hat{l}(k) = (k-1)\sum_{i=1}^{n}\log(x_i) - nk - nk\log\Bigl(\sum_{i=1}^{n} x_i / (kn)\Bigr) - n\log(\Gamma(k)).$$
Taking the derivative with respect to $k$ and setting it equal to 0 yields the nonlinear equation
$$\log(\hat{k}) - \psi(\hat{k}) = \log\Bigl(\frac{1}{n}\sum_{i=1}^{n} x_i\Bigr) - \frac{1}{n}\sum_{i=1}^{n}\log x_i,$$
where $\psi(\hat{k}) = \Gamma'(\hat{k})/\Gamma(\hat{k})$ is the digamma function. There is no straightforward closed-form calculation for $\hat{k}$, the estimator of the shape $k$, so numerical approaches are used to obtain it.
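A minimal sketch of such a numerical approach, using bisection on the equation above; the left-hand side $\log(k) - \psi(k)$ is strictly decreasing in $k$, so the root is unique. Here `digamma` is approximated by a central difference on `math.lgamma` (a stand-in for a library digamma), and the data are illustrative:

```python
import math

def digamma(k, h=1e-5):
    """psi(k) = d/dk log Gamma(k), via a central difference on lgamma."""
    return (math.lgamma(k + h) - math.lgamma(k - h)) / (2.0 * h)

def gamma_mle_shape(x, tol=1e-10):
    """Solve log(k) - psi(k) = log(mean(x)) - mean(log(x)) for k by bisection."""
    n = len(x)
    c = math.log(sum(x) / n) - sum(math.log(xi) for xi in x) / n
    lo, hi = 0.01, 1.0
    while math.log(hi) - digamma(hi) > c:  # expand until the sign changes
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if math.log(mid) - digamma(mid) > c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative data, not from the source.
x = [1.2, 0.7, 2.5, 1.9, 0.4, 1.1, 3.0, 0.9]
k_hat = gamma_mle_shape(x)
lam_hat = sum(x) / (k_hat * len(x))  # scale estimator from the derivation above
```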
The maximum likelihood estimators $(\hat{k}, \hat{\lambda})$ are known to be sufficient estimators for the Gamma distribution (Bowman and Shenton [1968]).
A.3.2 Maximum Likelihood Estimator for Weibull Distribution
If $X$ follows a Weibull distribution with $\theta = (k, \lambda)$, where $k$ is the shape and $\lambda$ is the scale parameter, then
$$l(\theta) = n\log k + (k-1)\sum_{i=1}^{n}\log x_i - kn\log\lambda - \sum_{i=1}^{n}\Bigl(\frac{x_i}{\lambda}\Bigr)^{k} \tag{A.2}$$
given $X_i = x_i$.
Taking the derivative of the log-likelihood function with respect to $\lambda$ gives:
$$\frac{dl(k,\lambda)}{d\lambda} = -\frac{kn}{\lambda} + \sum_{i=1}^{n} k\Bigl(\frac{x_i}{\lambda}\Bigr)^{k-1}\frac{x_i}{\lambda^2}. \tag{A.3}$$
Setting A.3 equal to 0, one obtains
$$-kn + \frac{k}{\lambda^{k}}\sum_{i=1}^{n} x_i^{k} = 0
\;\Longrightarrow\; \lambda^{k} = \frac{\sum_{i=1}^{n} x_i^{k}}{n}
\;\Longrightarrow\; \hat{\lambda} = \Bigl(\frac{\sum_{i=1}^{n} x_i^{k}}{n}\Bigr)^{1/k}.$$
Taking the derivative of $l(\theta)$ with respect to $k$ and setting it equal to 0 with $\lambda = \hat{\lambda}$ yields
$$\frac{n}{k} + \sum_{i=1}^{n}\log x_i - \frac{n}{\sum_{i=1}^{n} x_i^{k}}\sum_{i=1}^{n} x_i^{k}\log x_i = 0. \tag{A.4}$$
We can now rewrite A.4 as follows
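A minimal sketch of solving A.4 numerically for $\hat{k}$: dividing A.4 by $n$ gives $g(k) = 1/k + \frac{1}{n}\sum_i \log x_i - \sum_i x_i^k \log x_i / \sum_i x_i^k$, which is strictly decreasing in $k$, so bisection finds the unique root (the data are illustrative, not from the source):

```python
import math

def weibull_mle_shape(x, tol=1e-10):
    """Solve equation A.4 for the Weibull shape k by bisection on g(k)."""
    n = len(x)
    mean_log = sum(math.log(xi) for xi in x) / n

    def g(k):
        s = sum(xi ** k for xi in x)
        s_log = sum(xi ** k * math.log(xi) for xi in x)
        return 1.0 / k + mean_log - s_log / s

    lo, hi = 1e-6, 1.0
    while g(hi) > 0:  # expand until the sign changes
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative data, not from the source.
x = [1.2, 0.7, 2.5, 1.9, 0.4, 1.1, 3.0, 0.9]
k_hat = weibull_mle_shape(x)
# Scale estimator from the derivation above.
lam_hat = (sum(xi ** k_hat for xi in x) / len(x)) ** (1.0 / k_hat)
```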