\begin{figure}[ht!]
\centering
\includegraphics[width=3.5in]{setting2.png}
\caption{Variation of the parameter estimates and RMSE with sample size for MOM and MLE in setting 2: $\mu$ = 3, $\sigma$ = 1, and $\beta$ = 2}
\label{fig:setting2}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=3.5in]{setting4.png}
\caption{Variation of the parameter estimates and RMSE with sample size for MOM and MLE in setting 4: $\mu$ = 3, $\sigma$ = 2, and $\beta$ = 1}
\label{fig:setting4}
\end{figure}
\section{Estimation of two parameters}
In this section, we demonstrate the two-parameter estimation procedure in detail: we first fix the shape parameter $\beta$ at a given value, and then apply, in turn, the four methods introduced in the methodology to estimate the location parameter $\mu$ and the scale parameter $\sigma$.
\subsection{The method of moment estimation}
Here we set $\beta=1$; then (1.1) reduces to
\begin{equation}
f(x ; \mu, \sigma)=\frac{1}{\sigma} e^{-\frac{x-\mu}{\sigma}}, \quad x>\mu
\end{equation}
which is the density of a shifted (two-parameter) exponential distribution.
In this case, (3.4) and (3.5) become
\begin{equation}
E(X)=\int_\mu^{\infty} x \cdot \frac{1}{\sigma} e^{-\frac{x-\mu}{\sigma}} d x=\mu+\sigma=\bar{x}
\end{equation}
\begin{equation}
Var(X)=\sigma^2=s^2
\end{equation}
Then, recalling (3.7) and (3.8), the moment estimates of $\sigma$ and $\mu$ are $\hat{\sigma}=s$ and $\hat{\mu}=\bar{x}-s$, respectively.
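For concreteness, the following is a minimal numerical sketch of these moment estimates for the $\beta=1$ case. The sample size, random seed, and variable names are illustrative assumptions, and $s$ is taken to be the sample standard deviation.
\begin{verbatim}
# Minimal sketch of the beta = 1 moment estimates: sigma_hat = s
# and mu_hat = xbar - s. All settings here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma_true, n = 3.0, 1.0, 500

# beta = 1 Pearson type III sample, i.e., a shifted exponential.
x = mu_true + rng.exponential(scale=sigma_true, size=n)

s = x.std(ddof=1)      # sample standard deviation s
sigma_hat = s          # sigma_hat = s
mu_hat = x.mean() - s  # mu_hat = xbar - s

print(f"MOM: mu_hat = {mu_hat:.4f}, sigma_hat = {sigma_hat:.4f}")
\end{verbatim}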
\subsection{The maximum likelihood estimation method}
In this case, since $f\left(x_i; \mu, \sigma\right)$ is a monotonically increasing function of $\mu$ and the constraint $\mu \leq x_i$ must hold for every observation, the MLE of $\mu$ is
\begin{equation}
\hat{\mu}=\min\left(x_i\right)
\end{equation}
Then, recalling (3.10) and (3.11), the likelihood and log-likelihood functions in this case are
\begin{equation}
L(x ; \mu, \sigma)=\frac{1}{\sigma^n} \cdot e^{-\frac{1}{\sigma} \sum_{i=1}^n\left(x_i-\mu\right)}
\end{equation}
\begin{equation}
l(x ; \mu, \sigma)=-n \ln (\sigma)-\frac{1}{\sigma} \sum_{i=1}^n\left(x_i-\mu\right)
\end{equation}
Similarly, (3.13) can be written as
\begin{equation}
\frac{\partial l}{\partial \sigma}=-\frac{n}{\sigma}+\frac{1}{\sigma^2} \sum_{i=1}^n\left(x_i-\mu\right)=0
\end{equation}
Combining (3.18) and (3.21), the MLE of $\sigma$ is
\begin{equation}
\hat{\sigma}=\frac{\sum_{i=1}^n\left(x_i-\hat{\mu}\right)}{n}=\bar{x}-\hat{\mu}
\end{equation}
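As a companion sketch, the closed-form MLEs for the same $\beta=1$ case can be computed directly; the data-generation settings below are again illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the beta = 1 MLEs: mu_hat = min(x_i) and
# sigma_hat = xbar - mu_hat. All settings here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = 3.0 + rng.exponential(scale=1.0, size=500)

mu_hat = x.min()               # mu_hat = min(x_i)
sigma_hat = x.mean() - mu_hat  # sigma_hat = xbar - mu_hat

print(f"MLE: mu_hat = {mu_hat:.4f}, sigma_hat = {sigma_hat:.4f}")
\end{verbatim}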
\subsection{Fisher minimum $\chi^2$ estimation with equiprobable cells}
The Fisher minimum chi-square estimation method is based on minimizing the Pearson-Fisher chi-square statistic
$$
\chi_n^2(\theta)=\sum_{i=1}^m \frac{\left(N_i-n p_i(\theta)\right)^2}{n p_i(\theta)},
$$
where $N_i$ is the observed count in the $i$-th cell and $p_i(\theta)$ is the corresponding cell probability under the parameter $\theta$; both are constructed below.
With $\mu=0$, $\sigma=1$, and $\beta=1$ as above, the PDF of the Pearson type III distribution is the standard exponential density
\begin{equation}
f(x ; 0, 1)=e^{-x}, \quad x>0
\end{equation}
The CDF of this distribution is
\begin{equation}
F(x;0,1) = 1-e^{-x}
\end{equation}
Let $\Delta_1, \Delta_2, \ldots, \Delta_m$ be equiprobable points of this standard distribution.
Then the equiprobable classification is given by
$$
p_1(0,1)=\int_0^{\Delta_1} f(x ; 0,1) d x=p_2(0,1)=\int_{\Delta_1}^{\Delta_2} f(x ; 0,1) d x=\cdots=\frac{1}{m+1}
$$
For this distribution,
$$
p_i(0,1)=\left\{\begin{array}{ll}
\int_0^{\Delta_1} f(x ; 0,1) d x=1-e^{-\Delta_1}, & \text { for } i=1 \\
\int_{\Delta_{i-1}}^{\Delta_i} f(x ; 0,1) d x=e^{-\Delta_{i-1}}-e^{-\Delta_i}, & \text { for } 1<i \leq m \\
\int_{\Delta_m}^{\infty} f(x ; 0,1) d x=e^{-\Delta_m}, & \text { for } i=m+1
\end{array}\right.
$$
The $\Delta_i$ are obtained by setting $p_i(0,1)=\frac{1}{m+1}$, which gives $\Delta_i=-\ln\left(1-\frac{i}{m+1}\right)$, and the cells are defined as
$$
J_1=\left(-\infty,\ \hat{\mu}+\hat{\sigma} \Delta_1\right), \ldots, J_{m+1}=\left(\hat{\mu}+\hat{\sigma} \Delta_m,\ +\infty\right)
$$
where $\hat{\mu}$ and $\hat{\sigma}$ are the maximum likelihood estimates obtained above. Let
$$
N_i=\operatorname{Card}\left\{X_j \in J_i: j=1, \ldots, n\right\}, i=1, \ldots, m+1
$$
Define the cell probabilities:
$$
p_i(\mu, \sigma)=\int_{J_i} f(x ; \mu, \sigma) d x, \quad i=1, \ldots, m+1
$$
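The following sketch assembles the steps above for the $\beta=1$ case: it builds the equiprobable cells from the MLEs, tabulates the cell counts $N_i$, and minimizes $\chi_n^2(\mu, \sigma)$ numerically. The choice of $m$, the Nelder--Mead optimizer, and all names are illustrative assumptions rather than part of the method as stated.
\begin{verbatim}
# Minimal sketch of the Fisher minimum chi-square fit for beta = 1.
# m, the optimizer, and the data settings are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = 3.0 + rng.exponential(scale=1.0, size=500)
n, m = x.size, 9

# MLEs, used to center and scale the cell boundaries.
mu0 = x.min()
sigma0 = x.mean() - mu0

# Equiprobable points of the standard exponential: F(Delta_i) = i/(m+1).
i = np.arange(1, m + 1)
delta = -np.log(1.0 - i / (m + 1))

# Cell boundaries for J_1, ..., J_{m+1} and observed counts N_i.
edges = np.concatenate(([-np.inf], mu0 + sigma0 * delta, [np.inf]))
cell = np.searchsorted(edges, x, side="right") - 1
N = np.bincount(cell, minlength=m + 1)

def cdf(t, mu, sigma):
    # Shifted-exponential CDF; support is t > mu.
    z = np.clip((t - mu) / sigma, 0.0, None)
    return 1.0 - np.exp(-z)

def chi2(theta):
    # Pearson-Fisher statistic over the m + 1 cells.
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    p = np.diff(cdf(edges, mu, sigma))  # cell probabilities p_i(mu, sigma)
    p = np.clip(p, 1e-12, None)         # guard against zero-probability cells
    return np.sum((N - n * p) ** 2 / (n * p))

res = minimize(chi2, x0=[mu0, sigma0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"min-chi2: mu_hat = {mu_hat:.4f}, sigma_hat = {sigma_hat:.4f}")
\end{verbatim}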