For the Gumbel distribution, the Pearson--Fisher minimum $\chi^2$ estimator satisfies
$$
\sum_{i=1}^m \frac{N_i}{p_i(\theta)} \cdot \frac{\partial p_i(\theta)}{\partial \theta_j}=0, \quad j=1,2, \quad \theta=(\mu, \sigma).
$$
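The equations above are the stationarity conditions obtained by differentiating the Pearson $\chi^2$ statistic with respect to $\theta$. As an illustration only (not the implementation used in this work), the following Python sketch makes the objective concrete by minimizing the statistic directly over $(\mu, \sigma)$ for the Gumbel distribution with a crude grid refinement; all function names and the cell layout are ours:

```python
import math

def gumbel_cdf(x, mu, sigma):
    """CDF of the Gumbel (type-I extreme value) distribution."""
    return math.exp(-math.exp(-(x - mu) / sigma))

def cell_probs(bounds, mu, sigma):
    """Cell probabilities p_i(theta) for cells cut at the interior boundaries."""
    cdf = [0.0] + [gumbel_cdf(b, mu, sigma) for b in bounds] + [1.0]
    return [cdf[i + 1] - cdf[i] for i in range(len(cdf) - 1)]

def chi_square(counts, bounds, mu, sigma):
    """Pearson statistic: sum_i (N_i - n p_i)^2 / (n p_i)."""
    n = sum(counts)
    return sum((N - n * p) ** 2 / (n * p)
               for N, p in zip(counts, cell_probs(bounds, mu, sigma)))

def min_chi_square(counts, bounds, mu0, sigma0, steps=40):
    """Crude minimizer: successive grid refinement around an initial guess."""
    best, width = (mu0, sigma0), 1.0
    for _ in range(6):
        mu_c, s_c = best
        grid = [(mu_c + width * (i / steps - 0.5),
                 max(1e-6, s_c + width * (j / steps - 0.5)))
                for i in range(steps + 1) for j in range(steps + 1)]
        best = min(grid, key=lambda t: chi_square(counts, bounds, *t))
        width /= 4
    return best
```

In practice one solves the displayed estimating equations (e.g. by a Newton-type iteration) rather than searching a grid; the sketch only shows what is being minimized.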
\subsection{RP-based Minimum $\chi^2$ Estimation}
The cells defining the $\chi^2$ equations differ between the two methods: in Fisher minimum $\chi^2$ estimation the cells are a fixed number of equiprobable intervals, whereas in RP-based minimum $\chi^2$ estimation the cells are determined by the representative points (RPs).
Representative points arise from approximating a continuous distribution by a discrete one: given a loss function, the optimal RPs are the support points of the discrete distribution that minimize that loss, and hence have the best representative performance.
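To make the idea concrete, here is a minimal sketch (ours, not from this work) of a Lloyd-type iteration that computes $m$ RPs minimizing the empirical mean-squared loss $\frac{1}{n}\sum_k \min_i (x_k - R_i)^2$ over a Monte Carlo sample from the standard Gumbel distribution. Cell boundaries are midpoints of adjacent RPs, and each RP is updated to the mean of its cell:

```python
import math
import random
import statistics

def gumbel_sample(n, mu=0.0, sigma=1.0, seed=0):
    """Standard Gumbel variates by CDF inversion: x = mu - sigma*log(-log U)."""
    rng = random.Random(seed)
    return [mu - sigma * math.log(-math.log(rng.random())) for _ in range(n)]

def lloyd_rps(data, m, iters=50):
    """Lloyd-type iteration for m representative points of a 1-D sample."""
    data = sorted(data)
    rps = [data[int((i + 0.5) * len(data) / m)] for i in range(m)]  # quantile init
    for _ in range(iters):
        mids = [(rps[i] + rps[i + 1]) / 2 for i in range(m - 1)]
        cells = [[] for _ in range(m)]
        j = 0
        for x in data:                   # data and mids are sorted: one pass
            while j < m - 1 and x > mids[j]:
                j += 1
            cells[j].append(x)
        rps = [statistics.fmean(c) if c else r   # centroid (cell mean) update
               for c, r in zip(cells, rps)]
    return rps
```

Published RP algorithms work on the density itself rather than a sample; the sample-based variant above only illustrates the loss being minimized.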
Let $\left\{R_i^0: i=1, \ldots, m\right\}$ be a set of RPs (representative points) obtained by an existing algorithm for the standard Gumbel distribution. A set of RPs for $f(x ; \mu, \sigma)$, the Gumbel density with non-standard parameters, can then be estimated by
\begin{equation}
R_i=\hat{\mu}+\hat{\sigma} R_i^0, i=1, \ldots, m
\end{equation}
where $\hat{\mu}$ and $\hat{\sigma}$ are the MLEs of the true parameters $\mu$ and $\sigma$, respectively. Define the cells:
\begin{equation}
\begin{gathered}
I_1=\left(-\infty, \frac{R_1+R_2}{2}\right), \quad I_j=\left(\frac{R_{j-1}+R_j}{2}, \frac{R_j+R_{j+1}}{2}\right), \\
j=2, \ldots, m-1, \quad I_m=\left(\frac{R_{m-1}+R_m}{2},+\infty\right).
\end{gathered}
\end{equation}
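The location-scale step and the midpoint cell construction can be sketched as follows (helper names are ours, for illustration):

```python
import math

def transform_rps(std_rps, mu_hat, sigma_hat):
    """R_i = mu_hat + sigma_hat * R_i^0: RPs for the fitted distribution."""
    return [mu_hat + sigma_hat * r for r in std_rps]

def rp_cells(rps):
    """Cells cut at midpoints of adjacent RPs; the outer cells are half-lines."""
    mids = [(a + b) / 2 for a, b in zip(rps, rps[1:])]
    edges = [-math.inf] + mids + [math.inf]
    return list(zip(edges, edges[1:]))
```

For $m$ RPs this yields exactly $m$ cells, the $i$-th containing $R_i$.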
The RP minimum $\chi^2$ estimators are the solution to Equation (2.6).
The algorithm for RP-based minimum $\chi^2$ estimation parallels that of the Fisher minimum $\chi^2$ estimation.
\subsection{Procedures of Estimations}
\subsubsection{Moment and Maximum Likelihood Estimations}
The moment estimators, derived as previously described, are given in (2.8) and (2.9). Since these are closed-form expressions, the simulation procedure is straightforward, as outlined below:
\begin{enumerate}
\item \textbf{Step 1:} Determine the true values for $v$ ($v > 2$), $\mu$, and $\sigma$.
\item \textbf{Step 2:} Draw a random sample of size $n$ from the generalized Student's $t$-distribution with parameters $v$ ($v > 2$), $\mu$, and $\sigma$.
\item \textbf{Step 3:} Calculate $\hat{\mu}$ and $\hat{\sigma}$ using the formulas in equations (2.8) and (2.9), respectively.
\item \textbf{Step 4:} Repeat Steps 2 and 3 for $N$ iterations, and compute the Root Mean Square Error (RMSE) using the formula in equation (2.24).
\end{enumerate}
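The four steps can be sketched in Python as below. The estimator formulas are our assumptions for illustration: we take $\hat{\mu}$ as the sample mean and obtain $\hat{\sigma}$ from $\operatorname{Var}(X)=\sigma^2 v/(v-2)$, which is what moment equations like (2.8)--(2.9) typically reduce to, and (2.24) is assumed to be the usual RMSE $\sqrt{N^{-1}\sum(\hat{\theta}-\theta)^2}$:

```python
import math
import random
import statistics

def t_sample(n, v, mu, sigma, rng):
    """Generalized Student's t draws: X = mu + sigma * Z / sqrt(chi2_v / v)."""
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(v))  # v integer here
        out.append(mu + sigma * z / math.sqrt(chi2 / v))
    return out

def moment_estimates(xs, v):
    """Assumed forms of (2.8)-(2.9): mu_hat = sample mean, and
    sigma_hat from Var(X) = sigma^2 * v / (v - 2)."""
    mu_hat = statistics.fmean(xs)
    return mu_hat, math.sqrt(statistics.pvariance(xs, mu_hat) * (v - 2) / v)

def rmse_simulation(v=5, mu=1.0, sigma=2.0, n=300, N=100, seed=1):
    """Steps 1-4: repeat sampling and estimation N times, then RMSE (2.24)."""
    rng = random.Random(seed)
    mu_hats, sigma_hats = [], []
    for _ in range(N):
        m_hat, s_hat = moment_estimates(t_sample(n, v, mu, sigma, rng), v)
        mu_hats.append(m_hat)
        sigma_hats.append(s_hat)
    rmse = lambda ests, true: math.sqrt(
        statistics.fmean((e - true) ** 2 for e in ests))
    return rmse(mu_hats, mu), rmse(sigma_hats, sigma)
```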
The maximum likelihood estimators follow the formulas (2.17) and (2.18), derived as before. While $\hat{\mu}$ can be expressed algebraically, $\hat{\sigma}$ cannot. Therefore, the procedures differ slightly from those of moment estimation.
\begin{enumerate}
\item \textbf{Step 1:} Determine the true values for $v$ ($v > 2$), $\mu$, and $\sigma$.
\item \textbf{Step 2:} Draw a random sample of size $n$ from the generalized Student's $t$-distribution with parameters $v$ ($v > 2$), $\mu$, and $\sigma$.
\item \textbf{Step 3:} Calculate $\hat{\mu}$ using the formula in equation (2.17), and solve equation (2.18) numerically for $\hat{\sigma}$, e.g., by the Newton-Raphson method (MATLAB's built-in \texttt{fsolve} can be used).
\item \textbf{Step 4:} Repeat Steps 2 and 3 for $N$ iterations, and compute the RMSE using the formula in equation (2.24).
\end{enumerate}
\subsubsection{Fisher Minimum $\chi^2$ Estimations}
For the two minimum $\chi^2$ estimations, the procedures differ from the previous ones. First, we determine the true values and the maximum likelihood estimates of the parameters, exactly as in Steps 1 to 3 of the maximum likelihood procedure. Next, we must generate $m$ cells of interest over the support of the generalized Student's $t$-distribution, i.e., all real numbers. Since the generalized Student's $t$-distribution is a location-scale family extending the standard Student's $t$-distribution, we can first generate $m$ cells of interest with respect to the standard distribution and then transform them linearly using the maximum likelihood estimates of the parameters; the transformed cells are then approximately the ideal cells of interest \textit{w.r.t.} the generalized distribution. Following this, (2.23) is set up and solved with the MATLAB built-in function \texttt{fsolve}; the resulting parameters are the minimum $\chi^2$ estimators. Repeating the above steps $N$ times, the RMSE is calculated by applying (2.24). The detailed steps are listed as follows.