\]
Taking the derivative of \(\mathcal{L}\) with respect to \(\pi_k\) and setting it to zero:
\[
\frac{\partial}{\partial \pi_k} \mathcal{L}(\boldsymbol{\pi}, \lambda) = \sum_{n=1}^N \frac{\gamma(z_{nk})}{\pi_k} + \lambda = 0.
\]
Rearranging and multiplying both sides by \(\pi_k\), we have:
\[
\sum_{n=1}^N \gamma(z_{nk}) = -\lambda \pi_k.
\]
Knowing that \(\sum_{k=1}^K \pi_k = 1\), summing both sides over \(k\) gives:
\[
\sum_{k=1}^K \sum_{n=1}^N \gamma(z_{nk}) = -\lambda \sum_{k=1}^K \pi_k = -\lambda.
\]
Since \(\sum_{k=1}^K \gamma(z_{nk}) = 1\) for each \(n\), the left-hand side equals \(N\), so \(-\lambda = N\).
Substituting \(-\lambda = N\) back into the first rearranged equation:
\[
\sum_{n=1}^N \gamma(z_{nk}) = N \pi_k.
\]
Solving for \(\pi_k\), we get:
\[
\pi_k = \frac{N_k}{N},
\]
where \(N_k = \sum_{n=1}^N \gamma(z_{nk})\).
Therefore, using the Lagrange multiplier method, we have shown that \(\pi_k = \frac{N_k}{N}\) maximizes the given expression while keeping \(\gamma(z_{nk})\) fixed.
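As a quick numerical sanity check with invented responsibilities (purely illustrative, not from the text): take \(N = 4\), \(K = 2\), and fixed responsibilities \(\gamma(z_{11}) = 0.9\), \(\gamma(z_{21}) = 0.8\), \(\gamma(z_{31}) = 0.2\), \(\gamma(z_{41}) = 0.1\) for the first component. Then
\[
N_1 = 0.9 + 0.8 + 0.2 + 0.1 = 2.0, \qquad \pi_1 = \frac{N_1}{N} = \frac{2.0}{4} = 0.5,
\]
and since \(\gamma(z_{n1}) + \gamma(z_{n2}) = 1\) for every \(n\), we also have \(N_2 = 2.0\) and \(\pi_2 = 0.5\), so the updated coefficients sum to one as the constraint requires.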
\newpage
Next, let's consider the maximization with respect to \(\pi_k\). \\
Here, we need to differentiate the following expression with respect to \(\pi_k\) and then solve for \(\pi_k\):
\[
\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) + \lambda \left( \sum_{k=1}^K \pi_k - 1 \right)
\]
First, let's write out the log-likelihood function. Suppose \(\mathbf{X} = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}\) are the observations, and \(\boldsymbol{\pi} = (\pi_1, \pi_2, \ldots, \pi_K)\) are the mixing coefficients for the Gaussian distributions. The log-likelihood function is:
\[
\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \sum_{n=1}^N \ln \left( \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right)
\]
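To make the quantity being maximized concrete, here is a minimal NumPy/SciPy sketch that evaluates this log-likelihood; the function name and array shapes are illustrative choices, not taken from the text:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(X, pi, mus, Sigmas):
    # X: (N, D) observations, pi: (K,) mixing coefficients,
    # mus: (K, D) component means, Sigmas: (K, D, D) covariances.
    K = len(pi)
    dens = np.column_stack([
        multivariate_normal.pdf(X, mean=mus[k], cov=Sigmas[k])
        for k in range(K)
    ])                                  # (N, K): N(x_n | mu_k, Sigma_k)
    return np.sum(np.log(dens @ pi))    # sum_n ln sum_k pi_k N(x_n | ...)
\end{verbatim}
In practice one would typically work with log densities and a log-sum-exp to avoid underflow, but the direct form above mirrors the formula as written.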
Next, consider the objective function with the Lagrange multiplier:
\[
\mathcal{L} = \sum_{n=1}^N \ln \left( \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right) + \lambda \left( \sum_{k=1}^K \pi_k - 1 \right)
\]
We take the derivative of \(\mathcal{L}\) with respect to \(\pi_k\):
\[
\frac{\partial \mathcal{L}}{\partial \pi_k} = \sum_{n=1}^N \frac{\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^K \pi_j \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)} + \lambda
\]
Define \(\gamma(z_{nk})\) as the posterior probability (responsibility) that data point \(\mathbf{x}_n\) belongs to the \(k\)-th Gaussian component:
\[
\gamma(z_{nk}) = \frac{\pi_k \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^K \pi_j \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)}
\]
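The same illustrative setup gives a short vectorised computation of these responsibilities (again, names and shapes are assumptions for the sketch, not from the text):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(X, pi, mus, Sigmas):
    # gamma[n, k] = pi_k N(x_n | mu_k, Sigma_k) / sum_j pi_j N(x_n | mu_j, Sigma_j)
    K = len(pi)
    weighted = np.column_stack([
        pi[k] * multivariate_normal.pdf(X, mean=mus[k], cov=Sigmas[k])
        for k in range(K)
    ])                                            # (N, K) numerators
    return weighted / weighted.sum(axis=1, keepdims=True)
\end{verbatim}
Each row of the returned matrix sums to one, which is exactly the property used below when the responsibilities are summed over \(k\).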
Thus, we can rewrite the derivative as:
\[
\frac{\partial \mathcal{L}}{\partial \pi_k} = \sum_{n=1}^N \frac{\gamma(z_{nk})}{\pi_k} + \lambda
\]
To find the optimal value, set the derivative to zero:
\[
\sum_{n=1}^N \frac{\gamma(z_{nk})}{\pi_k} + \lambda = 0
\]
Multiplying both sides by \(\pi_k\) and rearranging, we obtain:
\[
\sum_{n=1}^N \gamma(z_{nk}) = -\lambda \pi_k
\]
We know that \(\sum_{k=1}^K \pi_k = 1\); summing the previous equation over \(k\) gives:
\[
\sum_{k=1}^K \sum_{n=1}^N \gamma(z_{nk}) = -\lambda \sum_{k=1}^K \pi_k = -\lambda
\]
Since \(\sum_{k=1}^K \gamma(z_{nk}) = 1\) for each data point \(\mathbf{x}_n\), the left-hand side equals \(N\). Thus,
\[
-\lambda = N
\]
Substitute \(-\lambda\) back into the previous equation:
\[
\sum_{n=1}^N \gamma(z_{nk}) = N \pi_k
\]
Solve for \(\pi_k\):
\[
\pi_k = \frac{\sum_{n=1}^N \gamma(z_{nk})}{N}
\]
Define \(N_k = \sum_{n=1}^N \gamma(z_{nk})\) as the effective number of samples assigned to the \(k\)-th Gaussian component; then:
\[
\pi_k = \frac{N_k}{N}
\]
Thus, we have derived the expression for \(\pi_k\):
\[
\pi_k = \frac{N_k}{N}
\]
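A minimal sketch of the resulting update, assuming a responsibility matrix gamma of shape (N, K) such as the one produced by the earlier sketch (the example values below are invented):
\begin{verbatim}
import numpy as np

def update_mixing_coefficients(gamma):
    # gamma: (N, K) responsibilities gamma(z_nk); each row sums to 1.
    N_k = gamma.sum(axis=0)            # effective counts N_k
    return N_k / gamma.shape[0]        # pi_k = N_k / N

gamma = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.2, 0.8],
                  [0.1, 0.9]])
print(update_mixing_coefficients(gamma))   # -> [0.5 0.5]
\end{verbatim}
Because each row of gamma sums to one, the updated coefficients automatically sum to one, so the constraint handled by the Lagrange multiplier is satisfied without any further normalisation.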
Now we can ignore the constant terms, since they do not affect the maximization result. Therefore, we only need to focus on the terms that involve \(\boldsymbol{\Sigma}_k\), which are
\[
-\frac{1}{2} \sum_{n=1}^N \gamma\left(z_{n k}\right) (\mathbf{x}_n - \boldsymbol{\mu}_k)^T \boldsymbol{\Sigma}_k^{-1} (\mathbf{x}_n - \boldsymbol{\mu}_k) + \frac{1}{2} \sum_{n=1}^N \gamma\left(z_{n k}\right) \ln \left|\boldsymbol{\Sigma}_k^{-1}\right|.
\]
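As an illustrative check, the following sketch evaluates exactly these \(\boldsymbol{\Sigma}_k\)-dependent terms for a single component \(k\); the names X, gamma_k, mu_k, and Sigma_k are placeholders for the quantities appearing in the formula:
\begin{verbatim}
import numpy as np

def sigma_k_terms(X, gamma_k, mu_k, Sigma_k):
    # -1/2 sum_n gamma(z_nk) (x_n - mu_k)^T Sigma_k^{-1} (x_n - mu_k)
    # +1/2 sum_n gamma(z_nk) ln |Sigma_k^{-1}|
    diff = X - mu_k                                    # (N, D)
    Sigma_inv = np.linalg.inv(Sigma_k)
    quad = np.einsum('ni,ij,nj->n', diff, Sigma_inv, diff)
    _, logdet_inv = np.linalg.slogdet(Sigma_inv)       # ln |Sigma_k^{-1}|
    return -0.5 * np.sum(gamma_k * quad) + 0.5 * gamma_k.sum() * logdet_inv
\end{verbatim}
Setting the derivative of this expression with respect to \(\boldsymbol{\Sigma}_k\) to zero (with the responsibilities held fixed) yields the usual M-step covariance update.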