Introduction
In this page, I give a brief explanation of the Gaussian mixture model and the EM Algorithm.
(Here, I show how to use the implementation of the EM Algorithm that OpenCV provides.)
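As a quick taste before the theory, the following is a minimal sketch of calling that implementation from Python. The synthetic data, the seed, and the choice of two clusters are illustrative assumptions of mine, and the bindings are assumed to be OpenCV 3 or later.

```python
# Minimal sketch of OpenCV's EM training interface (assumes OpenCV >= 3).
# The synthetic two-blob data below is purely illustrative.
import numpy as np
import cv2

rng = np.random.default_rng(0)
samples = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(150, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.8, size=(150, 2)),
]).astype(np.float32)

em = cv2.ml.EM_create()
em.setClustersNumber(2)  # K, the number of Gaussian components
retval, log_likelihoods, labels, probs = em.trainEM(samples)

print(em.getMeans())     # estimated mean vectors, one row per component
print(em.getWeights())   # estimated mixing weights pi_k
```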
Gaussian Mixture
Suppose we have the probability distribution $p(\mathbf{x})$ of an observed data point $\mathbf{x}$. We can represent the probability as a superposition of $K$ Gaussian distributions of the form

$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),$$

where $\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$ is a Gaussian density, and $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$ indicate a mean vector and a covariance matrix, respectively. $\pi_k$ is the weight of each Gaussian density. The superposition is called a Gaussian Mixture or a Mixture of Gaussians.
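To make the definition concrete, here is a small Python sketch that evaluates such a mixture density at a point; the weights, means, and covariances below are made-up illustrative values.

```python
# Evaluate p(x) = sum_k pi_k * N(x | mu_k, Sigma_k) at a point.
# All parameter values here are illustrative, not fitted.
import numpy as np
from scipy.stats import multivariate_normal

pi = [0.4, 0.6]                                    # mixing weights, sum to 1
mu = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]  # mean vectors mu_k
sigma = [0.25 * np.eye(2), 0.64 * np.eye(2)]       # covariance matrices Sigma_k

def mixture_density(x):
    return sum(p * multivariate_normal.pdf(x, mean=m, cov=s)
               for p, m, s in zip(pi, mu, sigma))

print(mixture_density(np.array([1.0, 1.0])))
```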
When we have a data set $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ consisting of $N$ observations, drawn independently from the distribution, we can write the probability of the data set in the form

$$p(\mathbf{X}) = \prod_{n=1}^{N} p(\mathbf{x}_n).$$

By substituting the Gaussian mixture representation into the equation, we obtain

$$p(\mathbf{X}) = \prod_{n=1}^{N} \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).$$
An elegant and powerful method to determine the unknown parameters on the right-hand side, $\pi_k$, $\boldsymbol{\mu}_k$, and $\boldsymbol{\Sigma}_k$, is called the EM Algorithm.
EM Algorithm
To make the notation simple, we introduce the following variables:

$$\boldsymbol{\pi} \equiv \{\pi_1, \dots, \pi_K\}, \quad \boldsymbol{\mu} \equiv \{\boldsymbol{\mu}_1, \dots, \boldsymbol{\mu}_K\}, \quad \boldsymbol{\Sigma} \equiv \{\boldsymbol{\Sigma}_1, \dots, \boldsymbol{\Sigma}_K\}.$$

Then $p(\mathbf{X})$ is rewritten as $p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma})$. This expresses how probable the observed data $\mathbf{X}$ is under the condition of the given parameters $\boldsymbol{\pi}$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$. The quantity is called the likelihood function.
The method of maximum likelihood selects the set of parameter values that maximizes the likelihood function, so that our Gaussian mixture model approximates the true distribution. All we have to do is to maximize the following quantity:

$$p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \prod_{n=1}^{N} \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).$$

For simplicity, we maximize the log of the likelihood function, given by

$$\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}.$$
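As a sketch, this log likelihood can be evaluated as follows. A direct product of densities underflows quickly, so the sum over components is done in log space with logsumexp; the function is my own illustration of the formula above, with X an N x d array.

```python
# ln p(X | pi, mu, Sigma) = sum_n ln { sum_k pi_k N(x_n | mu_k, Sigma_k) },
# computed in log space for numerical stability.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_likelihood(X, pi, mu, sigma):
    # log_prob[n, k] = ln pi_k + ln N(x_n | mu_k, Sigma_k)
    log_prob = np.stack([
        np.log(pi[k]) + multivariate_normal.logpdf(X, mean=mu[k], cov=sigma[k])
        for k in range(len(pi))
    ], axis=1)
    return logsumexp(log_prob, axis=1).sum()
```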
Before going any further, we introduce a constraint on the weights $\pi_k$. The probability $p(\mathbf{x})$ is given by

$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).$$

If we integrate both sides of the equation with respect to $\mathbf{x}$, we obtain

$$1 = \sum_{k=1}^{K} \pi_k,$$

since $p(\mathbf{x})$ integrates to one and the individual Gaussian components are normalized. Using a Lagrange multiplier $\lambda$, the quantity to maximize, taking the constraint into account, is written as

$$\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) + \lambda \left( \sum_{k=1}^{K} \pi_k - 1 \right).$$
The rest of our work is to set the derivatives of this quantity with respect to $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$, and $\pi_k$ to zero. We show only the results.
- The derivative with respect to $\boldsymbol{\mu}_k$ yields the following equation:

$$\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \, \mathbf{x}_n,$$

where

$$\gamma(z_{nk}) \equiv \frac{\pi_k \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)}, \qquad N_k \equiv \sum_{n=1}^{N} \gamma(z_{nk}).$$
- The derivative with respect to $\boldsymbol{\Sigma}_k$ yields the following equation:

$$\boldsymbol{\Sigma}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) (\mathbf{x}_n - \boldsymbol{\mu}_k)(\mathbf{x}_n - \boldsymbol{\mu}_k)^{\mathrm{T}},$$

where $\mathrm{T}$ indicates transposition.
- The derivative with respect to $\pi_k$ yields the following equation:

$$\pi_k = \frac{N_k}{N}.$$
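The responsibilities $\gamma(z_{nk})$ and the three update equations translate directly into an E step and an M step. The numpy sketch below is my own rendering of the formulas above, with X an N x d array:

```python
# E step: gamma[n, k] = pi_k N(x_n | mu_k, Sigma_k) / sum_j pi_j N(x_n | mu_j, Sigma_j)
# M step: recompute mu_k, Sigma_k, pi_k from the responsibilities.
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, pi, mu, sigma):
    gamma = np.stack([pi[k] * multivariate_normal.pdf(X, mean=mu[k], cov=sigma[k])
                      for k in range(len(pi))], axis=1)
    return gamma / gamma.sum(axis=1, keepdims=True)

def m_step(X, gamma):
    N = X.shape[0]
    Nk = gamma.sum(axis=0)              # N_k = sum_n gamma(z_nk)
    mu = gamma.T @ X / Nk[:, None]      # mu_k = (1/N_k) sum_n gamma(z_nk) x_n
    sigma = []
    for k in range(gamma.shape[1]):
        diff = X - mu[k]                # (x_n - mu_k), row by row
        sigma.append((gamma[:, k, None] * diff).T @ diff / Nk[k])
    pi = Nk / N                         # pi_k = N_k / N
    return pi, mu, sigma
```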
We have obtained three equations to determine the parameters. These equations depend on each other. To satisfy all of them, we can use a simple iterative scheme. The scheme is called the EM Algorithm. The procedure is as follows:
1. Set initial values of $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$, and $\pi_k$.
2. Calculate $\gamma(z_{nk})$ using the current parameter values.
3. Using $\gamma(z_{nk})$, calculate the three parameters $\boldsymbol{\mu}_k$, $\boldsymbol{\Sigma}_k$, and $\pi_k$.
4. Calculate the log likelihood $\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma})$.
5. Repeat steps 2 to 4 until either the parameters or the log likelihood converges.
Steps 2 and 3 are called the expectation step and the maximization step, respectively.
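The whole loop can then be sketched as below; it reuses the e_step, m_step, and log_likelihood functions from the earlier sketches, and the tolerance and iteration limit are arbitrary choices of mine.

```python
# One possible driver for the iterative scheme above.
def fit_gmm(X, pi, mu, sigma, tol=1e-6, max_iter=200):
    prev = log_likelihood(X, pi, mu, sigma)
    for _ in range(max_iter):
        gamma = e_step(X, pi, mu, sigma)        # step 2: expectation
        pi, mu, sigma = m_step(X, gamma)        # step 3: maximization
        cur = log_likelihood(X, pi, mu, sigma)  # step 4: evaluate ln p
        if abs(cur - prev) < tol:               # step 5: convergence check
            return pi, mu, sigma
        prev = cur
    return pi, mu, sigma
```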
In step 1, the K-means clustering is often used to initialize the parameters. We take the center of the k-th cluster as the mean $\boldsymbol{\mu}_k$. The covariance matrix can be calculated over the data points assigned to the k-th cluster as

$$\boldsymbol{\Sigma}_k = \frac{1}{N_k} \sum_{n \in C_k} (\mathbf{x}_n - \boldsymbol{\mu}_k)(\mathbf{x}_n - \boldsymbol{\mu}_k)^{\mathrm{T}},$$

and the weight is given by

$$\pi_k = \frac{N_k}{N},$$

where $C_k$ denotes the k-th cluster and $N_k$ is the number of data points assigned to it.
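A possible sketch of this initialization, using scipy's kmeans2 for the clustering (any K-means implementation would do; the code assumes every cluster receives at least one point):

```python
# Initialize (pi, mu, Sigma) from a K-means clustering of X (an N x d array).
import numpy as np
from scipy.cluster.vq import kmeans2

def init_from_kmeans(X, K):
    centers, labels = kmeans2(X, K, minit='++')
    N = X.shape[0]
    pi, mu, sigma = [], [], []
    for k in range(K):
        Xk = X[labels == k]               # data points assigned to cluster k
        Nk = Xk.shape[0]
        mu.append(centers[k])             # cluster center as the mean mu_k
        diff = Xk - centers[k]
        sigma.append(diff.T @ diff / Nk)  # within-cluster covariance
        pi.append(Nk / N)                 # weight pi_k = N_k / N
    return np.array(pi), np.array(mu), sigma
```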
References
Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.