Generalized Learning Vector Quantization
Here is the correct LaTeX, nicely formatted:
- Reference vectors are updated based on the steepest descent method.
- One problem with LVQ is that reference vectors diverge and thus degrade recognition ability.
- Consider the relative distance difference \(\mu(x)\), defined as \[\mu(x) = \frac{d_1 - d_2}{d_1 + d_2},\] where \(d_1\) is the distance from \(x\) to the nearest reference vector of the same class as \(x\), and \(d_2\) is the distance from \(x\) to the nearest reference vector of a different class.
- \(\mu(x)\) ranges between \(-1\) and \(1\); if it is negative, \(x\) is classified correctly (then \(d_1 < d_2\), i.e. the nearest same-class reference vector is closer than the nearest other-class one).
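A minimal NumPy sketch of how \(\mu(x)\) could be computed; `prototypes`, `prototype_labels`, and the function name are illustrative and not from the paper:

```python
import numpy as np

def relative_distance_difference(x, prototypes, prototype_labels, label):
    """Compute mu(x) = (d1 - d2) / (d1 + d2) using squared Euclidean distances."""
    dists = np.sum((prototypes - x) ** 2, axis=1)   # squared distance to every reference vector
    same = prototype_labels == label
    d1 = dists[same].min()        # nearest reference vector of the same class as x
    d2 = dists[~same].min()       # nearest reference vector of a different class
    return (d1 - d2) / (d1 + d2)  # negative value => x is classified correctly
```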
- The criterion for learning is formulated as minimizing a cost function \(S\) defined by \[S = \sum_{i=1}^{N} f(\mu(x_i)),\] where \(N\) is the number of input vectors used for training and \(f(\mu)\) is a monotonically increasing function.
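Building on the sketch above, the cost could be accumulated over a training set as follows; the identity is used here only as a simple example of a monotonically increasing \(f\):

```python
def cost(samples, labels, prototypes, prototype_labels, f=lambda mu: mu):
    """S = sum_i f(mu(x_i)); reuses relative_distance_difference from the sketch above."""
    return sum(
        f(relative_distance_difference(x, prototypes, prototype_labels, y))
        for x, y in zip(samples, labels)
    )
```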
- To minimize \(S\), \(w_1\) and \(w_2\) are updated based on the steepest descent method with a small positive constant \(\alpha\) as follows: \[w_i \leftarrow w_i - \alpha \frac{\partial S}{\partial w_i}, \quad i = 1, 2.\]
- The detailed computation of the gradient descent step is given in the paper; a sketch of the resulting update rules follows below.
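As a sketch (not a verbatim quote of the paper): for the squared Euclidean distance \(d_i = \lVert x - w_i \rVert^2\), applying the chain rule to \(S\) yields, per training vector \(x\), \[w_1 \leftarrow w_1 + \alpha \,\frac{\partial f}{\partial \mu}\, \frac{4\, d_2}{(d_1 + d_2)^2}\,(x - w_1), \qquad w_2 \leftarrow w_2 - \alpha \,\frac{\partial f}{\partial \mu}\, \frac{4\, d_1}{(d_1 + d_2)^2}\,(x - w_2),\] where constant factors can also be absorbed into \(\alpha\).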
- In the paper, \(\frac{\partial f}{\partial \mu} = f(\mu, t)\,(1 - f(\mu, t))\) was used in the experiments, where \(t\) is the learning time and \(f(\mu, t)\) is the sigmoid function \(1/(1 + \exp(-\mu t))\).
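Putting the pieces together, a minimal Python sketch of one update step, assuming the squared Euclidean distance and the chain-rule result sketched above; all names are illustrative:

```python
import numpy as np

def sigmoid(mu, t):
    """f(mu, t) = 1 / (1 + exp(-mu * t))."""
    return 1.0 / (1.0 + np.exp(-mu * t))

def glvq_update(x, w1, w2, alpha, t):
    """One steepest-descent step for w1 (nearest same-class vector) and
    w2 (nearest other-class vector), with d_i = ||x - w_i||^2."""
    d1 = np.sum((x - w1) ** 2)
    d2 = np.sum((x - w2) ** 2)
    mu = (d1 - d2) / (d1 + d2)
    f = sigmoid(mu, t)
    dfdmu = f * (1.0 - f)                       # df/dmu = f(mu,t)(1 - f(mu,t))
    scale = alpha * dfdmu * 4.0 / (d1 + d2) ** 2
    w1 = w1 + scale * d2 * (x - w1)             # pull the same-class vector towards x
    w2 = w2 - scale * d1 * (x - w2)             # push the other-class vector away from x
    return w1, w2
```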
- Implementation and testing based on fair data samples
- Comparison to the original LVQ
- IDEA: use distance measures other than the squared Euclidean distance? Maybe a fairness measure could be integrated directly into the distance; this leads to a new computation of the gradient descent update (see the chain-rule sketch below)!
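A sketch of why only part of the derivation would change: for any differentiable distance measure \(d(x, w)\), the gradient factorizes as \[\frac{\partial S}{\partial w_i} = \frac{\partial f}{\partial \mu}\,\frac{\partial \mu}{\partial d_i}\,\frac{\partial d_i}{\partial w_i}, \qquad \frac{\partial \mu}{\partial d_1} = \frac{2 d_2}{(d_1 + d_2)^2}, \qquad \frac{\partial \mu}{\partial d_2} = \frac{-2 d_1}{(d_1 + d_2)^2},\] so only the factor \(\partial d_i / \partial w_i\) has to be re-derived when the squared Euclidean distance is replaced by a different (e.g. fairness-aware) measure.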