
Support Vector Machine; SVM

Linear SVM

Introduction

A hyperplane in an $n$-dimensional feature space can be represented by the following equation:

$f(\mathbf{x}) = \mathbf{x}^T \mathbf{w} + b = \sum_{i=1}^n x_i w_i + b = 0$


Dividing by $|| \mathbf{w} ||$, we get

$\frac {\mathbf{x}^T \mathbf{w}} {|| \mathbf{w} ||} = P_{\mathbf{w}} (\mathbf{x}) = - \frac {b}{|| \mathbf{w} ||}$


indicating that the projection of any point $\mathbf{x}$ on the plane onto the vector $\mathbf{w}$ always has the length $\frac {|b|}{|| \mathbf{w} ||}$. Moreover, we see that $\mathbf{w}$ is the normal direction of the plane.

image


Note that the equation of the hyperplane is not unique. For instance, $c \cdot f(\mathbf{x}) = 0$ represents the same plane for any nonzero scalar $c$.

The $n$-dimensional space is partitioned into two regions by the plane. Specifically, we define a map

$y = \text{sign}(f(\mathbf{x})) \in \{ -1, 1 \}$


A point $\mathbf{x}$ of unknown class will be classified as P (Positive) if $f(\mathbf{x}) > 0$, or N (Negative) if $f(\mathbf{x}) < 0$.

$\mathbf{Example}$

A straight line in a $2$-dimensional space $\mathbf{x} = [x_1, x_2]^T$, perpendicular to $\mathbf{w} = [1, 2]^T$ with $b = -1$, is described by the following equation:

$f(\mathbf{x}) = \mathbf{x}^T \mathbf{w} + b = x_1 + 2x_2 - 1 = 0$


And it divides the $2$-dimensional plane into two halves. The distance between the origin and the line is

$\frac {|b|}{|| \mathbf{w} ||} = \frac {1} {\sqrt{w_1^2 + w_2^2} } = \frac{1}{\sqrt{5}} \approx 0.447$


image
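
As a quick numerical check of this example, here is a minimal sketch (assuming NumPy is available) that computes the distance from the origin to the line and classifies a couple of test points by the sign of $f(\mathbf{x})$:

```python
import numpy as np

# Hyperplane f(x) = x^T w + b with w = [1, 2]^T and b = -1 (the example above)
w = np.array([1.0, 2.0])
b = -1.0

# Distance from the origin to the line: |b| / ||w||
print(abs(b) / np.linalg.norm(w))  # 0.447...

# Classify points by the sign of f(x): +1 (positive side) or -1 (negative side)
for x in [np.array([1.0, 1.0]), np.array([0.0, 0.0])]:
    f = x @ w + b
    print(x, int(np.sign(f)))
# f([1,1]) = 1 + 2 - 1 = 2  -> +1;  f([0,0]) = -1 -> -1
```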


Given a set of $K$ training samples from two linearly separable classes P and N:

$\{ (\mathbf{x}_k, y_k) | y_k \in \{ -1, 1\}, k = 1, \cdots, K \}$

where $y_k$ labels $\mathbf{x}_k$ as belonging to one of the two classes. We want to find a hyperplane, in terms of $\mathbf{w}$ and $b$, that linearly separates the two classes. Before the classifier is properly trained, the actual output $\widehat{y} = \text{sign}(f(\mathbf{x}))$ may not be the same as the desired output $y$. There are four possible cases:

|   | Input $(\mathbf{x}, y)$ | Output $\widehat{y} = \text{sign}(f(\mathbf{x}))$ | Result |
|---|---|---|---|
| 1 | $(\mathbf{x}, y=1)$ | $\widehat{y} = 1 = y$ | correct |
| 2 | $(\mathbf{x}, y=-1)$ | $\widehat{y} = 1 \neq y$ | incorrect |
| 3 | $(\mathbf{x}, y=1)$ | $\widehat{y} = -1 \neq y$ | incorrect |
| 4 | $(\mathbf{x}, y=-1)$ | $\widehat{y} = -1 = y$ | correct |

Note that a correct decision hyperplane satisfies

$y_i (\mathbf{x}_i^T \mathbf{w} + b) \geq 0$ for all $i = 1, 2, \cdots, K$.

image


Among all correct decision hyperplanes, we want to find the optimal one $H_0$ that separates the 2 classes with the maximal margin (the distance between the decision plane and the closest sample points).

It is reasonable that the optimal hyperplane should be in the middle of the 2 classes, so that the distance from the hyperplane to the closest point on either side is the same. So, we define two additional hyperplanes $H_+$ and $H_-$ that are parallel to $H_0$ and go through the point closest to the hyperplane $H_0$ on either side.

$H_+: \mathbf{x}^T \mathbf{w} + b = 1$ and $H_-: \mathbf{x}^T \mathbf{w} + b = -1$

Note that these two hyperplanes can uniquely determine $H_0$.

image


$\mathbf{Definition}$ (Support vectors). All points $\mathbf{x}_i \in P$ on the positive side should satisfy

$\mathbf{x}_{i}^T \mathbf{w} + b \geq 1, y_i = 1$

and all points $\mathbf{x}_i \in N$ on the negative side should satisfy

$\mathbf{x}_{i}^T \mathbf{w} + b \leq -1, y_i = -1$


These can be combined into one inequality: $y_i (\mathbf{x}_{i}^T \mathbf{w} + b) \geq 1 \ (i = 1, \cdots, K)$

The equality holds for those points on the hyperplanes $H_+$ and $H_-$. Such points are called support vectors, for which

$\mathbf{x}_{i}^T \mathbf{w} + b = y_i$


$\mathbf{Remark.}$ Recall that the distances from the origin to the three parallel hyperplanes $H_+, H_0$ and $H_-$ are $\frac{|b-1|}{||\mathbf{w}||}, \frac{|b|}{||\mathbf{w}||},$ and $\frac{|b+1|}{||\mathbf{w}||}$, respectively. Moreover, the distance between hyperplanes $H_-$ and $H_+$ is $\frac{2}{||\mathbf{w}||}$.

image


Finding SVM

Our goal is to maximize this margin $\frac{2}{||\mathbf{w}||}$, or equivalently to minimize the norm $||\mathbf{w}||$, from which we have the following constrained optimization problem:

minimize $\frac{1}{2} \mathbf{w}^T \mathbf{w} = \frac{1}{2} ||\mathbf{w}||^2$
subject to $y_i (\mathbf{x}_{i}^T \mathbf{w} + b) \geq 1$, $i = 1, \cdots, K$
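
Before reformulating it, here is a minimal sketch of how this constrained problem could be solved numerically on a made-up toy data set; it assumes NumPy and SciPy and uses a general-purpose solver (SLSQP) rather than a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data: two clusters in 2D with labels +1 / -1
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [0.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1, 1, 1, -1, -1, -1], dtype=float)

# Parameters packed as p = [w_1, w_2, b]
def objective(p):
    w = p[:2]
    return 0.5 * w @ w                      # (1/2) ||w||^2

def margin_constraints(p):
    w, b = p[:2], p[2]
    return y * (X @ w + b) - 1.0            # y_i (x_i^T w + b) - 1 >= 0

res = minimize(objective, x0=np.zeros(3),
               constraints={"type": "ineq", "fun": margin_constraints},
               method="SLSQP")
w_opt, b_opt = res.x[:2], res.x[2]
print(w_opt, b_opt, 2.0 / np.linalg.norm(w_opt))  # weights, bias, margin width
```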


Next, for simplicity, we convert this into an unconstrained optimization problem by using a penalty term as follows:

minimize $\frac{1}{2} \mathbf{w}^T \mathbf{w} + \text{penalty term}$


where the penalty term is given by

$\underset{\alpha_i \geq 0}{\text{max }} \alpha_i (1 - y_i (\mathbf{x}_{i}^T \mathbf{w} + b)) = 0$ if $y_i (\mathbf{x}_{i}^T \mathbf{w} + b) \geq 1$, otherwise $\infty$


Now we have:

$\underset{\mathbf{w}, b}{\text{min }}[ \frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_{i=1}^K \underset{\alpha_i \geq 0}{\text{max }} \alpha_i (1 - y_i ( \mathbf{x}_i^T \mathbf{w} + b)) ]$
$= \underset{\mathbf{w}, b}{\text{min }} \underset{\alpha_i \geq 0}{\text{max }} [ \frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_{i=1}^K \alpha_i (1 - y_i ( \mathbf{x}_i^T \mathbf{w} + b)) ]$
$\geq \underset{\alpha_i \geq 0}{\text{max }} \underset{\mathbf{w}, b}{\text{min }} [ \frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_{i=1}^K \alpha_i (1 - y_i ( \mathbf{x}_i^T \mathbf{w} + b)) ]$


(For the inequality, refer to the max-min inequality.)
Here, the $\alpha_i$ are called Lagrange multipliers, and equality holds under certain conditions (strong duality). Let

$L_p (\mathbf{w}, b, \mathbf{\alpha}) = \frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_{i=1}^K \alpha_i ( 1 - y_i (\mathbf{x}_i^T \mathbf{w} + b)).$


Solving the minimization problem first by using $\nabla_\mathbf{w} L_p (\mathbf{w}, b, \mathbf{\alpha}) = 0, \frac{\partial}{\partial b} L_p (\mathbf{w}, b, \mathbf{\alpha}) = 0$, we get

$\mathbf{w} = \sum_{i=1}^K \alpha_i y_i \mathbf{x}_i$ and $\sum_{i=1}^K \alpha_i y_i = 0$.


Substituting these two equations back into the expression of $L_p (\mathbf{w}, b, \mathbf{\alpha})$, we get the following maximization problem with respect to $\alpha_i$:

$\underset{\alpha_i \geq 0}{\text{max }} (\sum_{i=1}^K \alpha_i - \frac{1}{2} \sum_{i=1}^K \sum_{j=1}^K \alpha_i \alpha_j y_i y_j \mathbf{x}_{i}^T \mathbf{x}_{j})$


subject to the constraint $\sum_{i=1}^K \alpha_i y_i = 0$.

For the solution $\alpha_i \ (i = 1, 2, \cdots, K)$ of the above problem, we obtain the optimal $\mathbf{w}^* = \sum_{i=1}^K \alpha_i y_i \mathbf{x}_i$. To get the constant $b$, observe that a support vector $\mathbf{x}_i$ satisfies $y_i (\mathbf{x}_i^T \mathbf{w} + b) = 1$, which is equivalent to $\mathbf{x}_i^T \mathbf{w} + b = y_i$. Thus, the optimal $b^* = y_i - \mathbf{x}_i^T \mathbf{w}^*$.

$\mathbf{Remark.}$ Recall that support vectors are those points $\mathbf{x}_i$ on either of the two hyperplanes $H_+$ and $H_-$ (for which the equality $y_i (\mathbf{x}_i^T \mathbf{w} + b) = 1$ holds). Then, we see that $\alpha_i = 0$ for non-support vectors $\mathbf{x}_i$, where $y_i (\mathbf{x}_i^T \mathbf{w} + b) > 1$: the penalty term $\alpha_i (1 - y_i (\mathbf{x}_i^T \mathbf{w} + b))$ must be $0$ at the optimum, and since $1 - y_i (\mathbf{x}_i^T \mathbf{w} + b) < 0$ while $\alpha_i \geq 0$, this forces $\alpha_i = 0$. So the training depends only on the support vectors, while all other samples away from the planes $H_+$ and $H_-$ do not matter.

From this fact, we can derive the length of the optimal weight vector $\mathbf{w}$:

First note that $\mathbf{w} = \sum_{j=1}^K \alpha_j y_j \mathbf{x}_j = \sum_{j \in I} \alpha_j y_j \mathbf{x}_j$, where $I$ is the index set of the support vectors, so that for each $i \in I$ we have $y_i (\sum_{j \in I} \alpha_j y_j \mathbf{x}_i^T \mathbf{x}_j + b) = 1$. Then,

$|| \mathbf{w} ||^2 = \mathbf{w}^T \mathbf{w} = (\sum_{i \in I} \alpha_i y_i \mathbf{x}_i)^T (\sum_{j \in I} \alpha_j y_j \mathbf{x}_j) = \sum_{i \in I} \alpha_i y_i \sum_{j \in I} \alpha_j y_j \mathbf{x}_i^T \mathbf{x}_j$
$= \sum_{i \in I} \alpha_i y_i (y_i - b) = \sum_{i \in I} \alpha_i (1 - y_i b) = \sum_{i \in I} \alpha_i - b \sum_{i \in I} \alpha_i y_i$
$= \sum_{i \in I} \alpha_i,$
where the second line uses the support vector condition $\sum_{j \in I} \alpha_j y_j \mathbf{x}_i^T \mathbf{x}_j = y_i - b$ (together with $y_i^2 = 1$), and the last line uses the constraint $\sum_{i \in I} \alpha_i y_i = \sum_{i=1}^K \alpha_i y_i = 0$.
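
A minimal sketch of the whole procedure on a made-up toy data set (assuming NumPy/SciPy and a general-purpose SLSQP solver rather than a specialized QP routine): solve the dual, recover $\mathbf{w}^*$ and $b^*$ from the $\alpha_i$, and check the identity $||\mathbf{w}||^2 = \sum_{i \in I} \alpha_i$ derived above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (labels +1 / -1)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [0.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1, 1, 1, -1, -1, -1], dtype=float)
K = len(y)
Q = (y[:, None] * X) @ (y[:, None] * X).T   # Q_ij = y_i y_j x_i^T x_j

# Dual: maximize sum(alpha) - 1/2 alpha^T Q alpha  ==  minimize the negative
def neg_dual(alpha):
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

res = minimize(neg_dual, x0=np.zeros(K),
               bounds=[(0.0, None)] * K,                           # alpha_i >= 0
               constraints={"type": "eq", "fun": lambda a: a @ y},  # sum_i alpha_i y_i = 0
               method="SLSQP")
alpha = res.x

w_star = (alpha * y) @ X                      # w* = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                             # support vectors: alpha_i > 0
b_star = np.mean(y[sv] - X[sv] @ w_star)      # b* = y_i - x_i^T w*, averaged over SVs
print(w_star, b_star)
print(w_star @ w_star, alpha[sv].sum())       # ||w||^2  vs  sum of alphas over SVs
```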

Logistic Regression vs. SVM

Logistic regression takes the decision hyperplane that maximizes the probability of the data (we use the likelihood function). So the farther the data lies from the decision hyperplane (on the correct side), the happier logistic regression is.

The support vector machine takes the decision hyperplane that maximizes the distance of the closest points (the margin of the support vectors) from the hyperplane. If a point is not a support vector, it doesn’t really matter.
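
A rough illustration of this difference, assuming scikit-learn and a made-up toy data set: a point far on the correct side contributes to the likelihood in logistic regression, but it is not a support vector and therefore leaves the SVM boundary untouched.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy 2D data: two separable clusters plus one far-away positive point
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [10.0, 10.0],
              [0.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1, 1, 1, 1, -1, -1, -1])

logreg = LogisticRegression().fit(X, y)
svm = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin

# Logistic regression's likelihood rewards points far on the correct side,
# while the SVM boundary depends only on the closest points (support vectors);
# note that [10, 10] does not appear among the support vectors.
print("logreg:", logreg.coef_, logreg.intercept_)
print("svm:   ", svm.coef_, svm.intercept_)
print("support vectors:", svm.support_vectors_)
```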

Soft Margin SVM

When the two classes are not linearly separable (e.g., due to noise), the condition for the optimal hyperplane can be relaxed by including an extra term:

$y_i (\mathbf{x}_{i}^T \mathbf{w} + b) \geq 1 - \xi_i, \quad i = 1, \cdots, K$

image


For minimum error, the slack variables $\xi_i \geq 0$ should also be minimized as well as $||\mathbf{w}||$, and the objective function becomes:

minimize $\frac{1}{2} \mathbf{w}^T \mathbf{w} + C \sum_{i=1}^K \xi_i$
subject to $y_i (\mathbf{x}_{i}^T \mathbf{w} + b) \geq 1 - \xi_i$ and $\xi_i \geq 0$; $i = 1, \cdots, K$


Here, $C$ is a regularization parameter that controls the trade-off between maximizing the margin and minimizing the training error. A small value of $C$ tends to emphasize the margin while ignoring the outliers in the training data, while a large value of $C$ may tend to overfit the training data.
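
A minimal sketch of this trade-off, assuming scikit-learn (whose SVC implements a soft margin of this form) and a made-up toy data set with one outlier:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data with one positive "outlier" sitting inside the negative cluster
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [0.2, 0.2],
              [0.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1, 1, 1, 1, -1, -1, -1])

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    print(f"C={C:>6}: margin width = {2.0 / np.linalg.norm(w):.3f}, "
          f"training accuracy = {clf.score(X, y):.2f}, "
          f"#support vectors = {len(clf.support_vectors_)}")
# Small C emphasizes a wide margin (tolerating the outlier);
# large C penalizes the slack terms more heavily.
```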

Using Lagrange multipliers $\alpha_i$ and $\mu_i$, we have the following optimization problem:

$\underset{\mathbf{w}, b, \xi_i}{\text{min }} \underset{\alpha_i, \mu_i}{\text{max }} L_p (\mathbf{w}, b, \xi_i, \alpha_i, \mu_i)$
s.t. $\alpha_i \geq 0$, $\mu_i \geq 0$, $1 \leq i \leq K$


where $L_p (\mathbf{w}, b, \xi_i, \alpha_i, \mu_i) = \frac{1}{2} \mathbf{w}^T \mathbf{w} + C \sum_{i=1}^K \xi_i - \sum_{i=1}^K \alpha_i (y_i (\mathbf{w}^T \mathbf{x}_i + b) - 1 + \xi_i) - \sum_{i=1}^K \mu_i \xi_i$.

Observe that

$\frac {\partial L_p} {\partial \mathbf{w}} = 0 \to \mathbf{w} = \sum_{i=1}^K \alpha_i y_i \mathbf{x}_i$
$\frac {\partial L_p} {\partial b} = 0 \to \sum_{i=1}^K \alpha_i y_i = 0$
$\frac {\partial L_p} {\partial \xi_i} = 0 \to C - \alpha_i - \mu_i = 0$


Using the above results, we have the following dual optimization problem:
$\underset{\alpha_i}{\text{max }} L_D (\alpha_i) = \underset{\alpha_i}{\text{max }} \sum_{i=1}^K \alpha_i - \frac{1}{2} \sum_{i=1}^K \sum_{j=1}^K \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T \mathbf{x}_j$
subject to $0 \leq \alpha_i \leq C$ and $\sum_{i=1}^K \alpha_i y_i = 0$ (the bound $\alpha_i \leq C$ follows from $C - \alpha_i - \mu_i = 0$ and $\mu_i \geq 0$).
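
These constraints can be checked numerically; a minimal sketch assuming scikit-learn, whose fitted SVC exposes $y_i \alpha_i$ for the support vectors as dual_coef_:

```python
import numpy as np
from sklearn.svm import SVC

# Toy non-separable data (one positive outlier in the negative cluster)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0], [0.2, 0.2],
              [0.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1, 1, 1, 1, -1, -1, -1])

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

alpha = np.abs(clf.dual_coef_[0])      # dual_coef_ stores y_i * alpha_i for the support vectors
print(alpha)
print(np.all((alpha >= 0) & (alpha <= C + 1e-8)))   # 0 <= alpha_i <= C holds
print(np.isclose(clf.dual_coef_[0].sum(), 0.0))     # sum_i alpha_i y_i = 0
```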

Kernel Trick

The hard margin algorithm above converges only for linearly separable data, and both formulations produce only linear decision boundaries. If the data set is not linearly separable, we can map the samples $\mathbf{x}$ into a feature space of higher dimensions:

$\mathbf{x} \to \phi (\mathbf{x})$


in which the classes can be linearly separated.

image


The decision function in the new space becomes:

$f(\mathbf{x}) = \phi (\mathbf{x})^T \mathbf{w} + b = \sum_{i=1}^K \alpha_i y_i \, \phi(\mathbf{x})^T \phi(\mathbf{x}_i) + b$


where

$\mathbf{w} = \sum_{j=1}^K \alpha_j y_j \phi (\mathbf{x}_j)$


and $b$ are the parameters of the decision plane in the new space.
Let $K(\mathbf{x_1}, \mathbf{x_2}) = \phi(\mathbf{x_1})^T \phi(\mathbf{x_2})$.
It then follows that $f(\mathbf{x}) = \phi (\mathbf{x})^T \mathbf{w} + b = \sum_{j=1}^K \alpha_j y_j K(\mathbf{x}, \mathbf{x_j}) + b$.

The parameter $b$ can be found from any support vector $\mathbf{x}_i$:
$b = y_i - \phi( \mathbf{x}_i )^T \mathbf{w} = y_i - \sum_{j=1}^K \alpha_j y_j \phi (\mathbf{x}_i)^T \phi (\mathbf{x}_j) = y_i - \sum_{j=1}^K \alpha_j y_j K(\mathbf{x}_i, \mathbf{x}_j)$.

As in the linear SVM, we solve the following maximization problem:

$\underset{\alpha_i \geq 0}{\text{max }} [\sum_{i=1}^K \alpha_i - \frac{1}{2} \sum_{i=1}^K \sum_{j=1}^K \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j)]$


Unlike the linear SVM, we cannot compute $\mathbf{w}$ explicitly in this case, since it is a linear combination of the (possibly implicit) feature vectors $\phi(\mathbf{x}_j)$; the decision function is instead evaluated through the kernel. Even though kernels allow very flexible models, we have to determine the kernel parameters, which is one of the drawbacks.
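
To make this concrete, here is a minimal sketch (assuming scikit-learn and NumPy, on a made-up toy data set) that fits an RBF-kernel SVC and re-evaluates $f(\mathbf{x}) = \sum_{j} \alpha_j y_j K(\mathbf{x}, \mathbf{x}_j) + b$ by hand from the stored support vectors, comparing it with the library's decision_function:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data that is not linearly separable: inner cluster (+1) vs far-out points (-1)
X = np.array([[0.0, 0.0], [0.3, 0.1], [-0.2, 0.2], [0.1, -0.3],
              [2.0, 0.0], [-2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

def rbf(a, b):
    return np.exp(-gamma * np.sum((a - b) ** 2))      # K(x, z) = exp(-gamma ||x - z||^2)

x_test = np.array([0.5, 0.5])

# f(x) = sum_j (alpha_j y_j) K(x, x_j) + b, summing over support vectors only;
# dual_coef_ stores alpha_j y_j and intercept_ stores b
f_manual = sum(coef * rbf(x_test, sv)
               for coef, sv in zip(clf.dual_coef_[0], clf.support_vectors_)) + clf.intercept_[0]

print(f_manual, clf.decision_function([x_test])[0])   # the two values should agree
```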

$\mathbf{Example\ 1.}$ Hard margin SVM - linearly separable case

Consider a classification problem in a 2D space, which is linearly separable. Here, the goal is to find a linear decision boundary.

image


Here, the black contour is the linear decision boundary which comes from a hard margin support vector machine.

$\mathbf{Example\ 2.}$ Soft margin SVM - linearly non-separable case

Now, by adding one outlier to the previous data set, we make the data no longer linearly separable. In this case, if we still want a linear decision boundary, we should use a soft margin support vector machine.

image


In this case, the soft margin SVM model suggests that misclassifying the positive outlier is the best trade-off.

$\mathbf{Example\ 3.}$ Kernelized SVM

We now consider a classification problem with a data set that is far from linearly separable and needs a non-linear decision boundary, as shown in the following two cases.

image


To get a non-linear decision boundary, we use the kernel trick in support vector machine learning. For both cases, we use the following polynomial kernel of degree 3.

$K(\mathbf{x}, \mathbf{z}) = (\mathbf{x}^T \mathbf{z})^3 = (x_1 z_1 + x_2 z_2)^3$


We then get the following results.

image
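
A rough sketch of how such a model might be set up, assuming scikit-learn; its polynomial kernel is $(\gamma \, \mathbf{x}^T \mathbf{z} + r)^d$, so the kernel above corresponds to degree=3, gamma=1, coef0=0. The data set here is made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Made-up data whose classes are separated by a cubic surface
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] ** 3 + X[:, 1] ** 3 + 0.5 > 0, 1, -1)

# sklearn's polynomial kernel is (gamma * x^T z + coef0)^degree;
# gamma=1, coef0=0, degree=3 gives exactly K(x, z) = (x^T z)^3
clf = SVC(kernel="poly", degree=3, gamma=1.0, coef0=0.0, C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```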


$\mathbf{Example\ 4.}$ Kernelized SVM 2

In a kernelized SVM, selecting good hyperparameter values is just as important as choosing a proper kernel. Consider the following radial basis function (RBF), which is a widely used kernel.

$K(\mathbf{x}, \mathbf{z}) = e^{-||\mathbf{x} - \mathbf{z}||^2 / 2 \gamma^2}$


For a soft margin kernelized SVM with the radial basis function, choosing $\gamma$ and the regularization parameter $C$ is very important, as shown in the following example.

image


By varying the values of $C$ and $\gamma$, we get different decision boundaries as follows.

image


Here, the score is the accuracy of the decision boundary on the training data.
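
A minimal sketch of such a grid, assuming scikit-learn and a made-up ring-shaped data set. Note that scikit-learn's SVC parameterizes the RBF kernel as $e^{-\gamma ||\mathbf{x} - \mathbf{z}||^2}$, so its gamma corresponds to $1/(2\gamma^2)$ in the formula above.

```python
import numpy as np
from sklearn.svm import SVC

# Made-up non-linearly separable data: inner disk (+1) vs outer ring (-1)
rng = np.random.default_rng(0)
radius = np.concatenate([rng.uniform(0.0, 1.0, 100), rng.uniform(1.5, 2.5, 100)])
angle = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])
y = np.concatenate([np.ones(100), -np.ones(100)])

for C in [0.1, 1.0, 100.0]:
    for gamma in [0.01, 1.0, 100.0]:
        clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
        print(f"C={C:>6}, gamma={gamma:>6}: training score = {clf.score(X, y):.3f}")
# Very large gamma and C tend to overfit (near-perfect training score, wiggly boundary);
# very small gamma and C tend to underfit (lower training score, overly smooth boundary).
```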
