# Machine Learning

## Loss functions

### (Mean) Squared Error

The squared error for a prediction $\hat{y}$ and target $y$ is:

$$\ell(\hat{y}, y) = (\hat{y} - y)^2$$

Averaged over a sample of $n$ points this gives the mean squared error $\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2$.

If our model is linear regression, $\hat{y} = w^\top x$, then this loss is convex in $w$:

$$\nabla_w^2 \left[\frac{1}{n}\sum_{i=1}^{n}(w^\top x_i - y_i)^2\right] = \frac{2}{n} X^\top X$$

so the Hessian is positive semi-definite.
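A minimal NumPy sketch of this loss and its gradient (the names `mse`, `X`, `y`, `w` are illustrative, not from the notes):

```python
# Sketch of the squared-error loss and its gradient for a linear model.
import numpy as np

def mse(w, X, y):
    """Mean squared error (1/n) * sum((X w - y)^2)."""
    residual = X @ w - y
    return np.mean(residual ** 2)

def mse_gradient(w, X, y):
    """Gradient (2/n) * X^T (X w - y); the Hessian (2/n) X^T X is PSD."""
    n = X.shape[0]
    return (2.0 / n) * X.T @ (X @ w - y)
```

At the minimizer the gradient vanishes, and the Hessian does not depend on $w$, which is why convexity holds globally.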

### Cross Entropy

The cross-entropy loss for a predicted probability $\hat{y} \in (0, 1)$ and label $y \in \{0, 1\}$ is

$$\ell(\hat{y}, y) = -\left[y \log \hat{y} + (1 - y)\log(1 - \hat{y})\right]$$

- Notes

- If our model is $\hat{y} = \sigma(w^\top x)$, where $\sigma(z) = \frac{1}{1 + e^{-z}}$ is the sigmoid function, then this loss is convex in $w$: the Hessian is

$$\nabla_w^2 \ell = \frac{1}{n}\sum_{i=1}^{n} \sigma(w^\top x_i)\left(1 - \sigma(w^\top x_i)\right) x_i x_i^\top$$

which is a PSD matrix.
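A sketch of this loss and its Hessian in NumPy (names are illustrative; the `eps` guard against $\log 0$ is an implementation detail, not part of the math):

```python
# Binary cross-entropy with a sigmoid model, and its PSD Hessian.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(w, X, y):
    """Average loss -[y log p + (1 - y) log(1 - p)] with p = sigmoid(X w)."""
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def cross_entropy_hessian(w, X):
    """Hessian (1/n) * X^T diag(p (1 - p)) X, a PSD matrix."""
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / X.shape[0]
```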

### Hinge Loss
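For labels $y \in \{-1, +1\}$ and a linear score $w^\top x$, the hinge loss penalizes any point whose margin is below 1: $\ell(w; x, y) = \max(0,\; 1 - y\, w^\top x)$. A minimal NumPy sketch (names are illustrative):

```python
# Average hinge loss for a linear classifier with labels in {-1, +1}.
import numpy as np

def hinge_loss(w, X, y):
    """mean(max(0, 1 - y * (X w)))."""
    margins = y * (X @ w)
    return np.mean(np.maximum(0.0, 1.0 - margins))
```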

## Optimization

### Gradient Descent

The negative gradient is also known as the direction of steepest descent.

To minimize a loss function, take steps in the opposite direction of the gradient:

$$w_{t+1} = w_t - \eta \nabla L(w_t)$$

where $\eta$ is the learning rate.
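A minimal sketch of this update rule on a one-dimensional quadratic (function and names are illustrative):

```python
# Plain gradient descent: repeatedly step against the gradient.
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Apply w <- w - lr * grad(w) for a fixed number of steps."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

With this learning rate the iterates contract toward the minimizer $w = 3$ geometrically.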

### Stochastic Gradient Descent

Oftentimes with large amounts of data, you can't take the gradient of all your data at once.
In this case, we use batches of data.

We can form batches by shuffling the indices into a random order (sampling without replacement) and slicing off consecutive batches.

```
for epoch = 1 to n:
    generate batches (reshuffle the indices)
    for i = 1 to m:
        take the gradient w.r.t. batch i
        update the weights using that gradient
```

- Batch Size
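The epoch/batch loop above can be sketched for a mean-squared-error objective (all names and hyperparameter values are illustrative):

```python
# Mini-batch SGD on MSE for a linear model y ~ X w.
import numpy as np

def sgd(X, y, lr=0.1, batch_size=10, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)  # random order, without replacement
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = (2.0 / len(idx)) * Xb.T @ (Xb @ w - yb)
            w -= lr * grad
    return w
```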

### Block Coordinate Descent

### Learning Rate

## SVM

Andrew Ng Notes

Support Vector Machine

This is a linear classifier like the perceptron, except the goal is not just to classify our data correctly but also to maximize the margin:

$$h_{w,b}(x) = g(w^\top x + b)$$

where $g$ is the sign function ($g(z) = 1$ if $z \ge 0$, and $g(z) = -1$ otherwise).

### Margins

The margin, denoted by $\gamma$, is the distance between our classifier and the closest point.

- Functional Margin

The functional margin corresponding to one example $(x^{(i)}, y^{(i)})$ is:

$$\hat{\gamma}^{(i)} = y^{(i)}\left(w^\top x^{(i)} + b\right)$$

The functional margin for our entire sample is the smallest margin over the examples: $\hat{\gamma} = \min_i \hat{\gamma}^{(i)}$.

- Geometric Margin

The geometric margin is the actual distance:

$$\gamma^{(i)} = y^{(i)}\left(\frac{w^\top x^{(i)} + b}{\|w\|}\right)$$

- Notes

- $w$ is the normal vector of our hyperplane, so $\frac{w^\top x}{\|w\|}$ is the length of the projection of $x$ onto the normal vector.

- This is the distance to our hyperplane.
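The geometric margin of a whole sample can be computed directly from this formula (a sketch; names are illustrative):

```python
# Smallest signed distance from any sample point to the hyperplane w.x + b = 0.
import numpy as np

def geometric_margin(w, b, X, y):
    """min over i of y_i * (w . x_i + b) / ||w||, with labels in {-1, +1}."""
    return np.min(y * (X @ w + b) / np.linalg.norm(w))
```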

### Lagrangians

The goal for SVM is to maximize the margin:

$$\max_{\gamma, w, b} \; \gamma \quad \text{s.t.} \quad y^{(i)}\left(w^\top x^{(i)} + b\right) \ge \gamma, \;\; \|w\| = 1$$

which is equivalent to, by setting the functional margin $\hat{\gamma} = 1$,

$$\min_{w, b} \; \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y^{(i)}\left(w^\top x^{(i)} + b\right) \ge 1$$

In general, given an optimization in the (primal) form:

$$\min_w f(w) \quad \text{s.t.} \quad g_i(w) \le 0, \;\; h_i(w) = 0$$

we can rewrite the optimization as

$$\min_w \max_{\alpha, \beta \,:\, \alpha_i \ge 0} \mathcal{L}(w, \alpha, \beta)$$

where $\mathcal{L}(w, \alpha, \beta) = f(w) + \sum_i \alpha_i g_i(w) + \sum_i \beta_i h_i(w)$ is called the Lagrangian.

Since $\max \min \le \min \max$ in general,

we have:

$$\max_{\alpha, \beta \,:\, \alpha_i \ge 0} \min_w \mathcal{L}(w, \alpha, \beta) \;\le\; \min_w \max_{\alpha, \beta \,:\, \alpha_i \ge 0} \mathcal{L}(w, \alpha, \beta)$$

The left term is called the dual problem.

If the solution to the dual problem satisfies some conditions, called the KKT conditions, then it is also a solution to the original (primal) problem.
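Applied to the SVM primal (with the functional margin fixed to 1), this machinery yields the standard SVM dual:

```latex
\max_{\alpha}\; \sum_{i=1}^{m} \alpha_i
  - \frac{1}{2} \sum_{i,j=1}^{m} y^{(i)} y^{(j)} \alpha_i \alpha_j
    \left\langle x^{(i)}, x^{(j)} \right\rangle
\quad \text{s.t.} \quad \alpha_i \ge 0, \;\; \sum_{i=1}^{m} \alpha_i y^{(i)} = 0
```

Only inner products between training points appear in the dual, which is what makes the kernel trick below possible.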

### Kernel Trick

Linear classifiers such as the perceptron and SVM may fail to classify data for which the true decision boundary is non-linear.

In this case, one way to get around this is to perform a non-linear preprocessing of the data, $x \mapsto \phi(x)$.

For example, $\phi$ might map $x$ to a vector of polynomial features.
If our original model and training only used inner products $\langle x, z \rangle$, then after preprocessing we only need the inner products $\langle \phi(x), \phi(z) \rangle$; we never need $\phi(x)$ itself.

A kernel is a function $K$ that can be expressed as $K(x, z) = \langle \phi(x), \phi(z) \rangle$ for some function $\phi$.
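As a concrete check, the polynomial kernel $K(x, z) = (x^\top z)^2$ corresponds to the feature map $\phi(x)$ consisting of all pairwise products $x_i x_j$. A small sketch (names are illustrative):

```python
# Verify that K(x, z) = (x . z)^2 equals <phi(x), phi(z)> for the
# feature map phi(x) = (x_i * x_j for all i, j).
import numpy as np

def poly_kernel(x, z):
    """K(x, z) = (x . z)^2, computed without ever forming phi."""
    return (x @ z) ** 2

def phi(x):
    """Explicit feature map: all pairwise products x_i * x_j."""
    return np.outer(x, x).ravel()
```

Computing the kernel directly costs $O(d)$, while the explicit feature map has $d^2$ entries; this gap is the point of the trick.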

#### Identifying if a function is a kernel

Basic check:
Since the kernel is an inner product between $\phi(x)$ and $\phi(z)$, it should satisfy the axioms of inner products, namely symmetry: $K(x, z) = K(z, x)$. Otherwise it is not a kernel.

#### Mercer's Theorem

Let our kernel function be $K(x, z) = \langle \phi(x), \phi(z) \rangle$. Then for any sample $S = \{x^{(1)}, \ldots, x^{(m)}\}$, the corresponding kernel matrix $K \in \mathbb{R}^{m \times m}$, where $K_{ij} = K(x^{(i)}, x^{(j)})$, is symmetric positive semi-definite. (Mercer's theorem says the converse also holds: if every such matrix is symmetric PSD, then $K$ is a valid kernel.)

Symmetry:

$$K_{ij} = \phi(x^{(i)})^\top \phi(x^{(j)}) = \phi(x^{(j)})^\top \phi(x^{(i)}) = K_{ji}$$

Positive semi-definite:

Let $z \in \mathbb{R}^m$.

Then

$$z^\top K z = \sum_i \sum_j z_i z_j \, \phi(x^{(i)})^\top \phi(x^{(j)}) = \left\| \sum_i z_i \, \phi(x^{(i)}) \right\|^2 \ge 0$$
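This argument suggests a practical sanity check: on any finite sample, build the kernel matrix and verify it is symmetric with non-negative eigenvalues. A sketch using the Gaussian (RBF) kernel, which is a known valid kernel (the sample and bandwidth are illustrative choices):

```python
# Numerically check the Mercer condition on a sample.
import numpy as np

def is_valid_kernel_matrix(K, tol=1e-8):
    """Symmetric with eigenvalues >= 0 (up to numerical tolerance)."""
    return bool(np.allclose(K, K.T) and np.all(np.linalg.eigvalsh(K) >= -tol))

# Kernel matrix of the RBF kernel K(x, z) = exp(-||x - z||^2) on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists)
```

A matrix that fails this check (for example, one with a negative eigenvalue) cannot come from any feature map $\phi$.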