Deep Learning

<math>g(z)=\frac{1}{1+e^{-z}}</math>   
<math>\min_{W} \left[-\sum_{i=1}^{N}\left[y_i\log(g(f_W(x_i))) + (1-y_i)\log(1-g(f_W(x_i)))\right] \right]</math>
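This objective is the binary cross-entropy loss with a sigmoid output. A minimal sketch in NumPy, assuming <math>f_W(x_i)</math> has already been evaluated to a vector of logits (the names <code>bce_loss</code> and the sample values are illustrative):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(y, logits):
    # Negated log-likelihood summed over the N examples,
    # matching the minimization objective above
    p = sigmoid(logits)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0])        # labels y_i
logits = np.array([2.0, -1.0, 0.5])  # f_W(x_i)
loss = bce_loss(y, logits)
```

Minimizing this quantity over <math>W</math> is equivalent to maximum-likelihood estimation under a Bernoulli model of the labels.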
===Nonlinear functions===
Given a nonlinear activation function <math>\phi(\cdot)</math>, <math>\phi(w^T x + b)</math> is a nonlinear function of <math>x</math>.
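A single layer of this form can be sketched as an affine map followed by the nonlinearity; here ReLU stands in for <math>\phi</math>, and the weights and input are arbitrary illustrative values:

```python
import numpy as np

def relu(z):
    # One common choice of activation phi
    return np.maximum(0.0, z)

def layer(x, W, b, phi):
    # phi(W x + b): affine transform, then elementwise nonlinearity
    return phi(W @ x + b)

W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.array([0.0, -1.0])
x = np.array([1.0, 2.0])
h = layer(x, W, b, relu)
```

Without <math>\phi</math>, stacking such layers would collapse to a single affine map; the nonlinearity is what gives the network its expressive power.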
===Models===
Multi-layer perceptron (MLP): Fully-connected feed-forward network.
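A forward pass through an MLP is just repeated application of such layers. A minimal sketch, assuming ReLU hidden activations and a linear output (layer sizes and the random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, layers):
    # Each layer is a (W, b) pair; ReLU between hidden layers,
    # identity at the output layer
    h = x
    for W, b in layers[:-1]:
        h = relu(W @ h + b)
    W, b = layers[-1]
    return W @ h + b

# A 2 -> 4 -> 1 fully-connected feed-forward network
layers = [
    (rng.standard_normal((4, 2)), np.zeros(4)),
    (rng.standard_normal((1, 4)), np.zeros(1)),
]
out = mlp_forward(np.array([1.0, -1.0]), layers)
```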
[[Convolutional neural network]]


==Misc==