Activation Functions

List of common activation functions.

==Sinusoidal==

===Tanh===
One of the older activation functions.

===Sigmoid===

===Cosine/Sine===
See [https://proceedings.neurips.cc/paper/2020/hash/53c04118df112c13a8c34b38343b9c10-Abstract.html SIREN].

==ReLU==
https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html

ReLU is one of the most popular activation functions. It simply computes <math>\operatorname{ReLU}(x) = \max(x, 0) = \mathbf{1}_{x > 0} \cdot x</math>.

===LeakyReLU===
https://pytorch.org/docs...
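A minimal sketch (assuming a standard PyTorch install) comparing the two modules linked above on a small tensor; the negative slope of 0.01 passed to LeakyReLU is just an illustrative choice, matching PyTorch's default.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])

relu = nn.ReLU()            # max(x, 0): negative inputs are zeroed out
leaky = nn.LeakyReLU(0.01)  # negative inputs are scaled by the slope 0.01 instead

print(relu(x))   # -> [0.0, 0.0, 0.0, 1.5]
print(leaky(x))  # -> [-0.02, -0.005, 0.0, 1.5]
</syntaxhighlight>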