
Convolutional neural network

** Often includes some type of padding such as zero padding.
* Upscale layer (for decoders only).
* Normalization or pooling layer (e.g. [[Batch normalization]] or max pooling).
* Activation (typically ReLU or some variant).
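A minimal PyTorch sketch of such a block, assuming 3x3 kernels, zero padding of 1, and illustrative channel counts; the upscale step is only added for decoder blocks:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, decoder=False):
    """One conv block: (optional upscale) -> Conv2D -> BatchNorm -> ReLU."""
    layers = []
    if decoder:
        # Upscale layer (decoders only): bilinear upsampling doubles H and W.
        layers.append(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
    # Conv2D layer; zero padding of 1 keeps the spatial size unchanged.
    layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1))
    # Normalization layer (batch normalization here; pooling is the other common option).
    layers.append(nn.BatchNorm2d(out_ch))
    # Activation (ReLU).
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

x = torch.randn(1, 64, 32, 32)
print(conv_block(64, 128)(x).shape)                # torch.Size([1, 128, 32, 32])
print(conv_block(64, 128, decoder=True)(x).shape)  # torch.Size([1, 128, 64, 64])
</syntaxhighlight>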
More traditionally, as in the U-Net paper, convolutional blocks consist of two conv layers:
* Conv2D layer.
* Activation.
* Conv2D layer.
* Activation.
* Max pooling or average pooling layer.
Upsampling blocks also begin with a transposed convolution or a bilinear upsampling layer.
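A sketch of this U-Net-style layout under the same assumptions (3x3 kernels with zero padding, illustrative channel counts); the names double_conv, down_block and up_block are only for illustration:
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two Conv2D layers, each followed by an activation."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),   # Conv2D layer
        nn.ReLU(inplace=True),                                 # Activation
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),  # Conv2D layer
        nn.ReLU(inplace=True),                                 # Activation
    )

# Downsampling block: two convs, then max pooling (average pooling also works).
down_block = nn.Sequential(double_conv(64, 128), nn.MaxPool2d(2))

# Upsampling block: a transposed convolution (or a bilinear upsample) first, then the two convs.
up_block = nn.Sequential(
    nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2),  # doubles H and W
    double_conv(64, 64),
)

x = torch.randn(1, 64, 64, 64)
print(down_block(x).shape)            # torch.Size([1, 128, 32, 32])
print(up_block(down_block(x)).shape)  # torch.Size([1, 64, 64, 64])
</syntaxhighlight>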


The last layer is typically just a Conv2D, possibly followed by a sigmoid activation.
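A minimal sketch of that final layer, assuming a 1x1 convolution that maps 64 feature channels to a single output channel (e.g. a binary segmentation mask):
<syntaxhighlight lang="python">
import torch.nn as nn

# Final Conv2D, optionally followed by a sigmoid to squash outputs into [0, 1].
head = nn.Sequential(
    nn.Conv2d(64, 1, kernel_size=1),
    nn.Sigmoid(),
)
</syntaxhighlight>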