# TensorFlow


TensorFlow is a machine learning library developed by Google.

## Install

• Install CUDA and cuDNN
• Create a conda environment with Python 3.7
  • You can also just create a TensorFlow environment using conda: `conda create -n my_env tensorflow`
• Install with pip
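Concretely, the steps above might look like the following (the environment name `tf` and the Python version are just examples):

```shell
# Create and activate a conda environment with Python 3.7
conda create -n tf python=3.7
conda activate tf

# Install TensorFlow with pip inside the environment
pip install tensorflow
```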

### Install TF2

```
pip install tensorflow tensorflow-addons
```


### Install TF1

Note: You will only need TF1 if you are working with a TF1 repo.
If you are migrating your own old code, you can install TF2 and use the `tf.compat.v1` compatibility module instead. To install TF1:

```
pip install tensorflow-gpu==1.15
```


## Usage (TF2)

Here we'll cover TensorFlow 2, which uses eager execution by default.
The examples use the Keras API from `tensorflow.keras`.

### Basics

The general pipeline using Keras is:

• Define a model, typically using tf.keras.Sequential
• Call model.compile
  • Here you pass in your optimizer, loss function, and metrics.
• Train your model by calling model.fit
  • Here you pass in your training data, batch size, number of epochs, and training callbacks.

After training, you can evaluate your model with model.evaluate and make predictions with model.predict.
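As a minimal sketch of this pipeline (toy random data and made-up layer sizes):

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 8 features each, binary labels.
x = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Optimizer, loss, and metrics go to compile; data, batch size,
# and number of epochs go to fit.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
history = model.fit(x, y, batch_size=32, epochs=2, verbose=0)

loss, acc = model.evaluate(x, y, verbose=0)
```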

### Custom Models

An alternative way to define a model is by extending the Model class:

• Write a Python class which extends tf.keras.Model
• Implement the forward pass in the call method
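For example, a small hypothetical model (the class name and layer sizes are made up):

```python
import tensorflow as tf

class TwoLayerNet(tf.keras.Model):
    """A toy two-layer network defined by subclassing tf.keras.Model."""

    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(32, activation="relu")
        self.out = tf.keras.layers.Dense(2)

    def call(self, inputs):
        # The forward pass: Keras invokes this when you call the model.
        return self.out(self.hidden(inputs))

model = TwoLayerNet()
logits = model(tf.zeros([4, 8]))  # layers are built on the first call
```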

### Custom Training Loop

While you can train using model.compile and model.fit, writing your own custom training loop is much more flexible and easier to understand. You can write your own training loop by doing the following:

```python
my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])

optimizer = keras.optimizers.SGD(learning_rate=1e-3)

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)

        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        grads = tape.gradient(loss_value, my_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, my_model.trainable_variables))

        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
            training_loss.append(float(loss_value))

    guess_validation = my_model(x_validation)
    validation_loss.append(float(my_custom_loss(y_validation, guess_validation)))
```


### Custom Layers

Extend tf.keras.layers.Layer:

```python
class ReflectionPadding2D(Layer):

    def __init__(self, padding=(1, 1), **kwargs):
        super().__init__(**kwargs)
        self.padding = padding
        self.input_spec = [InputSpec(ndim=4)]

    def compute_output_shape(self, s):
        """If you are using the "channels_last" configuration."""
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x):
        h, w = self.padding
        return tf.pad(x, [[0, 0], [h, h], [w, w], [0, 0]], 'REFLECT')
```


A bilinear upsampling layer:

```python
class BilinearUpsample(layers.Layer):

    def __init__(self):
        super().__init__()
        self.input_spec = [keras.layers.InputSpec(ndim=4)]

    def compute_output_shape(self, shape):
        return shape[0], 2 * shape[1], 2 * shape[2], shape[3]

    def call(self, inputs):
        new_height = int(2 * inputs.shape[1])
        new_width = int(2 * inputs.shape[2])
        # tf.image.resize_images was removed in TF2; use tf.image.resize.
        return tf.image.resize(inputs, [new_height, new_width])
```


## Operators

### Matrix Multiplication

The two matrix multiplication operators are:

• tf.linalg.matmul for matrix–matrix products
• tf.linalg.matvec for matrix–vector products

With both operators, the leading $$k-2$$ dimensions of a rank-$$k$$ input can be batch dimensions.
E.g. if $$A$$ is $$b_1 \times b_2 \times 3 \times 3$$ and $$B$$ is $$b_1 \times b_2 \times 3$$, you can multiply them with $$C = \operatorname{tf.linalg.matvec}(A,B)$$ and $$C$$ will be $$b_1 \times b_2 \times 3$$.
The batch dimensions of $$A$$ can also be 1, in which case they broadcast to match $$B$$.
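A quick shape check of the batched and broadcasting behavior:

```python
import tensorflow as tf

# Batched matrix-vector product: A has batch dims (2, 5), B matches.
A = tf.random.normal([2, 5, 3, 3])
B = tf.random.normal([2, 5, 3])
C = tf.linalg.matvec(A, B)    # shape (2, 5, 3)

# Batch dims of size 1 in A broadcast against B's batch dims.
A1 = tf.random.normal([1, 1, 3, 3])
C1 = tf.linalg.matvec(A1, B)  # shape (2, 5, 3)
```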

## Estimators

### Training Statistics

You can extract the training loss from the events files that TensorFlow writes for TensorBoard.

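As a sketch, you can iterate over an events file with `tf.compat.v1.train.summary_iterator` (the helper name `read_scalars` is made up; it handles both TF1-style and TF2-style scalar summaries):

```python
import glob
import tensorflow as tf

def read_scalars(logdir, tag):
    """Collect every value logged under `tag` from the event files in `logdir`."""
    values = []
    for path in sorted(glob.glob(f"{logdir}/events.out.tfevents.*")):
        for event in tf.compat.v1.train.summary_iterator(path):
            for v in event.summary.value:
                if v.tag != tag:
                    continue
                # TF1-style scalars use simple_value; TF2 stores a tensor proto.
                if v.HasField("simple_value"):
                    values.append(v.simple_value)
                else:
                    values.append(float(tf.make_ndarray(v.tensor)))
    return values
```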


### tfa.image.interpolate_bilinear

This is bilinear interpolation, similar to PyTorch's grid_sample (note that grid_sample takes normalized coordinates, while this function takes pixel coordinates).
However, you need to reshape the sampling grid to an $$n \times 2$$ array and make sure indexing='xy' when calling the function.
You can then reshape the output back to the dimensions of your original image.
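To illustrate what the function computes, here is a pure-NumPy sketch of bilinear sampling at (x, y) pixel coordinates (the helper `bilinear_sample` is hypothetical, and handles a single channel with no batch dimension):

```python
import numpy as np

def bilinear_sample(image, query_xy):
    """Bilinearly sample `image` (H, W) at float (x, y) pixel coordinates.

    `query_xy` has shape (n, 2) with columns (x, y), matching the 'xy'
    indexing convention mentioned above.
    """
    h, w = image.shape
    x = np.clip(query_xy[:, 0], 0, w - 1)
    y = np.clip(query_xy[:, 1], 0, h - 1)
    # Integer corners surrounding each query point.
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    # Fractional offsets become the interpolation weights.
    wx = x - x0; wy = y - y0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bottom = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bottom * wy
```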

## Tensorflow Graphics

```
pip install tensorflow-graphics --upgrade
```


You may need to install a prebuilt OpenEXR package from https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr.