TensorFlow
TensorFlow is the famous machine learning library by Google.
Install
- Install CUDA and cuDNN
- Create a conda environment with Python 3.5+:
conda create -n my_env python=3.8
- Install with pip
Install TF2
Install tensorflow and tensorflow-addons:
conda install tensorflow-gpu
pip install tensorflow-addons
- If you prefer, you can install only CUDA and cuDNN from conda:
  - See https://www.tensorflow.org/install/source#linux for a list of compatible CUDA and cuDNN versions.
  - Run conda search cudatoolkit to see which versions of CUDA are available.
  - Download cuDNN and copy the binaries to the environment's Library/bin/ directory.
Install TF1
conda install tensorflow-gpu=1.15
- Notes
- Conda will automatically install a compatible CUDA and cuDNN into the conda environment. Your host OS only needs to have a sufficiently new version of the NVIDIA drivers installed.
- Sometimes, I get CUDNN_STATUS_INTERNAL_ERROR. This is fixed by setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true in my conda env. See Add env variables to conda env.
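For example, one way to set this is with conda's built-in env-var support (a sketch; my_env is the environment name from above):
conda env config vars set TF_FORCE_GPU_ALLOW_GROWTH=true -n my_env
conda activate my_env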
Usage (TF2)
Here we'll cover usage of TensorFlow 2, which has eager execution.
The examples use the Keras API in tensorflow.keras.
Basics
The general pipeline using Keras is:
- Define a model, typically using tf.keras.Sequential.
- Call model.compile. Here you pass in your optimizer, loss function, and metrics.
- Train your model by calling model.fit. Here you pass in your training data, batch size, number of epochs, and training callbacks.
  - For more information about callbacks, see Keras custom callbacks.
After training, you can evaluate your model by calling model.evaluate (see the sketch below).
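Here is a rough sketch of this pipeline, assuming made-up toy data and layer sizes:
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy data (made up for illustration): 100 samples, 4 features, 2 classes.
x_train = np.random.rand(100, 4).astype('float32')
y_train = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(2),
])
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=5)
model.evaluate(x_train, y_train)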
Custom Models
An alternative way to define a model is by extending the Model class:
- Write a Python class which extends tf.keras.Model.
- Implement the forward pass in the call method (sketched below).
See Tensorflow: Custom Layers And Models#Building Models.
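A minimal sketch of a subclassed model (the layer sizes here are made up):
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(32, activation='relu')
        self.dense2 = tf.keras.layers.Dense(2)

    def call(self, inputs):
        # Forward pass: chain the layers together.
        return self.dense2(self.dense1(inputs))

model = MyModel()
logits = model(tf.random.normal([8, 4]))  # batch of 8 samples with 4 features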
Custom Training Loop
Reference
While you can train using model.compile and model.fit, writing your own custom training loop is much more flexible and easier to understand.
You can write your own training loop by doing the following:
import tensorflow as tf
from tensorflow import keras

# Assumes train_dataset, my_custom_loss, x_validation, and y_validation
# are defined elsewhere.
my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])
optimizer = keras.optimizers.SGD(learning_rate=1e-3)

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, my_model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
            training_loss.append(loss_value)
    # Track validation loss at the end of each epoch.
    guess_validation = my_model(x_validation)
    validation_loss.append(my_custom_loss(y_validation, guess_validation))
Save and Load Models
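A minimal sketch using the standard Keras save/load API (the path is a placeholder):
my_model.save('saved_model_dir')                       # saves architecture and weights
restored = keras.models.load_model('saved_model_dir')  # load it back later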
Custom Layers
Extend tf.keras.layers.Layer:
import tensorflow as tf
from tensorflow.keras.layers import Layer, InputSpec

class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1, 1), **kwargs):
        self.padding = tuple(padding)
        self.input_spec = [InputSpec(ndim=4)]
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, s):
        """If you are using the "channels_last" configuration."""
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x, mask=None):
        w_pad, h_pad = self.padding
        return tf.pad(x, [[0, 0], [h_pad, h_pad], [w_pad, w_pad], [0, 0]], 'REFLECT')
class BilinearUpsample(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.input_spec = [tf.keras.layers.InputSpec(ndim=4)]

    def compute_output_shape(self, shape):
        return shape[0], 2 * shape[1], 2 * shape[2], shape[3]

    def call(self, inputs, training=None, mask=None):
        new_height = int(2 * inputs.shape[1])
        new_width = int(2 * inputs.shape[2])
        # tf.image.resize_images is the TF1 name; in TF2 this is tf.image.resize.
        return tf.image.resize(inputs, [new_height, new_width], method='bilinear')
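A quick usage sketch on a dummy batch (the shapes are made up):
x = tf.random.normal([1, 8, 8, 3])
padded = ReflectionPadding2D(padding=(2, 2))(x)
print(padded.shape)     # (1, 12, 12, 3)
upsampled = BilinearUpsample()(x)
print(upsampled.shape)  # (1, 16, 16, 3)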
Operators
Matrix Multiplication
The two matrix multiplication operators are:
- tf.linalg.matmul (also aliased as tf.matmul)
- tf.linalg.matvec
New: With both operators, the first \(k-2\) dimensions of a \(k\)-dimensional matrix argument are now treated as batch dimensions.
E.g., if \(A\) is \(b_1 \times b_2 \times 3 \times 3\) and \(B\) is \(b_1 \times b_2 \times 3\), you can multiply them with \(C = \operatorname{tf.linalg.matvec}(A,B)\) and \(C\) will be \(b_1 \times b_2 \times 3\).
The batch dimensions of \(A\) can also be 1, in which case they will properly broadcast to match \(B\).
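A small sketch of batched matvec with broadcasting (shapes are made up):
import tensorflow as tf

A = tf.random.normal([2, 5, 3, 3])  # a 2x5 batch of 3x3 matrices
B = tf.random.normal([2, 5, 3])     # a 2x5 batch of 3-vectors
C = tf.linalg.matvec(A, B)
print(C.shape)  # (2, 5, 3)

# Broadcasting: a single matrix applied across the whole batch.
A1 = tf.random.normal([1, 1, 3, 3])
print(tf.linalg.matvec(A1, B).shape)  # (2, 5, 3)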
Usage (TF1)
In TF1, you first build a computational graph by chaining operations on placeholders.
Then, you execute the graph in a tf.Session.
import tensorflow as tf
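# A minimal sketch of the TF1 workflow (a hedged example, not from the original page):
# build a graph with a placeholder, then execute it in a session.
x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(x * 2.0)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # prints 12.0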
Estimators
First Contact w/ TF Estimator (TDS)
Training Statistics
Reference
You can extract the training loss from the events file that TensorFlow writes during training.
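A sketch using the TF1-compat summary iterator (the events-file path and the 'loss' tag here are placeholder assumptions; the tag depends on how the loss was logged):
import tensorflow as tf

for event in tf.compat.v1.train.summary_iterator('path/to/events.out.tfevents.123'):
    for value in event.summary.value:
        if value.tag == 'loss':
            print(event.step, value.simple_value)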
Tensorflow Addons
pip install tensorflow-addons
tfa.image.interpolate_bilinear
This is a bilinear interpolation, equivalent to PyTorch's grid_sample.
However, you need to reshape the sampling grid to an \(n \times 2\) array and pass indexing='xy' when calling the function.
You can then reshape the output back to the dimensions of your original image.
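A small usage sketch (the shapes and coordinates are made up):
import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.normal([1, 32, 32, 3])           # [batch, height, width, channels]
query = tf.constant([[[4.5, 10.0], [15.2, 3.7]]])  # [batch, n, 2] of (x, y) points
sampled = tfa.image.interpolate_bilinear(image, query, indexing='xy')
print(sampled.shape)  # (1, 2, 3): one interpolated value per query point per channel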
Tensorflow Graphics
pip install tensorflow-graphics --upgrade
You may need to install a static OpenEXR build from https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr.