TensorFlow

TensorFlow is the famous machine learning library by Google.

==Install==

===Install TF2===

See https://www.tensorflow.org/install/pip

Install tensorflow and [https://www.tensorflow.org/addons/overview tensorflow-addons]:
<pre>
pip install tensorflow-addons
</pre>

;Notes
* Note that [https://anaconda.org/anaconda/tensorflow anaconda/tensorflow] does not always have the latest version.
* If you prefer, you can install only cuda and cudnn from conda:
** See [https://www.tensorflow.org/install/source#linux https://www.tensorflow.org/install/source#linux] for a list of compatible Cuda and Cudnn versions.
** <code>conda search cudatoolkit</code> to see which versions of cuda are available.
** Download [https://developer.nvidia.com/cuDNN cudnn] and copy the binaries to the environment's <code>Library/bin/</code> directory.
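After installing, you can check that TensorFlow sees your GPU (a quick sanity check, assuming a CUDA-capable GPU with a matching driver):
<pre>
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</pre>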

===Install TF1===

The last official version of TensorFlow v1 is 1.15. This version does not work on RTX 3000+ (Ampere) GPUs. Your code will run but output bad results.<br>
If you need TensorFlow v1, see [https://github.com/NVIDIA/tensorflow nvidia-tensorflow].
<pre>
pip install nvidia-pyindex
pip install nvidia-tensorflow
</pre>
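To verify that the install can see your GPU, you can run a quick check using the TF1 API:
<pre>
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
</pre>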

==Usage (TF2)==

Here we'll cover usage of TensorFlow 2, which has eager execution.<br>
This uses the Keras API in <code>tensorflow.keras</code>.

===Keras Pipeline===

[https://www.tensorflow.org/api_docs/python/tf/keras/Model tf.keras.Model]

The general pipeline using Keras is:
* Define a model, typically using [https://www.tensorflow.org/api_docs/python/tf/keras/Sequential tf.keras.Sequential]
* Call [https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile <code>model.compile</code>]
** Here you pass in your optimizer, loss function, and metrics.
* Train your model by calling [https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit <code>model.fit</code>]
** Here you pass in your training data, batch size, number of epochs, and training callbacks
** For more information about callbacks, see [https://www.tensorflow.org/guide/keras/custom_callback Keras custom callbacks].

After training, you can use your model by calling [https://www.tensorflow.org/api_docs/python/tf/keras/Model#evaluate <code>model.evaluate</code>].
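A minimal sketch of this pipeline on random data (the layer sizes, optimizer, and fake data below are placeholders chosen only for illustration):
<syntaxhighlight lang="python">
import numpy as np
from tensorflow import keras

# Fake data: 1000 samples, 20 features, 3 classes.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 3, size=(1000,))

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(3),
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x, y, batch_size=32, epochs=5, validation_split=0.1)
model.evaluate(x, y, batch_size=32)
</syntaxhighlight>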

===Custom Models===

An alternative way to define a model is by extending the Model class:
* Write a python class which extends [https://www.tensorflow.org/api_docs/python/tf/keras/Model tf.keras.Model]
* Implement a forward pass in the <code>call</code> method

See [[Tensorflow: Custom Layers And Models#Building Models]]
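A minimal sketch of such a subclass (the layer sizes are arbitrary and only for illustration):
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras

class MyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = keras.layers.Dense(64, activation="relu")
        self.dense2 = keras.layers.Dense(10)

    def call(self, inputs, training=None):
        # Forward pass: chain the layers defined in __init__.
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
logits = model(tf.zeros([1, 20]))  # build and run the model on a dummy batch
</syntaxhighlight>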

===Custom Training Loop===

Reference

While you can train using <code>model.compile</code> and <code>model.fit</code>, using your own custom training loop is much more flexible and easier to understand.
You can write your own training loop by doing the following:

<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras

# train_dataset, my_custom_loss, x_validation, and y_validation
# are assumed to be defined elsewhere.
my_model = keras.Sequential([
    keras.Input(shape=(400,)),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])

optimizer = keras.optimizers.SGD(learning_rate=1e-3)

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)

        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, my_model.trainable_weights)

        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
        training_loss.append(loss_value)
        guess_validation = my_model(x_validation)
        validation_loss.append(my_custom_loss(y_validation, guess_validation))
</syntaxhighlight>

===Save and Load Models===

Reference
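As a rough sketch of the usual pattern (the file name is a placeholder; the legacy HDF5 format is used here for broad version compatibility):
<syntaxhighlight lang="python">
from tensorflow import keras

model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(2)])

# Save the whole model (architecture + weights, plus optimizer state if compiled).
model.save("my_model.h5")

# Later, restore it.
restored = keras.models.load_model("my_model.h5")
</syntaxhighlight>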

==Custom Layers==

Extend [https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer tf.keras.layers.Layer].

===ReflectionPadding2D===

SO Source

<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow.keras.layers import Layer, InputSpec

class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1, 1), **kwargs):
        self.padding = tuple(padding)
        self.input_spec = [InputSpec(ndim=4)]
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, s):
        """If you are using the "channels_last" configuration."""
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x, mask=None):
        w_pad, h_pad = self.padding
        return tf.pad(x, [[0, 0], [h_pad, h_pad], [w_pad, w_pad], [0, 0]], 'REFLECT')
</syntaxhighlight>
===BilinearUpsample===
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class BilinearUpsample(layers.Layer):
    def __init__(self):
        super().__init__()
        self.input_spec = [keras.layers.InputSpec(ndim=4)]

    def compute_output_shape(self, shape):
        return shape[0], 2 * shape[1], 2 * shape[2], shape[3]

    def call(self, inputs, training=None, mask=None):
        new_height = int(2 * inputs.shape[1])
        new_width = int(2 * inputs.shape[2])
        # tf.image.resize defaults to bilinear interpolation
        # (tf.image.resize_images is the removed TF1 name).
        return tf.image.resize(inputs, [new_height, new_width])
</syntaxhighlight>

==Operators==

===Matrix Multiplication===

The two matrix multiplication operators are:
* [https://www.tensorflow.org/api_docs/python/tf/linalg/matmul tf.linalg.matmul] for matrix-matrix products
* [https://www.tensorflow.org/api_docs/python/tf/linalg/matvec tf.linalg.matvec] for matrix-vector products

New: With both operators, the first <math>k-2</math> dimensions can now be the batch size.
E.g. If <math>A</math> is <math>b_1 \times b_2 \times 3 \times 3</math> and <math>B</math> is <math>b_1 \times b_2 \times 3</math>, you can multiply them with <math>C = \operatorname{tf.linalg.matvec}(A,B)</math> and <math>C</math> will be <math>b_1 \times b_2 \times 3</math>.
Also the batch size in <math>A</math> can be 1 and it will properly broadcast to the same size as <math>B</math>.
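A small sketch of this broadcasting behaviour (shapes chosen arbitrarily for illustration):
<syntaxhighlight lang="python">
import tensorflow as tf

A = tf.random.normal([1, 5, 3, 3])  # batch dims (1, 5); the leading 1 broadcasts
B = tf.random.normal([4, 5, 3])     # batch dims (4, 5), one length-3 vector each
C = tf.linalg.matvec(A, B)
print(C.shape)  # (4, 5, 3)
</syntaxhighlight>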

==Usage (TF1)==

In TF1, you first build a computational graph by chaining commands with placeholders and constant variables.
Then, you execute the graph in a <code>tf.Session()</code>.

{{hidden | TF1 MNIST Example |
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras
import numpy as np

NUM_EPOCHS = 10
BATCH_SIZE = 64

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
rng = np.random.default_rng()

classification_model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, padding="SAME"),
    keras.layers.ReLU(),
    keras.layers.Conv2D(16, 3, padding="SAME"),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation='relu'),
])

x_in = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
logits = classification_model(x_in)
gt_classes = tf.compat.v1.placeholder(dtype=tf.int32, shape=(None,))
loss = tf.losses.softmax_cross_entropy(tf.one_hot(gt_classes, 10), logits)
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    global_step = 0
    for epoch in range(NUM_EPOCHS):
        x_count = x_train.shape[0]
        image_ordering = rng.choice(range(x_count), x_count, replace=False)
        current_idx = 0
        while current_idx < x_count:
            my_indices = image_ordering[current_idx:min(current_idx + BATCH_SIZE, x_count)]
            x = x_train[my_indices]
            x = x[:, :, :, None] / 255
            logits_val, loss_val, _ = sess.run((logits, loss, optimizer), {
                x_in: x,
                gt_classes: y_train[my_indices]
            })
            if global_step % 100 == 0:
                print("Loss", loss_val)

            current_idx += BATCH_SIZE
            global_step += 1
</syntaxhighlight>
}}

===Batch Normalization===

See [https://www.tensorflow.org/api_docs/python/tf/compat/v1/layers/batch_normalization <code>tf.compat.v1.layers.batch_normalization</code>].
When training with batchnorm, you need to run <code>tf.GraphKeys.UPDATE_OPS</code> in your session to update the batchnorm variables or they will not be updated.
These variables do not contribute to the loss when training is true, so they will not be updated by the optimizer.

<syntaxhighlight lang="python">
update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
</syntaxhighlight>

==Estimators==

First Contact w/ TF Estimator (TDS)

==Training Statistics==

Reference

You can extract the training loss from the events file in TensorFlow.
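A rough sketch of one way to do this with the event accumulator that ships with TensorBoard (the log directory and the scalar tag <code>"loss"</code> are assumptions and depend on how your summaries were written):
<syntaxhighlight lang="python">
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("path/to/logdir")  # directory containing the events.out.tfevents.* file
acc.Reload()                              # parse the events file
for event in acc.Scalars("loss"):         # one ScalarEvent per logged step
    print(event.step, event.value)
</syntaxhighlight>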


==Tensorflow Addons==
<pre>
pip install tensorflow-addons
</pre>

===tfa.image.interpolate_bilinear===

Reference

This is a bilinear interpolation. It is equivalent to PyTorch's <code>grid_sample</code>.
However, you need to reshape the grid to an n×2 array and make sure <code>indexing='xy'</code> when calling the function.
You can reshape the output back to the dimensions of your original image.
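A small sketch of this usage (the image and query points below are arbitrary; note that <code>query_points</code> is shaped <code>[batch, n, 2]</code>):
<syntaxhighlight lang="python">
import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.normal([1, 8, 8, 3])                   # [batch, height, width, channels]
query_points = tf.constant([[[1.5, 2.5], [3.0, 4.0]]])   # [batch, n, 2] as (x, y) pairs
values = tfa.image.interpolate_bilinear(image, query_points, indexing='xy')
print(values.shape)  # (1, 2, 3): one interpolated value per query point per channel
</syntaxhighlight>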

==Tensorflow Graphics==
<pre>
pip install tensorflow-graphics --upgrade
</pre>
You may need to install a static openexr from https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr.