TensorFlow
==Install==
===Install TF2===
See https://www.tensorflow.org/install/pip

Install tensorflow and [https://www.tensorflow.org/addons/overview tensorflow-addons]
<pre>
pip install tensorflow tensorflow-addons
</pre>
;Notes
* Note that [https://anaconda.org/anaconda/tensorflow anaconda/tensorflow] does not always have the latest version.
* If you prefer, you can install only CUDA and cuDNN from conda:
** See [https://www.tensorflow.org/install/source#linux https://www.tensorflow.org/install/source#linux] for a list of compatible CUDA and cuDNN versions.
** <code>conda search cudatoolkit</code> to see which versions of CUDA are available.
** Download [https://developer.nvidia.com/cuDNN cuDNN] and copy the binaries to the environment's <code>Library/bin/</code> directory.
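Under these notes, a conda-based setup might look like the following sketch. The version number is a placeholder, not a recommendation; check the compatibility table linked above for the versions matching your TensorFlow release.

```shell
# List the CUDA toolkit versions conda can provide.
conda search cudatoolkit

# Install a toolkit version compatible with your TensorFlow release
# (11.2 is a placeholder; see the compatibility table linked above).
conda install cudatoolkit=11.2

# TensorFlow itself still comes from pip, since the conda package may lag.
pip install tensorflow
```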
===Install TF1===
The last official version of TensorFlow v1 is 1.15. This version does not work on RTX 3000+ (Ampere) GPUs: your code will run but output bad results.<br>
If you need TensorFlow v1 on newer GPUs, see [https://github.com/NVIDIA/tensorflow nvidia-tensorflow].
<pre>
pip install nvidia-pyindex
pip install nvidia-tensorflow
</pre>
==Usage (TF2)==
Here we'll cover usage with TensorFlow 2, which has eager execution.<br>
This uses the Keras API in <code>tensorflow.keras</code>.
===Keras Pipeline===
See [https://www.tensorflow.org/api_docs/python/tf/keras/Model tf.keras.Model].

The general pipeline using Keras is:
* Define a model, typically using [https://www.tensorflow.org/api_docs/python/tf/keras/Sequential tf.keras.Sequential].
* Call [https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile <code>model.compile</code>].
** Here you pass in your optimizer, loss function, and metrics.
* Train your model by calling [https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit <code>model.fit</code>].
** Here you pass in your training data, batch size, number of epochs, and training callbacks.
** For more information about callbacks, see [https://www.tensorflow.org/guide/keras/custom_callback Keras custom callbacks].
After training, you can evaluate your model by calling [https://www.tensorflow.org/api_docs/python/tf/keras/Model#evaluate <code>model.evaluate</code>].
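The steps above can be sketched end to end as follows; the toy data, layer sizes, and binary-classification loss are hypothetical stand-ins:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy data: 4-feature inputs with binary labels (hypothetical stand-ins).
x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# Step 1: define a model.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Step 2: compile with an optimizer, a loss function, and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Step 3: train, passing data, batch size, and epoch count.
model.fit(x, y, batch_size=32, epochs=2, verbose=0)

# Step 4: evaluate the trained model.
loss, acc = model.evaluate(x, y, verbose=0)
```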
===Custom Models===
You can write your own training loop by doing the following:
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras

my_model = keras.Sequential([
    keras.Input(shape=(400,)),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
])
</syntaxhighlight>
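A complete custom training loop along these lines can be sketched with <code>tf.GradientTape</code>; the random data and mean-squared-error loss below are hypothetical stand-ins:

```python
import tensorflow as tf
from tensorflow import keras

my_model = keras.Sequential([
    keras.Input(shape=(400,)),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(1),
])
optimizer = keras.optimizers.Adam()

# One optimization step: record the forward pass on a GradientTape,
# then apply the gradients to the model's trainable variables.
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = my_model(x, training=True)
        loss = tf.reduce_mean(tf.square(pred - y))
    grads = tape.gradient(loss, my_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, my_model.trainable_variables))
    return loss

for step in range(5):
    x = tf.random.normal((8, 400))
    y = tf.random.normal((8, 1))
    loss = train_step(x, y)
```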
==Usage (TF1)==
In TF1, you first build a computational graph by chaining operations on placeholders and constant variables.
Then, you execute the graph in a <code>tf.Session()</code>.
{{hidden | TF1 MNIST Example |
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras
import numpy as np

# Placeholders and sessions require graph mode when running under TF2.
tf.compat.v1.disable_eager_execution()

NUM_EPOCHS = 10
BATCH_SIZE = 64

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
rng = np.random.default_rng()

classification_model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, padding="SAME"),
    keras.layers.ReLU(),
    keras.layers.Conv2D(16, 3, padding="SAME"),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    # No activation here: the cross-entropy loss below expects raw logits.
    keras.layers.Dense(10),
])

x_in = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
logits = classification_model(x_in)
gt_classes = tf.compat.v1.placeholder(dtype=tf.int32, shape=(None,))
loss = tf.compat.v1.losses.softmax_cross_entropy(tf.one_hot(gt_classes, 10), logits)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    global_step = 0
    for epoch in range(NUM_EPOCHS):
        x_count = x_train.shape[0]
        # Shuffle the training images each epoch.
        image_ordering = rng.choice(range(x_count), x_count, replace=False)
        current_idx = 0
        while current_idx < x_count:
            my_indices = image_ordering[current_idx:min(current_idx + BATCH_SIZE, x_count)]
            x = x_train[my_indices]
            # Add a channel dimension and scale pixel values to [0, 1].
            x = x[:, :, :, None] / 255
            logits_val, loss_val, _ = sess.run((logits, loss, optimizer), {
                x_in: x,
                gt_classes: y_train[my_indices]
            })
            if global_step % 100 == 0:
                print("Loss", loss_val)
            current_idx += BATCH_SIZE
            global_step += 1
</syntaxhighlight>
}}
===Batch Normalization===
See [https://www.tensorflow.org/api_docs/python/tf/compat/v1/layers/batch_normalization <code>tf.compat.v1.layers.batch_normalization</code>].
When training with batch norm, you need to run the ops in the <code>tf.compat.v1.GraphKeys.UPDATE_OPS</code> collection in your session, or the batch-norm moving statistics will not be updated.
These moving averages do not contribute to the loss when <code>training=True</code>, so the optimizer will not update them on its own.
<syntaxhighlight lang="python">
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
train_op = optimizer.minimize(loss)
# Group the update ops with the training op so one session.run() executes both.
train_op = tf.group([train_op] + update_ops)
</syntaxhighlight>
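An equivalent, common pattern attaches the update ops as a control dependency of the training op. Below is a minimal self-contained sketch, assuming a TF release where <code>tf.compat.v1.layers</code> is still available; the shapes and learning rate are arbitrary:

```python
import tensorflow as tf

# Placeholders and collections require TF1-style graph mode.
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 4))
h = tf.compat.v1.layers.batch_normalization(x, training=True)
loss = tf.reduce_mean(tf.square(h))

# The moving-mean/variance update ops live in the UPDATE_OPS collection;
# making them a control dependency runs them on every training step.
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01).minimize(loss)
```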
==Estimators==
[https://stackoverflow.com/questions/48940155/tensorflow-is-there-a-way-to-store-the-training-loss-in-tf-estimator Reference]<br>
You can extract the training loss from the TensorBoard events file written by the estimator.
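One way to do this (a sketch, assuming the estimator wrote standard TensorBoard event files under its model directory and used the default <code>loss</code> tag) is to walk the events file with <code>tf.compat.v1.train.summary_iterator</code>:

```python
import tensorflow as tf

def read_scalars(event_file, tag="loss"):
    """Collect (step, value) pairs for `tag` from a TensorBoard events file."""
    points = []
    for event in tf.compat.v1.train.summary_iterator(event_file):
        for value in event.summary.value:
            if value.tag != tag:
                continue
            if value.HasField("simple_value"):
                # TF1-style scalar summaries store the value directly.
                points.append((event.step, value.simple_value))
            elif value.HasField("tensor"):
                # TF2-style scalar summaries store a rank-0 tensor proto.
                points.append((event.step, float(tf.make_ndarray(value.tensor))))
    return points
```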
==Tensorflow Addons==
<pre>
pip install tensorflow-addons
</pre>
===<code>tfa.image.interpolate_bilinear</code>===
[https://www.tensorflow.org/addons/api_docs/python/tfa/image/interpolate_bilinear Reference]

This performs bilinear interpolation, similar to PyTorch's <code>grid_sample</code>.
However, you need to reshape your sampling coordinates to an <code>n x 2</code> array and pass <code>indexing='xy'</code> when calling the function.
You can then reshape the output back to the dimensions of your original image.
==Tensorflow Graphics==
<pre>
pip install tensorflow-graphics --upgrade
</pre>
On Windows, you may need to install a prebuilt OpenEXR wheel from [https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr https://www.lfd.uci.edu/~gohlke/pythonlibs/#openexr].