TensorFlow
TensorFlow is Google's open-source machine learning library.
Install
- Install CUDA and CuDNN
- Create a conda environment with Python 3.7
- Alternatively, you can create an environment with TensorFlow preinstalled using conda:
conda create -n my_env tensorflow
- Install with pip
Install TF2
pip install tensorflow
Install TF1 (the last 1.x release, with GPU support)
pip install tensorflow-gpu==1.15
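To verify that TensorFlow can see your GPU (TF 2.1+ API; prints an empty list if CUDA/CuDNN aren't set up correctly):

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))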
Usage (TF2)
Here we'll cover usage with TensorFlow 2, which runs with eager execution by default. These examples use the Keras API in tensorflow.keras.
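With eager execution, ops run immediately and return concrete values rather than building a graph first:

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
print(tf.matmul(x, x))  # executes immediately and prints a concrete tf.Tensor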
Basics
The general pipeline using Keras is:
- Define a model, typically using tf.keras.Sequential
- Call model.compile, passing in your optimizer, loss function, metrics, and training callbacks
- Train your model by calling model.fit, passing in your training data and hyperparameters such as the number of epochs and batch size
After training, you can evaluate your model by calling model.evaluate, or run it on new data with model.predict. A minimal sketch of this pipeline is shown below.
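A minimal sketch of the compile/fit/evaluate pipeline; the data shapes and hyperparameters here are made up for illustration:

import numpy as np
from tensorflow import keras

# Dummy data: 1000 samples with 20 features and binary labels (made up for illustration)
x_train = np.random.rand(1000, 20).astype('float32')
y_train = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    keras.layers.Dense(1, activation='sigmoid'),
])

# compile takes the optimizer, loss function, and metrics
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# fit takes the training data and hyperparameters like epochs and batch size
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.1)

# evaluate returns the loss and metrics on the given data
loss, acc = model.evaluate(x_train, y_train)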
Custom Models
An alternative way to define a model is by extending the Model class:
- Write a Python class which extends tf.keras.Model
- Implement the forward pass in the call method
A sketch of this is shown below.
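A minimal sketch of a subclassed model; the layer sizes and dummy batch shape are arbitrary:

import tensorflow as tf
from tensorflow import keras

class MyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = keras.layers.Dense(64, activation='relu')
        self.dense2 = keras.layers.Dense(10)

    def call(self, inputs):
        # The forward pass: Keras invokes this on every batch
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
outputs = model(tf.random.normal([8, 32]))  # run on a dummy batch to build the model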
Custom Training Loop
Reference
While you can train using model.compile and model.fit, writing your own custom training loop is much more flexible and easier to understand.
You can write your own training loop by doing the following:
import tensorflow as tf
from tensorflow import keras

# train_dataset, x_validation, y_validation, and my_custom_loss are assumed
# to be defined elsewhere.
my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])
optimizer = keras.optimizers.Adam()

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, my_model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
            training_loss.append(loss_value)
    # Track validation loss once per epoch.
    guess_validation = my_model(x_validation)
    validation_loss.append(my_custom_loss(y_validation, guess_validation))
Save and Load Models
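A minimal sketch, assuming a trained Keras model (like my_model above) and an arbitrary save path:

import tensorflow as tf

# Saves architecture, weights, and optimizer state together
my_model.save('my_model')  # 'my_model' is an arbitrary path

# Later, restore the whole model
restored_model = tf.keras.models.load_model('my_model')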
Usage (TF1)
Estimators
First Contact w/ TF Estimator (TDS)
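Estimators pull data through an input_fn and hide the training loop. A minimal sketch with a premade estimator; the feature name 'x', the shapes, and the random data are made up for illustration:

import tensorflow as tf

feature_cols = [tf.feature_column.numeric_column('x', shape=[4])]

def input_fn():
    # input_fn must return (features, labels); random data stands in for a real dataset
    features = {'x': tf.random.normal([32, 4])}
    labels = tf.zeros([32], dtype=tf.int32)
    return tf.data.Dataset.from_tensors((features, labels)).repeat()

estimator = tf.estimator.DNNClassifier(
    feature_columns=feature_cols,
    hidden_units=[16, 16],
    n_classes=2)
estimator.train(input_fn=input_fn, steps=100)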
Training Statistics
Reference
You can extract the training loss from the events file that TensorFlow writes for TensorBoard.
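A sketch using TensorBoard's EventAccumulator; the log directory and the scalar tag name depend on how your run was logged:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at the directory containing the events.out.tfevents.* file ('logs/train' is an assumption)
ea = EventAccumulator('logs/train')
ea.Reload()

print(ea.Tags()['scalars'])  # lists which scalar tags were logged
for event in ea.Scalars('loss'):  # 'loss' is an assumed tag name
    print(event.step, event.value)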