TensorFlow
TensorFlow is the famous machine learning library by Google.
==Usage (TF2)==
Here we'll cover usage of TensorFlow 2, which has eager execution.<br>
This uses the Keras API in <code>tensorflow.keras</code>.
===Basics===
===Training Loop===
[https://www.tensorflow.org/guide/keras/train_and_evaluate#part_ii_writing_your_own_training_evaluation_loops_from_scratch Reference]<br>
While you can train using <code>model.compile</code> and <code>model.fit</code>, writing your own custom training loop is much more flexible and easier to understand. You can write your own training loop by doing the following:
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras

# train_dataset, x_validation, y_validation, and my_custom_loss are
# assumed to be defined elsewhere.
my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2)
])
# Any optimizer works; Adam is used here as an example.
optimizer = keras.optimizers.Adam()

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, my_model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
    training_loss.append(float(loss_value))
    guess_validation = my_model(x_validation)
    validation_loss.append(float(my_custom_loss(y_validation, guess_validation)))
</syntaxhighlight>
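For comparison, here is a minimal sketch of the built-in <code>compile</code>/<code>fit</code> path mentioned above. The optimizer, loss, and random stand-in data are placeholders, not part of the original example:

<syntaxhighlight lang="python">
import numpy as np
from tensorflow import keras

# A smaller model of the same shape as the custom-loop example.
model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(2),
])
# Placeholder optimizer and loss; substitute your own.
model.compile(optimizer='adam', loss='mse')

# Random data standing in for train_dataset: 32 samples of 400 features.
x_train = np.random.rand(32, 400).astype('float32')
y_train = np.random.rand(32, 2).astype('float32')

# fit handles batching, the gradient step, and logging for you.
history = model.fit(x_train, y_train, epochs=2, verbose=0)
print(history.history['loss'])  # one loss value per epoch
</syntaxhighlight>

The trade-off: <code>fit</code> is concise and handles callbacks and metrics for you, while the explicit <code>GradientTape</code> loop above gives you full control over each step.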
===Save and Load Models===
[https://www.tensorflow.org/tutorials/keras/save_and_load Reference]
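A minimal sketch of saving a whole model and loading it back. The filename is arbitrary; the <code>.h5</code> extension selects the HDF5 format:

<syntaxhighlight lang="python">
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(2, input_shape=(4,))])

# Saves architecture + weights (and optimizer state, if compiled).
model.save('my_model.h5')
restored = keras.models.load_model('my_model.h5')

# The restored model produces identical predictions.
x = np.random.rand(1, 4).astype('float32')
same = np.allclose(model.predict(x), restored.predict(x))
print(same)
</syntaxhighlight>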
==Usage (TF1)==
==Estimators==
[https://towardsdatascience.com/first-contact-with-tensorflow-estimator-69a5e072998d First Contact w/ TF Estimator (TDS)]<br>
===Training Statistics===
[https://stackoverflow.com/questions/48940155/tensorflow-is-there-a-way-to-store-the-training-loss-in-tf-estimator Reference]<br>
You can extract the training loss from the events file written by TensorFlow.
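As a sketch, scalars written with <code>tf.summary</code> can be read back out of the events file using <code>tf.compat.v1.train.summary_iterator</code>. The log directory and the <code>'loss'</code> tag here are assumptions for illustration:

<syntaxhighlight lang="python">
import glob
import tempfile

import tensorflow as tf

# Write a few example loss values to an events file.
logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step, loss in enumerate([0.9, 0.5, 0.2]):
        tf.summary.scalar('loss', loss, step=step)
writer.close()

# Read them back by iterating over the serialized events.
losses = []
for path in glob.glob(logdir + '/events.out.tfevents.*'):
    for event in tf.compat.v1.train.summary_iterator(path):
        for value in event.summary.value:
            if value.tag == 'loss':
                losses.append(float(tf.make_ndarray(value.tensor)))
print(losses)
</syntaxhighlight>

The same events file is what TensorBoard reads, so this works for losses logged by Estimators as well, provided you know the tag name used.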
Revision as of 04:09, 30 November 2019