TensorFlow
TensorFlow is a machine learning library from Google.
Usage (TF2)
Here we'll cover TensorFlow 2, which executes eagerly by default. The examples use the Keras API in tensorflow.keras.
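Eager execution means operations run immediately and return concrete values, rather than building a graph to be run later in a session. A quick illustration:
<syntaxhighlight lang="python">
import tensorflow as tf

# In TF2, ops execute immediately and return concrete values,
# so tensors can be inspected without building a graph or starting a session.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y.numpy())  # [[ 7. 10.] [15. 22.]]
</syntaxhighlight>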
Basics
Training Loop
Reference
While you can train using <code>model.compile</code> and <code>model.fit</code>, writing your own training loop is more flexible and easier to understand.
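For comparison, the built-in path fits in a few lines. This is a minimal sketch; the x_train and y_train arrays are random placeholders, not data from this article:
<syntaxhighlight lang="python">
import numpy as np
from tensorflow import keras

# Placeholder data standing in for a real dataset: 400 input features,
# 2 regression targets, matching the model built below.
x_train = np.random.rand(1000, 400).astype('float32')
y_train = np.random.rand(1000, 2).astype('float32')

model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(2),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=5, batch_size=32)
</syntaxhighlight>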
You can write your own training loop by doing the following:
<syntaxhighlight lang="python">
import tensorflow as tf
from tensorflow import keras

my_model = keras.Sequential([
    keras.layers.Dense(400, input_shape=(400,), activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(400, activation='relu'),
    keras.layers.Dense(2),
])

# The loop below expects an optimizer and a loss; Adam and mean squared
# error are stand-ins here, since the article leaves both undefined.
optimizer = keras.optimizers.Adam()
my_custom_loss = keras.losses.MeanSquaredError()

training_loss = []
validation_loss = []
for epoch in range(100):
    print('Start of epoch %d' % (epoch,))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            guess = my_model(x_batch_train)
            loss_value = my_custom_loss(y_batch_train, guess)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, my_model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, my_model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss at step %s: %s' % (step, float(loss_value)))
            training_loss.append(loss_value)
    # Evaluate on the validation set once per epoch.
    guess_validation = my_model(x_validation)
    validation_loss.append(my_custom_loss(y_validation, guess_validation))
</syntaxhighlight>
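The loop above assumes a batched train_dataset as well as x_validation and y_validation, none of which it defines. One minimal way to construct them with placeholder data, using tf.data:
<syntaxhighlight lang="python">
import numpy as np
import tensorflow as tf

# Random placeholder data shaped to match the model: 400 input features,
# 2 output targets.
x = np.random.rand(1000, 400).astype('float32')
y = np.random.rand(1000, 2).astype('float32')
train_dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1000).batch(32)

x_validation = np.random.rand(200, 400).astype('float32')
y_validation = np.random.rand(200, 2).astype('float32')
</syntaxhighlight>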