TensorBoard
Revision as of 23:31, 22 July 2020
TensorBoard is a tool for visualizing your model and various statistics during or after training.
Custom Usage
If you're using a custom training loop (i.e. with tf.GradientTape), then you'll need to set everything up manually.
First create a summary writer with tf.summary.create_file_writer:
import os
import tensorflow as tf

# args.checkpoint_dir comes from your script's argument parsing.
train_log_dir = os.path.join(args.checkpoint_dir, "logs", "train")
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
Scalars
Add scalars using tf.summary.scalar:
with train_summary_writer.as_default():
tf.summary.scalar("training_loss", m_loss.numpy(), step=int(ckpt.step))
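Tying the pieces above together, here is a minimal sketch of a complete custom training loop that logs the loss at each step. The toy linear model, data, optimizer settings, and temporary log directory are illustrative assumptions, not from the original article:

```python
# Hedged sketch: a custom training loop (gradient tape) that logs the
# loss to TensorBoard each step. The toy model, data, and log
# directory below are placeholders for illustration.
import os
import tempfile

import tensorflow as tf

log_dir = os.path.join(tempfile.mkdtemp(), "logs", "train")
writer = tf.summary.create_file_writer(log_dir)

w = tf.Variable(0.0)  # toy model: y = w * x
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
xs = tf.constant([1.0, 2.0, 3.0])
ys = tf.constant([2.0, 4.0, 6.0])

for step in range(5):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((w * xs - ys) ** 2)
    grads = tape.gradient(loss, [w])
    optimizer.apply_gradients(zip(grads, [w]))
    # Write the scalar under the active summary writer.
    with writer.as_default():
        tf.summary.scalar("training_loss", loss, step=step)
writer.flush()
```

Point TensorBoard at the log directory (tensorboard --logdir ...) and the curve appears under the Scalars tab.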
PyTorch
PyTorch also supports writing TensorBoard logs.
See https://pytorch.org/docs/stable/tensorboard.html.
There is also lanpa/tensorboardX, but I haven't tried it.
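As a minimal sketch of the built-in torch.utils.tensorboard support linked above (the tag name and temporary log directory are placeholder assumptions):

```python
# Hedged sketch of PyTorch's built-in TensorBoard writer.
# The tag "Loss/train" and the log directory are illustrative.
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

log_dir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir=log_dir)
for step in range(3):
    # add_scalar(tag, scalar_value, global_step)
    writer.add_scalar("Loss/train", 1.0 / (step + 1), step)
writer.close()
```

The resulting event files are read by the same tensorboard --logdir command as TensorFlow's.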