TensorBoard


TensorBoard is a way to visualize your model and various statistics during or after training.

==Custom Usage==

If you're using a custom training loop (i.e., tf.GradientTape), then you'll need to set everything up manually; a full loop sketch follows the Scalars example below.

First, create a summary file writer:

<syntaxhighlight lang="python">
import os
import tensorflow as tf

train_log_dir = os.path.join(args.checkpoint_dir, "logs", "train")
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
</syntaxhighlight>
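TensorBoard treats each subdirectory of the log directory as a separate run, so a parallel writer shows up as its own curve. For example, a hypothetical second writer for validation metrics:
<syntaxhighlight lang="python">
# Hypothetical second writer: TensorBoard displays logs/train and
# logs/val as separate runs and overlays their curves in the UI.
val_log_dir = os.path.join(args.checkpoint_dir, "logs", "val")
val_summary_writer = tf.summary.create_file_writer(val_log_dir)
</syntaxhighlight>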

===Scalars===

Add scalars using tf.summary.scalar:

<syntaxhighlight lang="python">
with train_summary_writer.as_default():
  tf.summary.scalar("training_loss", m_loss.numpy(), step=int(ckpt.step))
</syntaxhighlight>
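
Putting it together, here is a minimal sketch of a custom loop that logs the loss every step. The model, optimizer, loss_fn, and dataset names are placeholders for your own objects, not part of any fixed API:
<syntaxhighlight lang="python">
# Minimal sketch of a gradient-tape training loop with scalar logging.
# model, optimizer, loss_fn, and dataset are placeholders.
step = 0
for x, y in dataset:
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_fn(y, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

    # Log the loss under the writer created above.
    with train_summary_writer.as_default():
        tf.summary.scalar("training_loss", loss, step=step)
    step += 1
</syntaxhighlight>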

==PyTorch==

PyTorch also supports outputting TensorBoard logs via torch.utils.tensorboard.
See [https://pytorch.org/docs/stable/tensorboard.html https://pytorch.org/docs/stable/tensorboard.html].
There is also [https://github.com/lanpa/tensorboardX lanpa/tensorboardX], but I haven't tried it.

<syntaxhighlight lang="python">
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./runs")

# Log a scalar each training step (loss_np is a float, step an int).
writer.add_scalar("train_loss", loss_np, step)

# Optionally flush, e.g. at checkpoints.
writer.flush()

# Close the writer (this also flushes).
writer.close()
</syntaxhighlight>
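
To view the logs, run <code>tensorboard --logdir ./runs</code> and open the URL it prints (http://localhost:6006 by default) in a browser.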

==Resources==
* [https://www.tensorflow.org/tensorboard/get_started Getting started with TensorBoard]
* [https://www.youtube.com/watch?v=eBbEDRsCmv4 Hands-on TensorBoard (TensorFlow Dev Summit 2017)]