TensorBoard is a way to visualize your model and various statistics during or after training.
==CLI Usage==
<pre>
tensorboard --logdir [logs]
</pre>
;Flags
*<code>--samples_per_plugin</code> sets the number of samples to show for each tab. Non-scalar objects are downsampled using reservoir sampling.
** <code>--samples_per_plugin images=10000</code> keeps approximately 10000 images.
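The reservoir sampling mentioned above keeps a uniform random sample of fixed size from a stream whose total length isn't known in advance, which is why TensorBoard can bound memory while still showing a representative subset. A minimal sketch in plain Python (a conceptual illustration only — the function name is made up, and this is not TensorBoard's actual implementation):

<syntaxhighlight lang="python">
import random

def reservoir_sample(stream, k, rng=None):
    """Return a uniform random sample of up to k items from an iterable."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Keep the new item with probability k/(i+1) by overwriting
            # a uniformly chosen slot.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
</syntaxhighlight>

Each item in the stream ends up in the final sample with equal probability, no matter how long the stream turns out to be.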
==Training Usage==
If you're using a custom training loop (i.e. gradient tape), then you'll need to set everything up manually.
<syntaxhighlight lang="python">
# Create a summary writer pointing at your log directory (path is an example).
train_summary_writer = tf.summary.create_file_writer("./logs/train")

# Inside the training loop, after computing the loss:
with train_summary_writer.as_default():
    tf.summary.scalar("training_loss", m_loss.numpy(), step=int(ckpt.step))
</syntaxhighlight>
==PyTorch==
PyTorch also supports writing TensorBoard logs.
See [https://pytorch.org/docs/stable/tensorboard.html https://pytorch.org/docs/stable/tensorboard.html].
There is also [https://github.com/lanpa/tensorboardX lanpa/tensorboardX], but I haven't tried it.
<syntaxhighlight lang="python">
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./runs")
writer.add_scalar("train_loss", loss_np, step)
# Optionally flush, e.g. at checkpoints.
writer.flush()
# Close the writer (this also flushes).
writer.close()
</syntaxhighlight>
==Resources==
* [https://www.tensorflow.org/tensorboard/get_started Getting started with TensorBoard]
* [https://www.youtube.com/watch?v=eBbEDRsCmv4 Hands-on TensorBoard (TensorFlow Dev Summit 2017)] |