==Reducing memory usage==
* Save loss using [https://pytorch.org/docs/stable/tensors.html#torch.Tensor.item <code>.item()</code>] which returns a standard Python number
* For non-scalar items, use <code>my_var.detach().cpu().numpy()</code>
* <code>detach()</code> returns a tensor detached from the autograd graph, so gradients and intermediate buffers are no longer kept alive
* <code>cpu()</code> copies the tensor to the CPU
* <code>numpy()</code> returns a numpy view of the tensor
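The steps above can be sketched as follows. This is a minimal illustration, assuming a toy scalar loss and a small gradient tensor; variable names here are for demonstration only:
<syntaxhighlight lang="python">
import torch

x = torch.randn(4, requires_grad=True)
loss = (x ** 2).mean()
loss.backward()

# Scalar: .item() returns a plain Python float,
# holding no reference to the computation graph
loss_val = loss.item()

# Non-scalar: detach from the graph, move to the CPU,
# then take a NumPy view of the resulting tensor
grad_np = x.grad.detach().cpu().numpy()

print(type(loss_val))  # a Python float, safe to accumulate across steps
print(grad_np.shape)
</syntaxhighlight>
Storing <code>loss_val</code> rather than <code>loss</code> itself avoids keeping the whole autograd graph alive across iterations, which is a common cause of steadily growing GPU memory.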
==TensorBoard==
{{main | TensorBoard}}
See [https://pytorch.org/docs/stable/tensorboard.html PyTorch Docs: Tensorboard]
<syntaxhighlight lang="python">
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir="./runs")
writer.add_scalar("train_loss", loss_np, step)
# Optionally flush e.g. at checkpoints
writer.flush()
# Close the writer (will flush)
writer.close()
</syntaxhighlight>