PyTorch
PyTorch is a popular machine learning library developed by Facebook.
Installation
# If using conda, python 3.5+, and CUDA 10.0 (+ compatible cudnn)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
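After installing, a quick way to verify the build and CUDA visibility from Python:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if this build can see a CUDA GPU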
Getting Started
import torch
import torch.nn as nn

# Training (assumes net, criterion, optimizer, trainloader, and epochs
# are already defined; a sketch of that setup follows below)
for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # accumulate the loss as a plain Python number (see Memory Usage below)
        running_loss += loss.item()
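The loop above assumes that net, criterion, optimizer, trainloader, and epochs already exist. A minimal sketch of that setup (the model, learning rate, and sizes here are illustrative placeholders):

import torch
import torch.nn as nn

epochs = 2                                                 # placeholder
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
# trainloader yields (inputs, labels) batches; see "Importing Data" below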
Importing Data
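A minimal sketch of the standard Dataset/DataLoader pattern that the training loop above relies on (the random tensors are stand-ins for real data loading):

import torch
from torch.utils.data import Dataset, DataLoader

class RandomDataset(Dataset):
    """Toy dataset; replace the random tensors with real data loading."""
    def __init__(self, n=1000):
        self.x = torch.randn(n, 28 * 28)
        self.y = torch.randint(0, 10, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

trainloader = DataLoader(RandomDataset(), batch_size=32, shuffle=True)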
Usage
torch.nn.functional
F.grid_sample
Docs: https://pytorch.org/docs/stable/nn.functional.html
This function samples the input tensor at the locations given by a sampling grid, interpolating between pixels.
It is very useful for resizing or warping images, as in the sketch below.
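A minimal sketch using an identity affine grid built with F.affine_grid (tensor sizes here are illustrative), so the warped output should match the input:

import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 32, 32)               # (N, C, H, W) image batch
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])  # (N, 2, 3) identity affine matrix
grid = F.affine_grid(theta, x.size(), align_corners=False)  # (N, H, W, 2) coords
warped = F.grid_sample(x, grid, align_corners=False)
# identity transform: warped should equal x up to interpolation error
print(torch.allclose(warped, x, atol=1e-5))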
Memory Usage
Reducing memory usage:

- Save the loss using .item() (https://pytorch.org/docs/stable/tensors.html#torch.Tensor.item), which returns a standard Python number.
- For non-scalar tensors, use my_var.detach().cpu().numpy() (see the sketch after this list):
  - detach() returns a tensor detached from the autograd graph, so it no longer keeps the computation graph alive
  - cpu() copies the tensor to the CPU
  - numpy() returns a NumPy array that shares memory with the CPU tensor
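A minimal sketch of both patterns (the tensors here are illustrative):

import torch

x = torch.randn(4, 3, requires_grad=True)
loss = (x ** 2).mean()                 # scalar tensor attached to the graph

history = []
history.append(loss.item())            # Python float; the graph is not retained

per_elem = x ** 2                      # non-scalar tensor, still on the graph
arr = per_elem.detach().cpu().numpy()  # detach -> (copy to) CPU -> NumPy array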
TensorBoard
See PyTorch Docs: Tensorboard (https://pytorch.org/docs/stable/tensorboard.html)

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./runs")

# log a scalar; loss_np (the loss as a plain number) and step (the global
# training step) are assumed to come from the training loop
writer.add_scalar("train_loss", loss_np, step)

# Optionally flush, e.g. at checkpoints
writer.flush()

# Close the writer (will flush)
writer.close()
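To view the logged scalars, launch the dashboard with tensorboard --logdir ./runs and open the printed URL in a browser.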