# PyTorch

PyTorch is a popular machine learning library developed by Facebook.
## Installation

```shell
# If using conda, Python 3.5+, and CUDA 10.0 (+ compatible cuDNN)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
```
## Getting Started

```python
import torch
import torch.nn as nn

# net, trainloader, and epochs are assumed to be defined elsewhere;
# CrossEntropyLoss and SGD are typical choices for a classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# Training
for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
```
## Importing Data

## Usage
### torch.nn.functional

#### `F.grid_sample`

[Doc](https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html)

`F.grid_sample` samples the input tensor at the locations given by a sampling grid, interpolating between neighboring values. It is very useful for resizing or warping images.
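As a minimal sketch, here is an identity warp: `F.affine_grid` builds a sampling grid from an affine transform, and `F.grid_sample` samples the input at those locations (with the identity transform, the output equals the input).

```python
import torch
import torch.nn.functional as F

# A 1x1x4x4 input "image"
inp = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# Identity affine transform -> sampling grid of shape (N, H_out, W_out, 2)
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)

# Bilinear sampling with the identity grid reproduces the input exactly
out = F.grid_sample(inp, grid, align_corners=False)
print(torch.allclose(out, inp))  # True
```

Replacing `theta` with a rotation or scaling matrix warps the image accordingly; the grid coordinates are normalized to [-1, 1].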
## Building a Model

To build a model, do the following:

- Create a class extending `nn.Module`.
- In your class's `__init__`, create all the submodules you need.
- If you have a list of modules, make sure to wrap it in `nn.ModuleList` or `nn.Sequential` so the submodules (and their parameters) are properly registered.
- Write a `forward` method for your model.
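The steps above can be sketched as a small multilayer perceptron (the `MLP` class and its layer sizes are illustrative, not from the original):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, sizes):
        super().__init__()
        # Wrap the list of layers in nn.ModuleList so their parameters
        # are registered and show up in net.parameters()
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:])
        )

    def forward(self, x):
        # ReLU between layers, no activation after the last one
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
        return x

net = MLP([8, 16, 2])
out = net(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 2])
```

Had the layers been stored in a plain Python list instead of `nn.ModuleList`, `net.parameters()` would be empty and the optimizer would have nothing to train.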
## Memory Usage

Reducing memory usage:

- Save loss values using `.item()`, which returns a standard Python number.
- For non-scalar tensors, use `my_var.detach().cpu().numpy()`:
  - `detach()` removes the tensor from the autograd graph
  - `cpu()` copies the tensor to the CPU
  - `numpy()` returns a NumPy view of the tensor
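A minimal sketch of both patterns (the tensor values here are arbitrary placeholders):

```python
import torch

preds = torch.randn(2, 3, requires_grad=True) * 2

# Scalar: .item() returns a plain Python float that holds no
# reference to the autograd graph, so the graph can be freed
total = preds.sum()
val = total.item()

# Non-scalar: detach from the graph, copy to CPU, view as NumPy
arr = preds.detach().cpu().numpy()
print(type(val), arr.shape)  # <class 'float'> (2, 3)
```

Accumulating the tensor itself (e.g. `running_loss += loss`) keeps every iteration's graph alive and is a common cause of steadily growing memory.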
## TensorBoard

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="./runs")

# Calculate loss. Increment the step.
writer.add_scalar("train_loss", loss.item(), step)

# Optionally flush, e.g. at checkpoints
writer.flush()

# Close the writer (will flush)
writer.close()
```
## PyTorch3D

PyTorch3D is a library by Facebook AI Research which contains differentiable renderers for meshes and point clouds. It is built using custom CUDA kernels.