
PyTorch is a popular open-source machine learning library developed by Facebook.

Installation

See PyTorch Getting Started

# If using conda, python 3.5+, and CUDA 10.0 (+ compatible cudnn)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

Getting Started

import torch
import torch.nn as nn

# Training loop (assumes net, criterion, optimizer, trainloader, and epochs are defined)
for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # track the loss as a plain Python number
        running_loss += loss.item()
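
The loop above assumes that a model net, a loss criterion, an optimizer, a trainloader, and epochs already exist. A minimal sketch of one way to set them up (the small fully-connected network and the hyperparameters are purely illustrative; see Importing Data below for trainloader):

import torch.nn as nn
import torch.optim as optim

# illustrative model: a tiny fully-connected classifier for 28x28 inputs
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

criterion = nn.CrossEntropyLoss()                               # classification loss
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)  # plain SGD with momentum
epochs = 10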

Importing Data

See Data Loading Tutorial
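
The key building blocks are torch.utils.data.Dataset and torch.utils.data.DataLoader. A minimal sketch with a toy in-memory dataset (the random tensors are placeholders for real data):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Illustrative dataset wrapping in-memory tensors."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# placeholder data: 100 samples with 10 features each, binary labels
dataset = ToyDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
trainloader = DataLoader(dataset, batch_size=16, shuffle=True)

for inputs, labels in trainloader:
    ...  # feed batches into the training loop above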

Usage

torch.nn.functional

PyTorch Documentation
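
By convention, torch.nn.functional is imported under the alias F, which the snippets in this section assume:

import torch.nn.functional as F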

F.grid_sample

Doc
Given an input tensor and a sampling grid, this function samples the input at the grid locations using interpolation (bilinear by default).
It is very useful for warping images and other spatial transformations, and can also be used to resample images to a different size.
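
A minimal sketch of warping an image with F.grid_sample, building the sampling grid with F.affine_grid (the rotation angle and tensor shapes are arbitrary placeholders):

import math
import torch
import torch.nn.functional as F

# placeholder input: a batch of one 3-channel 64x64 image
img = torch.randn(1, 3, 64, 64)

# 2x3 affine matrix describing a small rotation (illustrative values)
angle = math.radians(15)
theta = torch.tensor([[[math.cos(angle), -math.sin(angle), 0.0],
                       [math.sin(angle),  math.cos(angle), 0.0]]])

# build a normalized sampling grid and sample the input at those locations
grid = F.affine_grid(theta, img.size(), align_corners=False)
warped = F.grid_sample(img, grid, mode="bilinear", align_corners=False)
print(warped.shape)  # torch.Size([1, 3, 64, 64])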

Memory Usage

Reducing memory usage (a short sketch follows this list):

  • Save the loss using .item(), which returns a standard Python number; storing the loss tensor itself keeps the whole computation graph alive
  • For non-scalar tensors, use my_var.detach().cpu().numpy()
  • detach() returns a tensor that is detached from the autograd graph, so no gradient history is retained
  • cpu() copies the tensor from GPU to CPU memory
  • numpy() returns a NumPy view that shares memory with the CPU tensor
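
A minimal sketch of both patterns inside a training step, reusing net, criterion, optimizer, and trainloader from the sections above (losses and predictions are illustrative names):

losses = []       # Python floats, so no computation graph is kept alive
predictions = []  # NumPy arrays detached from autograd and moved to the CPU

for inputs, labels in trainloader:
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    losses.append(loss.item())                          # scalar -> Python number
    predictions.append(outputs.detach().cpu().numpy())  # non-scalar -> NumPy array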

TensorBoard

See PyTorch Docs: Tensorboard

from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir="./runs")

# log a scalar (e.g. loss.item()) under the tag "train_loss" at global step `step`
writer.add_scalar("train_loss", loss_np, step)

# Optionally flush e.g. at checkpoints
writer.flush()

# Close the writer (will flush)
writer.close()
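
The logged scalars can then be viewed by pointing TensorBoard at the same log directory and opening the printed URL (http://localhost:6006 by default):

tensorboard --logdir=./runs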