PyTorch

PyTorch is a popular machine learning library developed by Facebook.

Installation

See PyTorch Getting Started

# If using conda, python 3.5+, and CUDA 10.0 (+ compatible cudnn)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

Getting Started

Example
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 5), nn.ReLU(), nn.Linear(5, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training; assumes `epochs` is an int and `trainloader` is a
# torch.utils.data.DataLoader yielding (inputs, labels) batches.
for epoch in range(epochs):
    for i, data in enumerate(trainloader):
        # get the inputs; e.g. data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward
        outputs = model(inputs)
        loss = criterion(outputs, labels)

        # backward
        loss.backward()
        optimizer.step()

Importing Data

See Data Loading Tutorial
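
A minimal sketch of a custom dataset (the class name and tensor shapes here are hypothetical):

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    # Wraps pre-loaded input/label tensors.
    def __init__(self, inputs, labels):
        self.inputs = inputs
        self.labels = labels

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        return self.inputs[idx], self.labels[idx]

trainloader = DataLoader(MyDataset(torch.randn(100, 5), torch.randn(100, 1)),
                         batch_size=8, shuffle=True)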

Usage

Note that there are many useful functions under torch.nn.functional, which is typically imported as F.

torch.meshgrid

Note that this is transposed compared to np.meshgrid.
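
A small sketch of the difference: torch.meshgrid defaults to matrix ('ij') indexing, whereas np.meshgrid defaults to Cartesian ('xy') indexing. Recent PyTorch versions accept an explicit indexing argument:

import torch

# 'ij' (the PyTorch default) gives shape (3, 2); numpy's default 'xy' would give (2, 3).
ys, xs = torch.meshgrid(torch.arange(3), torch.arange(2), indexing="ij")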

torch.multinomial

torch.multinomial
If you need to sample with a lot of categories and with replacement, it may be faster to use `torch.cumsum` to build a CDF and `torch.searchsorted`.

torch.searchsorted example
# Build a CDF from your (non-negative) weights variable.
weights_cdf = torch.cumsum(weights, dim=0)
# The total mass is the last entry of the cumulative sum.
weights_cdf_max = weights_cdf[-1]
sample = torch.searchsorted(weights_cdf,
                            weights_cdf_max * torch.rand(num_samples))

F.grid_sample

Doc
This function samples your input tensor at coordinates given by a grid of normalized [-1, 1] coordinates.
It is very useful for resizing or warping images.
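
A minimal sketch, using an identity grid (which reproduces the input):

import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 8, 8)                      # (N, C, H, W)
# Identity sampling grid in normalized [-1, 1] coordinates, shape (N, H_out, W_out, 2).
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 8),
                        torch.linspace(-1, 1, 8), indexing="ij")
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # last dim is (x, y)
out = F.grid_sample(image, grid, align_corners=True)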

Building a Model

To build a model, do the following (see the sketch after this list):

  • Create a class extending nn.Module.
  • Instantiate all the submodules you need in __init__.
    • If you have a list of modules, make sure to wrap them in nn.ModuleList or nn.Sequential so they are properly registered.
  • Wrap any raw weight tensors for your model in nn.Parameter(weight, requires_grad=True) so they are registered as parameters.
  • Write a forward method for your model.
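
A minimal sketch following these steps (the layer widths are arbitrary):

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, widths=(5, 5, 1)):
        super().__init__()
        # nn.ModuleList registers each layer so .parameters() finds them.
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(widths[:-1], widths[1:]))
        # A raw tensor must be wrapped in nn.Parameter to be trained.
        self.scale = nn.Parameter(torch.ones(1), requires_grad=True)

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.relu(layer(x))
        return self.scale * self.layers[-1](x)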

Multi-GPU Training

See Multi-GPU Examples.

nn.DataParallel

The basic idea is to wrap blocks in nn.DataParallel.
This will automatically duplicate the module across multiple GPUs and split the batch across GPUs during training.

However, doing so causes you to lose access to custom methods and attributes.

To save and load the model, use model.module.state_dict() and model.module.load_state_dict() so the checkpoint is not tied to the DataParallel wrapper.
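
A minimal sketch of the wrap-and-checkpoint pattern, assuming at least one CUDA device:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 5), nn.ReLU(), nn.Linear(5, 1))
model = nn.DataParallel(model).cuda()   # replicates across all visible GPUs

# Save/load the underlying module so the weights are independent of DataParallel.
torch.save(model.module.state_dict(), "model.pt")
model.module.load_state_dict(torch.load("model.pt"))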

nn.parallel.DistributedDataParallel

nn.parallel.DistributedDataParallel
DistributedDataParallel vs DataParallel ddp tutorial

The PyTorch documentation suggests using this instead of nn.DataParallel. The main difference is that it uses multiple processes instead of multithreading, which works around the Python global interpreter lock (GIL).
It also supports training on GPUs across multiple nodes (i.e. multiple computers).

Using this is quite a bit more work than nn.DataParallel.
You may want to consider using PyTorch Lightning which abstracts this away.
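
A minimal single-node sketch, assuming it is launched with torchrun --nproc_per_node=<num_gpus> train.py:

import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")              # torchrun sets the rank/world-size env vars
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = nn.Linear(5, 1).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across processes

# ... the usual training loop runs in every process ...

dist.destroy_process_group()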

Optimizations

Reducing GPU memory usage

  • Save loss using .item(), which returns a standard Python number.
  • For non-scalar items, use my_var.detach().cpu().numpy().
    • detach() detaches the tensor from the autograd graph.
    • cpu() moves the tensor to the CPU.
    • numpy() returns a numpy view of the CPU tensor.

When possible, use functions which return new views of existing tensors (e.g. view, permute, expand) rather than functions which copy data.

Note that permute does not change the underlying data, only the strides used to index it.
Repeatedly mixing a contiguous tensor with a channels-last tensor can cause a minor performance hit, which PyTorch will warn you about.
To address this, call contiguous on the tensor, passing the desired memory format.
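
A small sketch of the view/copy distinction (shapes are arbitrary):

import torch

x = torch.rand(2, 3, 4, 4)     # NCHW
y = x.permute(0, 2, 3, 1)      # NHWC view; shares storage with x
print(y.is_contiguous())       # False: only the strides changed
y = y.contiguous()             # copies the data into NHWC-ordered memory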

During inference
  • Use `model.eval()`
  • Use `with torch.no_grad():`
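
A minimal sketch combining both, assuming `model` and `inputs` from earlier:

model.eval()              # switch layers such as dropout/batch-norm to eval behavior
with torch.no_grad():     # skip autograd bookkeeping to save memory
    outputs = model(inputs)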

Float16

Float16 uses half the memory of float32.
New Nvidia GPUs also have dedicated hardware instructions called tensor cores to speed up float16 matrix multiplication.
Typically it's best to train using float32 though for stability purposes.
You can truncate trained models to float16 and run inference in float16.

Note that bfloat16 is different from IEEE float16. bfloat16 has fewer mantissa bits (8 exp, 7 mantissa) and is used by Google's TPUs. In contrast, float16 has 5 exp and 10 mantissa bits.
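
A minimal sketch of float16 inference, assuming a trained model and a CUDA device:

model = model.half().cuda()               # truncate weights to float16
with torch.no_grad():
    outputs = model(inputs.half().cuda())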

Classification

In classification, your model outputs a vector of logits.
These are relative scores for each potential output class.
To compute the loss, pass the logits into a cross-entropy loss.

To compute the accuracy, you can use torch.argmax to get the top prediction or torch.topk to get the top-k prediction.
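
A minimal sketch; note that nn.CrossEntropyLoss expects raw logits, since it applies log-softmax internally:

import torch
import torch.nn as nn

logits = torch.randn(8, 10)                     # batch of 8, 10 classes
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(logits, labels)

preds = torch.argmax(logits, dim=1)             # top-1 predictions
accuracy = (preds == labels).float().mean()
top5 = torch.topk(logits, k=5, dim=1).indices   # top-5 predictions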

Debugging

If you get a cuda kernel error, you can rerun with the environment variable CUDA_LAUNCH_BLOCKING=1 to get the correct line in the stack trace.

CUDA_LAUNCH_BLOCKING=1 python app.py


For the following error:

CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx(...)`

First check all your tensor types and shapes.
If you've checked all your tensor shapes and types, you can try running with the environment variable:

CUBLAS_WORKSPACE_CONFIG=:0:0


TensorBoard

See PyTorch Docs: Tensorboard

from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir="./runs")

# Calculate loss. Increment the step.

writer.add_scalar("train_loss", loss.item(), step)

# Optionally flush e.g. at checkpoints
writer.flush()

# Close the writer (will flush)
writer.close()

Libraries

A list of useful libraries

torchvision

https://pytorch.org/vision/stable/index.html

Official tools for image manipulation, such as blurring and drawing bounding boxes.
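
For example (a small sketch, assuming torchvision is installed):

import torch
from torchvision.transforms import functional as TF

img = torch.rand(3, 32, 32)                     # (C, H, W)
blurred = TF.gaussian_blur(img, kernel_size=5)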

torchmetrics

https://torchmetrics.readthedocs.io/en/stable/

Various metrics such as PSNR, SSIM, and LPIPS.
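
For example (a small sketch, assuming torchmetrics is installed):

import torch
from torchmetrics.image import PeakSignalNoiseRatio

psnr = PeakSignalNoiseRatio(data_range=1.0)
value = psnr(torch.rand(1, 3, 16, 16), torch.rand(1, 3, 16, 16))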

PyTorch3D

PyTorch3D

Facebook library with differentiable renderers for meshes and point clouds.