PyTorch
PyTorch is a popular machine learning library developed by Facebook.
Installation
# If using conda, python 3.5+, and CUDA 10.0 (+ compatible cudnn)
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
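After installing, a quick sanity check is to query the installed version and CUDA availability from Python:

import torch
print(torch.__version__)          # installed version, e.g. 1.4.0 for this CUDA 10.0 build
print(torch.cuda.is_available())  # True if the CUDA build can see a usable GPU and driver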
Getting Started
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model, loss, optimizer, and data so the loop runs end to end;
# substitute your own network and DataLoader here.
net = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)
trainloader = DataLoader(TensorDataset(torch.randn(100, 10),
                                       torch.randint(0, 2, (100,))),
                         batch_size=4)
epochs = 2

# Training
for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # accumulate the loss as a plain Python number
        running_loss += loss.item()
Usage
torch.nn.functional
PyTorch documentation: https://pytorch.org/docs/stable/nn.functional.html
F.grid_sample
Documentation: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
F.grid_sample samples the input tensor at the coordinates given by a flow-field grid, using bilinear or nearest-neighbor interpolation. This makes it useful for resizing or warping images.
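A minimal sketch, assuming a random (N, C, H, W) batch: an identity affine grid built with F.affine_grid should reproduce the input when passed through F.grid_sample.

import torch
import torch.nn.functional as F

imgs = torch.randn(1, 3, 8, 8)              # (N, C, H, W) input batch
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])   # identity affine transform, shape (N, 2, 3)
grid = F.affine_grid(theta, imgs.shape, align_corners=False)       # (N, H, W, 2) coords in [-1, 1]
warped = F.grid_sample(imgs, grid, mode='bilinear', align_corners=False)
print(torch.allclose(warped, imgs, atol=1e-6))  # identity warp reproduces the input

Replacing the identity theta with a rotation, scale, or translation matrix warps the whole batch in one differentiable call.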
Memory Usage
Reducing memory usage
- Save the loss with .item() (https://pytorch.org/docs/stable/tensors.html#torch.Tensor.item), which returns a standard Python number rather than a tensor that keeps the autograd graph alive (see the sketch below).
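A minimal sketch of the difference, assuming a toy loss tensor: accumulating the tensor itself keeps its autograd graph reachable, while .item() stores only a Python float.

import torch

x = torch.randn(4, 3, requires_grad=True)
loss = (x ** 2).mean()

# Accumulating the tensor keeps the whole autograd graph reachable through the sum:
running_loss_tensor = 0
running_loss_tensor = running_loss_tensor + loss

# .item() stores only a plain Python float, so the graph can be freed after backward():
running_loss = 0.0
running_loss += loss.item()
print(type(loss.item()))  # <class 'float'>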