The main difference is that it uses multiple processes instead of multiple threads, which works around the Python Global Interpreter Lock (GIL).
It also supports training on GPUs across multiple ''nodes'', or computers.
Using this is quite a bit more work than nn.DataParallel.
You may want to consider using PyTorch Lightning, which abstracts this away; a minimal sketch of the manual setup is shown below.
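Below is a minimal sketch of a DistributedDataParallel training script (the model, data, and hyperparameters are placeholders), assuming it is launched with torchrun so that one process is started per GPU, e.g. <code>torchrun --nproc_per_node=2 train.py</code>:

<syntaxhighlight lang="python">
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; each process holds its own replica on its own GPU.
    model = nn.Linear(10, 1).cuda(local_rank)
    # Gradients are all-reduced across processes during backward().
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(10):
        # Placeholder data; in practice use a DistributedSampler so each
        # process sees a different shard of the dataset.
        x = torch.randn(32, 10, device=local_rank)
        y = torch.randn(32, 1, device=local_rank)
        loss = nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
</syntaxhighlight>

PyTorch Lightning hides the process-group setup, the DDP wrapping, and the data sharding behind its Trainer, which is why it is often preferred for multi-GPU or multi-node training.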


==Optimizations==