UMIACS Servers

if command_exists module ; then
   # Load commonly used modules when the module command is available.
   module load tmux
   module load cuda/10.2.89
   module load cudnn/v8.0.4
   module load Python3/3.7.6
   module load git/2.25.1
   module load gitlfs
   module load gcc/8.1.0
   module load openmpi/4.0.1
   module load ffmpeg
   module load rclone
fi
if command_exists python3 ; then


==Copying Files==
There are 3 ways that I use to copy files:
* For small files, you can copy to your home directory under <code>/nfshomes/</code> via SFTP to the submission node. I rarely do this because the home directory is only a few gigs.
* For large files and folders, I typically use [[rclone]] to copy to the cloud and then copy back to the scratch drives with a cpu-only job (see the sketch after this list).
** You can store project files on Google Drive or the UMIACS object storage.
** Note that Google Drive has a limit on files per second (so avoid transferring thousands of small files directly) and a daily limit of 750GB in transfers.
* For mounting, I have a convoluted system where I start SSHD in a job and forward the SSH port to my local PC. See above for more details.
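
To make the rclone workflow concrete, here is a minimal sketch. It assumes a Google Drive remote named <code>gdrive</code> has already been set up with <code>rclone config</code>; the folder names and the scratch path are placeholders, not the exact paths used on the cluster:

 # On your local machine: push a project folder to the Google Drive remote
 # ("gdrive" is a placeholder remote name created earlier with rclone config).
 rclone copy -P ./my_project gdrive:backups/my_project
 # Then, inside a cpu-only job on the cluster, pull it back down to a scratch drive
 # (replace /scratch0/my_project with your actual scratch directory).
 rclone copy -P gdrive:backups/my_project /scratch0/my_project

<code>-P</code> just shows transfer progress. Because of the files-per-second limit, it is usually faster to tar a folder with many small files and transfer a single archive.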