...
- Description: GPFS shared parallel filesystem; not backed up.
- Purpose: To store software-related code and scripts; the default quota is small and fixed.
- Mount: /home/<user>
- Access: All Lilac nodes, including compute, storage, and login nodes.
- Default quota: 100GB
- Snapshots: 7 days of snapshots (not backed up), accessible under /home/.snapshots/<user>
- Replicated: yes
Lilac compute storage:
- Description: GPFS shared parallel filesystem; not backed up.
- Purpose: For jobs to read and write compute data from login and compute nodes; the default quota is larger and can be increased on request.
- Mount: /data/<lab group>
- Access: All Lilac nodes, including compute, storage, and login nodes.
- Default quota: 5TB (increased/decreased on request)
- Snapshots: 7 days of snapshots (not backed up), accessible under /data/.snapshots/<date>/<lab group>; see the example after this list.
- Replicated: yes
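For example, a file deleted from /data can be copied back from one of the daily snapshots. This is a minimal sketch; the snapshot names and the project path are hypothetical, so list /data/.snapshots first to see what is actually available:

    # List the available snapshot dates (the exact naming may differ)
    ls /data/.snapshots
    # Copy a lost file back from a snapshot into the live filesystem (hypothetical paths)
    cp /data/.snapshots/<date>/<lab group>/project/results.csv /data/<lab group>/project/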
Lilac warm storage:
- Description: GPFS shared parallel filesystem; not replicated (replication is planned) and not backed up. Slower than Lilac compute storage.
- Purpose: To store long-term data. Accessible only from login nodes, not from compute nodes; see the example after this list.
- Mount: /warm/<lab group>
- Access: Lilac and Luna login nodes only.
- Default quota: 5TB (increased/decreased on request)
- Snapshots: 7 days of snapshots (not backed up), accessible under /warm/.snapshots/<date>/<lab group>
- Replicated: no (replication planned for the near future)
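Because /warm is meant for long-term data and is mounted only on login nodes, archiving usually means copying finished results from /data to /warm while logged in to a Lilac login node. A minimal sketch with hypothetical paths:

    # Run on a lilac login node; /warm is not visible from compute nodes
    rsync -av --progress /data/<lab group>/finished_project/ /warm/<lab group>/finished_project/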
Lilac local scratch storage:
...
Check quota for GPFS filesystems:
...
Lilac home storage:
    mmlsquota lila:home --block-size auto
Lilac compute storage:
    mmlsquota -j data_<lab group name> lila --block-size auto
    df -h /data/<lab group name>
    df -ih /data/<lab group name>
Lilac warm storage (oscar):
    mmlsquota -j warm_<lab group name> oscar --block-size auto
    df -h /warm/<lab group name>
    df -ih /warm/<lab group name>
...
SABA/LUNA/LUX:
To copy files from other clusters, first ssh -A into the other cluster to forward your keys:
    ssh -A $USERNAME@$CLUSTER
We recommend rsync -av to copy files and directories. Make note of the source directory/files and the destination directory/files on Lilac, then copy them as below:
    rsync -av --progress $SOURCEPATH lilac:$DESTPATH
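Before a large transfer it can help to preview what rsync will do. A minimal sketch, assuming a hypothetical source directory on the other cluster and a destination under your lab's /data share:

    # -n (--dry-run) lists what would be transferred without copying anything
    rsync -avn --progress ~/project_data/ lilac:/data/<lab group>/project_data/
    # If the listing looks right, rerun without -n to perform the copy
    rsync -av --progress ~/project_data/ lilac:/data/<lab group>/project_data/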
HAL:
Remember that the hal cluster is outside the MSKCC network and does not have access to lilac. First, make note of the source directory/files on HAL and the destination directory/files on Lilac.
To transfer data, ssh into lilac as below:
    ssh -A $USERNAME@lilac.mskcc.org
Then pull files from HAL:
    rsync -av --progress hal:$SOURCEPATH $DESTPATH
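One rsync detail that applies to all of the transfers above: a trailing slash on the source copies the directory's contents, while omitting it copies the directory itself. The paths below are hypothetical:

    # Copies the directory "results" itself, producing /data/<lab group>/results/
    rsync -av --progress hal:results /data/<lab group>/
    # Copies only the contents of "results" into /data/<lab group>/archive/
    rsync -av --progress hal:results/ /data/<lab group>/archive/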
Tips:
- Make sure you calculate the size of the data you will copy to lilac, and that you have enough space on lilac to avoid hitting your hard quota. lilac uses data replication for safety, so a file containing 1G of data consumes 2G of quota on lilac. You can see the size of files and directories with du, which will show 2G for 1G of file data due to replication; to see file sizes without the replication overhead, use du --apparent-size instead (see the first example after these tips).
- Depending on the size and number of files to copy, you may run multiple rsync commands simultaneously to copy different directories (see the second example after these tips).
- The HPC private network is faster than the MSKCC campus network, so using short names (lilac, saba, luna, selene, etc.) will often make transfers faster than using fully qualified domain names such as luna.mskcc.org. This does not apply to hal, though.
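A minimal sketch of the du comparison from the first tip; the project path is hypothetical:

    # Size as charged against your quota (includes the 2x replication overhead)
    du -sh /data/<lab group>/myproject
    # Size of the file data itself, without the replication overhead
    du -sh --apparent-size /data/<lab group>/myproject

And a sketch of running several transfers at once, one rsync per directory, as suggested in the second tip (directory names are hypothetical):

    # Each rsync copies a different directory; '&' runs them in parallel
    rsync -av --progress dir1/ lilac:/data/<lab group>/dir1/ &
    rsync -av --progress dir2/ lilac:/data/<lab group>/dir2/ &
    wait   # return once all background transfers have finished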
...