Lilac home storage :
- Description: GPFS shared parallel filesystem; not replicated and not backed up.
- Purpose: To store software-related code and scripts. The default quota is small and fixed.
- Mount: /home/<user>
- Access: All Lilac nodes, including compute, storage, and login nodes.
- Default quota: 100GB
- Snapshots: 7 days of snapshots (not backed up). Accessible in /home/.snapshots/<user>
- Replicated: no
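The snapshot path above can be used to recover a recently deleted or overwritten file yourself. The sketch below shows the copy pattern; the file name is illustrative, and temporary directories stand in for the real /home/.snapshots/<user> and home directory paths so the example runs anywhere.

```shell
# On Lilac you would first list the available snapshot dates:
#   ls /home/.snapshots/
# and then copy a file out of the chosen snapshot back into $HOME.
# Here, temp dirs simulate the snapshot and home directories.
snapdir=$(mktemp -d)    # stands in for /home/.snapshots/<date>/<user>
homedir=$(mktemp -d)    # stands in for $HOME

echo "important data" > "$snapdir/notes.txt"    # file that survives only in the snapshot

# Restore the file from the snapshot into the home directory.
cp "$snapdir/notes.txt" "$homedir/notes.txt"
cat "$homedir/notes.txt"    # prints "important data"
```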
Lilac compute storage :
- Description: GPFS shared parallel filesystem; not replicated and not backed up.
- Purpose: For jobs to read and write compute data from login and compute nodes. The default quota is larger, with the flexibility to request more.
- Mount: /data/<lab group>
- Access: All Lilac nodes, including compute, storage, and login nodes.
- Default quota: 5TB 1TB (increased or decreased on request)
- Snapshots: 7 days of snapshots (not backed up). Accessible in /data/.snapshots/<date>/<lab group>
- Replicated: no
Lilac warm storage :
- Description: GPFS shared parallel filesystem; not backed up and not yet replicated (replication is planned for the near future). Comparatively slower than Lilac compute storage.
- Purpose: To store long-term data. Only accessible from login nodes; cannot be accessed from compute nodes.
- Mount: /warm/<lab group>
- Access: Only Lilac and Luna login nodes.
- Default quota: 5TB 1TB (increased or decreased on request)
- Snapshots: 7 days of snapshots (not backed up). Accessible in /warm/.snapshots/<date>/<lab group>
- Replicated: no (will be replicated in the near future)
Lilac local scratch storage :
- Description: XFS filesystem; local (not shared), not replicated, and not backed up. Slower than GPFS.
- Purpose: To store local temporary data for compute jobs. Because this is not a shared filesystem, temporary data must be copied back to a shared filesystem and cleaned up after job completion.
- Mount: /scratch/
- Access: Only Lilac compute nodes.
- Default quota: No quota; limited to the free disk space in /scratch.
- Snapshots: No snapshots.
- Replicated: no
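The scratch workflow described above (write locally, copy results back, clean up) can be sketched as a job script fragment. This is a minimal illustration, not an official template: the job-directory naming is an assumption, $LSB_JOBID is only set inside an LSF job (a default is supplied so the sketch also runs standalone), and temp dirs stand in for /scratch and /data/<lab group> so it runs anywhere.

```shell
# On Lilac, SCRATCH_ROOT would be /scratch and DEST a directory
# under /data/<lab group>; both are simulated with temp dirs here.
SCRATCH_ROOT=${SCRATCH_ROOT:-$(mktemp -d)}
DEST=${DEST:-$(mktemp -d)}

# Per-job working directory on local scratch (naming scheme is illustrative).
JOBDIR="$SCRATCH_ROOT/${USER:-demo}_${LSB_JOBID:-0}"
mkdir -p "$JOBDIR"

# ... compute step writes its temporary output locally ...
echo "result" > "$JOBDIR/output.dat"

# After the job: copy results back to shared storage, then clean up scratch.
cp "$JOBDIR"/output.dat "$DEST"/
rm -rf "$JOBDIR"
```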
How to :
1. Check quota on the GPFS filesystems:
Lilac home storage :
    mmlsquota lilac:home
Lilac compute storage :
    mmlsquota -j data_<lab group name> --block-size auto lilac
Lilac warm storage (oscar) :
    mmlsquota -j warm_<lab group name> --block-size auto oscar
Tip :
mmlsquota reports the quota on the number of files as well as the block (disk space) quota.
Note for copying files (section 2 below): the HAL cluster is outside the firewall, so Lilac cannot be accessed directly from the HAL cluster.
Once the number of blocks or the number of files reaches the value shown under "quota", the storage system grants a 7-day grace period during which usage may continue to grow, up to the maximum value shown under "limit". The storage system will not allow any more data to be written once the "limit" is reached or the grace period expires.
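The soft-quota/hard-limit behavior can be illustrated with made-up numbers (a 5TB "quota" and a 6TB "limit" here are purely hypothetical, not the actual settings on any Lilac filesystem):

```shell
# Illustrative only: usage between "quota" and "limit" triggers the
# 7-day grace period; usage at or above "limit" blocks further writes.
quota_gb=5120    # soft quota: 5TB
limit_gb=6144    # hard limit: 6TB
usage_gb=5500    # current usage: between quota and limit

if [ "$usage_gb" -ge "$limit_gb" ]; then
    echo "writes denied"
elif [ "$usage_gb" -ge "$quota_gb" ]; then
    echo "grace period active"    # prints this: over quota, under limit
else
    echo "under quota"
fi
```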
2. Copy files from other clusters:
Juno:
To copy files from other clusters, first ssh -A into the other cluster to forward your keys:
    ssh -A $USERNAME@$CLUSTER
We recommend rsync -av to copy files and directories. Make note of the source directory/files and the destination directory/files on Lilac, and copy them as below:
    rsync -av --progress $SOURCEPATH lilac:$DESTPATH
Tip :
- Depending on the size and number of files to copy, you may run multiple rsync commands simultaneously to copy different directories.
- The HPC private network is faster than the MSKCC campus network, so using short names like lilac will often make transfers faster than using the fully qualified domain name lilac.mskcc.org.
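The "multiple simultaneous transfers" tip can be sketched as backgrounded commands, one per directory. On Lilac each line would be an rsync such as `rsync -av --progress $SOURCEPATH/dir1 lilac:$DESTPATH/ &`; here cp into temp dirs stands in for rsync so the pattern is runnable anywhere, and the directory names are illustrative.

```shell
# Simulated source tree with two directories to transfer.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/dir1" "$SRC/dir2"
echo a > "$SRC/dir1/a.txt"
echo b > "$SRC/dir2/b.txt"

# Launch one transfer per directory in the background
# (on Lilac: one rsync per directory), then wait for all to finish.
cp -r "$SRC/dir1" "$DEST/" &
cp -r "$SRC/dir2" "$DEST/" &
wait
```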