This page documents frequently asked questions about Lilac storage. Lilac storage is divided into four categories.
Lilac home storage:
- Description: GPFS shared parallel filesystem, replicated, not backed up.
- Purpose: To store software-related code and scripts. The default quota is small and fixed.
- Mount: /home/<user>
- Access: All Lilac nodes, including compute and login nodes.
- Default quota: 100 GB
- Snapshots: 7 days of snapshots (not backed up). Accessible at /home/.snapshots/<user>
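A file accidentally deleted from your home directory can be copied back from the snapshot copy. A minimal sketch, using the snapshot path shown above (the file name is illustrative):
# Browse the snapshot copy of your home directory
ls /home/.snapshots/<user>/
# Copy the deleted file back into your live home directory
cp /home/.snapshots/<user>/myscript.sh /home/<user>/myscript.sh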
Lilac compute storage:
- Description: GPFS shared parallel filesystem, replicated, not backed up.
- Purpose: For jobs to read and write compute data from login and compute nodes. The default quota is larger, and increases can be requested.
- Mount: /data/<lab group>
- Access: All Lilac nodes, including compute and login nodes.
- Default quota: 5 TB (increased/decreased on request)
- Snapshots: 7 days of snapshots (not backed up). Accessible at /data/.snapshots/<date>/<lab group>
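Compute-storage snapshots are organized by date. A minimal sketch of restoring a lost file from a snapshot (the file path is illustrative):
# List the dated snapshots that are available
ls /data/.snapshots/
# Copy the lost file back from a snapshot into the live filesystem
cp /data/.snapshots/<date>/<lab group>/results/run1.out /data/<lab group>/results/run1.out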
Lilac warm storage:
- Description: GPFS shared parallel filesystem, not replicated, not backed up. Slower than Lilac compute storage.
- Purpose: To store long-term data. Only accessible from login nodes; it cannot be accessed from compute nodes (see the example after this list).
- Mount: /warm/<lab group>
- Access: Lilac and Luna login nodes only.
- Default quota: 5 TB (increased/decreased on request)
- Snapshots: 7 days of snapshots (not backed up). Accessible at /warm/.snapshots/<date>/<lab group>
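Because warm storage is mounted only on login nodes, migrating completed data out of compute storage must be done from a Lilac login node. A minimal sketch (the project directory name is illustrative):
# On a Lilac login node: copy a finished project from compute storage to warm storage
rsync -a /data/<lab group>/finished_project/ /warm/<lab group>/finished_project/
# After verifying the copy, remove the original to free compute-storage quota
rm -rf /data/<lab group>/finished_project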
Lilac local scratch storage:
- Description: XFS filesystem, not replicated, not backed up. Local to each node (not a shared filesystem) and slower than GPFS.
- Purpose: To store temporary data for compute jobs. Since this is not a shared filesystem, temporary data needs to be copied back to a shared filesystem and cleaned up after job completion (see the sketch after this list).
- Mount: /scratch/
- Access: Only Lilac compute nodes.
- Default quota: No quota; usage is limited by the free disk space in /scratch.
- Snapshots: No snapshots.
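A minimal job-script sketch of this stage-in/stage-out pattern, assuming the input data lives under /data/<lab group>; the analyze program and file names are illustrative:
#!/bin/bash
# Create a private working directory on the node-local scratch disk
WORKDIR=$(mktemp -d "/scratch/$USER.XXXXXX")
# Stage input data from shared compute storage to local scratch
cp /data/<lab group>/inputs/sample.dat "$WORKDIR"/
# Run the computation against the local copy
cd "$WORKDIR" && ./analyze sample.dat > results.out
# Copy results back to shared storage, then clean up local scratch
cp "$WORKDIR"/results.out /data/<lab group>/results/
rm -rf "$WORKDIR"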
How to:
Check quota on the GPFS filesystems:
Since blocks on Lilac GPFS home and compute storage are replicated, the reported quota usage is double the apparent size of the data; for example, a 1 TB dataset counts as 2 TB against the block quota.
Lilac home storage:
mmlsquota lila:home --block-size auto
Lilac compute storage:
mmlsquota lila:data_<lab group name> --block-size auto
df -h /data/<lab group name>
Lilac warm storage (oscar):
mmlsquota oscar:warm_<lab group name> --block-size auto
df -h /warm/<lab group name>
mmlsquota also reports the quota on the number of files (inodes), in addition to the block quota.
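The commands above can be combined into a small convenience script. A minimal sketch, assuming your lab group name is passed as the first argument and the fileset names follow the patterns shown above (the script name is hypothetical):
#!/bin/bash
# Usage: ./check_quota.sh <lab group name>   (hypothetical script name)
GROUP="$1"
# Home quota (replicated: usage shows double the apparent data size)
mmlsquota lila:home --block-size auto
# Compute storage quota and mount usage
mmlsquota lila:data_"$GROUP" --block-size auto
df -h /data/"$GROUP"
# Warm storage quota and mount usage
mmlsquota oscar:warm_"$GROUP" --block-size auto
df -h /warm/"$GROUP"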
Copy files from other clusters:
The HAL cluster is outside the firewall, so Lilac cannot be accessed directly from the HAL cluster.
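A minimal sketch of pulling data from HAL onto Lilac compute storage, assuming the transfer is initiated from a Lilac login node and that the HAL login node accepts SSH connections from inside the firewall (the remote path is a placeholder):
# Run on a Lilac login node: pull data from HAL into compute storage
rsync -avP <user>@<HAL login node>:/path/on/hal/dataset/ /data/<lab group>/dataset/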