This page answers common questions about Lilac storage. Lilac storage is divided into four main categories:

Lilac home storage:

    • Description: GPFS shared parallel filesystem, replicated, and not backed up.
    • Purpose: To store software-related code and scripts. The default quota is small and fixed.
    • Mount: /home/<user>
    • Access: All Lilac nodes, including compute and login nodes.
    • Default quota: 100 GB
    • Snapshots: 7 days of snapshots (not backed up), accessible under /home/.snapshots/<user>. See the example after this list.
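
    For example, to recover an accidentally deleted file from a home snapshot, list the snapshot directory and copy the file back (the file name myscript.sh is hypothetical):

      Command line
      # myscript.sh is a placeholder file name
      ls /home/.snapshots/<user>
      cp /home/.snapshots/<user>/myscript.sh /home/<user>/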

Lilac compute storage:

    • Description: GPFS shared parallel filesystem, replicated, and not backed up.
    • Purpose: For jobs to read and write compute data from login and compute nodes. The default quota is larger, with the flexibility to request an increase.
    • Mount: /data/<lab group>
    • Access: All Lilac nodes, including compute and login nodes.
    • Default quota: 5 TB (increased or decreased on request)
    • Snapshots: 7 days of snapshots (not backed up), accessible under /data/.snapshots/<date>/<lab group>. See the example after this list.
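
    For example, to see which snapshot dates are available and copy back an older version of a file (the file name results.csv is hypothetical):

      Command line
      # results.csv is a placeholder file name; pick a real <date> from the listing
      ls /data/.snapshots/
      cp /data/.snapshots/<date>/<lab group>/results.csv /data/<lab group>/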

Lilac warm storage:

    • Description: GPFS shared parallel filesystem, not replicated, and not backed up. Slower than Lilac compute storage.
    • Purpose: To store long-term data. Warm storage is only accessible from login nodes and cannot be accessed from compute nodes.
    • Mount: /warm/<lab group>
    • Access: Lilac and Luna login nodes only.
    • Default quota: 5 TB (increased or decreased on request)
    • Snapshots: 7 days of snapshots (not backed up), accessible under /warm/.snapshots/<date>/<lab group>. See the example after this list.
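
    For example, to archive a finished project from compute storage to warm storage, run the copy from a Lilac login node (the directory name finished_project is hypothetical):

      Command line
      # finished_project is a placeholder directory name; run on a login node only
      rsync -a /data/<lab group>/finished_project/ /warm/<lab group>/finished_project/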

Lilac local scratch storage:

    • Description: XFS filesystem, not replicated, and not backed up. Local to each node (not a shared filesystem) and slower than GPFS.
    • Purpose: To store temporary data for compute jobs. Since this is not a shared filesystem, temporary data must be copied back to a shared filesystem and cleaned up after the job completes. See the sketch after this list.
    • Mount: /scratch/
    • Access: Lilac compute nodes only.
    • Default quota: No quota; usage is limited by the free disk space in /scratch.
    • Snapshots: No snapshots.
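
    A minimal sketch of the stage-in/stage-out pattern for local scratch, assuming the program my_analysis and the file names input.dat and output.dat are placeholders for your own job:

      Command line
      # create a private working directory on node-local scratch
      WORKDIR=$(mktemp -d /scratch/<user>.XXXXXX)
      # stage input from shared compute storage and run the job locally
      # (my_analysis, input.dat, output.dat are placeholder names)
      cp /data/<lab group>/input.dat "$WORKDIR"/
      cd "$WORKDIR" && my_analysis input.dat > output.dat
      # copy results back to shared storage and clean up local scratch
      cp "$WORKDIR"/output.dat /data/<lab group>/
      rm -rf "$WORKDIR"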

How to:

  1. Check quota on the GPFS filesystems:

    Since blocks on Lilac GPFS home and compute storage are replicated, quota usage is double the apparent size of the data. For example, 50 GB of files in /home consumes roughly 100 GB of quota.

        • Lilac home storage:

          Command line
          mmlsquota lila:home --block-size auto

        • Lilac compute storage:

          Command line
          mmlsquota lila:data_<lab group name> --block-size auto

          Command line
          df -h /data/<lab group name>

        • Lilac warm storage (oscar):

          Command line
          mmlsquota oscar:warm_<lab group name> --block-size auto

          Command line
          df -h /warm/<lab group name>

        mmlsquota also reports the quota on the number of files, along with the block quota.

  2. Copy files from other clusters:

    The HAL cluster is outside the firewall, so Lilac cannot be accessed directly from the HAL cluster.
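
    One possible approach, assuming the HAL login node can be reached over SSH from a Lilac login node (the <hal login node> hostname and the source path below are placeholders), is to initiate the copy from the Lilac side:

          Command line
          # run on a Lilac login node: pull data from HAL over SSH into compute storage
          # <hal login node> and /path/to/data/ are placeholders
          rsync -avP <user>@<hal login node>:/path/to/data/ /data/<lab group>/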