This page documents the various GPU settings and modes for the Lilac cluster in more detail than the Lilac Cluster Intro page provides. It focuses on the newer GTX 1080 GPUs, although many of the options apply to other NVIDIA GPUs as well. Where applicable, this primer discusses how these properties interact with the Lilac cluster.
- GPU: The index of the GPU available to your processes. Indexing always starts from 0, but your job may not have GPU 0 assigned to it, so your indices may be different.
- This example: GPU = 0
- Important: The GPU index does not always map to the physical hardware slot.
- There are ways to re-order the GPUs, or even mask some GPUs from showing up to a given job.
- GPUs assigned to a Lilac job are typically indexed starting at 0, even if the job is running on GPUs in different physical slots on the hardware.
- This indexing cannot change during a job, so you don't have to worry about your GPUs being re-assigned or shuffled while a job is running.
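The re-ordering and masking described above is commonly done through the `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch of the re-indexing behavior (the GPU numbers here are illustrative, not what Lilac will necessarily assign to your job):

```python
import os

# Suppose the scheduler assigned physical GPUs 2 and 3 to this job by
# exporting CUDA_VISIBLE_DEVICES (the values here are illustrative):
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# CUDA re-indexes the visible devices starting from 0, so inside the job
# logical GPU 0 is physical GPU 2, and logical GPU 1 is physical GPU 3:
physical = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
mapping = {logical: int(phys) for logical, phys in enumerate(physical)}
print(mapping)  # {0: 2, 1: 3}
```

Any GPU not listed in the variable is masked entirely: your processes cannot see it at all.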
- Name: The human-readable name of the GPU; quite often this is simply the model name, truncated to fit.
- This example: Name = GeForce GTX 1080
- Memory-Usage: How much memory is being consumed, and how much memory is available in total on this GPU. As you run jobs on the GPU, this memory is consumed. This is NOT the same as the RAM you request as part of your job; this is on-GPU memory, and its total is fixed per GPU.
- This example: Memory-Usage = 2MiB / 8113MiB
- Fun Trivia: MiB is "mebibyte", a base-2 unit of memory, as opposed to the base-10 megabyte. The two are often used interchangeably even though they are not equal, only close: 1 MB = 1000 kB, whereas 1 MiB = 1024 KiB.
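The difference matters more than it looks at GPU scales. Converting the 8113 MiB reported above into base-10 megabytes:

```python
# The GTX 1080 above reports 8113 MiB of memory. In base-10 megabytes:
mib = 8113
bytes_total = mib * 1024**2       # 1 MiB = 1,048,576 bytes
mb_total = bytes_total / 1000**2  # 1 MB  = 1,000,000 bytes
print(round(mb_total))  # 8507
```

So the same physical memory reads about 5% larger when quoted in MB, which is why marketing specs and `nvidia-smi` output rarely match exactly.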
- Compute M.: The "Compute Mode" of the GPU, which governs how the GPU handles process and thread execution. Valid options for the GTX 1080s:
    - Default, which corresponds to the "shared" mode as referenced in the Intro to Lilac documentation.
    - E. Process ("exclusive process"), which is the Lilac cluster's native mode.
- This example: Compute M. = E. Process
- Some NVIDIA GPUs support a thread-exclusive mode, but the GTX 1080s do not, so no further discussion of it will be held here.
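To check which compute mode a GPU is currently in, `nvidia-smi` can query it directly (this must run on a node with an NVIDIA driver, e.g. inside a Lilac GPU job):

```shell
# List each visible GPU with its index, model name, and current compute mode:
nvidia-smi --query-gpu=index,name,compute_mode --format=csv
```

On Lilac nodes you should see `Exclusive_Process` for the GPUs assigned to your job.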
You can enable the CUDA Multi-Process Service (MPS), which allows multiple processes to share a GPU even when it is in exclusive-process mode, by adding the following flag to your bsub submission, or as a
#BSUB directive in your job script:
This sets the environment variable that tells LSF to start the CUDA MPS daemon on the nodes and GPUs assigned to your job.
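The flag itself did not survive in this copy of the page. As a hedged sketch only, assuming the mechanism meant here is LSF's standard `LSB_START_JOB_MPS` environment variable, a job script might look like:

```shell
#!/bin/bash
#BSUB -n 1
#BSUB -gpu "num=1"                # request one GPU (exact syntax is an assumption)
#BSUB -env "LSB_START_JOB_MPS=Y"  # assumption: asks LSF to start the CUDA MPS daemon

# Hypothetical application; with MPS running, multiple processes can
# share the exclusive-process GPU through the MPS daemon:
./my_cuda_app
```

Check the Lilac documentation or `bsub` help for the exact flag used on this cluster before relying on the directive above.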