...

All staff, including members of a lab, core, or service group, have a dedicated desktop computer. We have full access to the resources of the High Performance Compute Group, which include a high-performance, parallel computing infrastructure with supporting storage for data management. These assets enable detailed simulations, complex analyses, and comprehensive database searches, supporting the identification of functional pathways and markers in medical genetics and oncology, and the creation of comprehensive polymorphism databases.

The hardware resources comprise a total of 2,772 CPU cores (4,320 compute threads) and 372 GPUs, with 3.6PB of fast disk storage and 36.8PB of 'warm' archive storage space. This includes:

32 nodes with 36 cores, 4 GPUs, 512GB RAM and 2TB local storage

30 nodes with 16 cores, 4 GPUs, 256GB RAM and 3TB local storage

16 nodes with 8 cores, 32GB RAM, and 500GB of local storage

32 nodes with 12 cores, 96GB RAM and 1TB local storage

36 nodes with 24 cores, 512GB RAM and 4TB local storage

6 nodes with 32 cores, 2 GPUs, 512GB RAM and 4TB local storage

2 compute nodes with 32 cores, 1TB RAM and 1.6TB local storage

24 compute nodes with 4 to 8 cores and ~256GB RAM

7 nodes with 2 x 22 cores, 512GB RAM and 8TB local NVMe storage

The HPC group also provides several large general-purpose compute servers, each with 8 to 40 cores and 64 to 1,024GB of RAM. A GPFS-based storage cluster provides 2.2PB of disk space for use in computation and for archiving. An Isilon storage cluster provides an additional 1.2PB of disk space for compute storage, archiving, and data sharing. The compute nodes are managed by the SGE and Torque resource managers. In addition, there are several special-purpose servers that provide resources such as mail services, web applications, and databases.

...