For your grant or project goals, you may at times be asked to provide details about the computing applications and tools available for your research. You may copy and paste the following text for this purpose. If you require additional details or written support, please contact us directly.

Computational equipment:

All staff, including members of labs, cores, and service groups, have dedicated desktop computers. We have full access to the resources of the High Performance Compute Group, which include a high-performance, parallel computing infrastructure with supporting storage for data management. These assets enable detailed simulations, complex analyses, and comprehensive database searches, supporting the identification of functional pathways, the discovery of markers in medical genetics and oncology, and the creation of comprehensive polymorphism databases.

The hardware resources comprise a total of 2,772 CPU cores and 372 GPUs, with 3.6PB of fast disk storage and 6.0PB of ‘warm’ archive storage. This includes:

32 nodes with 36 cores, 4 GPUs, 512GB RAM, and 2TB of local storage

30 nodes with 16 cores, 4 GPUs, 256GB RAM, and 3TB of local storage

16 nodes with 8 cores, 32GB RAM, and 500GB of local storage

32 nodes with 12 cores, 96GB RAM, and 1TB of local storage

36 nodes with 24 cores, 512GB RAM, and 4TB of local storage

6 nodes with 32 cores, 2 GPUs, 512GB RAM, and 4TB of local storage

2 compute nodes with 32 cores, 1TB RAM, and 1.6TB of local storage

24 compute nodes with 4–8 cores and ~256GB RAM

7 nodes with 2 × 22 cores, 512GB RAM, and 8TB of local NVMe storage

The HPC group also provides several large general-purpose compute servers, each with 8 to 40 cores and 64 to 1,024GB of RAM. A GPFS-based storage cluster provides 2.2PB of disk space for computation and archiving. An Isilon storage cluster provides an additional 1.2PB of disk space for compute storage, archiving, and data sharing. The compute nodes are managed by the SGE and Torque resource managers. In addition, there are several special-purpose servers that provide services such as mail, web applications, and databases.
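As a brief illustration of how compute jobs are typically submitted to the SGE-managed nodes, the following Python sketch writes a minimal batch script and submits it with qsub. This is only a sketch: the parallel environment name (smp), the resource values, and the analysis command are illustrative assumptions rather than site-specific settings.

import subprocess
import tempfile
import textwrap

# A minimal SGE batch script; lines beginning with '#$' are scheduler
# directives. The slot count, memory request, and analysis command are
# hypothetical placeholders.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #$ -cwd
    #$ -pe smp 4
    #$ -l h_vmem=8G
    ./run_analysis.sh
""")

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as handle:
    handle.write(job_script)
    script_path = handle.name

# Submit the job; this assumes SGE's qsub command is available, as it
# would be on a cluster login node.
subprocess.run(["qsub", script_path], check=True)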

Software resources include commercial and academic software used in the analysis of sequences, structures, gene expression profiles, and pathways. Available software and programming languages include, but are not limited to, MATLAB, R, Python, ImageMagick, ImageJ, MySQL, MongoDB, Docker, Ruby, Java, and Perl.

Accounting:

The HPC Center is jointly funded by the Sloan Kettering Institute (SKI) and Memorial Sloan Kettering Cancer Center (MSKCC). The two corporate entities (research and clinical) have shared high-performance computing needs, so an agreement is in place for funding the HPC Center based on current and historical needs: SKI and MSKCC share the HPC Center’s operational costs at approximately 30% and 70%, respectively. Operational support for the HPC Center includes salaries, license fees for research applications, and maintenance costs for infrastructure. Capital support is provided for yearly hardware requirements and is split 50/50 between SKI and MSKCC. This does not include any additional funding that may be awarded through HPC Center grant initiatives or seed funding that provides investments toward necessary research capital.

The SKI funding provided for research operations and capital must be recouped in order to maintain the financial viability of supporting the institution’s research IT needs. Research administration and accounting therefore require resource-providing departments (or “core centers”) to recover 80% of the SKI funding contributions. Cost recovery is achieved through monthly user and group fees plus storage usage fees. Data storage is provided in three tiers: compute, non-compute, and archive, at $35/TB/month, $8/TB/month, and $16/TB/month, respectively. To date, this has generated sufficient recovery to satisfy SKI’s institutional financial requirements while heavily subsidizing the actual technology costs to the end principal investigators.
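For budgeting purposes, a lab’s monthly storage charge follows directly from these tier rates. The Python sketch below computes an example monthly bill; the per-TB rates are those quoted above, while the usage figures for the example lab are hypothetical.

# Storage tier rates quoted above, in dollars per TB per month.
RATES = {"compute": 35, "non_compute": 8, "archive": 16}

def monthly_storage_fee(usage_tb):
    """Return the total monthly storage charge for a dict of {tier: TB used}."""
    return sum(RATES[tier] * tb for tier, tb in usage_tb.items())

# Hypothetical lab footprint: 10TB compute, 50TB non-compute, 100TB archive.
# 10*35 + 50*8 + 100*16 = 350 + 400 + 1,600 = $2,350 per month.
print(monthly_storage_fee({"compute": 10, "non_compute": 50, "archive": 100}))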