Logging in
All cluster access is via ssh to luna.mskcc.org:
ssh username@luna.mskcc.org
General Policies
All jobs must run through the LSF scheduler. There are no other resources available for running your jobs directly. If a job is found running outside of the scheduler on the compute nodes or on any of the head nodes, the job will be terminated and you will be notified. No attempt will be made to recover data for jobs run outside of allowed resources.
All jobs must request memory, iounits and runtime resources. Jobs submitted without these flags currently receive soft limits and low defaults. If your job exceeds these default 'soft' settings, it will very likely over-subscribe nodes that do not have sufficient resources, causing failures for your job and any others running on the same systems. It is your responsibility to request the resources your job will need, both to ensure fair sharing and to keep the environment stable for the rest of the users. Consistent failure to request resources, where it causes issues for other users, will result in account constraints that prevent your jobs from running unless you explicitly set resource requests.
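Taken together, a submission script that requests all three resources might look like the sketch below. The values are illustrative, and the iounits line is an assumption written in LSF's generic rusage syntax; confirm the exact site-specific flag with HPC staff. The #BSUB directives are read by LSF when the script is submitted with `bsub < script.sh`.

```shell
#!/bin/bash
# Sketch of a submission script requesting memory, iounits and runtime.
# Soft (scheduling) memory request, in GB:
#BSUB -R "rusage[mem=4]"
# Hard memory limit, in GB:
#BSUB -M 6
# Estimated runtime (80 minutes) and hard run limit (2 hours):
#BSUB -We 80
#BSUB -W 2:00
# iounits request -- assumed syntax, check with HPC staff:
#BSUB -R "rusage[iounits=2]"
status="resources requested"
echo "$status"
# ... the actual workload command goes here ...
```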
Do NOT run compute jobs on the login node. You may ssh to selene.mskcc.org to run local (non-LSF) commands. Jobs found running on the login nodes will be killed, and you will be notified of the kill with reference to these policies.
Each user home directory has a 100G quota.
Additional disk space for lab data is available upon request.
Do not use /tmp as a scratch space.
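Instead of writing to /tmp, a job can create and clean up its own working directory under a scratch area. The SCRATCH_ROOT below is a hypothetical location; substitute the scratch path assigned to your lab.

```shell
#!/bin/bash
# SCRATCH_ROOT is a hypothetical example -- use your lab's scratch path.
SCRATCH_ROOT="${SCRATCH_ROOT:-$HOME/scratch}"
# Create a unique per-job working directory instead of using /tmp:
WORKDIR="$SCRATCH_ROOT/job_$$"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1
echo "working in $WORKDIR"
# ... run the job here ...
# Clean up when finished:
cd / && rm -rf "$WORKDIR"
```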
New LSF Rules
Memory Request Rules:
- Both a soft memory limit (-R "rusage[mem=GB]") and a hard memory limit (-M GB) should be requested.
- Default if nothing is set: soft is 8 GB and hard is 8 GB.
- If hard is set but not soft: soft = hard.
- If soft is set but not hard: hard = 1.3 * soft.
bsub arguments:
- To set the soft (scheduling) memory request, use -R "rusage[mem=x]" where x is an integer (GB).
- To set the hard memory limit, use -M x where x is an integer (GB).
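The defaulting rules above can be sketched as a short shell snippet. This is not LSF source, just an illustration of the documented logic, here with only the soft request set so the hard limit is derived as 1.3 * soft.

```shell
#!/bin/bash
# Mirrors the documented memory defaulting rules (illustrative only).
soft=10   # GB, as would be passed via -R "rusage[mem=10]"
hard=""   # GB, as would be passed via -M; empty means "not set"
if [ -z "$soft" ] && [ -z "$hard" ]; then
  soft=8; hard=8                          # nothing set: soft 8 GB, hard 8 GB
elif [ -n "$hard" ] && [ -z "$soft" ]; then
  soft=$hard                              # hard set but not soft: soft = hard
elif [ -n "$soft" ] && [ -z "$hard" ]; then
  hard=$(awk "BEGIN{print $soft*1.3}")    # soft set but not hard: hard = 1.3*soft
fi
echo "soft=${soft}GB hard=${hard}GB"
```

With soft=10 and no hard limit, the derived hard limit is 13 GB.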
Runtime Request Rules for short jobs:
- Default if nothing is set: nothing; your job will be considered long and be scheduled as such.
- If the estimated runtime is set to < 60 min but no run limit is set: run limit = 2 * estimated runtime.
- If the estimated runtime is set to > 59 min: no run limit is set.
- If the run limit is set but not the estimated runtime: nothing; an estimated runtime is not needed when a run limit is set.
bsub arguments:
- To set the estimated runtime, use -We x where x is HOURS:MINUTES or just minutes (example: -We 1:20 or -We 80).
- To set the run limit, use -W x where x is HOURS:MINUTES or just minutes (example: -W 1:20 or -W 80).
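A minimal job script declaring both runtime values might look like this (values are illustrative; #BSUB directives are read by LSF when submitted with `bsub < script.sh`):

```shell
#!/bin/bash
# Estimated runtime of 45 minutes -- per the rules above, LSF would
# derive a 90-minute run limit if -W were omitted:
#BSUB -We 45
# Explicit run limit of 1 hour 30 minutes (equivalently -W 90):
#BSUB -W 1:30
status="runtime requests declared"
echo "$status"
# ... the actual workload command goes here ...
```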
...
What to do if you need help
Please report any problems to hpc-request@cbio.mskcc.org.
Globally-installed software
All saba and luna compute servers run CentOS 6 Linux. The /opt/common directory is mounted on all compute servers in both the saba and luna clusters. It includes many Next-Gen Sequence Analysis packages, such as bwa and samtools, and additional versions of python, biopython, R and BioConductor, among others. In an effort to keep track of frequently changing software versions, we are now using this path structure:
...
You can set these paths in your .bashrc or use the full path in your code. If you put paths in your .bashrc, make sure that you source it or otherwise add the path in your LSF job scripts.
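For example, a tool directory under /opt/common can be prepended to PATH in ~/.bashrc. The /opt/common/bin directory below is a hypothetical example; use the versioned subpath from the structure above for the tool you need.

```shell
# Prepend a tool directory to PATH (hypothetical example directory):
export PATH="/opt/common/bin:$PATH"
# Confirm the directory is now on PATH:
case ":$PATH:" in
  *":/opt/common/bin:"*) echo "path configured" ;;
esac
```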
If you need additional software, check with us to see if we already have it installed. We can install packages for you globally or help you with your own install.