Quick start#

Command line#

You need command-line access to submit jobs. You do not need a reservation to access a command line: you can connect to a BioHPC login (head) node, cbsulogin?.biohpc.cornell.edu.
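For example, you can connect via SSH (a sketch; replace netid with your NetID, and note that the ? in the hostname stands for the number of a specific login node, so the hostname below is illustrative):

ssh netid@cbsulogin2.biohpc.cornell.edu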

One-time setup#

Setting up SLURM-specific settings#

From a command line, run the following line, log out, then log back in; henceforth you can skip the --cluster cbsueccosl01 option:

echo 'export SLURM_CLUSTERS="cbsueccosl01"' >> $HOME/.bash_profile
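After logging back in, you can verify that the setting took effect (a quick check, assuming a bash login shell):

echo $SLURM_CLUSTERS    # should print cbsueccosl01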

To have local system email (such as job notifications) forwarded to your Cornell address, run the following, replacing netid with your actual NetID:

echo netid@cornell.edu >> $HOME/.forward

Enabling software via module#

git clone https://github.com/labordynamicsinstitute/biohpc-modules $HOME/.modulefiles.d
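Once cloned, the standard module commands can be used to discover and load software (a sketch, assuming Environment Modules/Lmod; the module name stata is a placeholder, and module use may be unnecessary if BioHPC picks up the directory automatically):

module use $HOME/.modulefiles.d   # add the cloned modulefiles to the search path, if needed
module avail                      # list modules now available
module load stata                 # placeholder module name; load whatever you need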

See Customizing modules for more details.

Submitting jobs#

You can submit from the command line (SSH) on the login nodes cbsulogin?.biohpc.cornell.edu (see the access description above). All commands (sbatch, squeue, sinfo, etc.) have to be run with the option --cluster cbsueccosl01 (but see the one-time setup above).

There is only one partition (queue) containing all nodes. The default parameters (changeable through SLURM options at submission; see the example below) are:

  • 1 core and 4 GB RAM per job

  • infinite run time.
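For instance, a minimal batch script that overrides these defaults might look like the following (a sketch; the job name, resource values, and program are placeholders):

#!/bin/bash
#SBATCH --job-name=example       # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # request 4 cores instead of the default 1
#SBATCH --mem=16G                # request 16 GB RAM instead of the default 4 GB

./myprogram                      # placeholder for your actual program

Save it as, say, myjob.sh and submit it with sbatch myjob.sh (add --cluster cbsueccosl01 if you skipped the one-time setup above).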

Interactive shell#

An interactive shell can be requested with the command

srun --cluster cbsueccosl01 --pty bash -l

or, if you completed the one-time setup above:

srun --pty bash -l

If you need a specific node (replace XX with the node number), use

srun -w cbsueccoXX --pty bash -l

You can use all valid SLURM command line options (the same ones that can be listed in an sbatch file) as well. In their absence, you will get default values (limitations).[1] For instance, the above invocations get unlimited memory but are limited to 1 task on 1 CPU. If you need to use more within your interactive shell, you might want to specify

srun -w cbsueccoXX --ntasks 8 --pty bash -l

Interactive GUI jobs#

You should be able to get interactive GUI jobs (Stata, MATLAB) to work as follows:

salloc -N 1 
ssh -X $SLURM_NODELIST /usr/local/stata18/xstata

or

salloc -N 1 
ssh -X $SLURM_NODELIST /local/opt/MATLAB/R2023a/bin/matlab
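When you close the GUI, the ssh command returns to the salloc shell; exit that shell to release the allocated node (a sketch, assuming the standard salloc behavior of opening a shell that holds the allocation):

exit    # leave the salloc shell and release the allocation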

Warning

It is technically feasible to log in to each node without using SLURM. However, this confuses the job scheduler. Do not abuse this; exclusion from use of the cluster may be the consequence.

To see running jobs#

squeue
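To list only your own jobs, you can filter by user (assuming the standard squeue options):

squeue -u $USER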

To cancel a running job#

Use

scancel <jobID>

where the job ID can be gleaned from the output of squeue.
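To cancel all of your own jobs at once (assuming the standard scancel options):

scancel -u $USER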

Additional information#