# Quick start
## Command line
You need command line access to submit jobs. You do not need a reservation to access a command line; you can connect to a BioHPC login (head) node, `cbsulogin?.biohpc.cornell.edu`.
## One-time setup
### Setting up SLURM-specific settings
From a command line, run the following lines, then log out and back in; from then on you can skip the `--cluster cbsueccosl01` option:

```bash
echo 'export SLURM_CLUSTERS="cbsueccosl01"' >> $HOME/.bash_profile
echo netid@cornell.edu >> $HOME/.forward
```

(Replace `netid` with your NetID in the second command.)
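The first line works because SLURM client commands (`sbatch`, `squeue`, etc.) read the `SLURM_CLUSTERS` environment variable as the default value of `--cluster`. A minimal sketch of what the profile line puts into your environment:

```bash
# Sketch: the .bash_profile line above exports SLURM_CLUSTERS,
# which SLURM commands use as the default for --cluster.
export SLURM_CLUSTERS="cbsueccosl01"
echo "$SLURM_CLUSTERS"
```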
### Enabling software via `module`
```bash
git clone https://github.com/labordynamicsinstitute/biohpc-modules $HOME/.modulefiles.d
```
See Customizing modules for more details.
## Submitting jobs
You can submit from the command line (SSH) on the login nodes `cbsulogin?.biohpc.cornell.edu` (see the access description above). All commands (`sbatch`, `squeue`, `sinfo`, etc.) must be run with the option `--cluster cbsueccosl01` (but see the one-time setup above).
There is only one partition (queue), containing all nodes. The default parameters (changeable through SLURM options at submission; see below) are:

- 1 core and 4 GB RAM per job
- unlimited run time
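These defaults can be overridden with `#SBATCH` directives in a job script. A minimal sketch, assuming a hypothetical script name `myjob.sh` and program `./my_analysis`:

```bash
#!/bin/bash
#SBATCH --job-name=myjob       # hypothetical job name
#SBATCH --ntasks=1             # one task
#SBATCH --cpus-per-task=4      # 4 cores instead of the default 1
#SBATCH --mem=16G              # 16 GB RAM instead of the default 4 GB
#SBATCH --time=24:00:00        # cap the otherwise unlimited run time at 24 hours

./my_analysis                  # hypothetical program
```

Submit it with `sbatch myjob.sh` (or `sbatch --cluster cbsueccosl01 myjob.sh` if you skipped the one-time setup).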
## Interactive shell
An interactive shell can be requested with:

```bash
srun --cluster cbsueccosl01 --pty bash -l
```
or, if you completed the one-time setup above:

```bash
srun --pty bash -l
```
If you need a specific node, use:

```bash
srun -w cbsueccoXX --pty bash -l
```

(replacing `XX` with the node number).
## To see running jobs

```bash
squeue
```
## To cancel a running job

Use

```bash
scancel <jobid>
```

where the job ID can be gleaned from the `squeue` command.