Job scheduler on BioHPC

The SLURM cluster cbsueccosl01 is maintained by Lars on behalf of the Econ group and spans several nodes. Some nodes are dedicated to the SLURM scheduler; others are “borrowed” and may not always be available. Depending on which nodes are up, between 48 and 144 “slots” (CPUs) are available for compute jobs. The following table shows the allocated nodes.

(Table of allocated nodes, with columns: node name, allocation, CPU benchmark (single thread), cores, RAM, local storage in TB, model, cores per CPU, CPUs, vintage.)

Latest availability

The above list is static. To see which nodes are available at any point in time, run

sinfo --cluster cbsueccosl01

in a terminal window on the head node,[1] which produces output such as

$ sinfo --cluster cbsueccosl01
CLUSTER: cbsueccosl01
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
regular*     up   infinite      3    mix cbsuecco[07-08],cbsueccosl03
regular*     up   infinite      1  alloc cbsueccosl04
regular*     up   infinite      2   idle cbsuecco01,cbsueccosl01

which shows that six nodes are currently available for jobs: two are idle, three have some jobs running but can still accept smaller jobs (mix means some CPUs are free), and one is fully used (alloc).

The most recent recorded status (as of the date noted below) is:

As of 2024-12-19:

CLUSTER: cbsueccosl01
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
regular*     up   infinite      6    mix cbsuecco[07,09-12],cbsueccosl01
regular*     up   infinite      5   idle cbsuecco[01-02,08],cbsueccosl[03-04]
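
Beyond this summary, standard Slurm options can break availability down per node. The sketch below uses sinfo's format string to list free versus allocated CPUs on each node (the %C field prints allocated/idle/other/total counts), and squeue to show the current job queue; the exact format string is an illustration, not a site-specific requirement.

sinfo --cluster cbsueccosl01 -N -o "%N %C %t"   # per node: name, CPUs allocated/idle/other/total, state
squeue --cluster cbsueccosl01                   # jobs currently queued or running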

Who can use

Everybody in the ECCO group can submit jobs.
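
To submit a job, write a batch script and hand it to sbatch. The following is a minimal, hypothetical sketch: the job name, resource requests, and the program invocation (./myprog) are placeholders to adapt, and only standard Slurm directives are used.

#!/bin/bash
#SBATCH --job-name=example       # placeholder name for the job
#SBATCH --ntasks=1               # run a single task
#SBATCH --cpus-per-task=4        # request 4 CPUs ("slots")
#SBATCH --mem=8G                 # request 8 GB of memory
#SBATCH --time=02:00:00          # wall-clock limit of 2 hours

./myprog                         # placeholder: replace with your own program

Saved as, say, myjob.sh, the script is submitted with sbatch --cluster cbsueccosl01 myjob.sh, and squeue (shown above) then tracks its progress.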