Job scheduler on BioHPC#

The SLURM cluster cbsueccosl01 is maintained by Lars on behalf of the Econ group and spans several nodes. Some nodes are dedicated to the SLURM scheduler; others are “borrowed” and might not always be available.

As of February 21, 2026 at 01:00 PM, 672 “slots” (CPUs) are available for compute jobs, out of a maximum possible 704 - see the List of nodes table below.

Who can use the cluster#

Everybody in the ECCO group can submit jobs.

Current load#

The most recent status (as of the date and time noted below) is:

As of 2026-02-21 08:00:

CLUSTER: cbsueccosl01
PARTITION   AVAIL  TIMELIMIT  NODES  STATE NODELIST
slow*          up   infinite      3    mix cbsuecco[01,07-08]
fast           up   infinite      2    mix cbsuecco[09,14]
fast           up   infinite      4  alloc cbsuecco[10-13]
lgmem          up   infinite      1    mix cbsuecco02
interactive    up   infinite      2   idle cbsueccosl[03-04]

For more details, see the SLURM Queue page. For an explanation of the “PARTITION” column, see the Queues section below.

Manually query the latest availability#

To see availability at any point in time, type

sinfo --cluster cbsueccosl01

in a terminal window on the head node,[1] to obtain a result such as

$ sinfo --cluster cbsueccosl01
CLUSTER: cbsueccosl01
PARTITION   AVAIL  TIMELIMIT  NODES  STATE NODELIST
slow*          up   infinite      3    mix cbsuecco[01,07-08]
fast           up   infinite      4    mix cbsuecco[10-11,13-14]
fast           up   infinite      2  alloc cbsuecco[09,12]
lgmem          up   infinite      1    mix cbsuecco02
interactive    up   infinite      2   idle cbsueccosl[03-04]

which shows that currently two nodes in the interactive partition (queue) are idle (no jobs running), eight nodes have some jobs running but can still accept smaller jobs (mix means some CPUs are still free), and two are completely allocated (alloc).
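
To see availability at the level of individual CPUs (“slots”) rather than nodes, the standard sinfo format option %C can be used; it reports CPU counts as allocated/idle/other/total for each partition. A quick sketch, assuming the standard sinfo format options on this installation:

$ sinfo --cluster cbsueccosl01 -o "%P %C"

The second column, headed CPUS(A/I/O/T), lists allocated, idle, other, and total CPUs per partition.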

Queues#

The List of nodes table shows the various partitions. These are the job queues in SLURM.

  • slow is the default

  • all jobs submitted using srun (rather than sbatch) will be treated as interactive and sent to the interactive partition, which has a limit of one CPU per job.

  • the lgmem partition requires at least 256 GB of RAM to be requested, and will then route the job to the node with the largest memory. Note that this is a very slow (old) node, so don’t use it unless you need the memory.

  • There are no time limits on any partition; the default RAM per job is 4 GB.

  • In order to submit to a specific partition (an example script follows this list),

    • use the -p option with sbatch, e.g. sbatch -p lgmem run.sh.

    • specify the partition in the submission script with #SBATCH --partition lgmem.

    • If you don’t specify a partition, the job will be sent to the default (slow).
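
As an illustration, here is a minimal submission script. The script name run.sh mirrors the example above; the job name, program, and resource values are placeholders, not site requirements:

#!/bin/bash
#SBATCH --job-name=myjob           # placeholder job name
#SBATCH --partition=slow           # default partition; omitting this line has the same effect
#SBATCH --cpus-per-task=4          # placeholder CPU request
#SBATCH --mem=16G                  # placeholder memory request; the default is 4 GB per job
#SBATCH --output=myjob_%j.log      # %j is replaced by the SLURM job ID

# replace with the actual program to run
./my_program

Submit it with sbatch run.sh, or override the partition on the command line, e.g. sbatch -p lgmem --mem=256G run.sh for a large-memory job. For interactive work, srun --pty /bin/bash opens a shell, which (per the rules above) lands on the interactive partition with a single CPU.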

List of nodes#

The following table shows the allocated nodes. Nodes marked flex may not always be available, because an owner may have reserved them; nodes marked slurm are always available.

Note

HT means “hyper-threading”, which effectively doubles the number of available CPUs but may not always improve performance. MATLAB ignores hyper-threading and will only use the physical number of cores listed in the cores column. The various queues can be requested, but most jobs should use the default queue.

(Interactive table of nodes; columns: node name, allocation (slurm or flex), partition, cores, RAM, local storage in TB, model, cores per CPU, CPUs, HT, CPU benchmark (single thread), vintage.)

Size of the cluster#

Total cores possible across all SLURM nodes: 704
Total RAM possible across all SLURM nodes: 3840 GB
Total cores currently available across all SLURM nodes: 672
Total RAM currently available across all SLURM nodes: 3584 GB
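
These totals can be checked against the live configuration. As a sketch using standard sinfo options, the following lists each node with its CPU count and memory:

$ sinfo --cluster cbsueccosl01 -N -o "%N %c %m"

Here -N prints one line per node, %c is the number of CPUs on the node, and %m is its memory in megabytes.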