# Job scheduler on BioHPC
The SLURM cluster cbsueccosl01, comprising several nodes, is maintained by Lars on behalf of the Econ group. Some nodes are dedicated to the SLURM scheduler; others are “borrowed” and might not always be available.
As of October 31, 2025 at 12:00 PM, 640 “slots” (CPUs) are available for compute jobs, out of a maximum possible 704; see the table of nodes below.
## Who can use
Everybody in the ECCO group can submit jobs.
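To submit a job, write a small batch script and hand it to sbatch on the head node. The script below is a minimal sketch: the job name, the resource requests (CPUs, memory, time limit), and the file name myjob.sh are placeholders to adapt to your own work.

```bash
#!/bin/bash
#SBATCH --job-name=myjob            # placeholder job name
#SBATCH --ntasks=1                  # a single task
#SBATCH --cpus-per-task=4           # number of CPUs ("slots") requested - placeholder
#SBATCH --mem=8G                    # memory requested - placeholder
#SBATCH --time=02:00:00             # wall-clock limit - placeholder
#SBATCH --output=myjob_%j.log       # log file; %j expands to the job ID

# the actual work goes here, for example:
echo "Running on $(hostname)"
```

Submit it with `sbatch --cluster cbsueccosl01 myjob.sh`.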
## Current load
The most recent recorded status, as of 2025-10-31 08:00, is:
```
CLUSTER: cbsueccosl01
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
regular*     up   infinite      1    mix cbsuecco14
regular*     up   infinite     12   idle cbsuecco[01-02,07-13],cbsueccosl[01,03-04]
```
For more details, see the SLURM Queue page.
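The queue itself can also be inspected directly from a terminal on the head node; the commands below are a sketch using the same cluster name as elsewhere on this page.

```bash
# List every job currently running or waiting on the ECCO SLURM cluster
squeue --cluster cbsueccosl01

# Restrict the listing to your own jobs
squeue --cluster cbsueccosl01 --user=$USER
```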
## Manually query the latest availability
To see availability at any point in time, type `sinfo --cluster cbsueccosl01` in a terminal window on the head node,[1] to obtain a result such as:
```
$ sinfo --cluster cbsueccosl01
CLUSTER: cbsueccosl01
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
regular*     up   infinite      3    mix cbsuecco[07-08],cbsueccosl03
regular*     up   infinite      1  alloc cbsueccosl04
regular*     up   infinite      2   idle cbsuecco01,cbsueccosl01
```
which shows that 6 nodes are currently available for jobs: 2 are idle, 3 have some jobs running but can still accept smaller jobs (mix means some CPUs are still free), and 1 is completely used (alloc).
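To see exactly how many CPUs are free on each node, sinfo can be asked for a node-oriented report. The format string below is a sketch, one of several possible choices: %C prints CPU counts as allocated/idle/other/total, and %m prints memory in MB.

```bash
# Per-node report: node name, CPU counts (allocated/idle/other/total), memory (MB), and state
sinfo --cluster cbsueccosl01 --Node --format="%N %C %m %T"
```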
## List of nodes
The following table lists the allocated nodes. Nodes marked flex may not always be available; nodes marked slurm are always available. HT means “hyper-threading”, which effectively doubles the number of cores but may not always improve performance. MATLAB ignores hyper-threading and will only use the physical number of cores listed in the cores column; a sketch of a matching batch script follows the table.
| Node name | allocation | cores | RAM | local storage (TB) | model | cores per CPU | CPUs | HT | CPU benchmark (single thread) | vintage |
|---|---|---|---|---|---|---|---|---|---|---|
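Because MATLAB ignores hyper-threading, it is reasonable to match the SLURM CPU request to the physical core count of the node you target and to cap MATLAB's threads at that number. The script below is a sketch under those assumptions; the core count of 8, the script name analysis.m, and the way MATLAB is invoked (module or full path) are placeholders that may differ on BioHPC nodes.

```bash
#!/bin/bash
#SBATCH --job-name=matlab-job       # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8           # placeholder: match the physical core count of the target node

# Cap MATLAB's computational threads at the number of CPUs granted by SLURM
matlab -nodisplay -batch "maxNumCompThreads(str2double(getenv('SLURM_CPUS_PER_TASK'))); run('analysis.m')"
```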
## Size of the cluster
- Total cores possible across all SLURM nodes: 704
- Total RAM possible across all SLURM nodes: 3712 GB
- Total cores currently available across all SLURM nodes: 640
- Total RAM currently available across all SLURM nodes: 3200 GB
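These totals can be recomputed from the scheduler itself. The one-liner below is a sketch; it assumes each node is listed once (there is a single regular partition) and uses the sinfo fields %c (CPUs per node) and %m (memory per node, in MB).

```bash
# Sum cores and RAM over all nodes the scheduler knows about
sinfo --cluster cbsueccosl01 --Node --noheader --format="%c %m" \
  | awk '{cpus += $1; mem += $2} END {printf "%d cores, %.0f GB RAM\n", cpus, mem/1024}'
```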
