# Job scheduler on BioHPC
The SLURM cluster `cbsueccosl01` is maintained by Lars on behalf of the Econ group and spans several nodes. Some nodes are dedicated to the SLURM scheduler; others are “borrowed” and might not always be available. Between 48 and 144 “slots” (CPUs) are available for compute jobs (see the table below).
## Who can use it
Everybody in the ECCO group can submit jobs.
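For example, a job can be submitted with `sbatch`. The script below is a minimal sketch; the job name, resource requests, and script file name are illustrative, not prescribed values:

```bash
#!/bin/bash
#SBATCH --job-name=example     # illustrative job name
#SBATCH --ntasks=1             # a single task
#SBATCH --cpus-per-task=4      # request 4 of a node's CPUs ("slots")
#SBATCH --mem=8G               # request 8 GB of RAM

# replace with the actual computation
echo "Running on $(hostname)"
```

Submit it to the cluster with

```bash
sbatch --cluster cbsueccosl01 example.sh
```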
## Current load
The most recent status (as of the date and time noted) is:

As of 2025-03-31 07:00:

```
CLUSTER: cbsueccosl01

PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
regular*     up  infinite     5 alloc cbsuecco[02,08],cbsueccosl[01,03-04]
```
## Manually query the latest availability
To see availability at any point in time, type

```bash
sinfo --cluster cbsueccosl01
```

in a terminal window on the head node,[1] to obtain a result such as

```
$ sinfo --cluster cbsueccosl01
CLUSTER: cbsueccosl01

PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
regular*     up  infinite     3   mix cbsuecco[07-08],cbsueccosl03
regular*     up  infinite     1 alloc cbsueccosl04
regular*     up  infinite     2  idle cbsuecco01,cbsueccosl01
```

which shows that six nodes are currently available for jobs: two are idle, three have some jobs running but can still accept smaller jobs (`mix` means there are free CPUs), and one is completely used (`alloc`).
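To see exactly how many CPUs are free on each node, `sinfo`'s output format can be customized; the format string below is one convenient choice, not the only one:

```bash
# one line per node: node name, CPUs as allocated/idle/other/total, node state
sinfo --cluster cbsueccosl01 -N -o "%N %C %T"
```

A node in `mix` state will show a nonzero idle count in the allocated/idle/other/total column.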
(fulltable=)
## List of nodes
The following table lists the allocated nodes. Nodes marked `flex` may not always be available; nodes marked `slurm` are always available. A way to check a specific node's current state is sketched after the table.
| Nodename | Allocation | CPU benchmark (single thread) | Cores | RAM | Local storage (TB) | Model | Cores per CPU | CPUs | Vintage |
|---|---|---|---|---|---|---|---|---|---|
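To check whether a particular node from the table is currently up and accepting jobs, `scontrol` can report its full state; the node name below is taken from the status output shown earlier and is only an example:

```bash
# detailed state of one node, including CPUAlloc, RealMemory, and State
scontrol --cluster cbsueccosl01 show node cbsuecco02
```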