Platform Load Sharing Facility (LSF) is a job scheduler and workload manager originally developed by Platform Computing and now maintained by IBM. It fills a role similar to Slurm.

Usage

Tasks

Command                    Description
bjobs -u all -a            Shows all jobs of all users
bjobs -p -u all -a         Shows all pending jobs and their pending reasons
bjobs 101 102              Shows jobs with job IDs 101 and 102
bsub                       Submits a batch job
bhosts                     Shows the status of all hosts
badmin                     Opens the LSF administration shell
bhist -la -u username      Shows previous jobs of username
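As a sketch, a batch job can be submitted either with options on the bsub command line or via a job script containing #BSUB directives. The queue name, core count, limits, and program below are all placeholders; adjust them to your site:

```shell
#!/bin/sh
# Hypothetical job script; every value here is a placeholder.
#BSUB -J myjob          # job name
#BSUB -q normal         # queue to submit to
#BSUB -n 4              # number of cores
#BSUB -W 02:00          # wall-clock run limit (HH:MM)
#BSUB -o myjob.%J.out   # stdout file; %J expands to the job ID

./my_program
```

Submit the script with bsub < myjob.sh; the same options can also be passed directly on the bsub command line.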

Admin

Most administrative tasks are performed with the lsadmin and badmin commands.

Startup

Use lsfstartup.

A Load Information Manager (LIM) daemon needs to be running on each server host. This daemon collects host load and configuration information and forwards it to the master LIM service on the master host.

A Remote Execution Server (RES) daemon needs to be running on each server host in order for them to accept remote execution requests.

To start LIM and RES on all hosts, run:

# lsadmin limstartup all
# lsadmin resstartup all

The slave batch daemon (sbatchd) also needs to be running on all hosts and can be started by running:

# badmin hstartup all

Restart

Use lsfrestart.

If some hosts are in a 'closed' state, you may need to restart LIM and RES on those hosts:

node01# lsadmin resrestart
node01# lsadmin limrestart

You can also restart on all hosts by passing 'all':

## Restart RES and LIM on all hosts
# lsadmin resrestart all
# lsadmin limrestart all

Shutdown

Use lsfshutdown to shut down the LSF daemons on all hosts; this also prevents users from submitting jobs.

To fully shut down, you must turn off sbatchd, LIM, and RES.

# badmin hshutdown all
# lsadmin resshutdown all
# lsadmin limshutdown all

Tasks

Extend Job Run Time

If a job was submitted with too little wall time (e.g. via bsub -W), you can see its remaining time by listing jobs with bjobs -WL, -WF, or -WP:

# bjobs -u all -WL
916742  user001 RUN   interactiv compute001  node001     bash       Jan  5 22:58     -       
933794  user001 RUN   normal     compute001  63*node001  *_de_novo. Jan 13 21:28  59:47 X    
934759  user001 RUN   normal     compute001  56*node001  *test_pasa Jan 17 16:21     -       
936079  user001 RUN   normal     compute001  56*node001  *e_guided. Jan 21 07:03  237:22 E

The TIME_LEFT column shows the remaining time in hours and minutes, followed by a state letter:

  • E: The job has an estimated run time that has not been exceeded.
  • L: The job has a hard run time limit specified but either has no estimated run time or the estimated run time is more than the hard run time limit.
  • X: The job has exceeded its estimated run time and the time displayed is the time remaining until the job reaches its hard run time limit.
  • -: A dash indicates that the job has no estimated run time and no run limit, or that it has exceeded its run time but does not have a hard limit and therefore runs until completion.

A job's run time limit can be adjusted using bmod -W HH:MM Job_ID or removed using bmod -Wn Job_ID.
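Jobs already past their estimated run time (state X) are candidates for extension, and can be filtered out of the listing with awk. The here-document below stands in for live output; on a real cluster you would pipe `bjobs -u all -WL | tail -n +2` into awk instead:

```shell
# Print the job IDs of jobs whose state letter (the last field) is X.
awk '$NF == "X" { print $1 }' <<'EOF'
916742  user001 RUN   interactiv compute001  node001     bash       Jan  5 22:58     -
933794  user001 RUN   normal     compute001  63*node001  *_de_novo. Jan 13 21:28  59:47 X
934759  user001 RUN   normal     compute001  56*node001  *test_pasa Jan 17 16:21     -
936079  user001 RUN   normal     compute001  56*node001  *e_guided. Jan 21 07:03  237:22 E
EOF
# → 933794
```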

Administration

Logs

LSF events and accounting logs are stored in /usr/share/lsf/work/hostname/logdir. Logs can grow quite large and can fill the system drive if left unchecked.

Delete old files.
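A cleanup sketch, assuming the log path above, rotated log names of the form lsb.events.N / lsb.acct.N, and a 90-day retention; adjust all three to your site, and never delete the live lsb.events file, which the scheduler needs:

```shell
# Hypothetical cleanup: remove rotated event/accounting logs older than 90 days.
# The live lsb.events and lsb.acct files are left untouched because the
# patterns only match the rotated (suffixed) copies.
find /usr/share/lsf/work/*/logdir -type f \
        \( -name 'lsb.events.*' -o -name 'lsb.acct.*' \) \
        -mtime +90 -delete
```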


Metrics to InfluxDB

Here's a quick and dirty script that dumps CPU allocation by user, queue, status, and node into InfluxDB. This allows graphing of node usage as CPU cores allocated per node and per user.


bjobs -u all -a \
        | tail -n +2 \
        | awk '{
                # $6 is EXEC_HOST, e.g. "63*node001" or "node001".
                # Quick and dirty: pending jobs ("-") and multi-host
                # jobs (colon-separated lists) are not handled specially.
                n = split($6, z, "*");
                if (n == 1) {
                        # No "*" means 1 core
                        x[$2, $3, $4, z[1]] += 1
                } else {
                        # Number of CPUs is the part before the "*"
                        x[$2, $3, $4, z[2]] += z[1]
                }
        }
        END {
                for (i in x) {
                        split(i, y, SUBSEP);
                        print "lsf,username=" y[1] ",status=" y[2] ",queue=" y[3] ",node=" y[4] " value=" x[i]
                }
        }' \
        | while read -r point ; do
                # Append the timestamp in nanoseconds (epoch seconds + 9 zeros)
                curl -X POST 'http://influxdb/write?db=lsf' \
                        --data-binary "$point $(date +%s)000000000"
        done

See Also