Submitting a Job on Quest
Examples of submitting interactive and batch jobs to the Quest compute nodes.
Small test scripts and applications can be run directly on the Quest login nodes, but if you are going to use any significant computational resources (more than 4 cores and/or 4GB of RAM) or run for more than about an hour, you need to submit a job to request computational resources from the Quest compute nodes. Jobs can be submitted to the Quest compute nodes in two ways: Interactive jobs, which are particularly useful for GUI applications, or Batch jobs, which are the most common jobs on Quest. Interactive jobs are appropriate for GUI applications like Stata, or interactively testing and prototyping scripts; they should generally use a small number of cores (fewer than 6) and be of short duration (a few hours). Batch jobs are appropriate for jobs with no GUI interface, and they can accommodate large core counts and long duration (up to a week).
The program that schedules jobs and manages resources on Quest is Slurm. To submit, monitor, modify, and delete jobs on Quest you use Slurm commands such as sbatch, squeue, and scancel; within a submission script, lines beginning with the #SBATCH directive pass your resource requests to the scheduler.
To submit a batch job, you first write a submission script specifying the resources you need and what commands to run, then you submit this script to the scheduler by running an sbatch command on the command line.
Example Submission Script
A submission script for a batch job could look like the following. When substituting your own values, replace the angle brackets (< >) as well. These commands would be saved in a file such as jobscript.sh.
#!/bin/bash
#SBATCH -A p20XXX                # Allocation
#SBATCH -p short                 # Queue
#SBATCH -t 04:00:00              # Walltime/duration of the job
#SBATCH -N 1                     # Number of Nodes
#SBATCH --ntasks-per-node=6      # Number of Cores (Processors)
#SBATCH --mail-user=<my_email>   # Designate email address for job communications
#SBATCH --mail-type=<event>      # Event options are job BEGIN, END, NONE, FAIL, REQUEUE
#SBATCH --output=<file_path>     # Path for output must already exist
#SBATCH --error=<file_path>      # Path for errors must already exist
#SBATCH --job-name="test"        # Name of job

# unload any modules that carried over from your command line session
module purge

# add a project directory to your PATH (if needed)
export PATH=$PATH:/projects/p20XXX/tools/

# load modules you need to use
module load python/anaconda
module load java

# A command you actually want to execute:
java -jar <someinput> <someoutput>

# Another command you actually want to execute, if needed:
python myscript.py
The first line of the script loads the bash shell. Lines that begin with #SBATCH are interpreted by Slurm; none of the other lines in the script are executed until Slurm places the job on a compute node. In #SBATCH lines the # is required; it does not act as a comment character there.
After the Slurm commands, the rest of the script works like a regular Bash script. You can modify environment variables, load modules, change directories, and execute program commands. Lines in the second half of the script that start with # are comments.
In the example above, export PATH=$PATH:/projects/p20XXX/tools/ is used to put additional tools stored in a project directory on the user's path so that they can be easily called. Slurm jobs start from the submit directory by default. Your script can cd to a different directory instead if your code is located in a different directory than your submission script.
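A minimal sketch of this pattern (the project path is hypothetical, and $SLURM_SUBMIT_DIR is the Slurm environment variable holding the directory the job was submitted from):

```shell
#!/bin/bash
# Hypothetical project path -- replace with your own.
PROJECT_TOOLS=/projects/p20XXX/tools
# Put project tools on the PATH so executables there can be called by name.
export PATH="$PATH:$PROJECT_TOOLS"
# Slurm starts the job in the directory it was submitted from;
# cd elsewhere if your code lives in a different directory.
cd "${SLURM_SUBMIT_DIR:-$PWD}"
echo "working directory: $PWD"
```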
Find a downloadable copy of this example script on GitHub.
Commands and Options
|#!/bin/bash||REQUIRED: The first line of your script, specifying the type of shell (in this case, bash)|
|#SBATCH -A <allocation ID>||REQUIRED: Tells the scheduler the allocation name, so that it can determine your access|
|#SBATCH -t <hh:mm:ss>||REQUIRED: Provides the scheduler with the time needed for your job to run so resources can be allocated. On general access allocations, Quest allows jobs of up to 7 days (168 hours).|
|#SBATCH -p <queuename>||REQUIRED: Common values are short, normal, long, or buyin. Note that under Slurm, queues are called "partitions". See Quest Queues for details on the queue to choose for different length jobs.|
|#SBATCH --job-name="name_of_job"||Gives the job a descriptive name, useful for reporting, such as when using the command squeue.|
|#SBATCH --mail-type=<event>||Event options are BEGIN, END, NONE, FAIL, REQUEUE. You must include your email address in the .forward file in your /home/NetID directory or use the --mail-user option below. Specify multiple values with a comma-separated list (no spaces).|
|#SBATCH --mail-user=<my_email>||Specifies email address.|
|#SBATCH -N <number_of_nodes>
#SBATCH --ntasks-per-node=<cores_per_node>||Specifies how many nodes, and how many processors (cores) per node, your job needs.|
|#SBATCH -n <count>||Specifies how many processors in total, without restricting them to a specific number of nodes. Use only one of these two options, NOT both. If neither option is used, one core on one node will be allocated. If your code is not parallelized, one core on one node may be appropriate for your job.|
|#SBATCH --mem=<limit>||Specifies the amount of memory needed by a single-node job, where <limit> is the number of MB of RAM you're requesting. (details below)|
|#SBATCH --mem-per-cpu=<memory>||Specifies the amount of memory in MB needed for each processor, appropriate for multi-threaded applications. (details below)|
|#SBATCH --output=<file path>||Writes the output log for the job (whatever would go to stdout) into a file - note the path must exist. If not specified, stdout is written to a file in the directory you submitted the job from that is named according to <JOBNAME.oJOBID>. If --error is not specified (below), errors will also be written to the output file.|
|#SBATCH --error=<file path>||Writes the error log for the job (whatever would go to stderr) into a file. The error file is very important for diagnosing jobs that fail to run properly. If not specified, stderr will be written to the output file (above).|
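Once a job is submitted, the name given with --job-name makes it easy to find in the queue. A small sketch, assuming the job name "test" from the example script (to be run on Quest, where squeue is available):

```shell
# Show only your own jobs (-u) that carry the job name "test" (--name).
squeue -u $USER --name=test
```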
Setting memory for your job
When Slurm reserves resources for your job on the compute nodes, it sets a hard upper limit on the memory your job will have access to. Jobs on the compute nodes cannot access memory beyond what Slurm reserves for them; if your job tries to use more memory than has been reserved, it will either run very slowly or terminate, depending on how the software you are running was written to handle this situation.
The amount of memory reserved by Slurm can be specified in your job submission script with the directives #SBATCH --mem=<limit> or #SBATCH --mem-per-cpu=<memory>. If your job submission script does not specify how much memory your job requires, Slurm will reserve a default amount, which may not be enough. The defaults vary across the different architecture types on Quest. For general access jobs, which can land on any of these architectures, requesting more memory than some architectures provide limits the job to the subset of general access nodes that do have that much memory available. Note that, in general, the more resources your job requests, the longer the wait before your job is placed on a suitable compute node.
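As a sketch, the two memory directives look like this in a submission script; the values here are placeholders, and a real script would use only one of the two:

```shell
#SBATCH --mem=4G           # total memory for a single-node job
#SBATCH --mem-per-cpu=2G   # or: memory per processor, for multi-threaded jobs
```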
Submitting Your Batch Job
After you've written and saved your submission script, you can submit your job. At the command line type
sbatch <name_of_script>
where, in the example above, <name_of_script> would be jobscript.sh. Upon submission the scheduler will return your job number:
Submitted batch job 549005
If you would prefer the return value of your job submission to be just the job number, use qsub:
qsub <name_of_script>
549005
This may be desirable if you have a workflow that accepts the return value as a variable for job monitoring or dependencies.
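A hedged sketch of that kind of workflow, assuming hypothetical script names step1.sh and step2.sh; note that sbatch itself also offers a --parsable option that prints just the job ID (to be run on Quest, where Slurm is available):

```shell
# Submit the first job and capture its numeric job ID.
jobid=$(sbatch --parsable step1.sh)
# Submit a second job that starts only after the first completes successfully.
sbatch --dependency=afterok:"$jobid" step2.sh
```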
If there is an error in your job submission script, the job will not be accepted by the scheduler and you will receive an error message right away, for example:
sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified
If your job submission receives an error, correct the script and resubmit your job. If no error is received, your job has entered the queue and will run.
For more examples and options, see Examples of Jobs on Quest.
To launch an interactive job from the command line use the srun command:
srun --account=<account> --time=<hh:mm:ss> --partition=<queue_name> --mem=<xG> --pty bash -l
This will launch a terminal session on a compute node as a single-core job. To request additional cores for multi-threaded applications, include the -N and -n flags:
srun --account=<account> --time=<hh:mm:ss> --partition=<queue_name> -N 1 -n 6 --mem=<xG> --pty bash -l
For best practices with srun always include the following flags:
|--pty||Launch an interactive terminal session on the compute node|
|--time=<hh:mm:ss>||Duration of this interactive job. The job will be killed if you exit the terminal session before the time is up. Note that your session will be killed without warning at the end of your requested time period.|
|--partition=<partition>||Queue/partition for the job|
|--mem=<xG>||Amount of memory requested per node.|
To request more than the default single node/single core:
|-N <number of nodes>||Requests a number of nodes to run the job. If this is not specified but -n is, the tasks may land on multiple nodes. For most non-mpi based applications, request a single node.|
|-n <number of cores>||Requests the number of tasks/processors/cores for the job. If your work supports multi-threading, request the number of threads that you will need.|
Note that by reserving more resources than you actually utilize, you decrease your priority on future jobs unnecessarily.
Interactive Job Examples
Example 1: Interactive Job to Run a Bash Command Line session
srun --account=p12345 --partition=short -N 1 -n 4 --mem=12G --time=01:00:00 --pty bash -l
This would run an interactive bash session on a single compute node with four cores, and access to 12GB of RAM for up to an hour, debited to the p12345 account.
Example 2: Interactive job to run a GUI program
If you're connecting to Quest using SSH via a terminal program, then you need to make sure to enable X-forwarding when you connect to Quest by using the -Y option:
ssh -Y <netid>@quest.northwestern.edu
If you use FastX to connect instead, then X-forwarding will be enabled by default in the GNOME terminal.
For an interactive job with a GUI component, use the --x11 flag for srun, which allows X tunneling from Quest to your desktop display. For example:
srun --x11 --account=p12345 -N 1 -n 4 --partition=short --mem=12G --time=01:00:00 --pty bash -l
This requires an X window server to be running on your desktop, which is the case if you're using FastX. Another option for Mac users is XQuartz. To confirm that X-forwarding is enabled, try the command:
xclock
If a clock graphic appears on your screen, X-forwarding is working.
Note that when you enter the srun for an interactive job, there may be a pause while the scheduler looks for available resources. Then you will be provided information about the compute node you're assigned, and you will be automatically connected to it. The command prompt in your terminal will change to reflect this new connection. You can then proceed with your work as if you were on a login node.