Definitions of Quest's partitions/queues.
Quest offers several partitions, or queues, where you can run your job. Select the partition that best matches your job's duration, its core count, and your type of access to Quest. A partition must be specified when you submit your job; otherwise the scheduler returns the error "sbatch: error: Batch job submission failed: No partition specified or system default partition". Users with full access to Quest or the Genomics Compute Cluster must specify the appropriate partition for those resources. To specify the partition, include a -p option in your job submission script:
#SBATCH -p <PartitionName>
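For example, a complete submission script for the normal partition might begin like the following sketch; the allocation ID p12345, the resource requests, and my_script.py are placeholders, not values specific to your account:
#!/bin/bash
#SBATCH -A p12345        # your allocation ID (placeholder)
#SBATCH -p normal        # partition (queue) to submit to
#SBATCH -t 24:00:00      # requested walltime, within the partition's limit
#SBATCH -N 1             # number of nodes
#SBATCH -n 4             # number of cores
python my_script.py      # the work the job performs (placeholder)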
Partition Definitions: General Access ("p" accounts)
|Partition Name|Walltime Limit|Notes|
|---|---|---|
|short|04:00:00|Short jobs have access to more cores on Quest than longer jobs do and are usually scheduled sooner.|
|normal|48:00:00|Normal jobs may run for up to 2 days.|
|long|168:00:00|Long jobs may run for up to 7 days.|
|gengpu|48:00:00|This partition can be used only if your job requires GPUs.|
|genhimem|48:00:00|This partition can be used only if your job requires more than 120 GB of memory per node (see the example below the table).|
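For example, a high-memory job would pair the genhimem partition with a per-node memory request above the 120 GB threshold. --mem is the standard Slurm flag for per-node memory; the allocation ID and values below are placeholders:
#SBATCH -A p12345          # allocation ID (placeholder)
#SBATCH -p genhimem        # high-memory partition
#SBATCH -t 24:00:00        # within the 48-hour genhimem limit
#SBATCH --mem=180G         # per-node memory request above 120 GB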
Partition Definitions: Full Access (buy-ins, or "b" accounts)
|Partition Name|Walltime Limit|Notes|
|---|---|---|
|Allocation name (e.g. "b1234")|Allocation-specific|Using the allocation name as the partition name is only available to users with full access to Quest. The resources available and any limits on jobs are governed by the specific policies of the full-access allocation.|
Example: #SBATCH -p b1234
Using the buyin partition is equivalent to using your allocation name as the partition name. When using it, you must also specify the appropriate buy-in allocation ID in your job submission script with the -A flag.
Example: #SBATCH -p buyin
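For example, a buy-in job script would pair the two directives; b1234 is a placeholder allocation ID:
#SBATCH -A b1234           # buy-in allocation ID (placeholder)
#SBATCH -p buyin           # equivalent to: #SBATCH -p b1234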
If your allocation has specific partition names, such as genomics, ciera-std, or grail-std, use those partition names instead of your allocation name or the buyin partition.
Additional specialized partitions exist for specific allocations. You may be instructed to use a partition name that isn't listed above.
Note that jobs that have not finished on their own by the end of the requested time are terminated by the scheduler.
When resources for a job are not immediately available, the job will be assigned a pending (PD) status while the scheduler waits for resources to become available. There is a hard limit of 5,000 total submitted jobs per user at one time.
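You can check the state of your jobs with the standard Slurm queue command (squeue is part of Slurm, not specific to Quest):
squeue -u $USER            # the ST column shows PD for pending jobs and R for running jobs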
General access allocations have access to 14 compute nodes with GPUs. There are 8 NVIDIA Tesla K40 GPUs (two on each of 4 nodes) and 40 Tesla K80 GPUs (four on each of 10 nodes). To request one K40 GPU, set gengpu as the partition and add the following line to your job submission script:
#SBATCH --gres=gpu:k40:1
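Putting this together, the top of a K40 GPU job script might look like the following sketch; the allocation ID p12345, walltime, and core count are placeholders:
#!/bin/bash
#SBATCH -A p12345               # general-access allocation ID (placeholder)
#SBATCH -p gengpu               # GPU partition
#SBATCH --gres=gpu:k40:1        # request one K40 GPU
#SBATCH -t 12:00:00             # walltime within the 48-hour gengpu limit
#SBATCH -N 1                    # number of nodes
#SBATCH -n 1                    # number of cores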
If you need to run jobs longer than one week, contact Research Computing for a consultation. Some special accommodations can be made for jobs requiring the resources of up to a single node for a month or less.