Moab/PBS to Slurm translation
Moab/PBS to Slurm commands
| Action | Moab/Torque | Slurm |
|---|---|---|
| Job Submission | msub/qsub | sbatch |
| Job deletion | canceljob/qdel | scancel |
| List all jobs in queue | showq/qstat | squeue |
| List all nodes | | sinfo |
| Show information about nodes | mdiag -n/pbsnodes | scontrol show nodes |
| Job start time | showstart | squeue --start |
| Job information | checkjob | scontrol show job <jobid> |
| Reservation information | showres | scontrol show res (shows details) or sinfo -T |
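
As a sketch of how the command mapping above is used in practice, a typical submit-and-monitor session might look like the following; the script name and the job ID 12345 are placeholders:

```bash
# Submit a batch script (replaces msub/qsub)
sbatch myscript.slurm

# Monitor the queue, estimated start time, and job details
squeue -u $USER              # replaces showq/qstat
squeue --start -j 12345      # replaces showstart
scontrol show job 12345      # replaces checkjob

# Cancel the job if needed (replaces canceljob/qdel)
scancel 12345
```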
Moab/PBS to Slurm environment variables
| Description | Moab/Torque | Slurm |
|---|---|---|
| Job ID | $PBS_JOBID | $SLURM_JOBID |
| node list | $PBS_NODEFILE | Generate a listing of 1 node per line: srun hostname \| sort -u > nodefile.$SLURM_JOBID Generate a listing of 1 core per line: srun hostname \| sort > nodefile.$SLURM_JOBID |
| submit directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR |
| number of nodes | | $SLURM_NNODES |
| number of processors (tasks) | | $SLURM_NTASKS ($SLURM_NPROCS for backward compatibility) |
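
A minimal sketch of a job script that uses these Slurm variables; the requested time and task count are arbitrary example values:

```bash
#!/bin/bash
#SBATCH -t 0:10:00
#SBATCH -n 4

# Slurm counterparts of the common PBS variables
echo "Job ID:           $SLURM_JOBID"        # was $PBS_JOBID
echo "Submit directory: $SLURM_SUBMIT_DIR"   # was $PBS_O_WORKDIR
echo "Nodes allocated:  $SLURM_NNODES"
echo "Tasks allocated:  $SLURM_NTASKS"

# Approximate $PBS_NODEFILE: one line per core; add -u to sort for one line per node
srun hostname | sort > nodefile.$SLURM_JOBID
```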
Moab/PBS to Slurm job script modifiers
| Description | Moab/Torque | Slurm |
|---|---|---|
| Walltime | #PBS -l walltime=1:00:00 | #SBATCH -t 1:00:00 (or --time=1:00:00) |
| Process count | #PBS -l nodes=2:ppn=12 | #SBATCH -n 24 (or --ntasks=24) |
| Memory | #PBS -l nodes=2:ppn=12:m24576 | #SBATCH --mem=24576 It is also possible to specify memory per task with --mem-per-cpu; see the constraint section above for additional information on its use. |
| Mail options | #PBS -m abe | #SBATCH --mail-type=FAIL,BEGIN,END |
| Mail user | #PBS -M user@mail.com | #SBATCH --mail-user=user@mail.com |
| Job name and STDOUT/STDERR | #PBS -N myjob | #SBATCH -o myjob-%j.out-%N NOTE: %j and %N are replaced by the job number and the node name (the first node in a multi-node job), giving stdout and stderr a unique name for each job. |
| Account | #PBS -A owner-guest (optional in Torque/Moab) | #SBATCH -A owner-guest (or --account=owner-guest) |
| Dependency | #PBS -W depend=afterok:12345 (run after job 12345 finishes successfully) | #SBATCH -d afterok:12345 (or --dependency=afterok:12345) |
| Reservation | #PBS -l advres=u0123456_1 | #SBATCH -R u0123456_1 (or --reservation=u0123456_1) |
| Partition | No direct equivalent | #SBATCH -p lonepeak (or --partition=lonepeak) |
| Propagate all environment variables from terminal | #PBS -V | All environment variables are propagated by default, except for modules, which are purged at job start to prevent possible inconsistencies. Either load the needed modules in the job script or put them in your .custom.[sh,csh] file. |
| Propagate specific environment variable | #PBS -v myvar | #SBATCH --export=myvar Use with caution, as this exports ONLY the variable myvar. |
| Target specific owner | #PBS -l nodes=1:ppn=24:ucgd -A owner-guest | #SBATCH -A owner-guest -p kingspeak-guest -C "ucgd" |
| Target specific nodes | | #SBATCH -w notch001,notch002 (or --nodelist=notch001,notch002) |
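
The directives above can be combined into a single batch script. The following is a sketch rather than a site-specific recipe: the account, partition, e-mail address, and program name are the placeholder values used in the table.

```bash
#!/bin/bash
#SBATCH -t 1:00:00                   # walltime       (#PBS -l walltime=1:00:00)
#SBATCH -n 24                        # task count     (#PBS -l nodes=2:ppn=12)
#SBATCH --mem=24576                  # memory in MB   (#PBS -l nodes=2:ppn=12:m24576)
#SBATCH -A owner-guest               # account        (#PBS -A owner-guest)
#SBATCH -p lonepeak                  # partition      (no direct Torque equivalent)
#SBATCH --mail-type=FAIL,BEGIN,END   # mail options   (#PBS -m abe)
#SBATCH --mail-user=user@mail.com    # mail user      (#PBS -M user@mail.com)
#SBATCH -o myjob-%j.out-%N           # stdout/stderr  (#PBS -N myjob)

cd $SLURM_SUBMIT_DIR
srun ./my_program                    # placeholder executable
```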
More Slurm Information
For more information on using Slurm at the CHPC, please see the CHPC Slurm documentation page.