Kingspeak User Guide

Kingspeak cluster hardware overview

Kingspeak is operated in a condominium fashion, with 48 general CHPC nodes along with additional nodes owned by different research groups. Kingspeak has a total of 385 nodes (8292 cores); the nodes have 16, 20, 24, 28, or 32 cores each and between 32 GB and 1 TB of memory. As of the end of 2017, we are no longer adding nodes to kingspeak.

The General CHPC nodes include:

  • 48 dual-socket nodes (832 total general cores), consisting of:
      • 32 nodes with 16 cores and 64 GB memory
      • 16 nodes with 20 cores; twelve of these have 64 GB memory and four have 384 GB memory

Other information on kingspeak's hardware and cluster configuration:

  • Intel Xeon (Sandybridge/Ivybridge/Haswell/Broadwell) processors
  • Mellanox FDR Infiniband interconnect
  • Gigabit Ethernet interconnect for management
  • 2 general interactive nodes

In addition, there are four general GPU nodes, described on our GPU & Accelerators page.

Important differences from other CHPC clusters

  • Note the change in the naming convention for the paths to cluster-specific applications: this cluster uses kingspeak.peaks, not kingspeak.arches.
  • Kingspeak has nodes with different core counts – see the Slurm documentation for ways to use this mixed resource.
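
For example, one way to target a specific core-count variant is with a Slurm node feature constraint. This is only a sketch: the feature names shown here (c16, c20) are assumptions and should be verified against the Slurm documentation or the output of scontrol show node.

# sketch only: request 20-core nodes via a node feature (feature name "c20" is assumed)
#SBATCH --constraint=c20
# or let Slurm choose any node type and size the job by task count instead
#SBATCH --ntasks=32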

Kingspeak usage

CHPC resources are available to qualified faculty, students (under faculty supervision), and researchers from any Utah institution of higher education. Users can request accounts for CHPC computer systems by filling out the account request form.

The kingspeak cluster is run without allocation; all users can run on CHPC-owned nodes without the possibility of preemption.

Kingspeak access and environment

The kingspeak cluster can be accessed via ssh (secure shell) at the following address:

  • kingspeak.chpc.utah.edu
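
For example, from a terminal (u0123456 below is a placeholder; use your own CHPC username):

ssh u0123456@kingspeak.chpc.utah.edu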

All CHPC machines mount the same user home directories. This means that the user files on Kingspeak are exactly the same as those on the other CHPC clusters, so users do not need to copy files between machines.

Kingspeak compute nodes mount the following scratch file systems:

  • /scratch/general/nfs1
  • /scratch/general/vast

As a reminder, the non-restricted scratch file systems are automatically scrubbed of files that have not been accessed for 60 days.
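
For example, a per-user working area on one of these file systems can be created before a run (the path below simply follows the mount points listed above):

mkdir -p /scratch/general/vast/$USER
cd /scratch/general/vast/$USER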

Your environment is set up through the use of modules. Please see the User Environment section of the General Cluster Information page for details on setting up your environment for batch and other applications.
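
For example, the standard module commands can be used to discover and load software from an interactive node or within a batch script:

module avail              # list available packages
module load intel mpich2  # load the Intel compilers and MPICH2
module list               # show the currently loaded modules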

Using the batch system on kingspeak

The batch implementation on Kingspeak is Slurm.

The creation of a batch script on the kingspeak cluster

A shell script is a bundle of shell commands that are fed one after another to a shell (bash, tcsh, …). As soon as the first command has finished successfully, the second command is executed. This process continues until either an error occurs or all of the commands have been executed. A batch script is a shell script that defines the tasks a particular job has to execute on a cluster.

An example batch script for running under Slurm on the Kingspeak cluster is shown below. The lines at the top of the file all begin with #SBATCH; these are interpreted by the shell as comments but give options to Slurm.

Example Slurm Script for Kingspeak:

#!/bin/csh

#SBATCH --time=1:00:00 # walltime, abbreviated by -t
#SBATCH --nodes=2 # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.out-%N # name of the stdout, using the job number (%j) and the first node (%N)
#SBATCH --ntasks=16 # number of MPI tasks, abbreviated by -n
# additional information for allocated clusters
#SBATCH --account=baggins # account - abbreviated by -A
#SBATCH --partition=kingspeak # partition, abbreviated by -p

# set data and working directories
setenv WORKDIR $HOME/mydata

setenv SCRDIR /scratch/general/vast/$USER/$SLURM_JOB_ID
mkdir -p $SCRDIR
cp -r $WORKDIR/* $SCRDIR
cd $SCRDIR

# load appropriate modules, in this case Intel compilers, MPICH2
module load intel mpich2
# for MPICH2 over Ethernet, set communication method to TCP
# see above for network interface selection options for other MPI distributions
setenv MPICH_NEMESIS_NETMOD tcp
# run the program
# see above for other MPI distributions
mpirun -np $SLURM_NTASKS my_mpi_program > my_program.out

For more details and example scripts, please see our Slurm documentation. Also, to help with specifying your job and the instructions in your Slurm script, please review CHPC Policy 2.1.5, Kingspeak Job Scheduling Policy.

Job submission on kingspeak

To submit a job on kingspeak, one must first log in to a kingspeak interactive node (or specify --clusters=kingspeak when submitting through Slurm). Note that this is a change from the way job submission worked in the past on our other clusters, where you could submit from any interactive node to any cluster.

To submit a script named slurmjob.kingspeak, just type:

sbatch slurmjob.kingspeak
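
If you are logged in to an interactive node of a different cluster, the same script can instead be routed to kingspeak with the --clusters option mentioned above:

sbatch --clusters=kingspeak slurmjob.kingspeak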

Checking the status of your job in Slurm

To check the status of your job, use the "squeue" command:

squeue
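
To list only your own jobs, or to see full details for a single job, the standard Slurm commands below can also be used (replace <jobid> with the job number reported by sbatch):

squeue -u $USER            # only your jobs
scontrol show job <jobid>  # detailed information for one job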

For information on compiling on the clusters at CHPC, please see our Programming Guide.
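
As a brief sketch only (the Programming Guide is the authoritative reference), an MPI code could be compiled against the same modules loaded in the batch script above; the source and executable names here are placeholders:

module load intel mpich2
mpicc -O2 my_mpi_program.c -o my_mpi_program   # C source; use mpif90 for Fortran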

Last Updated: 10/8/24