diff --git a/docs/running-jobs/running-apps-with-jobs.md b/docs/running-jobs/running-apps-with-jobs.md
index 46f4eea7..bfec4a1d 100644
--- a/docs/running-jobs/running-apps-with-jobs.md
+++ b/docs/running-jobs/running-apps-with-jobs.md
@@ -1,7 +1,5 @@
 # Running Applications with Jobs
-
-
 Because our HPC system is shared among many researchers, Research Computing manages system usage through jobs. **Jobs** are simply an allotment of resources that can be used to execute processes. Research Computing uses a program named the *Simple Linux Utility for Resource Management*, or **Slurm**, to create and manage jobs.
 
 In order to run a program on a cluster, you must request resources from Slurm to generate a job. Resources can be requested from a login node or a compile node. You must then provide commands to run your program on those requested resources. Where you provide your commands depends on whether you are running a [batch job](batch-jobs.md) or an [interactive job](interactive-jobs.md).
 
@@ -12,6 +10,12 @@ When you run a batch job or an interactive job, it will be placed in a queue unt