diff --git a/index.qmd b/index.qmd
index 4818a28..cf1fa05 100644
--- a/index.qmd
+++ b/index.qmd
@@ -60,7 +60,7 @@ my-custom-stuff:
 * Fast data storage
 * Standardised software installation
 
-## Why
+## Why?
 
 * Run jobs too large for desktop workstations
 * Run many jobs at once
 * Efficiency (cheaper to have central machines running 24/7)
@@ -83,11 +83,9 @@ my-custom-stuff:
 * 83 GPUs
 * ~20TB RAM
 
-## SWC HPC nodes
-
 ## Logging in
 
-Log into bastion node (not necessary within SWC)
+Log into bastion node (not necessary within SWC network)
 ```bash
 ssh @ssh.swc.ucl.ac.uk
 ```
@@ -96,19 +94,22 @@
 ssh @ssh.swc.ucl.ac.uk
 ```
 
 Log into HPC gateway node
 
 ```bash
-ssh @hpc-gw1.hpc.swc.ucl.ac.uk
+ssh @hpc-gw1
 ```
+
+. . .
+
 This node is fine for light work, but no intensive analyses
 
 ## File systems {.smaller}
 
-* `/nfs/nhome` or `/nfs/ghome` - "home drive" (SWC/GCNU)
-* `/nfs/winstor/` - "Old" research data storage (read-only soon)
+* `/nfs/nhome/live/` or `/nfs/ghome/live/`
+  * "Home drive" (SWC/GCNU), also at `~/`
+* `/nfs/winstor/` - Old SWC research data storage (read-only soon)
 * `/nfs/gatsbystor` - GCNU data storage
-* `/ceph` - Current research data storage
+* `/ceph/` - Current research data storage
 * `/ceph/scratch` - Not backed up, for short-term storage
-* `ceph/apps` - HPC applications
+* `/ceph/apps` - HPC applications
 
 . . .
@@ -190,7 +191,7 @@ module list
 
 ## SLURM
 
-* Simple Linux Utility for Resource Managemen
+* Simple Linux Utility for Resource Management
 * Job scheduler
 * Allocates jobs to nodes
 * Queues jobs if nodes are busy
@@ -248,6 +249,7 @@ Start an interactive job (`bash -i`) in the cpu partition (`-p cpu`) in pseudote
 
 ```bash
 srun -p cpu --pty bash -i
 ```
+. . .
 
 Always start a job (interactive or batch) before doing anything intensive to spare the gateway node.
@@ -258,10 +260,15 @@
 Clone a test script
 ```bash
 cd ~/
 git clone https://github.com/neuroinformatics-unit/slurm-demo
 ```
+
+. . .
+
 Check out list of available modules
 ```bash
 module avail
 ```
+. . .
+
 Load the miniconda module
 ```bash
 module load miniconda
@@ -272,13 +279,14 @@ module load miniconda
 ```
 
 Create conda environment
 ```bash
+cd slurm-demo
 conda env create -f env.yml
 ```
 
 Activate conda environment and run Python script
 ```bash
 conda activate slurm_demo
-python python multiply.py 5 10 --jazzy
+python multiply.py 5 10 --jazzy
 ```
 
 Stop interactive job
 ```bash