How much CPU and memory has my job used?

Use the seff utility to find out.

seff utility
#!/bin/bash
#SBATCH --job-name=primes_seff_test          # Job name
#SBATCH --mail-type=END,FAIL                 # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=email@york.ac.uk         # Where to send mail
#SBATCH --ntasks=1                           # Run on a single CPU
#SBATCH --mem=1gb                            # Job memory request
#SBATCH --time=01:15:00                      # Time limit hrs:min:sec
#SBATCH --output=logs/primes_seff_job_%j.log # Standard output and error log
#SBATCH --account=its-system-2018            # Project account

echo "My working directory is $(pwd)"
echo "Running job on host:"
echo -e "\t$(hostname) at $(date)"
echo

./primes
 
echo
echo "Job completed at $(date)"
echo
/opt/site/york/bin/seff $SLURM_JOB_ID
Example output
$ more logs/primes_seff_job_70936.log 
My working directory is /users/abs4/work/slurm/tests
Running job on host:
	node170.pri.viking.alces.network at Tue 29 Jan 11:49:42 GMT 2019

This machine calculated all 78498 prime numbers under 1000 in 103.79 seconds

Job completed at Tue 29 Jan 11:51:26 GMT 2019

Job ID: 70936
Cluster: viking
User/Group: abs4/clusterusers
State: RUNNING
Cores: 1
CPU Utilized: 00:00:00
CPU Efficiency: 0.00% of 00:01:44 core-walltime
Job Wall-clock time: 00:01:44
Memory Utilized: 0.00 MB (estimated maximum)
Memory Efficiency: 0.00% of 1.00 GB (1.00 GB/node)
WARNING: Efficiency statistics may be misleading for RUNNING jobs.
$ 
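Note that because seff here runs as the final step of the job itself, Slurm still reports the job as RUNNING and the usage figures are incomplete, hence the warning in the log; running seff with the job ID after the job has finished gives accurate numbers. seff's efficiency figure is simply CPU time divided by core-walltime, which can be sketched as follows (the 00:01:40 CPU time below is an illustrative figure, not taken from the output above):

```shell
# Sketch of seff's CPU efficiency calculation. The CPU-time figure
# (00:01:40) is hypothetical; the walltime and core count match the
# example job above.
to_seconds() {
  # Convert an HH:MM:SS timestamp to seconds (10# forces base 10).
  IFS=: read -r h m s <<< "$1"
  echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
}

cpu_used=$(to_seconds 00:01:40)   # CPU time actually consumed
walltime=$(to_seconds 00:01:44)   # job wall-clock time
cores=1                           # cores allocated to the job

awk -v u="$cpu_used" -v w="$walltime" -v c="$cores" \
    'BEGIN { printf "CPU efficiency: %.1f%%\n", 100 * u / (w * c) }'
```

An efficiency well below 100% on a single-core job usually means the job spent time waiting on I/O rather than computing.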

How much disc space am I using?

Use the "myquota" command.

Display disc quota
$ /opt/site/york/bin/myquota 
Scratch quota:
Disk quotas for usr abs4 (uid 10506):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  268.6G      3T    3.1T       - 2157882       0       0       -

Home quota:
Disk quotas for user abs4 (uid 10506): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
storage:/export/users
                2155420  52428800 78643200            1654  100000  150000        
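The two reports use different units: the scratch quota is shown in human-readable sizes, while the home quota is shown in blocks (assumed here to be the usual 1 KiB quota blocks). Turning the figures from the example output above into a percentage of quota used can be sketched as:

```shell
# Percentage of quota used, from the example output above.
# Scratch: 268.6 GiB used of a 3 TiB quota (3 * 1024 GiB).
awk 'BEGIN { printf "Scratch: %.1f%% of quota used\n", 100 * 268.6 / (3 * 1024) }'

# Home: 2155420 blocks used of a 52428800-block quota,
# assuming 1 KiB blocks (so the quota is about 50 GiB).
awk 'BEGIN { printf "Home:    %.1f%% of quota used\n", 100 * 2155420 / 52428800 }'
```

Note the second "limit" column is the hard limit: writes fail once it is reached, whereas exceeding the soft "quota" only starts the grace period.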


Can I have my IT Services home directory and/or shared file-stores mounted on Viking?

No. Viking has a large number of nodes connected by a high-speed interconnect, and the University filestores would not be able to sustain the read/write throughput that jobs running on Viking can generate.

We advise using tools such as rsync to keep files in sync between Viking, external file servers and your device's local disks. Group directories are available on Viking to reduce file and database duplication within research groups; to request such a directory (or for further information) please contact itsupport@york.ac.uk.
