Notes: 

- In each directory, there is a "README-*" text file with more information.
- Scripts that start with "mc-*" are adapted to run on MC and CC clusters.
- Scripts that start with "grex-*" are adapted to run on Grex.

- While testing on CC clusters, you may need additional options like:
  --account=def-your-sponsor {if you have more than one sponsor}

- While testing on Grex, you may need additional options like:
  --partition=genoa
  --partition=skylake
  --partition=largemem
  --partition=gpu
  ...
  to specify the partition in which your job should run.
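For example, a submission to a specific Grex partition could look like the following sketch (job.sh is a placeholder script name):

```shell
# Run job.sh in the skylake partition (pick any partition from the list above)
sbatch --partition=skylake job.sh
```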

Directories:

* interactive: directory with an example of an interactive job via salloc
  A first job using salloc:
  - Ask for an interactive job via salloc.
  - Load the module: lammps
  - Run a test on the compute node.
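A session of this kind might look like the sketch below; the account name, the lmp binary name, and the input file are placeholders:

```shell
# Ask for an interactive allocation: 1 task, 4 GB of memory, 2 hours
salloc --time=2:00:00 --ntasks=1 --mem=4G --account=def-sponsor

# Once the shell on the compute node opens, load the module and run a test
module load lammps
lmp < in.test
```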

* sleep-job: directory for running a sleep job
  A first job using sbatch
  The script prints the environment variables set by Slurm.
  The script does not include any Slurm directives: it will use the defaults.
  Note: if you are running the script on Grex or CC clusters, you may need to
        add options like:
        sbatch --account=def-sponsor --reservation=<name of the reservation> <your-script> ...
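A minimal sleep-job script along these lines could look as follows; the #SBATCH line is a shell comment, so the body also runs outside Slurm, where the variables simply fall back to "unset":

```shell
#!/bin/bash
#SBATCH --job-name=sleep-test

# Print a few of the environment variables that Slurm sets for a job
echo "Job ID:    ${SLURM_JOB_ID:-unset}"
echo "Node list: ${SLURM_JOB_NODELIST:-unset}"
echo "CPUs:      ${SLURM_CPUS_ON_NODE:-unset}"

# Keep the job alive briefly so it is visible in the queue
sleep 2
```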

* serial-job: a directory for running a serial job
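A serial job script could be sketched like this (my_program is a placeholder executable name):

```shell
#!/bin/bash
#SBATCH --time=0-01:00:00   # wall time: 1 hour
#SBATCH --ntasks=1          # a serial program needs a single task
#SBATCH --mem=4G            # adjust to what your program actually uses

./my_program                # placeholder: your serial executable
```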

* openmp-job: a directory for running an OpenMP job
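One way to sketch an OpenMP job script (my_openmp_program is a placeholder executable name):

```shell
#!/bin/bash
#SBATCH --ntasks=1          # one task ...
#SBATCH --cpus-per-task=4   # ... with several CPU cores for the threads
#SBATCH --mem=4G

# Match the number of OpenMP threads to the cores Slurm granted
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
./my_openmp_program         # placeholder: your OpenMP executable
```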

* mpi-job: a directory for running an MPI job
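An MPI job script might look like the following sketch (my_mpi_program is a placeholder executable name):

```shell
#!/bin/bash
#SBATCH --ntasks=16         # number of MPI ranks
#SBATCH --mem-per-cpu=2G    # memory per rank

# srun starts one copy of the program per task
srun ./my_mpi_program       # placeholder: your MPI executable
```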

* hybrid: example of hybrid job {MPI+OpenMP}
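A hybrid job combines the two models: MPI ranks via --ntasks and OpenMP threads via --cpus-per-task. A sketch (my_hybrid_program is a placeholder executable name):

```shell
#!/bin/bash
#SBATCH --ntasks=4          # MPI ranks
#SBATCH --cpus-per-task=8   # OpenMP threads per rank
#SBATCH --mem-per-cpu=2G

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-8}
srun ./my_hybrid_program    # placeholder: your MPI+OpenMP executable
```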

* oom-kill: an example of a job killed with the error oom-kill
            - increase the memory request and re-submit the job.
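For example, resubmitting with a larger memory request (the values and the script name are illustrative):

```shell
# Override the script's memory request on the command line, e.g. 4G -> 8G
sbatch --mem=8G job.sh
```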

* time-out: an example of a job that timed out
            - increase the wall time
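For example (the value and the script name are illustrative):

```shell
# Resubmit with a longer wall-time limit, e.g. 6 hours
sbatch --time=0-06:00:00 job.sh
```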

* gpu-job: example of GPU jobs - LAMMPS with Singularity
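A GPU job script could be sketched as follows; the container image name lammps.sif, the lmp binary, and the input file are placeholders:

```shell
#!/bin/bash
#SBATCH --gres=gpu:1        # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# --nv makes the host GPU driver visible inside the container
singularity exec --nv lammps.sif lmp < in.test
```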

* array-job: example of job farming with job arrays
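A job array gives each task its own SLURM_ARRAY_TASK_ID, which can be used to select an input file. A sketch with placeholder file names:

```shell
#!/bin/bash
#SBATCH --array=1-10        # ten tasks, IDs 1 through 10

# Each array task processes its own input file,
# e.g. input-1.txt ... input-10.txt (outside Slurm the ID defaults to 1)
echo "Processing input-${SLURM_ARRAY_TASK_ID:-1}.txt"
```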

* glost: example of job farming with glost
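glost distributes a list of independent shell commands over MPI ranks. A sketch, assuming a glost module is available and tasks.txt holds one command per line (both names are placeholders):

```shell
#!/bin/bash
#SBATCH --ntasks=8          # glost spreads the task list over these ranks

module load glost           # module name may differ on your cluster
srun glost_launch tasks.txt
```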

* performance: examples of benchmarks for MPI and OpenMP programs.

