* On MC/CC cluster:

1. If you are connected to a login node, first ask for an interactive job:

salloc --ntasks=1 --cpus-per-task=1 --mem=1000M --time=15:00 

If you are using the Jupyter interface terminal, you are already on a compute node.
You can still treat it like a login node and submit an interactive job from it.

2. Run the command: hostname

3. Load the modules:

module load StdEnv/2023  intel/2023.2.1  openmpi/4.1.5 lammps-omp/20240829

After loading the modules, run:

'''
module list
module show lammps
ls $EBROOTLAMMPS/bin
'''

4. Run the command: which lmp

5. Run the program using:

   srun lmp -in lammps-input.in

   or run the script: sh ./mc-runlmp-interactive.sh
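The contents of mc-runlmp-interactive.sh are not shown here; a plausible minimal sketch, assuming the script simply loads the modules from step 3 and launches LAMMPS on the input file named above, might look like this (the real script may differ):

```shell
#!/bin/bash
# Hypothetical sketch of mc-runlmp-interactive.sh -- contents assumed, not from the handout.
# Load the toolchain and LAMMPS modules from step 3.
module load StdEnv/2023 intel/2023.2.1 openmpi/4.1.5 lammps-omp/20240829

# Launch LAMMPS inside the current salloc allocation.
srun lmp -in lammps-input.in
```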

* On Grex:

1. If you are connected to a login node, first ask for an interactive job:

salloc --ntasks=1 --cpus-per-task=1 --mem=500M --time=15:00 {+other options}

2. Run the command: hostname

3. Load the modules:

module load arch/avx512  gcc/13.2.0  openmpi/4.1.6 lammps/2024-08-29p1

After loading the modules, run:

'''
module list
module show lammps
ls $MODULE_LAMMPS_PREFIX/bin
'''

4. Run the command: which lmp

5. Run the program using:

   srun lmp -in lammps-input.in

   or run the script: sh ./grex-runlmp-interactive.sh
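As on the other cluster, the Grex helper script is not shown here; a minimal sketch of what grex-runlmp-interactive.sh might contain, assuming it only loads the Grex module stack from step 3 and runs LAMMPS (the actual script may do more):

```shell
#!/bin/bash
# Hypothetical sketch of grex-runlmp-interactive.sh -- contents assumed, not from the handout.
# Load the Grex toolchain and LAMMPS modules from step 3.
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6 lammps/2024-08-29p1

# Launch LAMMPS inside the current salloc allocation.
srun lmp -in lammps-input.in
```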

The above shows how to run an interactive job for testing and debugging:

'''
   salloc {+options}
   load modules
   run tests ...
   exit {to end the interactive job and release the allocation}
'''

DONE.
