Here are the steps to run LAMMPS in an interactive job on Grex:

* Requirements:
  - singularity module: module load singularity
  - singularity image: lammps_patch_3Nov2022.sif
    {see the file README-Container.txt for more information}
  - input file: in.lj
  - script to run LAMMPS: ./run_lammps.sh
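The contents of run_lammps.sh are not shown here. A minimal sketch of what such a wrapper might contain, assuming the container's lmp binary was built with the GPU package, is:

```shell
#!/bin/bash
# Hypothetical sketch of run_lammps.sh -- the actual script may differ.
# -sf gpu : apply the gpu suffix to supported styles
# -pk gpu 1 : use 1 GPU via the GPU package
# -in in.lj : read the input file listed in the requirements above
lmp -sf gpu -pk gpu 1 -in in.lj
```

The flags shown are standard LAMMPS command-line options; adjust them to match how the container's LAMMPS build was configured.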

- First, ask for an interactive job using salloc:

  salloc --partition=livi-b --gpus=1 --cpus-per-gpu=1 --mem=8000 --time=1:00:00 --x11 {+options}

  Other partitions to use: gpu, agro-b, mcordgpu-b, stamps-b

- Once the job is granted, run the following commands:

    hostname
    nvidia-smi
    sq

    {the above should show the node and your interactive job}


* Running lammps:

module load singularity
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd ./lammps_patch_3Nov2022.sif ./run_lammps.sh

* From another terminal, connect to the node where your job is running {g338 in this case}
  and run nvidia-smi. The output is listed below:

'''
[kerrache@g338 ~]$  nvidia-smi 
Tue May 13 10:26:31 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-SXM3-32GB           On  |   00000000:1E:00.0 Off |                    0 |
| N/A   53C    P0            275W /  350W |   18470MiB /  32768MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    618115      C   lmp                                         18466MiB |
+-----------------------------------------------------------------------------------------+
'''

LAMMPS is using 1 GPU at 100% utilization.

* Submit the job via sbatch:

  sbatch --partition=livi-b submit-lmp-gpu.sh
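The contents of submit-lmp-gpu.sh are not shown here. A minimal sketch of such a batch script, assuming the same resources, container, and wrapper as the interactive run above, might be:

```shell
#!/bin/bash
# Hypothetical sketch of submit-lmp-gpu.sh -- the actual script may differ.
# Resource requests mirror the salloc options used interactively.
#SBATCH --gpus=1
#SBATCH --cpus-per-gpu=1
#SBATCH --mem=8000
#SBATCH --time=1:00:00

module load singularity
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd \
    ./lammps_patch_3Nov2022.sif ./run_lammps.sh
```

The partition is passed on the sbatch command line as shown above, so it is left out of the script; it could equally be set with an #SBATCH --partition directive.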

