## Introduction

VASP is a massively parallel plane-wave solid-state DFT code. On Grex, it is available only to research groups that hold VASP licenses. To get access, the PI needs to send us a confirmation email from the VASP vendor detailing the status of their license and listing the users allowed to use it.
## System-specific notes
To find out which versions of VASP are available, use:

```shell
module spider vasp
```

At the time of reviewing this page, the following versions are available: vasp/6.1.2-sol and vasp/6.3.2-vtst.
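Lmod can also report the exact prerequisite modules for one particular version. A minimal sketch, using one of the version strings listed above (the output depends on the modules installed on Grex):

```shell
# Ask Lmod how to load a specific VASP version, including
# which compiler/MPI modules must be loaded first:
module spider vasp/6.3.2-vtst
```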
To load the module vasp/6.1.2-sol, use:

```shell
module load arch/avx512 intel/2023.2 intelmpi/2021.10
module load vasp/6.1.2-sol
```
To load the module vasp/6.3.2-vtst, use:

```shell
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6
module load vasp/6.3.2-vtst
```

or

```shell
module load arch/avx512 intel/2023.2 intelmpi/2021.10
module load vasp/6.3.2-vtst
```

The first toolchain builds VASP with GCC, the latter with the Intel compiler.
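If you routinely use one of these toolchains, Lmod's named collections can save retyping the load commands. A sketch, assuming the GCC toolchain; the collection name `my-vasp` is an arbitrary label chosen here:

```shell
# Load the toolchain once, then store it under a name:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6 vasp/6.3.2-vtst
module save my-vasp

# Later, in a new shell or a job script, restore the whole set:
module restore my-vasp
```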
There are three executables in the VASP CPU version: vasp_gam, vasp_ncl, and vasp_std. Refer to the VASP manual for what these mean (briefly: vasp_std is the standard build, vasp_gam is a Gamma-point-only build, and vasp_ncl supports non-collinear calculations). An example SLURM script using the standard VASP binary is below.
The following script assumes that the VASP 6 inputs (INCAR, POTCAR, etc.) are in the same directory as the job script.

```shell
#!/bin/bash
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=3400M
#SBATCH --time=0-3:00:00
#SBATCH --job-name=vasp-test

# Adjust the number of tasks, time, and memory as required.
# The above requests 16 compute tasks with 3400 MB per task.

# Load the modules:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6
module load vasp/6.3.2-vtst

echo "Starting run at: $(date)"
which vasp_std

# Prevent MKL from spawning extra threads under MPI:
export MKL_NUM_THREADS=1

srun vasp_std > vasp_test.$SLURM_JOBID.log
echo "Program finished with exit code $? at: $(date)"
```
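The total memory footprint of such a request is the number of tasks times the per-task memory. A quick sanity check, with the numbers taken from the `#SBATCH` lines above:

```shell
# Total memory request = ntasks * mem-per-cpu
ntasks=16
mem_per_cpu=3400   # MB, from --mem-per-cpu=3400M
echo "Total requested: $((ntasks * mem_per_cpu)) MB"   # prints "Total requested: 54400 MB"
```

Keeping this total within a node's available memory (or spreading tasks across nodes) avoids jobs sitting in the queue for an unsatisfiable request.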
Assuming the script above is saved as run-vasp.sh, it can be submitted with:

```shell
sbatch run-vasp.sh
```
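After submission, the standard SLURM tools report the job's state. A sketch, assuming a SLURM setup like Grex's; the job ID 12345678 is a placeholder for the ID that sbatch prints:

```shell
# List your own pending and running jobs:
squeue -u $USER

# Accounting summary once the job has started or finished:
sacct -j 12345678 --format=JobID,State,Elapsed,MaxRSS
```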
For more information, visit the page on running jobs on Grex.