Introduction#
ORCA is a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry, with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semi-empirical methods to DFT to single- and multi-reference correlated ab initio methods. It can also treat environmental and relativistic effects.
User Responsibilities and Access#
ORCA is proprietary software: although it is free of charge, it still requires you to agree to the ORCA license conditions. We have installed ORCA on Grex, but to access the binaries, each ORCA user has to confirm that they have accepted the license terms.
The procedure is as follows:
- First, register at the ORCA forum.
- You will receive a first email to verify your email address and activate the account. Follow the instructions in that email.
- After the registration is complete, go to the ORCA download page, and accept the license conditions. You will get a second email stating that the “registration for ORCA download and usage has been completed”.
- Then contact us (via the Alliance support, for example), quoting the ORCA confirmation email and stating that you would also like to access ORCA on Grex.
The same procedure applies for getting access to ORCA on the Alliance’s clusters.
System specific notes#
To see the versions installed on Grex and how to load them, please use module spider orca and follow the instructions. Both ORCA-5 and ORCA-6 are available on Grex.
To load ORCA-5, use:
module load arch/avx512 intel/2023.2 openmpi/4.1.6 orca/5.0.4
To load ORCA-6, use:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6 orca/6.0.1
Using ORCA with SLURM#
In addition to the keywords required to run a given calculation, users should make sure to set two additional parameters in their input files: the number of CPUs and maxcore:
- maxcore: This option sets the maximum memory per core. It is an upper limit under ideal conditions, and ORCA can (and often does) exceed it, so it is recommended to use no more than 75 % of the physical memory available. For example, if the base memory is 4 GB per core, one could use 3 GB. The syntax is as follows:
%maxcore 3000
In practice, set maxcore to about 75 % of the memory per core requested from SLURM (the total memory requested divided by the number of CPUs). A combined example is shown after this list.
- Number of CPUs:
ORCA can run on multiple processors with the aid of OpenMPI. All the ORCA modules are installed with the recommended OpenMPI version. To run ORCA in parallel, you can simply set the PAL keyword. For instance, a calculation using four processors requires:
!HF DEF2-SVP PAL4
or, for eight processors:
!HF DEF2-SVP PAL8
For more than eight processors, the explicit %PAL block has to be used:
!HF DEF2-SVP
%PAL NPROCS 16 END
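As an illustration (the resource values below are assumptions, not recommendations), an input header matching a SLURM job that requests 8 CPUs and 4000 MB per CPU could combine both settings as follows:
# Hypothetical header for a job with 8 CPUs and 4000 MB per CPU (75 % of 4000 MB is 3000 MB)
! HF DEF2-SVP
%PAL NPROCS 8 END
%maxcore 3000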
When running ORCA calculations in parallel, always call ORCA with its full path. On Grex, the ORCA module sets the MODULE_ORCA_PREFIX environment variable to the installation directory:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6 orca/6.0.1
${MODULE_ORCA_PREFIX}/orca your-orca-input.in > your-orca-output.txt
- On the Alliance clusters, the path is defined via an environment variable EBROOTORCA that is set by the module. To use ORCA from the Alliance software stack, add the following lines to your script:
module purge
module load CCEnv
module load arch/avx512
module load StdEnv/2023
module load gcc/12.3 openmpi/4.1.5
module load orca/6.0.1
${EBROOTORCA}/orca your-orca-input.in > your-orca-output.txt
Example of an input file#
# Benzene RHF Opt Calculation
%pal nprocs 8 end
! RHF TightSCF PModel
! opt
* xyz 0 1
C 0.000000000000 1.398696930758 0.000000000000
C 0.000000000000 -1.398696930758 0.000000000000
C 1.211265339156 0.699329968382 0.000000000000
C 1.211265339156 -0.699329968382 0.000000000000
C -1.211265339156 0.699329968382 0.000000000000
C -1.211265339156 -0.699329968382 0.000000000000
H 0.000000000000 2.491406946734 0.000000000000
H 0.000000000000 -2.491406946734 0.000000000000
H 2.157597486829 1.245660462400 0.000000000000
H 2.157597486829 -1.245660462400 0.000000000000
H -2.157597486829 1.245660462400 0.000000000000
H -2.157597486829 -1.245660462400 0.000000000000
*
Simple script for running ORCA on Grex#
This script assumes the input file is named orca.inp. The output is written to a file named orca.out. These names can be changed to match your input file and whatever name you want for the output. Before submitting any ORCA job, please make sure that the number of CPUs set in the input file matches your job script (as discussed above). Here is a simple script to run ORCA on Grex:
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=2500M
#SBATCH --time=0-3:00:00
#SBATCH --job-name="ORCA-test"
# Adjust the number of tasks, memory, and walltime above as necessary
# Load the modules:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6
module load orca/6.0.1
echo "Current working directory is `pwd`"
echo "Running on $NUM_PROCS processors."
echo "Starting run at: `date`"
${MODULE_ORCA_PREFIX}/orca orca.inp > orca.out
echo "Program finished with exit code $? at: `date`"
Advanced script for running ORCA on Grex#
This script assumes that you have a single input file (with the .inp extension) in the directory from which you submit the job. The script sets the output file name based on the name of the input file; for example, an input file my-orca.inp will generate an output file named my-orca.out. To avoid wasting resources in case you forget to set the number of CPUs in the input file, the script creates a temporary copy of the input file and appends a directive setting the number of CPUs, such as %PAL NPROCS 8 END if the job script asks for 8 CPUs. Before using the script, please take the time to read it and understand how it works:
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=2500M
#SBATCH --time=0-3:00:00
#SBATCH --job-name="ORCA-test"
# Adjust the number of tasks, memory, and walltime above as necessary
# Load the modules:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6
module load orca/6.0.1
# Assign the input file:
ORCA_INPUT_NAME=`ls *.inp | awk -F "." '{print $1}'`
ORCA_RAW_IN=${ORCA_INPUT_NAME}.inp
# Specify the output file:
ORCA_OUT=${ORCA_INPUT_NAME}.out
echo "Current working directory is `pwd`"
NUM_PROCS=$SLURM_NTASKS
echo "Running on $NUM_PROCS processors."
echo "Creating temporary input file ${ORCA_IN}"
ORCA_IN=${ORCA_RAW_IN}_${SLURM_JOBID}
cp ${ORCA_RAW_IN} ${ORCA_IN}
echo "%PAL nprocs $NUM_PROCS" >> ${ORCA_IN}
echo " end " >> ${ORCA_IN}
echo " " >> ${ORCA_IN}
# The orca command should be called with a full path:
echo "Starting run at: `date`"
${MODULE_ORCA_PREFIX}/orca ${ORCA_IN} > ${ORCA_OUT}
echo "Program finished with exit code $? at: `date`"
Assuming the script above is saved as run-orca-grex.sh, it can be submitted with:
sbatch run-orca-grex.sh
Sample Script for running NBO with ORCA on Grex#
The input file should include the keyword NBO.
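A minimal sketch of such an input is shown below (a hypothetical water molecule; the method and basis set are placeholders to adapt to your own calculation). The %PAL directive is omitted because the script below appends it automatically:
# Hypothetical example: water single point with NBO analysis
! B3LYP def2-TZVP NBO
%maxcore 3000
* xyz 0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
*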
#!/bin/bash
#SBATCH --ntasks=32
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=7-0:00:00
#SBATCH --job-name=nbo
# Load the modules:
module load arch/avx512 gcc/13.2.0 openmpi/4.1.6
module load orca/6.0.1
module load nbo/nbo7-2021
export GENEXE=`which gennbo.i8.exe`
export NBOEXE=`which nbo7.i8.exe`
# Assign the input file:
ORCA_INPUT_NAME=`ls *.inp | awk -F "." '{print $1}'`
ORCA_RAW_IN=${ORCA_INPUT_NAME}.inp
# Specify the output file:
ORCA_OUT=${ORCA_INPUT_NAME}.out
echo "Current working directory is `pwd`"
NUM_PROCS=$SLURM_NTASKS
echo "Running on $NUM_PROCS processors."
echo "Creating temporary input file ${ORCA_IN}"
ORCA_IN=${ORCA_RAW_IN}_${SLURM_JOBID}
cp ${ORCA_RAW_IN} ${ORCA_IN}
echo "%PAL nprocs $NUM_PROCS" >> ${ORCA_IN}
echo " end " >> ${ORCA_IN}
echo " " >> ${ORCA_IN}
# The orca command should be called with a full path:
echo "Starting run at: `date`"
${MODULE_ORCA_PREFIX}/orca ${ORCA_IN} > ${ORCA_OUT}
echo "Program finished with exit code $? at: `date`"
Assuming the script above is saved as run-nbo-orca-grex.sh, it can be submitted with:
sbatch run-nbo-orca-grex.sh
For more information, visit the Running jobs on Grex page.
Related links#
- ORCA forum
- ORCA on the Alliance’s clusters
- ORCA input libraries
- ORCA common problems
- SCF convergence-issues
- ORCA tutorial
- Running jobs on Grex