JDFTx is available on NERSC as a module, which can be loaded as
    module use /global/cfs/cdirs/m4025/Software/Perlmutter/modules
    module load jdftx/gpu   # or jdftx/cpu for the CPU partition
for Perlmutter. To build a customized copy of JDFTx, please see the run-cmake*.sh
scripts in the build directories referenced by these modules, since the build instructions often need adjustment after each Cray update.
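As a rough orientation only, a customized GPU build on a Cray system such as Perlmutter generally follows the outline below; the module names and cmake options shown here are illustrative assumptions, and the run-cmake*.sh scripts shipped with the modules remain the authoritative, up-to-date reference.

    # Print the modulefile to find the build/install prefix it points to:
    module use /global/cfs/cdirs/m4025/Software/Perlmutter/modules
    module show jdftx/gpu

    # Illustrative build outline (module names are assumptions; follow run-cmake*.sh):
    module load PrgEnv-gnu cudatoolkit cray-fftw cmake
    CC=cc CXX=CC cmake -D EnableCUDA=yes ../jdftx-VERSION/jdftx
    # Additional dependencies (e.g. GSL) may need to be pointed to explicitly.
    make -j16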
An example job for Perlmutter using two GPU nodes with four GPUs each:
    #!/bin/bash
    #SBATCH -A account_g          # replace with valid account
    #SBATCH -t 10                 # replace with suitable time limit
    #SBATCH -C gpu
    #SBATCH -q regular
    #SBATCH -N 2
    #SBATCH -n 8
    #SBATCH --ntasks-per-node=4
    #SBATCH -c 32
    #SBATCH --gpus-per-task=1
    #SBATCH --gpu-bind=none

    module use /global/cfs/cdirs/m4025/Software/Perlmutter/modules
    module load jdftx/gpu

    export SLURM_CPU_BIND="cores"
    export JDFTX_MEMPOOL_SIZE=8192   # adjust as needed (in MB)

    srun jdftx_gpu -i inputfile.in
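For short test runs before committing to the batch queue, the same setup can be exercised interactively; the sketch below assumes NERSC's interactive QOS on a single GPU node, with the account name and time limit as placeholders (check current NERSC documentation for limits).

    # Request one GPU node interactively (account and limits are placeholders):
    salloc -A account_g -C gpu -q interactive -t 30 -N 1 \
        --ntasks-per-node=4 -c 32 --gpus-per-task=1 --gpu-bind=none

    # Then, inside the allocation:
    module use /global/cfs/cdirs/m4025/Software/Perlmutter/modules
    module load jdftx/gpu
    export SLURM_CPU_BIND="cores"
    export JDFTX_MEMPOOL_SIZE=8192   # adjust as needed (in MB)
    srun jdftx_gpu -i inputfile.in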
Similarly, to use two CPU nodes fully:
    #!/bin/bash
    #SBATCH -A account            # replace with valid account
    #SBATCH -t 20                 # replace with suitable time limit
    #SBATCH -C cpu
    #SBATCH -N 2
    #SBATCH -n 4
    #SBATCH --ntasks-per-node=2
    #SBATCH -c 64
    #SBATCH --hint=nomultithread

    module use /global/cfs/cdirs/m4025/Software/Perlmutter/modules
    module load jdftx/cpu

    export SLURM_CPU_BIND="cores"

    srun jdftx -i inputfile.in
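Each Perlmutter CPU node has 128 physical cores, so with --hint=nomultithread the product of --ntasks-per-node and -c should equal 128; how that total is split between MPI tasks and threads per task is a tuning knob. As an illustrative (not prescribed) variant, the same two nodes could run with more MPI tasks and fewer threads by replacing the corresponding lines above:

    # Illustrative alternative: 4 MPI tasks per node x 32 threads each (still 128 cores/node)
    #SBATCH -n 8
    #SBATCH --ntasks-per-node=4
    #SBATCH -c 32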
(On Cori, replace Perlmutter with Cori in the paths above and use module load jdftx instead.) If the builds don't work for you, please report an issue, as it may mean that the dependency modules have changed and we need to update the build.
Use the GNU compilers and MKL for the easiest compilation on TACC Stampede. The following commands may be used to configure and build (assuming a bash shell):
    # Select gcc as the compiler:
    module load gcc/4.7.1
    module load mkl gsl cuda cmake fftw3

    # Configure:
    CC=gcc CXX=g++ cmake \
        -D EnableCUDA=yes \
        -D EnableMKL=yes \
        -D ForceFFTW=yes \
        -D MKL_PATH="$TACC_MKL_DIR" \
        -D FFTW3_PATH="$TACC_FFTW3_DIR" \
        -D GSL_PATH="$TACC_GSL_DIR" \
        ../jdftx-VERSION/jdftx

    make -j12
Run make on the login nodes (as shown above), or in the gpu queue if you loaded cuda; without cuda the build should work on any node.
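Once built, production runs on Stampede go through its SLURM queues and are typically launched with TACC's ibrun wrapper rather than srun. The script below is only a rough sketch: the queue name, node and task counts, and executable path are placeholder assumptions to adapt to your allocation and the current TACC user guide.

    #!/bin/bash
    #SBATCH -A account           # replace with valid TACC allocation
    #SBATCH -p normal            # placeholder queue name; check the current user guide
    #SBATCH -t 00:30:00          # replace with suitable time limit
    #SBATCH -N 2
    #SBATCH -n 8

    module load gcc mkl gsl fftw3      # match the modules/versions used for the build
    ibrun /path/to/build/jdftx -i inputfile.in   # placeholder path to your build directory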