JDFTx  1.0.0
Downloading/Compiling

Download

The software is currently in alpha status. It is available as source tarballs and via subversion.

The source tarball for the latest release (0.99.alpha) can be downloaded from the project's downloads page.

Alternatively, you can check out the latest revision using subversion (svn)

   svn co https://[<user>@]svn.code.sf.net/p/jdftx/code/trunk/jdftx

The optional <user>@ is for developers only - if you are interested in participating in the development, please contact us with a SourceForge account so that we can give you commit permissions.

The advantage of using subversion is that you can get the most recent features by running svn update and recompiling.

Note: the svn repository was relocated to the new sourceforge server on April 11, 2013. If you checked out the code before that date, and haven't modified your local copy, please delete your working copy and check it out again from the above link. If you have pending changes, use the following command to point your working copy to the new server:

   svn switch --relocate \
       https://[<user>@]jdftx.svn.sourceforge.net/svnroot/jdftx/trunk/jdftx \
       https://[<user>@]svn.code.sf.net/p/jdftx/code/trunk/jdftx

Bugs / Feature requests

Please report any bugs by creating a ticket [here](tickets). Include the program version (output of jdftx -v), your input file and the stack trace (if jdftx produces one on exit).

Feature requests/suggestions are also welcome.

Please check the existing open tickets for similar bugs/requests; if there is one, post there instead of creating a new ticket.

Compiling

JDFTx is written to take advantage of the code readability and memory management enhancements of C++11, and hence requires a recent C++ compiler (works with g++ 4.6 or newer). The following programs should be installed before compilation (Ubuntu package names are used for convenience; an example install command follows the list):

  • cmake (>=2.8)
  • g++ (>=4.6)
  • libgsl0-dev
  • libfftw3-dev
  • libatlas-base-dev (provides blas and cblas)
  • liblapack-dev
  • libopenmpi-dev (or equivalent for other MPI distributions)
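
On Ubuntu, for example, all of the above can usually be installed in one step (a sketch; package names may differ on other distributions or releases):

:::bash
sudo apt-get install cmake g++ libgsl0-dev libfftw3-dev \
    libatlas-base-dev liblapack-dev libopenmpi-dev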

If any of these packages are installed in non-standard locations, point cmake to those installations by adding -D <package>_PATH=/path/to/package to the cmake command line, where <package> is FFTW3, GSL, etc. Note that cmake command-line options must be specified before the source path (i.e. between cmake and ../jdftx in the examples below).
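
For example, to pick up an FFTW3 installed under a non-standard prefix (the path below is only a placeholder):

:::bash
cmake -D FFTW3_PATH=/opt/fftw3 ../jdftx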

FFT, BLAS and LAPACK may be provided by Intel's Math Kernel Library (MKL) instead. In that case, libfftw3-dev, libatlas-base-dev and liblapack-dev are not required. To use MKL, add -D EnableMKL=yes to the command line. If MKL is not installed to the default path of /opt/intel/mkl, then also add -D MKL_PATH=/path/to/mkl.
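
For example, a typical MKL-based configuration might look like this (the MKL path is a placeholder):

:::bash
cmake -D EnableMKL=yes -D MKL_PATH=/path/to/mkl ../jdftx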

The following optional packages are recommended:

  • subversion
  • doxygen
  • LibXC (for additional exchange-correlation functionals; works with LibXC version 2.0. Add -D EnableLibXC=yes and, if necessary, -D LIBXC_PATH=/path/to/libxc during configuration; see the example after this list)
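
For example, to build against a LibXC installed outside the default search path (the path is a placeholder):

:::bash
cmake -D EnableLibXC=yes -D LIBXC_PATH=/path/to/libxc ../jdftx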

For GPU support, install the CUDA SDK (version 4.0 or higher) and comment out the GCC version check from $CUDA_DIR/include/host_config.h if you get an unsupported compiler error. CUDA 5.0 officially supports g++ 4.6, so you can avoid this error with that combination.

Add "-D EnableCUDA=yes" to the cmake commandline below to compile JDFTx with CUDA support.

If you want to run on a GPU, it must be a discrete (not on-board) NVIDIA GPU with compute capability >= 1.3, since that is the minimum for double precision. Also, you will get a real speedup only if your device has higher memory bandwidth than your CPU/motherboard/RAM combination, since plane-wave DFT is often a memory-bound computation. See https://developer.nvidia.com/cuda-gpus for the compute capabilities of various GPUs. Finally, keep in mind that the GPU also needs enough memory to fit systems of reasonable size (you probably need at least 1-2 GB of GPU memory to handle moderate-sized systems).

With the above packages installed, the following sequence of commands should fetch and build jdftx in directory <path-to-JDFTx>/build:

:::bash
<path-to-JDFTx>$ svn co https://svn.code.sf.net/p/jdftx/code/trunk/jdftx
<path-to-JDFTx>$ mkdir build
<path-to-JDFTx>$ cd build
<path-to-JDFTx>/build$ cmake [insert options here] ../jdftx
<path-to-JDFTx>/build$ make

A successful build will produce an executable jdftx in <path-to-JDFTx>/build, and an additional executable jdftx_gpu if cmake was configured with EnableCUDA=yes.

To update your copy of jdftx:

:::bash
<path-to-JDFTx>$ cd jdftx
<path-to-JDFTx>/jdftx$ svn update
<path-to-JDFTx>/jdftx$ cd ../build
<path-to-JDFTx>/build$ cmake .
<path-to-JDFTx>/build$ make

Make sure you rerun cmake after updating from svn - this is required to include newly added source files in the build.

The above commands use the default compiler (typically g++) and reasonable optimization flags. JDFTx has also been tested with the Intel C++ compiler version 13. You can select the Intel compiler during configuration:

:::bash
<path-to-JDFTx>/build$ CC=icc CXX=icpc cmake [insert options here] ../jdftx

Make sure the environment variables for the Intel compiler (path settings etc.) are loaded before issuing that command (see the compiler documentation / install notes). Of course, you would probably include -D EnableMKL=yes in the options to use Intel MKL.
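
For example, with a typical Intel Composer XE 2013 installation, sourcing the compiler's environment script sets up icc/icpc (and MKL) in your shell; the install prefix below is an assumption, so adjust it to your site:

:::bash
# Path is an assumption - adjust to your Intel compiler installation
source /opt/intel/bin/compilervars.sh intel64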

At the default optimization level, the compiled executable is not locked to specific CPU features. You can enable machine-specific optimizations (-march=native on gcc, -fast on icc) by adding "-D CompileNative=yes" to the cmake command-line option list. Note however that this might cause your executable to be usable only on machines with CPUs of the same or newer generation than the machine it was compiled on.
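
For example:

:::bash
<path-to-JDFTx>/build$ cmake -D CompileNative=yes [insert options here] ../jdftx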

Compiling on TACC Stampede

The recently deployed Stampede cluster at TACC has compatible compilers, but an interesting combination of factors makes it non-trivial to compile JDFTx: FFTW is available only with the Intel compiler, the Intel compiler runs in compatibility mode with gcc 4.4 and hence has incomplete C++11 support, and the module system does not trivially allow the Intel compiler to be used with the newer g++ that is otherwise available as a module. Here's what you need to do to get it compiled:

  • Update to revision 412 or newer
  • module load gsl cuda cmake
  • For the cmake command in the build directory, do (adjust the source path, assumed ../jdftx here):
    CC=icc CXX=icpc cmake -D EnableMKL=yes -D MKL_PATH="$TACC_MKL_DIR" -D GSL_PATH="$TACC_GSL_DIR" \
        -D EXTRA_CXX_FLAGS="-gxx-name=/opt/apps/gcc/4.7.1/bin/g++ -Wl,-rpath,/opt/apps/gcc/4.7.1/lib64" ../jdftx
  • Run make on the login nodes or on the gpu queue if you loaded cuda (otherwise it should work on any machine)

Compiling on NERSC Edison

These machines require you to use the Cray compiler wrappers, which introduce complications of their own. Additionally, the default LAPACK / BLAS on Edison leads to extremely low performance for JDFTx (there seems to be some issue with those libraries getting confused by the hybrid MPI+threads parallelization and launching multiple threads per core). Therefore, you need to compile and install ATLAS (with LAPACK) in your home directory (or email me and I can send you a prebuilt version for that machine).
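
If you do build ATLAS yourself, the outline below is only a rough sketch of a shared-library ATLAS + netlib-LAPACK build (flag names and paths are typical for ATLAS 3.x; verify them against the ATLAS install notes for your version):

:::bash
# Rough sketch only - run from an empty build directory inside the unpacked ATLAS source tree
../configure --prefix=$HOME/atlas --shared \
    --with-netlib-lapack-tarfile=$HOME/downloads/lapack.tgz   # paths are placeholders
make build && make check && make install

With ATLAS in place, configure and build JDFTx as follows: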

:::bash
# Select the right compiler and load necessary modules
module swap PrgEnv-intel PrgEnv-gnu
module load gcc cmake gsl fftw
export ATLAS_LIB_DIR="<path_to_your_ATLAS_installation>"

# From inside your build directory
# (assuming relative paths as in the generic instructions above)
CC="cc -dynamic -lmpich" CXX="CC -dynamic -lmpich" cmake \
    -D GSL_PATH=${GSL_DIR} \
    -D FFTW3_PATH=${FFTW_DIR} \
    -D CMAKE_INCLUDE_PATH=${FFTW_INC} \
    -D CMAKE_LIBRARY_PATH=${ATLAS_LIB_DIR} \
    -D LAPACK_LIBRARIES="${ATLAS_LIB_DIR}/liblapack.so;${ATLAS_LIB_DIR}/libf77blas.so;${ATLAS_LIB_DIR}/libcblas.so;${ATLAS_LIB_DIR}/libatlas.so" \
    ../jdftx
make -j12