JDFTx  1.2.1


JDFTx is distributed as source packages, or via git for development, and requires a POSIX-compatible environment and a C++11 compiler to build and run. It also depends on several GNU and open-source libraries, which are most easily installed using a package manager. Here are the prerequisites for installing JDFTx on your system:

  • GNU/Linux (any distribution): You already have a package manager (or you are on an advanced distribution and know how to do without one!). In high likelihood, your distribution is either:

    • Debian-based (eg. Debian, Ubuntu, Mint and their numerous flavors), and you could use "sudo apt-get install <package names>", or
    • Redhat-based (eg. RHEL, CentOS, Fedora etc.), and you could use "sudo yum install <package names>".

    See Linux Package Management for a detailed introduction.

  • Mac OS X: Install the Xcode commandline tools and MacPorts (or another package manager such as Homebrew or Fink; but then you should translate the package-names below from MacPorts). You would then install packages using "sudo port install <package names>".

  • Windows: Install Cygwin, which provides a POSIX-compatible environment on Windows, including a terminal with the bash shell (assumed here and in the tutorials). You can select packages in the Cygwin installer's graphical interface, and can rerun the installer to add or remove packages later. In addition, select the xinit package to get a working X11 environment, which is necessary for operating many of the Unix tools in graphical mode.

  • Supercomputing cluster: All the dependencies will likely already be installed by the administrators, but you will have to load the necessary modules and specify some paths manually. See the explicit compilation instructions for NERSC and TACC below for examples. If you compile successfully on other shared compute clusters, please post instructions in the Wiki; we would greatly appreciate it and will include them below as well.


Install the following packages, depending on your system type, using your package manager as discussed above:

Package                    | Debian-based                    | Redhat-based      | Mac OS X (a) | Windows (b)
Build tools                | g++ cmake                       | gcc-c++ cmake     | cmake        | gcc-g++ make cmake wget
GNU Scientific Library     | libgsl0-dev                     | gsl-devel         | gsl          | libgsl-devel
Message Passing Interface  | libopenmpi-dev openmpi-bin      | openmpi-devel (c) | openmpi (d)  | libopenmpi-devel
Fast Fourier Transforms    | libfftw3-dev                    | fftw-devel        | fftw-3       | libfftw3-devel
Linear Algebra             | libatlas-base-dev liblapack-dev | atlas-devel       | atlas (e)    | liblapack-devel
Postprocessing             | octave (same name on all platforms)
Plotting                   | gnuplot (same name on all platforms)
Visualization              | VESTA (f)
Code versioning (optional) | git (same name on all platforms)
Offline docs (optional)    | doxygen (g)

(a) Listing MacPorts package names; substitute appropriately if using Homebrew or Fink.
(b) Select these packages in the Cygwin graphical installer.
(c) To activate MPI, you may need to issue a command such as "module load openmpi-SUFFIX" or "module load mpi/openmpi-SUFFIX", where you can find the available SUFFIX using the command "module avail openmpi" or "module avail mpi" (depending on the distribution version). You will need to invoke this command in each new shell where you plan to compile or run JDFTx, so it may be convenient to add it to a shell startup script such as .bashrc.
(d) Note that you may need to activate openmpi using "sudo port select mpi openmpi-VERSION", where you can find the installed VERSION string using "port select --list mpi".
(e) ATLAS is installed from source on MacPorts, which can be very slow. You may replace "atlas" with "lapack" if strapped for time during installation, but the ATLAS version is significantly faster at run time.
(f) VESTA is not available in package managers, but can be installed directly on all the above platforms from its website.
(g) With doxygen installed, you can generate offline documentation (this website) in the build/doc/html directory by running "make doc" in the build directory.
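As a concrete example, on a Debian-based system all of the above can be installed in one command (a sketch using the package names from the Debian column of the table; names may differ on newer distribution releases):

```shell
# Install all JDFTx prerequisites on a Debian-based system
# (package names taken from the table above)
sudo apt-get install g++ cmake libgsl0-dev libopenmpi-dev openmpi-bin \
    libfftw3-dev libatlas-base-dev liblapack-dev octave gnuplot git doxygen
```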

Basic compilation

Download the latest JDFTx source package from GitHub into the directory where you want to install it (eg. /home/user/JDFTx), which we'll refer to as JDFTXDIR below. In a terminal, the following commands should unpack, configure and build jdftx:

cd JDFTXDIR                     #Replace JDFTXDIR with the path you chose
ls                              #This should show jdftx-VERSION.tar.gz
tar xvpzf jdftx-VERSION.tar.gz  #Unpack; replace VERSION with the actual version number
                                #(Or for jdftx-VERSION.zip, use the unzip command)

mkdir build                             #Create a directory for the build
cd build                                #and enter the build directory
cmake [options] ../jdftx-VERSION/jdftx  #Configure; omit [options] or see Customization below
make -j4                                #Compile with 4 processes (adjust as needed)

Note that these commands will unpack the source to JDFTXDIR/jdftx-VERSION/jdftx, and once successful will produce executables jdftx, phonon and wannier in the JDFTXDIR/build directory. Note that for a basic compilation with all the dependencies mentioned above installed in system locations, you don't need to specify any [options] to cmake.

That should be it! You can run "make test" to check that the code produces the expected numbers for a few built-in test cases. To use the executables from any directory on your system, and to also use the post-processing scripts distributed with JDFTx, you will need to add the build and scripts directories to your PATH:

export PATH="JDFTXDIR/build:JDFTXDIR/jdftx-VERSION/jdftx/scripts:$PATH"

(Remember to replace JDFTXDIR with the actual path, eg. /home/user/JDFTx, and VERSION with the actual version string throughout.) This setting will last only for the current session if you enter it in the terminal. To make it persistent, add the above line to .bashrc in your home directory. If you need to update the code, download a newer tarball and follow the same instructions.
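A quick way to confirm that the PATH change took effect in the current shell (JDFTXDIR and VERSION remain placeholders, as above):

```shell
# Prepend the build and scripts directories to PATH (placeholders as in the text)
export PATH="JDFTXDIR/build:JDFTXDIR/jdftx-VERSION/jdftx/scripts:$PATH"

# Sanity check: confirm the build directory is now on PATH
case ":$PATH:" in
  *":JDFTXDIR/build:"*) echo "build directory on PATH" ;;
  *)                    echo "build directory missing from PATH" ;;
esac
```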

Alternatively, if you need a recent update or plan to modify the code, you can get and compile the latest development version using:

git clone https://github.com/shankar1729/jdftx.git jdftx-git
mkdir build
cd build
cmake [options] ../jdftx-git/jdftx
make -j4

Note that the only difference is that git will fetch jdftx source files within a subdirectory called jdftx-git, so that VERSION is now "git" instead.

To update the code, you can subsequently use git pull from within the jdftx-git directory to fetch latest changes from the repository, and then run cmake and make in the build directory. See the git documentation for more details.
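The update cycle just described can be summarized as follows (assuming the directory layout from the git instructions above):

```shell
cd jdftx-git
git pull        # fetch the latest changes from the repository
cd ../build
cmake .         # re-run configuration in the existing build directory
make -j4        # rebuild with 4 processes (adjust as needed)
```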


Above we indicated [options] for the cmake command, but left them blank since the defaults should work fine in most cases. Here, we'll discuss some common optional packages, features and performance tweaks, and the [options] used to activate them.

  • If you manually installed some of the dependencies, and/or they happen to be installed in non-standard locations, you can tell cmake where they are using [options]. For example, if you want to use a custom-installed GNU Scientific Library in /home/user/gsl, add -D GSL_PATH=/home/user/gsl to [options].
  • You can use Intel's Math Kernel Library (MKL) to provide FFT, BLAS and LAPACK. Add -D EnableMKL=yes to [options], and additionally specify MKL_PATH as indicated above if MKL is in a non-standard location (besides /opt/intel/mkl)
  • You can use MKL to provide BLAS and LAPACK, but still use FFTW for Fourier transforms, by adding -D ForceFFTW=yes to [options]. We find this option to often be more reliable than using the MKL FFTs.
  • LibXC provides additional exchange-correlation functionals. JDFTx can link to LibXC version 2; add -D EnableLibXC=yes to options, and if necessary specify LIBXC_PATH
  • For GPU support, install the CUDA SDK (either from the website, or your package manager, if available) and add -D EnableCUDA=yes to [options]. If you get an unsupported compiler error, comment out the GCC version check from $CUDA_DIR/include/host_config.h.

    If you want to run on a GPU, it must be a discrete (not on-board) NVIDIA GPU with compute capability >= 1.3, since that is the minimum for double-precision support. You may also need to specify -D CUDA_ARCH=compute_xy and -D CUDA_CODE=sm_xy to match the compute architecture x.y of the oldest GPU you want to run on; the default is compute capability 3.5, which is the most common among CUDA-capable GPUs and Tesla coprocessors available today. See https://developer.nvidia.com/cuda-gpus for the compute capabilities of various GPUs.

    Note that you will get a real speedup only if your device has a higher memory bandwidth than your CPU/motherboard/RAM combination, since plane-wave DFT is often a memory-bound computation. Also keep in mind that you need a lot of memory on the GPU to actually fit systems of reasonable size (you probably need at least 1-2 GB of GRAM to handle moderate-sized systems).

    When you compile with GPU support, extra executables jdftx_gpu, phonon_gpu and wannier_gpu will be generated that will run code almost exclusively on the GPUs, in addition to the regular executables that only run on CPUs.

  • The above commands use the default compiler (typically g++) and reasonable optimization flags. Using a different compiler requires environment variables rather than [options] passed to cmake. For example, you can use the Intel compiler with the command (note the bash-specific syntax for environment variables):

    CC=icc CXX=icpc cmake [options] ../jdftx-VERSION/jdftx

    Make sure the environment variables for the Intel compiler (path settings etc.) are loaded before issuing that command (see the compiler documentation / install notes). Of course, you would probably also include -D EnableMKL=yes in [options] to use Intel MKL.

    Similarly, to use the Clang compiler:

    CC=clang CXX=clang++ cmake [options] ../jdftx-VERSION/jdftx

  • At the default optimization level, the compiled executable is not locked to specific CPU features. You can enable machine specific optimizations (-march=native on gcc, -fast on icc) by adding -D CompileNative=yes to [options]. Note however that this might cause your executable to be usable only on machines with CPUs of the same or newer generation than the machine it was compiled on.
  • Adding -D LinkTimeOptimization=yes will enable link-time optimizations (-ipo for the Intel compilers and -flto for the GNU compilers). Note that this significantly slows down the final link step of the build process.
  • Add -D StaticLinking=yes to compile JDFTx statically. This is necessary on Windows and is turned on automatically there. It could also be useful on other platforms to compile on one machine and execute on another without the compiler and support libraries installed.
  • Add -D EnableProfiling=yes to [options] to get summaries of run times per function and memory usage by object type at the end of calculations.
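Putting several of these together, a hypothetical cmake invocation enabling MKL linear algebra, FFTW Fourier transforms, LibXC and profiling might look like this (the LibXC path is a placeholder for illustration):

```shell
# Example: configure with several of the options above combined
# (run from the build directory; /home/user/libxc is a hypothetical path)
cmake \
   -D EnableMKL=yes \
   -D ForceFFTW=yes \
   -D EnableLibXC=yes \
   -D LIBXC_PATH=/home/user/libxc \
   -D EnableProfiling=yes \
   ../jdftx-VERSION/jdftx
```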

Compiling on supercomputing clusters

Compiling on TACC

Use the GNU compilers and MKL for the easiest compilation on TACC Stampede. The following commands may be used to invoke cmake (assuming bash shell):

#Select gcc as the compiler:
module load gcc/4.7.1
module load mkl gsl cuda cmake fftw3

CC=gcc CXX=g++ cmake \
   -D EnableCUDA=yes \
   -D EnableMKL=yes \
   -D ForceFFTW=yes \
   ../jdftx-VERSION/jdftx

make -j12

Run make on the login nodes (as shown above), or on the gpu queue if you loaded cuda; otherwise it should work on any machine.

Compiling on NERSC

Use the GNU compilers and MKL to compile JDFTx on NERSC Edison/Cori. The following commands may be used to invoke cmake (assuming the bash shell):

#Select the right compiler and load necessary modules
module swap PrgEnv-intel PrgEnv-gnu
module load gcc cmake gsl fftw
module unload darshan

#From inside your build directory
#(assuming relative paths as in the generic instructions above)
CC="cc -dynamic -lmpich" CXX="CC -dynamic -lmpich" cmake \
    -D EnableProfiling=yes \
    -D EnableMKL=yes \
    -D ForceFFTW=yes \
    -D ThreadedBLAS=no \
    -D GSL_PATH=${GSL_DIR} \
    ../jdftx-VERSION/jdftx
make -j12

The optional ThreadedBLAS=no line above uses single-threaded MKL with threads managed by JDFTx instead. This slightly reduces performance (around 5%) compared to using MKL threads, but MKL threads frequently cause crashes on NERSC when JDFTx tries to create pthreads elsewhere.