1. Preface
ABACUS has released the 3.10-LTS stable version and is still being continuously iterated. Many users want to deploy ABACUS on their own machines to benefit from its computational efficiency. However, compiling ABACUS in different server and workstation environments, and getting the highest computing efficiency out of each specific environment, always presents some challenges.
The ABACUS Toolchain is a collection of bash scripts shipped with the ABACUS repository. It helps users compile and install the software dependencies required by ABACUS, either online or offline, automatically handles the environment variables of each dependency library, and then quickly compiles the ABACUS source code against those libraries, providing an efficient, high-performance, easy-to-modify, and easy-to-port automated ABACUS compilation solution.
This tutorial is written for the 2025-02 version of the ABACUS Toolchain. At present, the ABACUS Toolchain supports the following compilation and installation modes:
GNU Toolchain: compiles and installs the ABACUS dependency libraries and ABACUS itself from scratch, starting from a sufficiently recent GNU compiler suite (gcc, g++, gfortran, collectively referred to as GCC).
Intel Toolchain: compiles and installs the ABACUS dependency libraries and ABACUS itself on top of Intel's compilers, math library, and parallel library (usually packaged as Intel OneAPI or the older Intel Parallel Studio XE).
AMD Toolchain: compiles and installs ABACUS on top of AMD's compilers and math library, further subdivided into a GCC-AOCL Toolchain and an AOCC-AOCL Toolchain.
The ABACUS Toolchain also supports a series of advanced features, including functional plugin support and packaged offline installation.
In general, the ABACUS Toolchain aims to:
Help users efficiently compile, from source, the ABACUS build best suited to their current server environment, and quickly test the computing efficiency of ABACUS built with the different dependency-library stacks of the Toolchain.
Establish a standard process for compiling ABACUS from source, so that ABACUS developers can directly control the version and build settings of each ABACUS dependency library in the Toolchain, instead of compiling the libraries and hand-adding the various compilation options themselves.
A previous tutorial already introduced how to use the GNU Toolchain to compile ABACUS from scratch simply and directly: ABACUS Installation Tutorial - Toolchain (1 - GNU). That solution has the best compatibility, but the resulting ABACUS may not be the most efficient, especially on the many Intel-CPU servers that already provide the corresponding Intel OneAPI suite. This tutorial focuses on how to use the Intel Toolchain so that the compiled ABACUS achieves higher performance.
1.1 Compiling and Installing ABACUS Based on Intel Toolchain
The following uses the Intel Toolchain as an example to show the process of installing the LTS-3.10.0 version of ABACUS. Depending on the characteristics of your server, the Intel Toolchain scripts need to be adjusted accordingly. The author habitually edits with vim and the tutorial is written around vim, but any way of editing files on a Linux server will do. This tutorial assumes the Intel dependency libraries come from Intel OneAPI; if you use a Parallel Studio XE installation instead, you only need to handle the relevant environment variables yourself.
Note: Before using the Toolchain for a source installation, make sure you have correctly loaded the corresponding Intel OneAPI environment or the other relevant environment variables!
# load intel-oneapi environment via source
source /path/to/intel/oneapi/setvars.sh
# load intel-oneapi environment via module-env if exists
module load mkl mpi compiler
1.2 Installing ABACUS Directly with Intel OneAPI 2024+
If the Intel OneAPI on your server is version 2024.0 or later, you can generally compile and install ABACUS directly with the following steps. (The 2025-02 Toolchain also supports OneAPI 2025+ by pulling the latest versions of the dependency libraries.)
Download the ABACUS Repository
The repository can be downloaded with git clone, which lets you quickly update or switch versions through git. Alternatively, you can fetch a tarball of the repository with wget; the result carries no git metadata and takes up less space.
# via github
git clone https://github.com/deepmodeling/abacus-develop.git -b LTS
# via wget from github by codeload: can be used under CN Internet
wget https://codeload.github.com/deepmodeling/abacus-develop/tar.gz/LTS -O abacus-LTS.tar.gz
tar -zxvf abacus-LTS.tar.gz
Enter the toolchain Directory
cd abacus-develop/toolchain
# if you download abacus via wget from codeload
cd abacus-develop-LTS/toolchain
Run the toolchain_intel.sh Script
sh ./toolchain_intel.sh
This starts the Toolchain-based compilation and installation of the ABACUS dependency libraries. With the default settings of the Intel Toolchain, the following steps are performed:
Check the version of the system GNU compiler (new in the 2025-02 Toolchain) and link the Intel compiler, printing output similar to:
MPI is detected and it appears to be Intel MPI
Checking system GCC version for gcc, intel and amd toolchain
Your System gcc/g++/gfortran version should be consistent
Minimum required version: 5
Your gcc version: 11.3.0
Your g++ version: 11.3.0
Your gfortran version: 11.3.0
Your GCC version seems to be enough for ABACUS installation.
Using MKL, so openblas is disabled.
Compiling with 16 processes for target native.
Step gcc took 0.00 seconds.
==================== Finding Intel compiler from system paths ====================
path to icx is /opt/intel/oneapi/compiler/2025.1/bin/icx
path to icpx is /opt/intel/oneapi/compiler/2025.1/bin/icpx
path to ifx is /opt/intel/oneapi/compiler/2025.1/bin/ifx
CC is /opt/intel/oneapi/compiler/2025.1/bin/icx
CXX is /opt/intel/oneapi/compiler/2025.1/bin/icpx
FC is /opt/intel/oneapi/compiler/2025.1/bin/ifx
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.
Note: Compiling ABACUS and its dependencies imposes a minimum version on the system C++ compiler: GCC must be version 5 or newer, even when the Intel or AMD dependency libraries and compilers are used for compilation and installation. Some old supercomputers, however, still run an old CentOS kernel with GCC 4.8.5. If strange errors occur while the Intel Toolchain is compiling, it is very likely that the server's GCC is too old for the Intel compiler to build the ABACUS-related programs; in that case, please contact the server administrator. A quick check is sketched below.
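Assuming a standard GNU toolchain is installed, the versions can be checked up front with the usual commands (each should report 5 or newer, and ideally the same version):

gcc --version
g++ --version
gfortran --version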
Download OpenBLAS (the Intel Toolchain performs this step too) and use the genarch program shipped with OpenBLAS to identify the system instruction set and architecture. The output differs from server to server, for example:
# Run toolchain on Intel-CPU Machine
==================== Getting proc arch info using OpenBLAS tools ====================
wget https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29 -O OpenBLAS-0.3.29.tar.gz --no-check-certificate
--2025-05-06 16:58:40--  https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29
Resolving codeload.github.com (codeload.github.com)... 20.205.243.165
Connecting to codeload.github.com (codeload.github.com)|20.205.243.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'OpenBLAS-0.3.29.tar.gz'

OpenBLAS-0.3.29.tar.gz      [       <=>          ]  23.53M  6.75MB/s    in 3.5s

2025-05-06 16:58:45 (6.75 MB/s) - 'OpenBLAS-0.3.29.tar.gz' saved [24671913]

OpenBLAS-0.3.29.tar.gz: OK
Checksum of OpenBLAS-0.3.29.tar.gz Ok
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64
If the message ./f_check: 100: [: Illegal number: appears here, be aware that it is only a harmless artifact of running the OpenBLAS genarch program and does not affect the Toolchain compilation process.
Automatically download and install the CMake build tool.
Locate and link the Intel MPI parallel library and the Intel MKL mathematical library from the environment variables.
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /opt/intel/oneapi/mpi/2021.15/bin/mpiexec
path to mpiicx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
path to mpiicpx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
path to mpiifx is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Found lib directory /opt/intel/oneapi/mpi/2021.15/lib
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpx
I_MPI_CC is icx
I_MPI_FC is ifx
MPICXX is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
MPICC is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
MPIFC is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Step intelmpi took 0.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /opt/intel/oneapi/mkl/2025.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.
Automatically download, compile, and install:
Mathematical library: ELPA
Functional library: LibXC
Libraries required for other features: CEREAL, RapidJSON (optional: LibRI, LibComm, LibTorch, Libnpy)
The complete output is as follows (this example was run in a Bohr container):
MPI is detected and it appears to be Intel MPI
Checking system GCC version for gcc, intel and amd toolchain
Your System gcc/g++/gfortran version should be consistent
Minimum required version: 5
Your gcc version: 11.3.0
Your g++ version: 11.3.0
Your gfortran version: 11.3.0
Your GCC version seems to be enough for ABACUS installation.
Using MKL, so openblas is disabled.
Compiling with 16 processes for target native.
Step gcc took 0.00 seconds.
==================== Finding Intel compiler from system paths ====================
path to icx is /opt/intel/oneapi/compiler/2025.1/bin/icx
path to icpx is /opt/intel/oneapi/compiler/2025.1/bin/icpx
path to ifx is /opt/intel/oneapi/compiler/2025.1/bin/ifx
CC is /opt/intel/oneapi/compiler/2025.1/bin/icx
CXX is /opt/intel/oneapi/compiler/2025.1/bin/icpx
FC is /opt/intel/oneapi/compiler/2025.1/bin/ifx
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.
==================== Getting proc arch info using OpenBLAS tools ====================
wget https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29 -O OpenBLAS-0.3.29.tar.gz --no-check-certificate
--2025-05-09 17:15:04--  https://codeload.github.com/OpenMathLib/OpenBLAS/tar.gz/v0.3.29
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.37, 10.255.254.7
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'OpenBLAS-0.3.29.tar.gz'

OpenBLAS-0.3.29.tar.gz      [       <=>          ]  23.53M  4.64MB/s    in 5.5s

2025-05-09 17:15:11 (4.30 MB/s) - 'OpenBLAS-0.3.29.tar.gz' saved [24671913]

OpenBLAS-0.3.29.tar.gz: OK
Checksum of OpenBLAS-0.3.29.tar.gz Ok
./f_check: 100: [: Illegal number:
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64
==================== Installing CMake ====================
wget https://cmake.org/files/v3.31/cmake-3.31.7-linux-x86_64.sh -O cmake-3.31.7-linux-x86_64.sh --no-check-certificate
--2025-05-09 17:15:16--  https://cmake.org/files/v3.31/cmake-3.31.7-linux-x86_64.sh
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.37, 10.255.254.7, 10.255.254.18
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.37|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 55005854 (52M) [text/x-sh]
Saving to: 'cmake-3.31.7-linux-x86_64.sh'

cmake-3.31.7-linux-x86_64.sh  100%[=====================================>]  52.46M  7.10MB/s    in 8.7s

2025-05-09 17:15:25 (6.04 MB/s) - 'cmake-3.31.7-linux-x86_64.sh' saved [55005854/55005854]

cmake-3.31.7-linux-x86_64.sh: OK
Checksum of cmake-3.31.7-linux-x86_64.sh Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/cmake-3.31.7
Step cmake took 11.00 seconds.
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /opt/intel/oneapi/mpi/2021.15/bin/mpiexec
path to mpiicx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
path to mpiicpx is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
path to mpiifx is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Found lib directory /opt/intel/oneapi/mpi/2021.15/lib
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpx
I_MPI_CC is icx
I_MPI_FC is ifx
MPICXX is /opt/intel/oneapi/mpi/2021.15/bin/mpiicpx
MPICC is /opt/intel/oneapi/mpi/2021.15/bin/mpiicx
MPIFC is /opt/intel/oneapi/mpi/2021.15/bin/mpiifx
Step intelmpi took 0.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /opt/intel/oneapi/mkl/2025.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.
==================== Installing LIBXC ====================
wget https://gitlab.com/libxc/libxc/-/archive/7.0.0/libxc-7.0.0.tar.bz2 -O libxc-7.0.0.tar.bz2 --no-check-certificate
--2025-05-09 17:15:27--  https://gitlab.com/libxc/libxc/-/archive/7.0.0/libxc-7.0.0.tar.bz2
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: 'libxc-7.0.0.tar.bz2'

libxc-7.0.0.tar.bz2         [       <=>          ]  49.98M  7.63MB/s    in 7.5s

2025-05-09 17:15:36 (6.68 MB/s) - 'libxc-7.0.0.tar.bz2' saved [52408700]

libxc-7.0.0.tar.bz2: OK
Checksum of libxc-7.0.0.tar.bz2 Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/libxc-7.0.0
Step libxc took 42.00 seconds.
Step fftw took 0.00 seconds.
Step scalapack took 0.00 seconds.
==================== Installing ELPA ====================
wget https://elpa.mpcdf.mpg.de/software/tarball-archive/Releases/2025.01.001/elpa-2025.01.001.tar.gz -O elpa-2025.01.001.tar.gz --no-check-certificate
--2025-05-09 17:16:09--  https://elpa.mpcdf.mpg.de/software/tarball-archive/Releases/2025.01.001/elpa-2025.01.001.tar.gz
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2169795 (2.1M) [application/gzip]
Saving to: 'elpa-2025.01.001.tar.gz'

elpa-2025.01.001.tar.gz     100%[=====================================>]   2.07M  7.64KB/s    in 4m 4s

2025-05-09 17:20:15 (8.69 KB/s) - 'elpa-2025.01.001.tar.gz' saved [2169795/2169795]

elpa-2025.01.001.tar.gz: OK
Checksum of elpa-2025.01.001.tar.gz Ok
Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/elpa-2025.01.001/cpu
Step elpa took 535.00 seconds.
==================== Installing CEREAL ====================
===> Notice: This version of CEREAL is downloaded in GitHub master repository <===
wget https://codeload.github.com/USCiLab/cereal/tar.gz/master -O cereal-master.tar.gz --no-check-certificate
--2025-05-09 17:25:04--  https://codeload.github.com/USCiLab/cereal/tar.gz/master
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'cereal-master.tar.gz'

cereal-master.tar.gz        [ <=>                ] 377.35K  --.-KB/s    in 0.1s

2025-05-09 17:25:05 (2.73 MB/s) - 'cereal-master.tar.gz' saved [386409]

Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/cereal-master
Step cereal took 1.00 seconds.
==================== Installing RAPIDJSON ====================
===> Notice: This version of rapidjson is downloaded in GitHub master repository <===
wget https://codeload.github.com/Tencent/rapidjson/tar.gz/master -O rapidjson-master.tar.gz --no-check-certificate
--2025-05-09 17:25:05--  https://codeload.github.com/Tencent/rapidjson/tar.gz/master
Resolving ga.dp.tech (ga.dp.tech)... 10.255.254.18, 10.255.254.7, 10.255.254.37
Connecting to ga.dp.tech (ga.dp.tech)|10.255.254.18|:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'rapidjson-master.tar.gz'

rapidjson-master.tar.gz     [ <=>                ]   1.06M  1.94MB/s    in 0.6s

2025-05-09 17:25:06 (1.94 MB/s) - 'rapidjson-master.tar.gz' saved [1116059]

Installing from scratch into /opt/abacus-develop-LTS/toolchain/install/rapidjson-master
Step rapidjson took 1.00 seconds.
Step libtorch took 0.00 seconds.
Step libnpy took 0.00 seconds.
Step libri took 0.00 seconds.
Step libcomm took 0.00 seconds.
========================== usage =========================
Done!
To use the installed tools and libraries and ABACUS version compiled with it
you will first need to execute at the prompt:
  source /opt/abacus-develop-LTS/toolchain/install/setup
To build ABACUS by gnu-toolchain, just use:
  ./build_abacus_gnu.sh
To build ABACUS by intel-toolchain, just use:
  ./build_abacus_intel.sh
To build ABACUS by amd-toolchain in gcc-aocl, just use:
  ./build_abacus_gcc-aocl.sh
To build ABACUS by amd-toolchain in aocc-aocl, just use:
  ./build_abacus_aocc-aocl.sh
or you can modify the builder scripts to suit your needs.
After the dependencies compile successfully, run build_abacus_intel.sh as the final output suggests:
sh ./build_abacus_intel.sh
This completes the compilation and installation of the ABACUS binary itself. The approximate output is:
-- The CXX compiler identification is IntelLLVM 2025.1.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/intel/oneapi/compiler/2025.1/bin/icpx - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- RapidJSON found. Headers:
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Found git: attempting to get commit info...
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at CMakeLists.txt:104 (message):
  Failed to get git commit info
-- Found Cereal: /opt/abacus-develop-LTS/toolchain/install/cereal-master/include/cereal
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- ELPA : We need pkg-config to get all information about the elpa library
-- Found ELPA: /opt/abacus-develop-LTS/toolchain/install/elpa-2025.01.001/cpu/lib/libelpa_openmp.so
-- Performing Test ELPA_VERSION_SATISFIES
-- Performing Test ELPA_VERSION_SATISFIES - Success
-- Found MPI_CXX: /opt/intel/oneapi/mpi/2021.15/lib/libmpicxx.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_CXX: -fiopenmp (found version "5.1")
-- Found OpenMP: TRUE (found version "5.1")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- MKL_VERSION: 2025.1.0
-- MKL_ROOT: /opt/intel/oneapi/mkl/2025.1
-- MKL_ARCH: intel64
-- MKL_SYCL_LINK: None, set to `dynamic` by default
-- MKL_LINK: None, set to `dynamic` by default
-- MKL_SYCL_INTERFACE_FULL: intel_lp64
-- MKL_INTERFACE_FULL: intel_lp64
-- MKL_SYCL_THREADING: None, set to `tbb_thread` by default
-- MKL_THREADING: None, set to `intel_thread` by default
-- MKL_MPI: None, set to `intelmpi` by default
-- Experimental oneMKL Data Fitting SYCL API does not support LP64 on CPU
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_scalapack_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_cdft_core.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_intel_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_intel_thread.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_core.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_blacs_intelmpi_lp64.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_blas.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_lapack.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_dft.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_sparse.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_data_fitting.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_rng.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_stats.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_sycl_vm.so
-- Found /opt/intel/oneapi/mkl/2025.1/lib/libmkl_tbb_thread.so
-- Found /opt/intel/oneapi/compiler/2025.1/lib/libiomp5.so
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Found Libxc: version 7.0.0
-- Configuring done (1.3s)
-- Generating done (0.2s)
-- Build files have been written to: /opt/abacus-develop-LTS/build_abacus_intel
[  0%] Building CXX object source/CMakeFiles/driver.dir/driver.cpp.o
...
[100%] Built target io_basic
[100%] Building CXX object CMakeFiles/abacus.dir/source/main.cpp.o
[100%] Linking CXX executable abacus
[100%] Built target abacus
-- Install configuration: ""
-- Installing: /opt/abacus-develop-LTS/bin/abacus
========================== usage =========================
Done!
To use the installed ABACUS version
You need to source /opt/abacus-develop-LTS/toolchain/abacus_env.sh first!
The output above means that the compilation and installation of ABACUS is complete. Note that environment variables read from the system are not written into the install/setup file that abacus_env.sh wraps. Therefore, when actually using this abacus binary, you must load the corresponding Intel OneAPI environment in addition to sourcing abacus_env.sh.
# after loading the intel-OneAPI env via source or module-load
source abacus_env.sh
This loads all of the ABACUS dependency-library environments together with the ABACUS binary itself. You can then confirm that ABACUS is correctly installed and loaded with the abacus --version command.
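For example (a minimal sketch; the repository path follows the /opt layout used in the logs above, and the printed version string depends on your build):

# load the Intel OneAPI environment first, then:
source /opt/abacus-develop-LTS/toolchain/abacus_env.sh
abacus --version   # prints the installed ABACUS version string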
1.3 Installing ABACUS Based on Older Intel OneAPI Versions
In many cases we are forced to use an older OneAPI, for example:
Compiling ABACUS with GPU support using the Intel libraries (in this case only the icpc compiler can be used to compile ABACUS)
Compiling ABACUS with Intel OneAPI on an AMD-CPU server (in this case only the icpc compiler can be used to compile ELPA)
The server only provides an old OneAPI (preferably still newer than version 2023) or Parallel Studio XE
Note: ABACUS compiled with Intel OneAPI on an AMD-CPU server will perform worse than the other solutions. A quick compiler check is sketched below.
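Before editing anything, it can help to check which Intel compilers and MPI wrappers the loaded environment actually provides; a minimal sketch using standard shell commands:

# new LLVM-based compilers
which icx icpx ifx
# classic compilers (icc/icpc/ifort), no longer shipped with recent OneAPI releases
which icc icpc ifort
# Intel MPI compiler wrappers
which mpiicpx mpiicpc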
In these situations we can still complete the Toolchain-based compilation of ABACUS by editing the key scripts. The user-facing scripts are toolchain_*.sh and build_abacus_*.sh; for the Intel Toolchain they are toolchain_intel.sh and build_abacus_intel.sh. Open toolchain_intel.sh with vim:
#!/bin/bash
#SBATCH -J install
#SBATCH -N 1
#SBATCH -n 16
#SBATCH -o compile.log
#SBATCH -e compile.err

# JamesMisaka in 2025-05-05
# install abacus dependency by intel-toolchain
# use mkl and intelmpi
# but mpich and openmpi can also be tried
# libtorch and libnpy are for deepks support, which can be =no
# gpu-lcao supporting modify: CUDA_PATH and --enable-cuda
# export CUDA_PATH=/usr/local/cuda

# module load mkl mpi compiler

./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=no \
| tee compile.log

# for using AMD-CPU or GPU-version: set --with-intel-classic=yes
# to enable gpu-lcao, add the following lines:
# --enable-cuda \
# --gpu-ver=75 \
# one should check your gpu compute capability number
Type :34 to jump to line 34, press the i or a key to enter insert mode (curious readers can observe the difference between the two), and change --with-intel-classic=no to --with-intel-classic=yes, so that the call reads:
./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=yes \
| tee compile.log
Note: As you can see, the toolchain_*.sh scripts actually do their work by calling the main script install_abacus_toolchain.sh, merely setting different compilation options for the different toolchains so that they work out of the box and are easy to edit. Be aware that when bash is invoked this way, the options are joined into a single command line: there must be no spaces or other characters after the line-continuation backslash "\", and no comment lines in between the continued lines.
Press the ESC key to leave insert mode and type :wq to save and quit. The Toolchain will now detect the classic Intel compilers (icc, icpc, ifort) and their MPI wrappers when compiling the dependency libraries. Running toolchain_intel.sh then produces output that differs from before:
==================== Finding Intel compiler from system paths ====================
path to icc is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icc
path to icpc is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icpc
path to ifort is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/ifort
CC is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icc
CXX is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/icpc
FC is /mnt/sg001/opt/intel/oneapi/compiler/2022.2.1/linux/bin/intel64/ifort
Step intel took 0.00 seconds.
Step amd took 0.00 seconds.
==================== Getting proc arch info using OpenBLAS tools ====================
OpenBLAS detected LIBCORE = skylakex
OpenBLAS detected ARCH = x86_64
==================== Finding CMake from system paths ====================
path to cmake is /mnt/sg001/home/fz_pku_jh/software/cmake/3.31.7/bin/cmake
Step cmake took 0.00 seconds.
==================== Finding Intel MPI from system paths ====================
path to mpiexec is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiexec
path to mpiicc is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicc
path to mpiicpc is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicpc
path to mpiifort is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiifort
Found lib directory /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/lib/release
libmpi is found in ld search path
libmpicxx is found in ld search path
I_MPI_CXX is icpc
I_MPI_CC is icc
I_MPI_FC is ifort
MPICXX is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicpc
MPICC is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiicc
MPIFC is /mnt/sg001/opt/intel/oneapi/mpi/2021.7.1/bin/mpiifort
Step intelmpi took 1.00 seconds.
==================== Finding MKL from system paths ====================
MKLROOT is found to be /mnt/sg001/opt/intel/oneapi/mkl/2022.2.1
libm is found in ld search path
libdl is found in ld search path
Step mkl took 0.00 seconds.
Thinking: why does the compilation output above pick up cmake from the system? Which other compilation option did I modify? The answer is revealed in the next part.
After editing and running toolchain_intel.sh this way, next open build_abacus_intel.sh with vim and adjust the compilers there as well.
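The exact edit is not reproduced here; judging from the cmake block shown later in this tutorial, it presumably amounts to switching the two compiler definitions inside the cmake call to the classic compilers, roughly:

# in the cmake call of build_abacus_intel.sh (a sketch, not the verbatim script)
# before: -DCMAKE_CXX_COMPILER=icpx \
#         -DMPI_CXX_COMPILER=mpiicpx \
# after:
-DCMAKE_CXX_COMPILER=icpc \
-DMPI_CXX_COMPILER=mpiicpc \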
After saving and exiting, run build_abacus_intel.sh to finish the build. A large number of icc-related warnings may appear during compilation; they merely remind you that old Intel compilers such as icpc are no longer supported in newer Intel OneAPI releases and can be ignored. In this way ABACUS can be compiled against an older Intel OneAPI.
Some Intel OneAPI releases are transitional (such as Intel OneAPI 2023.2) and may provide both icpx and mpiicpc but not mpiicpx. For this situation there is another compilation option, --with-intel-mpi-clas=yes. Edit toolchain_intel.sh so that the install_abacus_toolchain.sh call reads:
./install_abacus_toolchain.sh \
--with-intel=system \
--math-mode=mkl \
--with-gcc=no \
--with-intelmpi=system \
--with-cmake=install \
--with-scalapack=no \
--with-libxc=install \
--with-fftw=no \
--with-elpa=install \
--with-cereal=install \
--with-rapidjson=install \
--with-libtorch=no \
--with-libnpy=no \
--with-libri=no \
--with-libcomm=no \
--with-intel-classic=no \
--with-intel-mpi-clas=yes \
| tee compile.log
Then icpx and mpiicpc are used to compile the ABACUS dependency software. When compiling ABACUS itself, modify the compiler in build_abacus_intel.sh in a similar way to the above.
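The corresponding snippet is not shown here; based on the cmake block used elsewhere in this tutorial, the change presumably keeps the LLVM-based C++ compiler but switches the MPI wrapper, roughly:

# in the cmake call of build_abacus_intel.sh (a sketch for the transitional OneAPI case)
-DCMAKE_CXX_COMPILER=icpx \
-DMPI_CXX_COMPILER=mpiicpc \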
Great! You have now learned how to customize the Toolchain by editing toolchain_intel.sh and build_abacus_intel.sh so that ABACUS can be compiled with Intel dependencies under different machine conditions. The same customization can also be used to bring in ABACUS functional plugins, such as the LibRI library for hybrid functionals or the Torch dependency required for DeePKS support.
2. Introducing ABACUS Functional Plugins Based on the Intel Toolchain
In the previous sections we viewed and edited the two core Toolchain scripts, toolchain_*.sh and build_abacus_*.sh. By editing these two scripts we can easily build ABACUS with different requirements on top of the Toolchain.
Among them, toolchain_*.sh does its work mainly by calling the main script install_abacus_toolchain.sh with a number of compilation options. These options and their functions fall into several categories:
--with-PKG=[install, system, no, [abspath]]: for each dependency library, either download the package and install it (install), pick the library up from the system environment variables (system), do not use it at all (no), or [advanced] locate it from an absolute path.
--math-mode=[mkl, openblas, aocl]: specifies the type of math library the Toolchain uses. The default is openblas, but if the environment variable $MKLROOT is detected it switches to MKL (which is why the OneAPI environment must not be loaded when using the GNU Toolchain).
--with-option=[yes, no] and other option-style flags: select particular variants of specific dependencies, such as --with-intel-classic=[yes, no] for the Intel Toolchain or --with-openmpi4=[yes, no] for the GNU Toolchain. More details are available in the README or by running install_abacus_toolchain.sh --help. This also answers the thinking question left above: I simply used the --with-cmake=system option for that particular build. An illustrative invocation follows this list.
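To make the option categories concrete, here is a hedged example invocation of the main script (the libxc path is hypothetical, and options not listed here are assumed to keep their defaults):

./install_abacus_toolchain.sh \
--with-cmake=system \
--with-libxc=/opt/libxc-7.0.0 \
--with-fftw=no \
--math-mode=mkl \
| tee compile.log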
With this understood, compiling ABACUS functional plugins with the Toolchain becomes straightforward. The following again takes the Intel Toolchain as an example and briefly shows how to compile ABACUS with hybrid-functional and DeePKS support.
2.1 Compiling ABACUS Supporting Hybrid Functionals
In toolchain_intel.sh, modify the following two options to install:
--with-libri=install \
--with-libcomm=install \
After editing any other options you need, run toolchain_intel.sh. The Toolchain will automatically download and install the LibRI and LibComm libraries and add them to the environment-variable management file install/setup. Afterwards, edit build_abacus_intel.sh, uncomment the relevant lines (remove the leading #), and wire them into the cmake call:
PREFIX=$ABACUS_DIR
ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
CEREAL=$INSTALL_DIR/cereal-master/include/cereal
LIBXC=$INSTALL_DIR/libxc-7.0.0
RAPIDJSON=$INSTALL_DIR/rapidjson-master/
# LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
# LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
LIBCOMM=$INSTALL_DIR/LibComm-master
# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
# Notice: if you are compiling with AMD-CPU or GPU-version ABACUS, then `icpc` and `mpiicpc` compilers are recommended

cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
        -DCMAKE_CXX_COMPILER=icpx \
        -DMPI_CXX_COMPILER=mpiicpx \
        -DMKLROOT=$MKLROOT \
        -DELPA_DIR=$ELPA \
        -DCEREAL_INCLUDE_DIR=$CEREAL \
        -DLibxc_DIR=$LIBXC \
        -DENABLE_LCAO=ON \
        -DENABLE_LIBXC=ON \
        -DUSE_OPENMP=ON \
        -DUSE_ELPA=ON \
        -DENABLE_RAPIDJSON=ON \
        -DRapidJSON_DIR=$RAPIDJSON \
        -DENABLE_LIBRI=ON \
        -DLIBRI_DIR=$LIBRI \
        -DLIBCOMM_DIR=$LIBCOMM \
#        -DENABLE_DEEPKS=1 \
#        -DTorch_DIR=$LIBTORCH \
#        -Dlibnpy_INCLUDE_DIR=$LIBNPY \
Then run the build_abacus_intel.sh script. To emphasize again: there must be no spaces or other characters after the line-continuation backslash "\", and no comment lines between the continued lines.
Note: ABACUS compiled with Intel OneAPI often has stronger OpenMP parallel performance, so many OpenMP threads can be opened to accelerate the calculation through thread-level parallelism. This is usually especially efficient for calculations that exercise the EXX part, such as hybrid functionals; an illustrative launch command follows.
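For example, a hybrid MPI plus OpenMP launch might look like this (the process and thread counts are purely illustrative; adapt them to your node):

# 4 MPI processes x 4 OpenMP threads, e.g. on a 16-core node
export OMP_NUM_THREADS=4
mpirun -np 4 abacus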
2.2 Compiling ABACUS Supporting DeePKS
In toolchain_intel.sh, modify the following two options to install:
--with-libtorch=install \
--with-libnpy=install \
After running toolchain_intel.sh, uncomment and adjust the relevant lines in build_abacus_intel.sh, for example:
PREFIX=$ABACUS_DIR
ELPA=$INSTALL_DIR/elpa-2025.01.001/cpu
# ELPA=$INSTALL_DIR/elpa-2025.01.001/nvidia # for gpu-lcao
CEREAL=$INSTALL_DIR/cereal-master/include/cereal
LIBXC=$INSTALL_DIR/libxc-7.0.0
RAPIDJSON=$INSTALL_DIR/rapidjson-master/
LIBTORCH=$INSTALL_DIR/libtorch-2.1.2/share/cmake/Torch
LIBNPY=$INSTALL_DIR/libnpy-1.0.1/include
LIBRI=$INSTALL_DIR/LibRI-0.2.1.0
LIBCOMM=$INSTALL_DIR/LibComm-master
# DEEPMD=$HOME/apps/anaconda3/envs/deepmd # v3.0 might have problem
# Notice: if you are compiling with AMD-CPU or GPU-version ABACUS, then `icpc` and `mpiicpc` compilers are recommended

cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \
        -DCMAKE_CXX_COMPILER=icpx \
        -DMPI_CXX_COMPILER=mpiicpx \
        -DMKLROOT=$MKLROOT \
        -DELPA_DIR=$ELPA \
        -DCEREAL_INCLUDE_DIR=$CEREAL \
        -DLibxc_DIR=$LIBXC \
        -DENABLE_LCAO=ON \
        -DENABLE_LIBXC=ON \
        -DUSE_OPENMP=ON \
        -DUSE_ELPA=ON \
        -DENABLE_RAPIDJSON=ON \
        -DRapidJSON_DIR=$RAPIDJSON \
        -DENABLE_LIBRI=ON \
        -DLIBRI_DIR=$LIBRI \
        -DLIBCOMM_DIR=$LIBCOMM \
        -DENABLE_DEEPKS=1 \
        -DTorch_DIR=$LIBTORCH \
        -Dlibnpy_INCLUDE_DIR=$LIBNPY \
Run this build_abacus_intel.sh, modified as above, and you obtain an ABACUS build with both LibRI and DeePKS support.
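As a quick sanity check after the build, one can verify that the binary actually links against libtorch (assuming libtorch was linked as a shared library; the installation path below follows the earlier logs and may differ on your machine):

ldd /opt/abacus-develop-LTS/bin/abacus | grep -i torch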
Summary
This tutorial is part of the ABACUS user tutorial series. Taking the Intel Toolchain installation of ABACUS as an example, it has explained how to use the Toolchain, including how to edit its key scripts to switch between different compilation dependencies or to add specific functional plugins.
Subsequent tutorials will cover further installation topics:
Installing ABACUS optimized for AMD CPUs based on the AMD Toolchain
Compiling and installing the GPU version of ABACUS based on the Toolchain
Methods for quickly deploying ABACUS through Conda and Docker