Compiling the Fortran versions
Below are details for compiling the Fortran versions of SFINCS on various computational systems.
For large problem sizes (i.e. high resolution), superlu_dist seems to work better than mumps. (Mumps returns a range of strange error messages.) Therefore it is recommended that you use superlu_dist rather than mumps by setting whichParallelSolverToFactorPreconditioner = 2 in input.namelist.
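For reference, here is a minimal sketch of the corresponding fragment of input.namelist. The namelist group name shown (&otherNumericalParameters) is an assumption that may differ between SFINCS versions, so place the setting in whichever group your existing input file uses for the solver options:
&otherNumericalParameters
  ! 2 selects superlu_dist (rather than mumps) for factoring the preconditioner
  whichParallelSolverToFactorPreconditioner = 2
/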
Superlu_dist had a bug prior to version 3.3 which caused SFINCS to sometimes return NaN, or to require many more iterations of the Krylov solver than necessary. Therefore be sure to use superlu_dist version 3.3 or later. On Cray computational systems, superlu_dist is included in the "cray-tpsl" module, and you can run module help cray-tpsl to check which version of superlu_dist is being used.
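If you are unsure which versions are installed, the standard module commands can be used (a generic sketch; the versions listed depend on the system):
# List the cray-tpsl versions available on the system.
module avail cray-tpsl
# Show the description of the default/loaded cray-tpsl, including its superlu_dist version.
module help cray-tpsl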
On Edison (NERSC), the following procedure worked as of 22 Feb 2014.
First, enter the following commands from the sfincs/fortran/singleSpecies/ and/or sfincs/fortran/multiSpecies/ directories:
cp makefile.edison makefile
module load cray-petsc
module load cray-hdf5-parallel
make all
(Note that this procedure uses the default compiler, Intel.) During compilation I get this warning several times:
ifort: command line warning #10120: overriding '-xAVX' with '-msse3'
as well as this warning during linking:
/opt/cray/hdf5-parallel/1.8.11/INTEL/130/lib/libhdf5_parallel.a(H5PL.o): In function `H5PL_load':
H5PL.c:(.text+0x393): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
But SFINCS seems to run fine nonetheless.
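Once compiled, the executable must be launched on the compute nodes. A minimal sketch follows; the core count is an arbitrary example, and aprun must be invoked from within a batch job or interactive allocation on the Cray system:
# Launch SFINCS on 24 cores with the Cray application launcher.
aprun -n 24 ./sfincs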
OBSERVE: PETSc seems to have a compatibility problem with the IBM implementation of MPI, which is the default at Hydra. Errors arise for large problems, when many MPI messages are sent and received. It is therefore recommended to avoid the IBM parallel environment entirely and use Intel MPI instead.
On Hydra (rzg.mpg.de), the following procedure worked as of 16 Dec 2014.
Load the following modules:
module switch intel/14.0
module unload mpi.ibm/1.3.0
module load mpi.intel/4.1.3
module switch mkl/11.1
module load petsc-real/3.5.2
Set path to HDF5:
export HDF5_HOME=/hydra/u/system/SLES11/soft/hdf5/1.8.11/intel13.1/mpi.intel-4.1.0
export PATH=${PATH}:${HDF5_HOME}/bin
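As an optional sanity check (a generic sketch, not part of the original procedure), you can verify that the HDF5 tools are now found on the PATH:
# Should report the h5dump binary under ${HDF5_HOME}/bin.
which h5dump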
Enter the following (tcsh) commands from the sfincs/fortran/singleSpecies/ and/or sfincs/fortran/multiSpecies/ directories to build SFINCS:
setenv SFINCS_SYSTEM hydra
make clean
make all
To run, one also needs to add ${HDF5_HOME}/lib to LD_LIBRARY_PATH, i.e. add
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HDF5_HOME}/lib
in the job submission file. To use the Intel parallel environment, put
# @ job_type = mpich
in the job submission file, and submit the parallel job with mpiexec instead of poe.
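A minimal sketch of such a job submission file is shown below. It is only illustrative: the LoadLeveler directives other than job_type (node and task counts, wall clock limit, output file names) are placeholder assumptions to adapt to your case.
# @ shell = /bin/bash
# @ job_type = mpich
# @ node = 1
# @ tasks_per_node = 8
# @ wall_clock_limit = 01:00:00
# @ output = sfincs.$(jobid).out
# @ error = sfincs.$(jobid).err
# @ queue
# Make the Intel-compiled HDF5 libraries visible at run time (paths as above).
export HDF5_HOME=/hydra/u/system/SLES11/soft/hdf5/1.8.11/intel13.1/mpi.intel-4.1.0
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HDF5_HOME}/lib
# Launch with the Intel MPI launcher rather than poe.
mpiexec -n 8 ./sfincs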
As an example to run interactively with 8 processes:
ssh hydra-i.rzg.mpg.de
In the simulation directory, create a file named host.list containing one line reading localhost for each process you intend to use (i.e. 8 lines in this example).
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${HDF5_HOME}/lib
poe ./sfincs -procs 8
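The host.list file can be generated with a one-line command that works in both bash and tcsh (a trivial sketch; change 8 to the number of processes you will use):
# Write 8 lines containing "localhost" to host.list.
yes localhost | head -n 8 > host.list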
On Glenn (C3SE, Chalmers), the following procedure worked as of 31 Oct 2014.
Configure your GitHub account. Download and install CMake. Download and install Valgrind. Do not use the PGI compiler; here we use gcc. Download and install PETSc (we used petsc-3.5.2):
export PETSC_DIR=$HOME/petsc-3.5.2   # or wherever you put PETSc
export PETSC_ARCH=linux-glenn-mpi-real
module load gcc/4.7/4.7.3
module load openmpi/1.5.4
module load acml/gfortran64_fma4_mp/5.3.0
./configure --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions PETSC_ARCH=linux-glenn-mpi-real --with-large-file-io=1 --with-shared-libraries --with-debugging=no --known-mpi-shared-libraries=0 --with-mpi=1 --with-mpi-dir=$MPI_HOME --with-mpiexec=mpirun --with-blacs=1 --with-blacs-lib=/c3se/apps/Glenn/ScaLAPACK/2.0.1-gcc46_openmpi15/lib/libscalapack.a --with-blacs-include=. --with-scalapack=1 --with-scalapack-lib=/c3se/apps/Glenn/ScaLAPACK/2.0.1-gcc46_openmpi15/lib/libscalapack.a --with-scalapack-include=. --with-blas-lapack-dir=/c3se/apps/Common/acml/5.3.0/gfortran64_fma4_mp/lib --with-fftw=0 --with-x=0 --with-batch=1 --with-hdf5=1 --with-hdf5-dir=/c3se/apps/Glenn/hdf5/1.8.9-gcc4.7-openmpi1.5.3/ --with-metis=1 --download-metis --with-superlu_dist=1 --download-superlu_dist --with-parmetis=1 --download-parmetis --with-valgrind=1 --with-valgrind-dir=YOUR_PATH_TO_VALGRIND
(notice that you have to change YOUR_PATH_TO_VALGRIND to your Valgrind installation)
Then follow the instructions from configure about submitting a batch configure script and running reconfigure.
./reconfigure-linux-glenn-mpi-real.py
make all test
Clone a copy of SFINCS from GitHub and enter the directory of the version you want to build (sfincs/fortran/singleSpecies/ or sfincs/fortran/multiSpecies/):
export SFINCS_SYSTEM=glenn
make clean
make all
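To check the build, the executable can then be launched with the Open MPI launcher loaded above (a minimal sketch; the process count is arbitrary, and production runs on Glenn should go through the batch system):
# Run SFINCS on 8 MPI processes using the openmpi module loaded earlier.
mpirun -n 8 ./sfincs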