33 Commits

Author SHA1 Message Date
122329eca7 Fix zeroing 2023-01-23 17:01:29 +01:00
Ania Brown
58c0bf078e Zero Tijk correctly in CPU code 2023-01-23 16:58:08 +01:00
3fe15e5e5c Fix bs and ths error in equations 2023-01-23 16:57:07 +01:00
0d223e6ed9 Fix vector types for energy in cpu 2023-01-23 14:44:54 +01:00
c8bdc4239f Fix an odd character in the warmup 2023-01-23 14:43:17 +01:00
Ania Brown
be96e4bf8c 1.syntax error fix 2.allocate temporary buffers only once per sim 2023-01-23 14:30:11 +01:00
Anna Brown
9003c218a3 don't need to copy to separate mpi_data array on the host when sources are resident on gpu 2023-01-23 14:25:25 +01:00
Ania Brown
4af47a0bb7 Initialize sources on gpus when ATRIP_SOURCES_IN_GPU 2023-01-23 14:21:51 +01:00
Ania Brown
9a5a2487be Add warmup in the SliceUnion 2023-01-23 13:46:20 +01:00
c4ec227185 Clean getEnergyDistinct 2023-01-13 16:59:19 +01:00
1ceb4cf0d6 Fix maybeConjugate cuda scope 2023-01-13 12:08:54 +01:00
34a4e79db0 Initial compiling implementation of the energy kernel 2023-01-13 11:33:42 +01:00
249f1c0b51 Add raven modules for cuda 2023-01-04 15:23:36 +01:00
1d96800d45 Add support for reading tensors from file in atrip bench 2022-12-06 21:20:03 +01:00
9087e3af19 Update workflows 2022-12-06 20:58:32 +01:00
418fd9d389 Add simple cuda bench configuration 2022-12-06 20:57:34 +01:00
895cd02778 Add some documentation about running the benches 2022-12-06 20:38:57 +01:00
8efa3d911e Add --max-iterations to main bench 2022-12-06 20:38:38 +01:00
0fa24404e5 Improve the documentation in the readme for benches building 2022-12-06 14:17:53 +01:00
8f7d05efda Add Building information and building for sources on GPU 2022-12-06 13:26:44 +01:00
ad542fe856 Add the slicing into the GPU 2022-12-05 21:16:30 +01:00
658397ebd7 Update in SliceUnion ATRIP_SOURCES_IN_GPU 2022-12-05 17:55:23 +01:00
26e2f2d109 Add ATRIP_SOURCES_IN_GPU and ATRIP_CUDA_AWARE_MPI defines in configure 2022-12-05 17:49:54 +01:00
871471aae3 Fix naive-tuples experimentation bench 2022-10-18 16:23:43 +02:00
65a64f3f8c Test on all pushes 2022-10-08 16:05:40 +02:00
4f9f09e965 Cleanup flags handling in configure 2022-10-08 16:04:55 +02:00
6dc943e10a Clean up DatabaseCommunicator 2022-10-08 16:04:37 +02:00
ed347ab0d9 Rename NAIVE_SLOW into ATRIP_NAIVE_SLOW 2022-10-08 16:04:03 +02:00
8c5c47e208 Add atrip-def m4 macro 2022-10-08 16:03:08 +02:00
6871372cac Add support for only calculating DGEMM parts ATRIP_ONLY_DGEMM 2022-10-08 16:02:49 +02:00
452c0fe001 Fix test_main name in automake 2022-10-08 15:59:48 +02:00
b636b89a64 Add configure-benches script 2022-10-08 15:59:25 +02:00
e59d298a01 Rename test_main.cxx to main.cxx 2022-10-08 00:17:57 +02:00
19 changed files with 1200 additions and 499 deletions


@@ -13,7 +13,9 @@
(format "%s/include/" root) (format "%s/include/" root)
(format "%s/" root) (format "%s/" root)
(format "%s/bench/" root) (format "%s/bench/" root)
(format "%s/build/main/" root))))) (format "%s/build/main/" root)))
(setq-local flycheck-clang-include-path
flycheck-gcc-include-path)))
(eval . (flycheck-mode)) (eval . (flycheck-mode))
(eval . (outline-minor-mode)) (eval . (outline-minor-mode))
(indent-tabs-mode . nil) (indent-tabs-mode . nil)


@@ -26,3 +26,110 @@ before the proper paper is released please contact me.
In the meantime the code has been used in
[[https://aip.scitation.org/doi/10.1063/5.0074936][this publication]] and can therefore be cited.
* Building
Atrip uses autotools as its build system.
Autotools works by first creating a =configure= script from
a =configure.ac= file.
Atrip should be built out of source; this means that
you have to create a build directory other than the root
directory, for instance the =build/tutorial= directory
#+begin_src sh :exports code
mkdir -p build/tutorial/
cd build/tutorial
#+end_src
First you have to generate the =configure= script by running
#+begin_src sh :dir build/tutorial :exports code :results raw drawer
../../bootstrap.sh
#+end_src
#+RESULTS:
:results:
Creating configure script
Now you can build by doing
mkdir build
cd build
../configure
make extern
make all
:end:
And then you can list the =configure= options with
#+begin_src sh :dir build/tutorial :results raw drawer :eval no
../../configure --help
#+end_src
** Benches
The script =tools/configure-benches.sh= can be used to create
a set of configurations for the benches:
#+begin_src sh :exports results :results verbatim org :results verbatim drawer replace output
awk '/begin +doc/,/end +doc/ { print $NL }' tools/configure-benches.sh |
grep -v -e "begin \+doc" -e "end \+doc" |
sed "s/^# //; s/^# *$//; /^$/d"
#+end_src
#+RESULTS:
:results:
- default ::
This configuration uses a CPU code with dgemm
and without computing slices.
- only-dgemm ::
This only runs the computation part that involves dgemms.
- cuda-only-dgemm ::
This is the naive CUDA implementation compiling only the dgemm parts
of the compute.
- cuda-slices-on-gpu-only-dgemm ::
This configuration tests that slices reside completely on the gpu
and it should use a CUDA aware MPI implementation.
It also only uses the routines that involve dgemm.
:end:
In order to generate the benches, just create a suitable directory for them
#+begin_src sh :eval no
mkdir -p build/benches
cd build/benches
../../tools/configure-benches.sh CXX=g++ ...
#+end_src
and you will get a Makefile together with several project folders.
You can either configure all projects with =make all= or
go into each folder and configure it individually.
Notice that you can give a path to CTF for all of them by doing
#+begin_src sh :eval no
../../tools/configure-benches.sh --with-ctf=/absolute/path/to/ctf
#+end_src
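For orientation, the presets listed above map roughly onto the configure
flags introduced for Atrip; the following is only a sketch of plausible
manual equivalents (it is not an excerpt of =tools/configure-benches.sh=):
#+begin_src sh :eval no
# Hedged sketch: flags assumed from configure.ac, presets named as above
../../configure                                                   # default
../../configure --enable-only-dgemm                               # only-dgemm
../../configure --enable-cuda --enable-only-dgemm                 # cuda-only-dgemm
../../configure --enable-cuda --enable-only-dgemm \
                --enable-sources-in-gpu --enable-cuda-aware-mpi   # cuda-slices-on-gpu-only-dgemm
#+end_src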
* Running benches
** Main benchmark
The main benchmark gets built in =bench/atrip= and is used to perform an
atrip run with random tensors.
A common invocation of this benchmark is the following
#+begin_src sh
bench/atrip \
--no 100 \
--nv 1000 \
--mod 1 \
--% 0 \
--dist group \
--nocheckpoint \
--max-iterations 1000
#+end_src
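Note that the benchmark calls =MPI_Init=, so on a cluster the command is
normally wrapped in an MPI launcher; the launcher name and rank count
below are only illustrative assumptions:
#+begin_src sh :eval no
# assumption: mpirun is available; 4 ranks chosen arbitrarily
mpirun -np 4 bench/atrip --no 100 --nv 1000 --dist group --max-iterations 10
#+end_src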


@@ -19,7 +19,7 @@ BENCHES_LDADD = $(ATRIP_LIB) $(ATRIP_CTF)
## main entry point and bench ## main entry point and bench
## ##
bin_PROGRAMS += atrip bin_PROGRAMS += atrip
atrip_SOURCES = test_main.cxx atrip_SOURCES = main.cxx
atrip_CPPFLAGS = $(AM_CPPFLAGS) atrip_CPPFLAGS = $(AM_CPPFLAGS)
atrip_LDADD = $(BENCHES_LDADD) atrip_LDADD = $(BENCHES_LDADD)


@@ -5,18 +5,20 @@
#include <CLI11.hpp> #include <CLI11.hpp>
#define _print_size(what, size) \ #define _print_size(what, size) \
if (rank == 0) { \ do { \
std::cout << #what \ if (rank == 0) { \
<< " => " \ std::cout << #what \
<< (double)size * elem_to_gb \ << " => " \
<< "GB" \ << (double)size * elem_to_gb \
<< std::endl; \ << "GB" \
} << std::endl; \
} \
} while (0)
int main(int argc, char** argv) { int main(int argc, char** argv) {
MPI_Init(&argc, &argv); MPI_Init(&argc, &argv);
size_t checkpoint_it; size_t checkpoint_it, max_iterations;
int no(10), nv(100), itMod(-1), percentageMod(10); int no(10), nv(100), itMod(-1), percentageMod(10);
float checkpoint_percentage; float checkpoint_percentage;
bool bool
@@ -30,6 +32,9 @@ int main(int argc, char** argv) {
app.add_option("--no", no, "Occupied orbitals"); app.add_option("--no", no, "Occupied orbitals");
app.add_option("--nv", nv, "Virtual orbitals"); app.add_option("--nv", nv, "Virtual orbitals");
app.add_option("--mod", itMod, "Iteration modifier"); app.add_option("--mod", itMod, "Iteration modifier");
app.add_option("--max-iterations",
max_iterations,
"Maximum number of iterations to run");
app.add_flag("--keep-vppph", keepVppph, "Do not delete Vppph"); app.add_flag("--keep-vppph", keepVppph, "Do not delete Vppph");
app.add_flag("--nochrono", nochrono, "Do not print chrono"); app.add_flag("--nochrono", nochrono, "Do not print chrono");
app.add_flag("--rank-round-robin", rankRoundRobin, "Do rank round robin"); app.add_flag("--rank-round-robin", rankRoundRobin, "Do rank round robin");
@@ -45,14 +50,27 @@ int main(int argc, char** argv) {
checkpoint_percentage, checkpoint_percentage,
"Percentage for checkpoints"); "Percentage for checkpoints");
// Optional tensor files
std::string
ei_path, ea_path,
Tph_path, Tpphh_path,
Vpphh_path, Vhhhp_path, Vppph_path;
app.add_option("--ei", ei_path, "Path for ei");
app.add_option("--ea", ea_path, "Path for ea");
app.add_option("--Tpphh", Tpphh_path, "Path for Tpphh");
app.add_option("--Tph", Tph_path, "Path for Tph");
app.add_option("--Vpphh", Vpphh_path, "Path for Vpphh");
app.add_option("--Vhhhp", Vhhhp_path, "Path for Vhhhp");
app.add_option("--Vppph", Vppph_path, "Path for Vppph");
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
size_t ooo_threads = 0, ooo_blocks = 0; size_t ooo_threads = 0, ooo_blocks = 0;
app.add_option("--ooo-blocks", app.add_option("--ooo-blocks",
ooo_blocks, ooo_blocks,
"CUDA: Number of blocks per block for kernels going through ooo tensors"); "CUDA: Number of blocks per block for kernels going through ooo tensors");
app.add_option("--ooo-threads", app.add_option("--ooo-threads",
ooo_threads, ooo_threads,
"CUDA: Number of threads per block for kernels going through ooo tensors"); "CUDA: Number of threads per block for kernels going through ooo tensors");
#endif #endif
CLI11_PARSE(app, argc, argv); CLI11_PARSE(app, argc, argv);
@@ -148,37 +166,64 @@ int main(int argc, char** argv) {
} }
std::vector<int> symmetries(4, NS) std::vector<int>
, vo({nv, no}) symmetries(4, NS),
, vvoo({nv, nv, no, no}) vo({nv, no}),
, ooov({no, no, no, nv}) vvoo({nv, nv, no, no}),
, vvvo({nv, nv, nv, no}) ooov({no, no, no, nv}),
; vvvo({nv, nv, nv, no});
CTF::Tensor<double> CTF::Tensor<double>
ei(1, ooov.data(), symmetries.data(), world) ei(1, ooov.data(), symmetries.data(), world),
, ea(1, vo.data(), symmetries.data(), world) ea(1, vo.data(), symmetries.data(), world),
, Tph(2, vo.data(), symmetries.data(), world) Tph(2, vo.data(), symmetries.data(), world),
, Tpphh(4, vvoo.data(), symmetries.data(), world) Tpphh(4, vvoo.data(), symmetries.data(), world),
, Vpphh(4, vvoo.data(), symmetries.data(), world) Vpphh(4, vvoo.data(), symmetries.data(), world),
, Vhhhp(4, ooov.data(), symmetries.data(), world) Vhhhp(4, ooov.data(), symmetries.data(), world);
;
// initialize deletable tensors in heap // initialize deletable tensors in heap
auto Vppph auto Vppph
= new CTF::Tensor<double>(4, vvvo.data(), symmetries.data(), world); = new CTF::Tensor<double>(4, vvvo.data(), symmetries.data(), world);
_print_size(Vabci, no*nv*nv*nv) _print_size(Vabci, no*nv*nv*nv);
_print_size(Vabij, no*no*nv*nv) _print_size(Vabij, no*no*nv*nv);
_print_size(Vijka, no*no*no*nv) _print_size(Vijka, no*no*no*nv);
ei.fill_random(-40.0, -2); if (ei_path.size()) {
ea.fill_random(2, 50); ei.read_dense_from_file(ei_path.c_str());
Tpphh.fill_random(0, 1); } else {
Tph.fill_random(0, 1); ei.fill_random(-40.0, -2);
Vpphh.fill_random(0, 1); }
Vhhhp.fill_random(0, 1); if (ea_path.size()) {
Vppph->fill_random(0, 1); ea.read_dense_from_file(ea_path.c_str());
} else {
ea.fill_random(2, 50);
}
if (Tpphh_path.size()) {
Tpphh.read_dense_from_file(Tpphh_path.c_str());
} else {
Tpphh.fill_random(0, 1);
}
if (Tph_path.size()) {
Tph.read_dense_from_file(Tph_path.c_str());
} else {
Tph.fill_random(0, 1);
}
if (Vpphh_path.size()) {
Vpphh.read_dense_from_file(Vpphh_path.c_str());
} else {
Vpphh.fill_random(0, 1);
}
if (Vhhhp_path.size()) {
Vhhhp.read_dense_from_file(Vhhhp_path.c_str());
} else {
Vhhhp.fill_random(0, 1);
}
if (Vppph_path.size()) {
Vppph->read_dense_from_file(Vppph_path.c_str());
} else {
Vppph->fill_random(0, 1);
}
atrip::Atrip::init(MPI_COMM_WORLD); atrip::Atrip::init(MPI_COMM_WORLD);
const auto in const auto in
@@ -199,6 +244,7 @@ int main(int argc, char** argv) {
.with_iterationMod(itMod) .with_iterationMod(itMod)
.with_percentageMod(percentageMod) .with_percentageMod(percentageMod)
.with_tuplesDistribution(tuplesDistribution) .with_tuplesDistribution(tuplesDistribution)
.with_maxIterations(max_iterations)
// checkpoint options // checkpoint options
.with_checkpointAtEveryIteration(checkpoint_it) .with_checkpointAtEveryIteration(checkpoint_it)
.with_checkpointAtPercentage(checkpoint_percentage) .with_checkpointAtPercentage(checkpoint_percentage)


@@ -21,26 +21,6 @@ AC_ARG_ENABLE(shared,
files (default=YES)]), files (default=YES)]),
[], [enable_shared=yes]) [], [enable_shared=yes])
AC_ARG_ENABLE(
[slice],
[AS_HELP_STRING(
[--disable-slice],
[Disable the step of slicing tensors for CTF, this is useful for example for benchmarking or testing.])],
[atrip_dont_slice=1
AC_DEFINE([ATRIP_DONT_SLICE],1,[Whether CTF will slice tensors or skip the step])
],
[atrip_dont_slice=0]
)
AC_ARG_ENABLE(
[atrip_dgemm],
[AS_HELP_STRING(
[--disable-dgemm],
[Disable using dgemm for the doubles equations])],
[],
[AC_DEFINE([ATRIP_USE_DGEMM],1,[Use dgemm for the doubles equations])]
)
AC_ARG_ENABLE([docs], AC_ARG_ENABLE([docs],
[AS_HELP_STRING([--enable-docs], [AS_HELP_STRING([--enable-docs],
@@ -74,13 +54,53 @@ AC_ARG_VAR([NVCC], [Path to the nvidia cuda compiler.])
AC_ARG_VAR([CUDA_LDFLAGS], [LDFLAGS to find libraries -lcuda, -lcudart, -lcublas.]) AC_ARG_VAR([CUDA_LDFLAGS], [LDFLAGS to find libraries -lcuda, -lcudart, -lcublas.])
AC_ARG_VAR([CUDA_CXXFLAGS], [CXXFLAGS to find the CUDA headers]) AC_ARG_VAR([CUDA_CXXFLAGS], [CXXFLAGS to find the CUDA headers])
dnl -----------------------------------------------------------------------
dnl ATRIP CPP DEFINES
dnl -----------------------------------------------------------------------
AC_ARG_WITH([atrip-debug], AC_ARG_WITH([atrip-debug],
[AS_HELP_STRING([--with-atrip-debug], [AS_HELP_STRING([--with-atrip-debug],
[Debug level for atrip, possible values: 1, 2, 3, 4])], [Debug level for atrip, possible values:
1, 2, 3, 4])],
[AC_DEFINE([ATRIP_DEBUG],[atrip-debug],[Atrip debug level])], [AC_DEFINE([ATRIP_DEBUG],[atrip-debug],[Atrip debug level])],
[AC_DEFINE([ATRIP_DEBUG],[1],[Atrip debug level])] [AC_DEFINE([ATRIP_DEBUG],[1],[Atrip debug level])])
)
AC_ARG_ENABLE([atrip_dgemm],
[AS_HELP_STRING([--disable-dgemm],
[Disable using dgemm for the doubles equations])],
[],
[AC_DEFINE([ATRIP_USE_DGEMM],
1,
[Use dgemm for the doubles equations])])
ATRIP_DEF([slice], [disable],
[ATRIP_DONT_SLICE],
[Disable the step of slicing tensors for CTF, this is useful
for example for benchmarking or testing.])
ATRIP_DEF([only-dgemm], [enable],
[ATRIP_ONLY_DGEMM],
[Run only the parts of atrip that involve dgemm calls, this
is useful for benchmarking and testing the code, it is
intended for developers of Atrip.])
ATRIP_DEF([naive-slow], [enable],
[ATRIP_NAIVE_SLOW],
[Run slow but correct code for the mapping of (iteration,
rank) to tuple of the naive tuple distribution.])
ATRIP_DEF([sources-in-gpu], [enable],
[ATRIP_SOURCES_IN_GPU],
[When using CUDA, activate storing all sources (slices of
the input tensors) in the GPU. This means that a lot of GPUs
will be needed.])
ATRIP_DEF([cuda-aware-mpi], [enable],
[ATRIP_CUDA_AWARE_MPI],
[When using MPI, assume support for CUDA aware mpi by the
given MPI implementation.])
dnl ----------------------------------------------------------------------- dnl -----------------------------------------------------------------------
@@ -144,8 +164,7 @@ AC_TYPE_SIZE_T
dnl ----------------------------------------------------------------------- dnl -----------------------------------------------------------------------
dnl CHECK CTF dnl CHECK CTF
if test xYES = x${BUILD_CTF}; then if test xYES = x${BUILD_CTF}; then
AC_MSG_WARN([Sorry, building CTF not supported yet provide a build path AC_MSG_WARN([You will have to do make ctf before building the project.])
with --with-ctf=path/to/ctf/installation])
else else
CPPFLAGS="$CPPFLAGS -I${LIBCTF_CPATH}" CPPFLAGS="$CPPFLAGS -I${LIBCTF_CPATH}"
LDFLAGS="$LDFLAGS -L${LIBCTF_LD_LIBRARY_PATH} -lctf" LDFLAGS="$LDFLAGS -L${LIBCTF_LD_LIBRARY_PATH} -lctf"

etc/env/raven/cuda (new file)

@@ -0,0 +1,56 @@
mods=(
cuda/11.6
intel/19.1.2
mkl/2020.4
impi/2019.8
autoconf/2.69
automake/1.15
libtool/2.4.6
)
module purge
module load ${mods[@]}
LIB_PATH="${CUDA_HOME}/lib64"
export CUDA_ROOT=${CUDA_HOME}
export CUDA_LDFLAGS="-L${LIB_PATH} -lcuda -L${LIB_PATH} -lcudart -L${LIB_PATH} -lcublas"
export CUDA_CXXFLAGS="-I${CUDA_HOME}/include"
export LD_LIBRARY_PATH="${MKL_HOME}/lib/intel64_lin:${LD_LIBRARY_PATH}"
BLAS_STATIC_PATH="$MKL_HOME/lib/intel64/libmkl_intel_lp64.a"
ls ${LIB_PATH}/libcublas.so
ls ${LIB_PATH}/libcudart.so
cat <<EOF
////////////////////////////////////////////////////////////////////////////////
info
////////////////////////////////////////////////////////////////////////////////
MKL_HOME = $MKL_HOME
BLAS_STATIC_PATH = $BLAS_STATIC_PATH
CUDA_ROOT = ${CUDA_HOME}
CUDA_LDFLAGS = "-L${LIB_PATH} -lcuda -L${LIB_PATH} -lcudart -L${LIB_PATH} -lcublas"
CUDA_CXXFLAGS = "-I${CUDA_HOME}/include"
Consider now running the following
../configure \\
--enable-cuda \\
--disable-slice \\
--with-blas="-L\$MKL_HOME/lib/intel64/ -lmkl_intel_lp64 -mkl" \\
CXX=mpiicpc \\
CC=mpiicc \\
MPICXX=mpiicpc
EOF
return
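Since the script ends with =return=, it is apparently meant to be sourced
into the current shell rather than executed; a minimal usage sketch
(paths and the build directory are assumptions) could look like:
#+begin_src sh :eval no
mkdir -p build/raven && cd build/raven
source ../../etc/env/raven/cuda          # loads modules and exports CUDA_* variables
../../configure --enable-cuda --disable-slice \
    --with-blas="-L$MKL_HOME/lib/intel64/ -lmkl_intel_lp64 -mkl" \
    CXX=mpiicpc CC=mpiicc MPICXX=mpiicpc
#+end_src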

etc/m4/atrip-def.m4 (new file)

@@ -0,0 +1,8 @@
AC_DEFUN([ATRIP_DEF],
[AC_ARG_ENABLE([$1],
[AS_HELP_STRING([--$2-$1],
[$4])],
[AC_DEFINE([$3],
1,
[$4])])])
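Given the calls added in =configure.ac=, each =ATRIP_DEF= use is expected
to expose an =--enable-*= or =--disable-*= switch and to define the named
preprocessor symbol when that switch is passed; a sketch of the resulting
flags (illustrative, not generated output) is:
#+begin_src sh :eval no
../configure --disable-slice          # defines ATRIP_DONT_SLICE
../configure --enable-only-dgemm      # defines ATRIP_ONLY_DGEMM
../configure --enable-naive-slow      # defines ATRIP_NAIVE_SLOW
../configure --enable-sources-in-gpu  # defines ATRIP_SOURCES_IN_GPU
../configure --enable-cuda-aware-mpi  # defines ATRIP_CUDA_AWARE_MPI
#+end_src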


@@ -86,7 +86,7 @@ namespace atrip {
ADD_ATTRIBUTE(bool, rankRoundRobin, false) ADD_ATTRIBUTE(bool, rankRoundRobin, false)
ADD_ATTRIBUTE(bool, chrono, false) ADD_ATTRIBUTE(bool, chrono, false)
ADD_ATTRIBUTE(bool, barrier, false) ADD_ATTRIBUTE(bool, barrier, false)
ADD_ATTRIBUTE(int, maxIterations, 0) ADD_ATTRIBUTE(size_t, maxIterations, 0)
ADD_ATTRIBUTE(int, iterationMod, -1) ADD_ATTRIBUTE(int, iterationMod, -1)
ADD_ATTRIBUTE(int, percentageMod, -1) ADD_ATTRIBUTE(int, percentageMod, -1)
ADD_ATTRIBUTE(TuplesDistribution, tuplesDistribution, NAIVE) ADD_ATTRIBUTE(TuplesDistribution, tuplesDistribution, NAIVE)


@@ -11,11 +11,22 @@
#if defined(HAVE_CUDA) && defined(__CUDACC__) #if defined(HAVE_CUDA) && defined(__CUDACC__)
# define __MAYBE_GLOBAL__ __global__ # define __MAYBE_GLOBAL__ __global__
# define __MAYBE_DEVICE__ __device__ # define __MAYBE_DEVICE__ __device__
# define __MAYBE_HOST__ __host__
# define __INLINE__ __inline__
#else #else
# define __MAYBE_GLOBAL__ # define __MAYBE_GLOBAL__
# define __MAYBE_DEVICE__ # define __MAYBE_DEVICE__
# define __MAYBE_HOST__
# define __INLINE__ inline
#endif #endif
#if defined(HAVE_CUDA)
#define ACC_FUNCALL(fname, i, j, ...) fname<<<(i), (j)>>>(__VA_ARGS__)
#else
#define ACC_FUNCALL(fname, i, j, ...) fname(__VA_ARGS__)
#endif /* defined(HAVE_CUDA) */
#define _CHECK_CUDA_SUCCESS(message, ...) \ #define _CHECK_CUDA_SUCCESS(message, ...) \
do { \ do { \
CUresult result = __VA_ARGS__; \ CUresult result = __VA_ARGS__; \


@@ -23,6 +23,8 @@
#include<thrust/device_vector.h> #include<thrust/device_vector.h>
#endif #endif
#include<atrip/CUDA.hpp>
namespace atrip { namespace atrip {
using ABCTuple = std::array<size_t, 3>; using ABCTuple = std::array<size_t, 3>;
@@ -32,21 +34,25 @@ using ABCTuples = std::vector<ABCTuple>;
// [[file:~/cuda/atrip/atrip.org::*Energy][Energy:1]] // [[file:~/cuda/atrip/atrip.org::*Energy][Energy:1]]
template <typename F=double> template <typename F=double>
double getEnergyDistinct __MAYBE_GLOBAL__
void getEnergyDistinct
( F const epsabc ( F const epsabc
, size_t const No , size_t const No
, F* const epsi , F* const epsi
, F* const Tijk , F* const Tijk
, F* const Zijk , F* const Zijk
, double* energy
); );
template <typename F=double> template <typename F=double>
double getEnergySame __MAYBE_GLOBAL__
void getEnergySame
( F const epsabc ( F const epsabc
, size_t const No , size_t const No
, F* const epsi , F* const epsi
, F* const Tijk , F* const Tijk
, F* const Zijk , F* const Zijk
, double* energy
); );
// Energy:1 ends here // Energy:1 ends here
@@ -97,6 +103,11 @@ void singlesContribution
// -- TIJK // -- TIJK
// , DataPtr<F> Tijk // , DataPtr<F> Tijk
, DataFieldType<F>* Tijk_ , DataFieldType<F>* Tijk_
#if defined(HAVE_CUDA)
// -- tmp buffers
, DataFieldType<F>* _t_buffer
, DataFieldType<F>* _vhhh
#endif
); );
// Doubles contribution:1 ends here // Doubles contribution:1 ends here


@@ -0,0 +1,171 @@
// Copyright 2022 Alejandro Gallo
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef OPERATIONS_HPP_
#define OPERATIONS_HPP_
#include <atrip/CUDA.hpp>
#include <atrip/Types.hpp>
#include <atrip/Complex.hpp>
namespace atrip {
namespace acc {
// cuda kernels
template <typename F>
__MAYBE_GLOBAL__
void zeroing(F* a, size_t n) {
F zero = {0};
for (size_t i = 0; i < n; i++) {
a[i] = zero;
}
}
////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
F maybeConjugateScalar(const F &a) { return a; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
cuDoubleComplex maybeConjugateScalar(const cuDoubleComplex &a) {
return {a.x, -a.y};
}
#endif /* defined(HAVE_CUDA) */
template <typename F>
__MAYBE_GLOBAL__
void maybeConjugate(F* to, F* from, size_t n) {
for (size_t i = 0; i < n; ++i) {
to[i] = maybeConjugateScalar<F>(from[i]);
}
}
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__
void reorder(F* to, F* from, size_t size, size_t I, size_t J, size_t K) {
size_t idx = 0;
const size_t IDX = I + J*size + K*size*size;
for (size_t k = 0; k < size; k++)
for (size_t j = 0; j < size; j++)
for (size_t i = 0; i < size; i++, idx++)
to[idx] += from[IDX];
}
// Multiplication operation
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
F prod(const F &a, const F &b) { return a * b; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
cuDoubleComplex prod(const cuDoubleComplex &a, const cuDoubleComplex &b) {
return cuCmul(a, b);
}
#endif /* defined(HAVE_CUDA) */
// Division operation
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
F div(const F &a, const F &b) { return a / b; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
cuDoubleComplex div(const cuDoubleComplex &a, const cuDoubleComplex &b) {
return cuCdiv(a, b);
}
#endif /* defined(HAVE_CUDA) */
// Real part
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_HOST__ __INLINE__
double real(F &a) { return std::real(a); }
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
double real(double &a) {
return a;
}
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
double real(cuDoubleComplex &a) {
return cuCreal(a);
}
#endif /* defined(HAVE_CUDA) */
// Substraction operator
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
F sub(const F &a, const F &b) { return a - b; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
cuDoubleComplex sub(const cuDoubleComplex &a,
const cuDoubleComplex &b) {
return cuCsub(a, b);
}
#endif /* defined(HAVE_CUDA) */
// Addition operator
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
F add(const F &a, const F &b) { return a + b; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__ __INLINE__
cuDoubleComplex add(const cuDoubleComplex &a, const cuDoubleComplex &b) {
return cuCadd(a, b);
}
#endif /* defined(HAVE_CUDA) */
// Sum in place operator
//////////////////////////////////////////////////////////////////////////////
template <typename F>
__MAYBE_DEVICE__ __MAYBE_HOST__
void sum_in_place(F* to, const F* from) { *to += *from; }
#if defined(HAVE_CUDA)
template <>
__MAYBE_DEVICE__ __MAYBE_HOST__
void sum_in_place(cuDoubleComplex* to, const cuDoubleComplex* from) {
to->x += from->x;
to->y += from->y;
}
#endif /* defined(HAVE_CUDA) */
} // namespace acc
} // namespace atrip
#endif


@@ -352,7 +352,7 @@ Info info;
// [[file:~/cuda/atrip/atrip.org::*Attributes][Attributes:2]] // [[file:~/cuda/atrip/atrip.org::*Attributes][Attributes:2]]
DataPtr<F> data; DataPtr<F> data;
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA) && !defined (ATRIP_SOURCES_IN_GPU)
F* mpi_data; F* mpi_data;
#endif #endif
// Attributes:2 ends here // Attributes:2 ends here
@@ -456,7 +456,7 @@ void unwrapAndMarkReady() {
if (errorCode != MPI_SUCCESS) if (errorCode != MPI_SUCCESS)
throw "Atrip: Unexpected error MPI ERROR"; throw "Atrip: Unexpected error MPI ERROR";
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA) && !defined(ATRIP_SOURCES_IN_GPU)
// copy the retrieved mpi data to the device // copy the retrieved mpi data to the device
WITH_CHRONO("cuda:memcpy", WITH_CHRONO("cuda:memcpy",
_CHECK_CUDA_SUCCESS("copying mpi data to device", _CHECK_CUDA_SUCCESS("copying mpi data to device",
@@ -488,7 +488,7 @@ void unwrapAndMarkReady() {
Slice(size_t size_) Slice(size_t size_)
: info({}) : info({})
, data(DataNullPtr) , data(DataNullPtr)
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA) && !defined(ATRIP_SOURCES_IN_GPU)
, mpi_data(nullptr) , mpi_data(nullptr)
#endif #endif
, size(size_) , size(size_)


@@ -18,6 +18,12 @@
#include <atrip/Slice.hpp> #include <atrip/Slice.hpp>
#include <atrip/RankMap.hpp> #include <atrip/RankMap.hpp>
#if defined(ATRIP_SOURCES_IN_GPU)
# define SOURCES_DATA(s) (s)
#else
# define SOURCES_DATA(s) (s).data()
#endif
namespace atrip { namespace atrip {
// Prolog:1 ends here // Prolog:1 ends here
@@ -195,7 +201,7 @@ template <typename F=double>
; ;
if (blank.info.state == Slice<F>::SelfSufficient) { if (blank.info.state == Slice<F>::SelfSufficient) {
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
const size_t _size = sizeof(F) * sources[from.source].size(); const size_t _size = sizeof(F) * sliceSize;
// TODO: this is code duplication with downstairs // TODO: this is code duplication with downstairs
if (freePointers.size() == 0) { if (freePointers.size() == 0) {
std::stringstream stream; std::stringstream stream;
@@ -212,12 +218,12 @@ template <typename F=double>
WITH_CHRONO("cuda:memcpy:self-sufficient", WITH_CHRONO("cuda:memcpy:self-sufficient",
_CHECK_CUDA_SUCCESS("copying mpi data to device", _CHECK_CUDA_SUCCESS("copying mpi data to device",
cuMemcpyHtoD(blank.data, cuMemcpyHtoD(blank.data,
(void*)sources[from.source].data(), (void*)SOURCES_DATA(sources[from.source]),
sizeof(F) * sources[from.source].size())); sizeof(F) * sliceSize));
)) ))
#else #else
blank.data = sources[from.source].data(); blank.data = SOURCES_DATA(sources[from.source]);
#endif #endif
} else { } else {
if (freePointers.size() == 0) { if (freePointers.size() == 0) {
@@ -396,23 +402,44 @@ template <typename F=double>
, world(child_world) , world(child_world)
, universe(global_world) , universe(global_world)
, sliceLength(sliceLength_) , sliceLength(sliceLength_)
, sliceSize(std::accumulate(sliceLength.begin(),
sliceLength.end(),
1UL, std::multiplies<size_t>()))
#if defined(ATRIP_SOURCES_IN_GPU)
, sources(rankMap.nSources())
#else
, sources(rankMap.nSources(), , sources(rankMap.nSources(),
std::vector<F> std::vector<F>(sliceSize))
(std::accumulate(sliceLength.begin(), #endif
sliceLength.end(),
1UL, std::multiplies<size_t>())))
, name(name_) , name(name_)
, sliceTypes(sliceTypes_) , sliceTypes(sliceTypes_)
, sliceBuffers(nSliceBuffers) , sliceBuffers(nSliceBuffers)
//, slices(2 * sliceTypes.size(), Slice<F>{ sources[0].size() })
{ // constructor begin { // constructor begin
LOG(0,"Atrip") << "INIT SliceUnion: " << name << "\n"; LOG(0,"Atrip") << "INIT SliceUnion: " << name << "\n";
printf("sliceSize %d, number of slices %d\n\n\n", sliceSize, sources.size());
#if defined(ATRIP_SOURCES_IN_GPU)
for (auto& ptr: sources) {
const CUresult sourceError =
cuMemAlloc(&ptr, sizeof(F) * sliceSize);
if (ptr == 0UL) {
throw "UNSUFICCIENT MEMORY ON THE GRAPHIC CARD FOR SOURCES";
}
if (sourceError != CUDA_SUCCESS) {
std::stringstream s;
s << "Error allocating memory for sources "
<< "code " << sourceError << "\n";
throw s.str();
}
}
#endif
for (auto& ptr: sliceBuffers) { for (auto& ptr: sliceBuffers) {
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
const CUresult error = const CUresult error =
cuMemAlloc(&ptr, sizeof(F) * sources[0].size()); cuMemAlloc(&ptr, sizeof(F) * sliceSize);
if (ptr == 0UL) { if (ptr == 0UL) {
throw "UNSUFICCIENT MEMORY ON THE GRAPHIC CARD FOR FREE POINTERS"; throw "UNSUFICCIENT MEMORY ON THE GRAPHIC CARD FOR FREE POINTERS";
} }
@@ -423,12 +450,12 @@ template <typename F=double>
throw s.str(); throw s.str();
} }
#else #else
ptr = (DataPtr<F>)malloc(sizeof(F) * sources[0].size()); ptr = (DataPtr<F>)malloc(sizeof(F) * sliceSize);
#endif #endif
} }
slices slices
= std::vector<Slice<F>>(2 * sliceTypes.size(), { sources[0].size() }); = std::vector<Slice<F>>(2 * sliceTypes.size(), { sliceSize });
// TODO: think exactly ^------------------- about this number // TODO: think exactly ^------------------- about this number
// initialize the freePointers with the pointers to the buffers // initialize the freePointers with the pointers to the buffers
@@ -436,17 +463,45 @@ template <typename F=double>
std::inserter(freePointers, freePointers.begin()), std::inserter(freePointers, freePointers.begin()),
[](DataPtr<F> ptr) { return ptr; }); [](DataPtr<F> ptr) { return ptr; });
#if defined(HAVE_CUDA)
LOG(1,"Atrip") << "warming communication up " << slices.size() << "\n";
WITH_CHRONO("cuda:warmup",
int nRanks=Atrip::np, requestCount=0;
int nSends=sliceBuffers.size()*nRanks;
MPI_Request *requests = (MPI_Request*) malloc(nSends*2 * sizeof(MPI_Request));
MPI_Status *statuses = (MPI_Status*) malloc(nSends*2 * sizeof(MPI_Status));
for (int sliceId=0; sliceId<sliceBuffers.size(); sliceId++){
for (int rankId=0; rankId<nRanks; rankId++){
MPI_Isend((void*)SOURCES_DATA(sources[0]),
sliceSize,
traits::mpi::datatypeOf<F>(),
rankId,
100,
universe,
&requests[requestCount++]);
MPI_Irecv((void*)sliceBuffers[sliceId],
sliceSize,
traits::mpi::datatypeOf<F>(),
rankId,
100,
universe,
&requests[requestCount++]);
}
}
MPI_Waitall(nSends*2, requests, statuses);
)
#endif
LOG(1,"Atrip") << "#slices " << slices.size() << "\n"; LOG(1,"Atrip") << "#slices " << slices.size() << "\n";
WITH_RANK << "#slices[0] " << slices[0].size << "\n"; WITH_RANK << "#slices[0] " << slices[0].size << "\n";
LOG(1,"Atrip") << "#sources " << sources.size() << "\n"; LOG(1,"Atrip") << "#sources " << sources.size() << "\n";
WITH_RANK << "#sources[0] " << sources[0].size() << "\n"; WITH_RANK << "#sources[0] " << sliceSize << "\n";
WITH_RANK << "#freePointers " << freePointers.size() << "\n"; WITH_RANK << "#freePointers " << freePointers.size() << "\n";
LOG(1,"Atrip") << "#sliceBuffers " << sliceBuffers.size() << "\n"; LOG(1,"Atrip") << "#sliceBuffers " << sliceBuffers.size() << "\n";
LOG(1,"Atrip") << "GB*" << np << " " LOG(1,"Atrip") << "GB*" << np << " "
<< double(sources.size() + sliceBuffers.size()) << double(sources.size() + sliceBuffers.size())
* sources[0].size() * sliceSize
* 8 * np * 8 * np
/ 1073741824.0 / 1073741824.0
<< "\n"; << "\n";
@@ -495,14 +550,13 @@ template <typename F=double>
if (otherRank == info.from.rank) sendData_p = false; if (otherRank == info.from.rank) sendData_p = false;
if (!sendData_p) return; if (!sendData_p) return;
MPI_Isend( sources[info.from.source].data() MPI_Isend((void*)SOURCES_DATA(sources[info.from.source]),
, sources[info.from.source].size() sliceSize,
, traits::mpi::datatypeOf<F>() traits::mpi::datatypeOf<F>(),
, otherRank otherRank,
, tag tag,
, universe universe,
, &request &request);
);
WITH_CRAZY_DEBUG WITH_CRAZY_DEBUG
WITH_RANK << "sent to " << otherRank << "\n"; WITH_RANK << "sent to " << otherRank << "\n";
@@ -516,25 +570,25 @@ template <typename F=double>
if (Atrip::rank == info.from.rank) return; if (Atrip::rank == info.from.rank) return;
if (slice.info.state == Slice<F>::Fetch) { if (slice.info.state == Slice<F>::Fetch) { // if-1
// TODO: do it through the slice class // TODO: do it through the slice class
slice.info.state = Slice<F>::Dispatched; slice.info.state = Slice<F>::Dispatched;
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA) && defined(ATRIP_SOURCES_IN_GPU)
slice.mpi_data = (F*)malloc(sizeof(F) * slice.size); # if !defined(ATRIP_CUDA_AWARE_MPI)
MPI_Irecv( slice.mpi_data # error "You need CUDA aware MPI to have slices on the GPU"
# endif
MPI_Irecv((void*)slice.data,
#else #else
MPI_Irecv( slice.data MPI_Irecv(slice.data,
#endif #endif
, slice.size slice.size,
, traits::mpi::datatypeOf<F>() traits::mpi::datatypeOf<F>(),
, info.from.rank info.from.rank,
, tag tag,
, universe universe,
, &slice.request &slice.request);
//, MPI_STATUS_IGNORE } // if-1
); } // receive
}
}
void unwrapAll(ABCTuple const& abc) { void unwrapAll(ABCTuple const& abc) {
for (auto type: sliceTypes) unwrapSlice(type, abc); for (auto type: sliceTypes) unwrapSlice(type, abc);
@@ -597,7 +651,12 @@ template <typename F=double>
const MPI_Comm world; const MPI_Comm world;
const MPI_Comm universe; const MPI_Comm universe;
const std::vector<size_t> sliceLength; const std::vector<size_t> sliceLength;
const size_t sliceSize;
#if defined(ATRIP_SOURCES_IN_GPU)
std::vector< DataPtr<F> > sources;
#else
std::vector< std::vector<F> > sources; std::vector< std::vector<F> > sources;
#endif
std::vector< Slice<F> > slices; std::vector< Slice<F> > slices;
typename Slice<F>::Name name; typename Slice<F>::Name name;
const std::vector<typename Slice<F>::Type> sliceTypes; const std::vector<typename Slice<F>::Type> sliceTypes;


@@ -19,8 +19,14 @@
namespace atrip { namespace atrip {
template <typename F=double> template <typename F=double>
static
void sliceIntoVector void sliceIntoVector
( std::vector<F> &v #if defined(ATRIP_SOURCES_IN_GPU)
( DataPtr<F> &source
#else
( std::vector<F> &source
#endif
, size_t sliceSize
, CTF::Tensor<F> &toSlice , CTF::Tensor<F> &toSlice
, std::vector<int64_t> const low , std::vector<int64_t> const low
, std::vector<int64_t> const up , std::vector<int64_t> const up
@@ -44,18 +50,30 @@ namespace atrip {
<< "\n"; << "\n";
#ifndef ATRIP_DONT_SLICE #ifndef ATRIP_DONT_SLICE
toSlice.slice( toSlice_.low.data() toSlice.slice(toSlice_.low.data(),
, toSlice_.up.data() toSlice_.up.data(),
, 0.0 0.0,
, origin origin,
, origin_.low.data() origin_.low.data(),
, origin_.up.data() origin_.up.data(),
, 1.0); 1.0);
memcpy(v.data(), toSlice.data, sizeof(F) * v.size());
#else #else
# pragma message("WARNING: COMPILING WITHOUT SLICING THE TENSORS") # pragma message("WARNING: COMPILING WITHOUT SLICING THE TENSORS")
#endif #endif
#if defined(ATRIP_SOURCES_IN_GPU)
WITH_CHRONO("cuda:sources",
_CHECK_CUDA_SUCCESS("copying sources data to device",
cuMemcpyHtoD(source,
toSlice.data,
sliceSize));
)
#else
memcpy(source.data(),
toSlice.data,
sizeof(F) * sliceSize);
#endif
} }
@@ -80,16 +98,15 @@ namespace atrip {
void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override
{ {
const int Nv = this->sliceLength[0]
, No = this->sliceLength[1]
, a = this->rankMap.find({static_cast<size_t>(Atrip::rank), it});
;
const int
Nv = this->sliceLength[0],
No = this->sliceLength[1],
a = this->rankMap.find({static_cast<size_t>(Atrip::rank), it});
sliceIntoVector<F>( this->sources[it] sliceIntoVector<F>(this->sources[it], this->sliceSize,
, to, {0, 0, 0}, {Nv, No, No} to, {0, 0, 0}, {Nv, No, No},
, from, {a, 0, 0, 0}, {a+1, Nv, No, No} from, {a, 0, 0, 0}, {a+1, Nv, No, No});
);
} }
@@ -118,14 +135,13 @@ namespace atrip {
void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override
{ {
const int No = this->sliceLength[0] const int
, a = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}) No = this->sliceLength[0],
; a = this->rankMap.find({static_cast<size_t>(Atrip::rank), it});
sliceIntoVector<F>( this->sources[it] sliceIntoVector<F>(this->sources[it], this->sliceSize,
, to, {0, 0, 0}, {No, No, No} to, {0, 0, 0}, {No, No, No},
, from, {0, 0, 0, a}, {No, No, No, a+1} from, {0, 0, 0, a}, {No, No, No, a+1});
);
} }
}; };
@@ -153,18 +169,17 @@ namespace atrip {
void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override { void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override {
const int Nv = this->sliceLength[0] const int
, No = this->sliceLength[1] Nv = this->sliceLength[0],
, el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}) No = this->sliceLength[1],
, a = el % Nv el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}),
, b = el / Nv a = el % Nv,
; b = el / Nv;
sliceIntoVector<F>( this->sources[it] sliceIntoVector<F>(this->sources[it], this->sliceSize,
, to, {0, 0}, {Nv, No} to, {0, 0}, {Nv, No},
, from, {a, b, 0, 0}, {a+1, b+1, Nv, No} from, {a, b, 0, 0}, {a+1, b+1, Nv, No});
);
} }
@@ -191,17 +206,17 @@ namespace atrip {
void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override { void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override {
const int Nv = from.lens[0] const int
, No = this->sliceLength[1] Nv = from.lens[0],
, el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}) No = this->sliceLength[1],
, a = el % Nv el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}),
, b = el / Nv a = el % Nv,
; b = el / Nv;
sliceIntoVector<F>( this->sources[it]
, to, {0, 0}, {No, No} sliceIntoVector<F>(this->sources[it], this->sliceSize,
, from, {a, b, 0, 0}, {a+1, b+1, No, No} to, {0, 0}, {No, No},
); from, {a, b, 0, 0}, {a+1, b+1, No, No});
} }
@@ -231,17 +246,16 @@ namespace atrip {
void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override { void sliceIntoBuffer(size_t it, CTF::Tensor<F> &to, CTF::Tensor<F> const& from) override {
// TODO: maybe generalize this with ABHH // TODO: maybe generalize this with ABHH
const int Nv = from.lens[0] const int
, No = this->sliceLength[1] Nv = from.lens[0],
, el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}) No = this->sliceLength[1],
, a = el % Nv el = this->rankMap.find({static_cast<size_t>(Atrip::rank), it}),
, b = el / Nv a = el % Nv,
; b = el / Nv;
sliceIntoVector<F>( this->sources[it] sliceIntoVector<F>(this->sources[it], this->sliceSize,
, to, {0, 0}, {No, No} to, {0, 0}, {No, No},
, from, {a, b, 0, 0}, {a+1, b+1, No, No} from, {a, b, 0, 0}, {a+1, b+1, No, No});
);
} }


@@ -1,3 +1,11 @@
#+quicklisp
(eval-when (:compile-toplevel :load-toplevel :execute)
(ql:quickload '(vgplot fiveam)))
(defpackage :naive-tuples
(:use :cl :vgplot))
(in-package :naive-tuples)
(defun tuples-atrip (nv) (defun tuples-atrip (nv)
(declare (optimize (speed 3) (safety 0) (debug 0))) (declare (optimize (speed 3) (safety 0) (debug 0)))
(loop :for a :below nv (loop :for a :below nv
@@ -218,58 +226,3 @@
cheaper cheaper
(print (equal (nth i tuples) (print (equal (nth i tuples)
cheaper))))) cheaper)))))
(let* ((l 101)
(tuples (tuples-atrip l)))
(loop :for a below l
:do (print (let ((s (a-block-atrip a l))
(c (count-if (lambda (x) (eq (car x) a))
tuples)))
(list :a a
:size s
:real c
:? (eq c s))))))
(ql:quickload 'vgplot)
(import 'vgplot:plot)
(import 'vgplot:replot)
(let ((l 10))
(plot (mapcar (lambda (x) (getf x :size))
(loop :for a upto l
collect (list :a a :size (a-block a l))))
"penis"))
(let* ((l 50)
(tuples (tuples-half l)))
(loop :for a below l
:do (print (let ((s (a-block a l))
(c (count-if (lambda (x) (eq (car x) a))
tuples)))
(list :a a
:size s
:real c
:? (eq c s))))))
(defun range (from to) (loop for i :from from :to to collect i))
(defun half-again (i nv)
(let ((a-block-list (let ((ll (mapcar (lambda (i) (a-block i nv))
(range 0 (- nv 1)))))
(loop :for i :from 1 :to (length ll)
:collect
(reduce #'+
ll
:end i)))))
(loop :for blk :in a-block-list
:with a = 0
:with total-blk = 0
:if (eq 0 (floor i blk))
:do
(let ((i (mod i blk)))
(print (list i (- i total-blk) blk a))
(return))
:else
:do (progn
(incf a)
(setq total-blk blk)))))


@@ -202,7 +202,7 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
_CHECK_CUDA_SUCCESS("Zijk", _CHECK_CUDA_SUCCESS("Zijk",
cuMemAlloc(&Zijk, sizeof(F) * No * No * No)); cuMemAlloc(&Zijk, sizeof(F) * No * No * No));
#else #else
std::vector<F> &Tai = _Tai, &epsi = _epsi, &epsa = _epsa; DataPtr<F> Tai = _Tai.data(), epsi = _epsi.data(), epsa = _epsa.data();
Zijk = (DataFieldType<F>*)malloc(No*No*No * sizeof(DataFieldType<F>)); Zijk = (DataFieldType<F>*)malloc(No*No*No * sizeof(DataFieldType<F>));
Tijk = (DataFieldType<F>*)malloc(No*No*No * sizeof(DataFieldType<F>)); Tijk = (DataFieldType<F>*)malloc(No*No*No * sizeof(DataFieldType<F>));
#endif #endif
@@ -258,6 +258,25 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
// all tensors // all tensors
std::vector< SliceUnion<F>* > unions = {&taphh, &hhha, &abph, &abhh, &tabhh}; std::vector< SliceUnion<F>* > unions = {&taphh, &hhha, &abph, &abhh, &tabhh};
#ifdef HAVE_CUDA
// TODO: free buffers
DataFieldType<F>* _t_buffer;
DataFieldType<F>* _vhhh;
WITH_CHRONO("double:cuda:alloc",
_CHECK_CUDA_SUCCESS("Allocating _t_buffer",
cuMemAlloc((CUdeviceptr*)&_t_buffer,
No*No*No * sizeof(DataFieldType<F>)));
_CHECK_CUDA_SUCCESS("Allocating _vhhh",
cuMemAlloc((CUdeviceptr*)&_vhhh,
No*No*No * sizeof(DataFieldType<F>)));
)
//const size_t
// bs = Atrip::kernelDimensions.ooo.blocks,
//ths = Atrip::kernelDimensions.ooo.threads;
//cuda::zeroing<<<bs, ths>>>((DataFieldType<F>*)_t_buffer, NoNoNo);
//cuda::zeroing<<<bs, ths>>>((DataFieldType<F>*)_vhhh, NoNoNo);
#endif
// get tuples for the current rank // get tuples for the current rank
TuplesDistribution *distribution; TuplesDistribution *distribution;
@@ -639,13 +658,23 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
tabhh.unwrapSlice(Slice<F>::AC, abc), tabhh.unwrapSlice(Slice<F>::AC, abc),
tabhh.unwrapSlice(Slice<F>::BC, abc), tabhh.unwrapSlice(Slice<F>::BC, abc),
// -- TIJK // -- TIJK
(DataFieldType<F>*)Tijk); (DataFieldType<F>*)Tijk
#if defined(HAVE_CUDA)
// -- tmp buffers
,(DataFieldType<F>*)_t_buffer
,(DataFieldType<F>*)_vhhh
#endif
);
WITH_RANK << iteration << "-th doubles done\n"; WITH_RANK << iteration << "-th doubles done\n";
)) ))
} }
// COMPUTE SINGLES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {{{1 // COMPUTE SINGLES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {{{1
OCD_Barrier(universe); OCD_Barrier(universe);
#if defined(ATRIP_ONLY_DGEMM)
if (false)
#endif
if (!isFakeTuple(i)) { if (!isFakeTuple(i)) {
WITH_CHRONO("oneshot-unwrap", WITH_CHRONO("oneshot-unwrap",
WITH_CHRONO("unwrap", WITH_CHRONO("unwrap",
@@ -664,7 +693,7 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
(DataFieldType<F>*)Tai, (DataFieldType<F>*)Tai,
#else #else
singlesContribution<F>(No, Nv, abc[0], abc[1], abc[2], singlesContribution<F>(No, Nv, abc[0], abc[1], abc[2],
Tai.data(), Tai,
#endif #endif
(DataFieldType<F>*)abhh.unwrapSlice(Slice<F>::AB, (DataFieldType<F>*)abhh.unwrapSlice(Slice<F>::AB,
abc), abc),
@@ -678,30 +707,73 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
// COMPUTE ENERGY %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {{{1 // COMPUTE ENERGY %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {{{1
#if defined(ATRIP_ONLY_DGEMM)
if (false)
#endif /* defined(ATRIP_ONLY_DGEMM) */
if (!isFakeTuple(i)) { if (!isFakeTuple(i)) {
double tupleEnergy(0.); #if defined(HAVE_CUDA)
double *tupleEnergy;
cuMemAlloc((DataPtr<double>*)&tupleEnergy, sizeof(double));
#else
double _tupleEnergy(0.);
double *tupleEnergy = &_tupleEnergy;
#endif /* defined(HAVE_CUDA) */
int distinct(0); int distinct(0);
if (abc[0] == abc[1]) distinct++; if (abc[0] == abc[1]) distinct++;
if (abc[1] == abc[2]) distinct--; if (abc[1] == abc[2]) distinct--;
const F epsabc(_epsa[abc[0]] + _epsa[abc[1]] + _epsa[abc[2]]); const double
epsabc = std::real(_epsa[abc[0]] + _epsa[abc[1]] + _epsa[abc[2]]);
DataFieldType<F> _epsabc{epsabc};
// LOG(0, "AtripCUDA") << "doing energy " << i << "distinct " << distinct << "\n";
WITH_CHRONO("energy", WITH_CHRONO("energy",
/* if ( distinct == 0) {
TODO: think about how to do this on the GPU in the best way possible ACC_FUNCALL(getEnergyDistinct<DataFieldType<F>>,
if ( distinct == 0) 1, 1, // for cuda
tupleEnergy = getEnergyDistinct<F>(epsabc, No, (F*)epsi, (F*)Tijk, (F*)Zijk); _epsabc,
else No,
tupleEnergy = getEnergySame<F>(epsabc, No, (F*)epsi, (F*)Tijk, (F*)Zijk); #if defined(HAVE_CUDA)
*/ (DataFieldType<F>*)epsi,
) (DataFieldType<F>*)Tijk,
(DataFieldType<F>*)Zijk,
#else
epsi,
Tijk,
Zijk,
#endif
tupleEnergy);
} else {
ACC_FUNCALL(getEnergySame<DataFieldType<F>>,
1, 1, // for cuda
_epsabc,
No,
#if defined(HAVE_CUDA)
(DataFieldType<F>*)epsi,
(DataFieldType<F>*)Tijk,
(DataFieldType<F>*)Zijk,
#else
epsi,
Tijk,
Zijk,
#endif
tupleEnergy);
})
#if defined(HAVE_CUDA)
double host_tuple_energy;
cuMemcpyDtoH((void*)&host_tuple_energy,
(DataPtr<double>)tupleEnergy,
sizeof(double));
#else
double host_tuple_energy = *tupleEnergy;
#endif /* defined(HAVE_CUDA) */
#if defined(HAVE_OCD) || defined(ATRIP_PRINT_TUPLES) #if defined(HAVE_OCD) || defined(ATRIP_PRINT_TUPLES)
tupleEnergies[abc] = tupleEnergy; tupleEnergies[abc] = host_tuple_energy;
#endif #endif
energy += tupleEnergy; energy += host_tuple_energy;
} }
@@ -767,6 +839,8 @@ Atrip::Output Atrip::run(Atrip::Input<F> const& in) {
Atrip::chrono["iterations"].stop(); Atrip::chrono["iterations"].stop();
// ITERATION END %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%{{{1 // ITERATION END %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%{{{1
if (in.maxIterations != 0 && i >= in.maxIterations) break;
} }
// END OF MAIN LOOP // END OF MAIN LOOP


@@ -4,8 +4,10 @@
namespace atrip { namespace atrip {
/* This function is really too slow, below are more performant #if defined(ATRIP_NAIVE_SLOW)
functions to get tuples. /*
* This function is really too slow, below are more performant
* functions to get tuples.
*/ */
static static
ABCTuples get_nth_naive_tuples(size_t Nv, size_t np, int64_t i) { ABCTuples get_nth_naive_tuples(size_t Nv, size_t np, int64_t i) {
@@ -52,33 +54,26 @@ namespace atrip {
return result; return result;
} }
#endif
static
inline
size_t a_block_atrip(size_t a, size_t nv) {
return (nv - 1) * (nv - (a - 1))
- ((nv - 1) * nv) / 2
+ ((a - 1) * (a - 2)) / 2
- 1;
}
static static
inline inline
size_t a_block_sum_atrip(int64_t T, int64_t nv) { size_t a_block_sum_atrip(int64_t T, int64_t nv) {
int64_t nv1 = nv - 1, tplus1 = T + 1; const int64_t nv_min_1 = nv - 1, t_plus_1 = T + 1;
return tplus1 * nv1 * nv return t_plus_1 * nv_min_1 * nv
+ nv1 * tplus1 + nv_min_1 * t_plus_1
- (nv1 * (T * (T + 1)) / 2) - (nv_min_1 * (T * t_plus_1) / 2)
- (tplus1 * (nv1 * nv) / 2) - (t_plus_1 * (nv_min_1 * nv) / 2)
+ (((T * (T + 1) * (1 + 2 * T)) / 6) - 3 * ((T * (T + 1)) / 2)) / 2 // do not simplify this expression, only the addition of both parts
// is a pair integer, prepare to endure the consequences of
// simplifying otherwise
+ (((T * t_plus_1 * (1 + 2 * T)) / 6) - 3 * ((T * t_plus_1) / 2)) / 2
; ;
// + tplus1;
} }
static static
inline inline
int64_t b_block_sum_atrip (int64_t a, int64_t T, int64_t nv) { int64_t b_block_sum_atrip (int64_t a, int64_t T, int64_t nv) {
return nv * ((T - a) + 1) return nv * ((T - a) + 1)
- (T * (T + 1) - a * (a - 1)) / 2 - (T * (T + 1) - a * (a - 1)) / 2
- 1; - 1;
@@ -94,9 +89,6 @@ namespace atrip {
a_sums.resize(nv); a_sums.resize(nv);
for (size_t _i = 0; _i < nv; _i++) { for (size_t _i = 0; _i < nv; _i++) {
a_sums[_i] = a_block_sum_atrip(_i, nv); a_sums[_i] = a_block_sum_atrip(_i, nv);
/*
std::cout << Atrip::rank << ": " << _i << " " << a_sums[_i] << std::endl;
*/
} }
} }
@@ -114,10 +106,6 @@ namespace atrip {
std::vector<int64_t> b_sums(nv - a); std::vector<int64_t> b_sums(nv - a);
for (size_t t = a, i=0; t < nv; t++) { for (size_t t = a, i=0; t < nv; t++) {
b_sums[i++] = b_block_sum_atrip(a, t, nv); b_sums[i++] = b_block_sum_atrip(a, t, nv);
/*
std::cout << Atrip::rank << ": b-sum " << i-1 << " "
<< ":a " << a << " :t " << t << " = " << b_sums[i-1] << std::endl;
*/
} }
int64_t b = a - 1, block_b = block_a; int64_t b = a - 1, block_b = block_a;
for (const auto& sum: b_sums) { for (const auto& sum: b_sums) {
@@ -141,6 +129,11 @@ namespace atrip {
inline inline
ABCTuples nth_atrip_distributed(int64_t it, size_t nv, size_t np) { ABCTuples nth_atrip_distributed(int64_t it, size_t nv, size_t np) {
// If we are getting the previous tuples in the first iteration,
// then just return an impossible tuple, different from the FAKE_TUPLE,
// because if FAKE_TUPLE is defined as {0,0,0} slices thereof
// are actually attainable.
//
if (it < 0) { if (it < 0) {
ABCTuples result(np, {nv, nv, nv}); ABCTuples result(np, {nv, nv, nv});
return result; return result;
@@ -160,9 +153,6 @@ namespace atrip {
for (size_t rank = 0; rank < np; rank++) { for (size_t rank = 0; rank < np; rank++) {
const size_t const size_t
global_iteration = tuples_per_rank * rank + it; global_iteration = tuples_per_rank * rank + it;
/*
std::cout << Atrip::rank << ":" << "global_bit " << global_iteration << "\n";
*/
result[rank] = nth_atrip(global_iteration, nv); result[rank] = nth_atrip(global_iteration, nv);
} }
@@ -248,38 +238,25 @@ namespace atrip {
using Database = typename Slice<F>::Database; using Database = typename Slice<F>::Database;
Database db; Database db;
#ifdef NAIVE_SLOW #ifdef ATRIP_NAIVE_SLOW
WITH_CHRONO("db:comm:naive:tuples", WITH_CHRONO("db:comm:naive:tuples",
const auto tuples = get_nth_naive_tuples(nv, const auto tuples = get_nth_naive_tuples(nv,
np, np,
iteration); iteration);
const auto prev_tuples = get_nth_naive_tuples(nv, const auto prev_tuples = get_nth_naive_tuples(nv,
np, np,
(int64_t)iteration - 1); iteration - 1);
) )
#else #else
WITH_CHRONO("db:comm:naive:tuples", WITH_CHRONO("db:comm:naive:tuples",
const auto tuples = nth_atrip_distributed((int64_t)iteration, const auto tuples = nth_atrip_distributed(iteration,
nv, nv,
np); np);
const auto prev_tuples = nth_atrip_distributed((int64_t)iteration - 1, const auto prev_tuples = nth_atrip_distributed(iteration - 1,
nv, nv,
np); np);
) )
if (false)
for (size_t rank = 0; rank < np; rank++) {
std::cout << Atrip::rank << ":"
<< " :tuples< " << rank << ">" << iteration
<< " :abc " << tuples[rank][0]
<< ", " << tuples[rank][1]
<< ", " << tuples[rank][2] << "\n";
std::cout << Atrip::rank << ":"
<< " :prev-tuples< " << rank << ">" << iteration
<< " :abc-prev " << prev_tuples[rank][0]
<< ", " << prev_tuples[rank][1]
<< ", " << prev_tuples[rank][2] << "\n";
}
#endif #endif
for (size_t rank = 0; rank < np; rank++) { for (size_t rank = 0; rank < np; rank++) {


@@ -16,96 +16,13 @@
#include<atrip/Equations.hpp> #include<atrip/Equations.hpp>
#include<atrip/CUDA.hpp> #include<atrip/CUDA.hpp>
#include<atrip/Operations.hpp>
namespace atrip { namespace atrip {
// Prolog:2 ends here // Prolog:2 ends here
#ifdef HAVE_CUDA
namespace cuda {
// cuda kernels
template <typename F>
__global__
void zeroing(F* a, size_t n) {
F zero = {0};
for (size_t i = 0; i < n; i++) {
a[i] = zero;
}
}
////
template <typename F>
__device__
F maybeConjugateScalar(const F a);
template <>
__device__
double maybeConjugateScalar(const double a) { return a; }
template <>
__device__
cuDoubleComplex
maybeConjugateScalar(const cuDoubleComplex a) {
return {a.x, -a.y};
}
template <typename F>
__global__
void maybeConjugate(F* to, F* from, size_t n) {
for (size_t i = 0; i < n; ++i) {
to[i] = maybeConjugateScalar<F>(from[i]);
}
}
template <typename F>
__global__
void reorder(F* to, F* from, size_t size, size_t I, size_t J, size_t K) {
size_t idx = 0;
const size_t IDX = I + J*size + K*size*size;
for (size_t k = 0; k < size; k++)
for (size_t j = 0; j < size; j++)
for (size_t i = 0; i < size; i++, idx++)
to[idx] += from[IDX];
}
// I mean, really CUDA... really!?
template <typename F>
__device__
F multiply(const F &a, const F &b);
template <>
__device__
double multiply(const double &a, const double &b) { return a * b; }
template <>
__device__
cuDoubleComplex multiply(const cuDoubleComplex &a, const cuDoubleComplex &b) {
return
{a.x * b.x - a.y * b.y,
a.x * b.y + a.y * b.x};
}
template <typename F>
__device__
void sum_in_place(F* to, const F* from);
template <>
__device__
void sum_in_place(double* to, const double *from) { *to += *from; }
template <>
__device__
void sum_in_place(cuDoubleComplex* to, const cuDoubleComplex* from) {
to->x += from->x;
to->y += from->y;
}
};
#endif
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
#define FOR_K() \ #define FOR_K() \
for (size_t kmin = blockIdx.x * blockDim.x + threadIdx.x, \ for (size_t kmin = blockIdx.x * blockDim.x + threadIdx.x, \
@@ -133,7 +50,7 @@ namespace cuda {
_REORDER_BODY_(__VA_ARGS__) \ _REORDER_BODY_(__VA_ARGS__) \
} }
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
#define GO(__TO, __FROM) cuda::sum_in_place<F>(&__TO, &__FROM); #define GO(__TO, __FROM) acc::sum_in_place<F>(&__TO, &__FROM);
#else #else
#define GO(__TO, __FROM) __TO += __FROM; #define GO(__TO, __FROM) __TO += __FROM;
#endif #endif
@@ -156,7 +73,6 @@ namespace cuda {
* in order to have an argument in the signature of * in order to have an argument in the signature of
* the function that helps the compiler know which * the function that helps the compiler know which
* instantiation it should take. * instantiation it should take.
*
*/ */
template <typename F, reordering_t R> template <typename F, reordering_t R>
struct reorder_proxy {}; struct reorder_proxy {};
@@ -180,162 +96,205 @@ namespace cuda {
#undef _IJK_ #undef _IJK_
#undef GO #undef GO
#if defined(HAVE_CUDA)
# define MIN(a, b) min((a), (b))
#else
# define MIN(a, b) std::min((a), (b))
#endif
// [[file:~/cuda/atrip/atrip.org::*Energy][Energy:2]] // [[file:~/cuda/atrip/atrip.org::*Energy][Energy:2]]
template <typename F> template <typename F>
double getEnergyDistinct __MAYBE_GLOBAL__
void getEnergyDistinct
( F const epsabc ( F const epsabc
, size_t const No , size_t const No
, F* const epsi , F* const epsi
, F* const Tijk , F* const Tijk
, F* const Zijk , F* const Zijk
, double* energy
) { ) {
constexpr size_t blockSize=16; constexpr size_t blockSize=16;
F energy(0.); F _energy = {0.};
for (size_t kk=0; kk<No; kk+=blockSize){ for (size_t kk=0; kk<No; kk+=blockSize){
const size_t kend( std::min(No, kk+blockSize) ); const size_t kend( MIN(No, kk+blockSize) );
for (size_t jj(kk); jj<No; jj+=blockSize){ for (size_t jj(kk); jj<No; jj+=blockSize){
const size_t jend( std::min( No, jj+blockSize) ); const size_t jend( MIN( No, jj+blockSize) );
for (size_t ii(jj); ii<No; ii+=blockSize){ for (size_t ii(jj); ii<No; ii+=blockSize){
const size_t iend( std::min( No, ii+blockSize) ); const size_t iend( MIN( No, ii+blockSize) );
for (size_t k(kk); k < kend; k++){ for (size_t k(kk); k < kend; k++){
const F ek(epsi[k]); const F ek(epsi[k]);
const size_t jstart = jj > k ? jj : k; const size_t jstart = jj > k ? jj : k;
for (size_t j(jstart); j < jend; j++){ for (size_t j(jstart); j < jend; j++){
F const ej(epsi[j]); F const ej(epsi[j]);
F const facjk = j == k ? F(0.5) : F(1.0); F const facjk = j == k ? F{0.5} : F{1.0};
size_t istart = ii > j ? ii : j; size_t istart = ii > j ? ii : j;
for (size_t i(istart); i < iend; i++){ for (size_t i(istart); i < iend; i++){
const F const F
ei(epsi[i]) ei(epsi[i])
, facij = i == j ? F(0.5) : F(1.0) , facij = i == j ? F{0.5} : F{1.0}
, denominator(epsabc - ei - ej - ek) , eijk(acc::add(acc::add(ei, ej), ek))
, denominator(acc::sub(epsabc, eijk))
, U(Zijk[i + No*j + No*No*k]) , U(Zijk[i + No*j + No*No*k])
, V(Zijk[i + No*k + No*No*j]) , V(Zijk[i + No*k + No*No*j])
, W(Zijk[j + No*i + No*No*k]) , W(Zijk[j + No*i + No*No*k])
, X(Zijk[j + No*k + No*No*i]) , X(Zijk[j + No*k + No*No*i])
, Y(Zijk[k + No*i + No*No*j]) , Y(Zijk[k + No*i + No*No*j])
, Z(Zijk[k + No*j + No*No*i]) , Z(Zijk[k + No*j + No*No*i])
, A(maybeConjugate<F>(Tijk[i + No*j + No*No*k])) , A(acc::maybeConjugateScalar(Tijk[i + No*j + No*No*k]))
, B(maybeConjugate<F>(Tijk[i + No*k + No*No*j])) , B(acc::maybeConjugateScalar(Tijk[i + No*k + No*No*j]))
, C(maybeConjugate<F>(Tijk[j + No*i + No*No*k])) , C(acc::maybeConjugateScalar(Tijk[j + No*i + No*No*k]))
, D(maybeConjugate<F>(Tijk[j + No*k + No*No*i])) , D(acc::maybeConjugateScalar(Tijk[j + No*k + No*No*i]))
, E(maybeConjugate<F>(Tijk[k + No*i + No*No*j])) , E(acc::maybeConjugateScalar(Tijk[k + No*i + No*No*j]))
, _F(maybeConjugate<F>(Tijk[k + No*j + No*No*i])) , _F(acc::maybeConjugateScalar(Tijk[k + No*j + No*No*i]))
, value , AU = acc::prod(A, U)
= 3.0 * ( A * U , BV = acc::prod(B, V)
+ B * V , CW = acc::prod(C, W)
+ C * W , DX = acc::prod(D, X)
+ D * X , EY = acc::prod(E, Y)
+ E * Y , FZ = acc::prod(_F, Z)
+ _F * Z ) , UXY = acc::add(U, acc::add(X, Y))
+ ( ( U + X + Y ) , VWZ = acc::add(V, acc::add(W, Z))
- 2.0 * ( V + W + Z ) , ADE = acc::add(A, acc::add(D, E))
) * ( A + D + E ) , BCF = acc::add(B, acc::add(C, _F))
+ ( ( V + W + Z ) // I just might as well write this in CL
- 2.0 * ( U + X + Y ) , _first = acc::add(AU,
) * ( B + C + _F ) acc::add(BV,
acc::add(CW,
acc::add(DX,
acc::add(EY, FZ)))))
, _second = acc::prod(acc::sub(UXY,
acc::prod(F{-2.0}, VWZ)),
ADE)
, _third = acc::prod(acc::sub(VWZ,
acc::prod(F{-2.0}, UXY)),
BCF)
, value = acc::add(acc::prod(F{3.0}, _first),
acc::add(_second,
_third))
, _loop_energy = acc::prod(acc::prod(F{2.0}, value),
acc::div(acc::prod(facjk, facij),
denominator))
; ;
energy += 2.0 * value / denominator * facjk * facij; acc::sum_in_place(&_energy, &_loop_energy);
} // i } // i
} // j } // j
} // k } // k
} // ii } // ii
} // jj } // jj
} // kk } // kk
return std::real(energy); const double real_part = acc::real(_energy);
acc::sum_in_place(energy, &real_part);
} }
template <typename F> template <typename F>
double getEnergySame __MAYBE_GLOBAL__
void getEnergySame
( F const epsabc ( F const epsabc
, size_t const No , size_t const No
, F* const epsi , F* const epsi
, F* const Tijk , F* const Tijk
, F* const Zijk , F* const Zijk
, double* energy
) { ) {
constexpr size_t blockSize = 16; constexpr size_t blockSize = 16;
F energy = F(0.); F _energy = F{0.};
for (size_t kk=0; kk<No; kk+=blockSize){ for (size_t kk=0; kk<No; kk+=blockSize){
const size_t kend( std::min( kk+blockSize, No) ); const size_t kend( MIN( kk+blockSize, No) );
for (size_t jj(kk); jj<No; jj+=blockSize){ for (size_t jj(kk); jj<No; jj+=blockSize){
const size_t jend( std::min( jj+blockSize, No) ); const size_t jend( MIN( jj+blockSize, No) );
for (size_t ii(jj); ii<No; ii+=blockSize){ for (size_t ii(jj); ii<No; ii+=blockSize){
const size_t iend( std::min( ii+blockSize, No) ); const size_t iend( MIN( ii+blockSize, No) );
for (size_t k(kk); k < kend; k++){ for (size_t k(kk); k < kend; k++){
const F ek(epsi[k]); const F ek(epsi[k]);
const size_t jstart = jj > k ? jj : k; const size_t jstart = jj > k ? jj : k;
for(size_t j(jstart); j < jend; j++){ for(size_t j(jstart); j < jend; j++){
const F facjk( j == k ? F(0.5) : F(1.0)); const F facjk( j == k ? F{0.5} : F{1.0});
const F ej(epsi[j]); const F ej(epsi[j]);
const size_t istart = ii > j ? ii : j; const size_t istart = ii > j ? ii : j;
for(size_t i(istart); i < iend; i++){ for(size_t i(istart); i < iend; i++){
const F const F
ei(epsi[i]) ei(epsi[i])
, facij ( i==j ? F(0.5) : F(1.0)) , facij ( i==j ? F{0.5} : F{1.0})
, denominator(epsabc - ei - ej - ek) , eijk(acc::add(acc::add(ei, ej), ek))
, denominator(acc::sub(epsabc, eijk))
, U(Zijk[i + No*j + No*No*k]) , U(Zijk[i + No*j + No*No*k])
, V(Zijk[j + No*k + No*No*i]) , V(Zijk[j + No*k + No*No*i])
, W(Zijk[k + No*i + No*No*j]) , W(Zijk[k + No*i + No*No*j])
, A(maybeConjugate<F>(Tijk[i + No*j + No*No*k])) , A(acc::maybeConjugateScalar(Tijk[i + No*j + No*No*k]))
, B(maybeConjugate<F>(Tijk[j + No*k + No*No*i])) , B(acc::maybeConjugateScalar(Tijk[j + No*k + No*No*i]))
, C(maybeConjugate<F>(Tijk[k + No*i + No*No*j])) , C(acc::maybeConjugateScalar(Tijk[k + No*i + No*No*j]))
, value , ABC = acc::add(A, acc::add(B, C))
= F(3.0) * ( A * U , UVW = acc::add(U, acc::add(V, W))
+ B * V , AU = acc::prod(A, U)
+ C * W , BV = acc::prod(B, V)
) , CW = acc::prod(C, W)
- ( A + B + C ) * ( U + V + W ) , AU_and_BV_and_CW = acc::add(acc::add(AU, BV), CW)
, value = acc::sub(acc::prod(F{3.0}, AU_and_BV_and_CW),
acc::prod(ABC, UVW))
, _loop_energy = acc::prod(acc::prod(F{2.0}, value),
acc::div(acc::prod(facjk, facij),
denominator))
; ;
energy += F(2.0) * value / denominator * facjk * facij;
acc::sum_in_place(&_energy, &_loop_energy);
} // i } // i
} // j } // j
} // k } // k
} // ii } // ii
} // jj } // jj
} // kk } // kk
return std::real(energy); const double real_part = acc::real(_energy);
acc::sum_in_place(energy, &real_part);
} }
// Energy:2 ends here // Energy:2 ends here
// [[file:~/cuda/atrip/atrip.org::*Energy][Energy:3]] // [[file:~/cuda/atrip/atrip.org::*Energy][Energy:3]]
// instantiate double // instantiate double
template template
double getEnergyDistinct __MAYBE_GLOBAL__
( double const epsabc void getEnergyDistinct
( DataFieldType<double> const epsabc
, size_t const No , size_t const No
, double* const epsi , DataFieldType<double>* const epsi
, double* const Tijk , DataFieldType<double>* const Tijk
, double* const Zijk , DataFieldType<double>* const Zijk
, DataFieldType<double>* energy
); );
template template
double getEnergySame __MAYBE_GLOBAL__
( double const epsabc void getEnergySame
( DataFieldType<double> const epsabc
, size_t const No , size_t const No
, double* const epsi , DataFieldType<double>* const epsi
, double* const Tijk , DataFieldType<double>* const Tijk
, double* const Zijk , DataFieldType<double>* const Zijk
, DataFieldType<double>* energy
); );
// instantiate Complex // instantiate Complex
template template
double getEnergyDistinct __MAYBE_GLOBAL__
( Complex const epsabc void getEnergyDistinct
( DataFieldType<Complex> const epsabc
, size_t const No , size_t const No
, Complex* const epsi , DataFieldType<Complex>* const epsi
, Complex* const Tijk , DataFieldType<Complex>* const Tijk
, Complex* const Zijk , DataFieldType<Complex>* const Zijk
, DataFieldType<double>* energy
); );
template template
double getEnergySame __MAYBE_GLOBAL__
( Complex const epsabc void getEnergySame
( DataFieldType<Complex> const epsabc
, size_t const No , size_t const No
, Complex* const epsi , DataFieldType<Complex>* const epsi
, Complex* const Tijk , DataFieldType<Complex>* const Tijk
, Complex* const Zijk , DataFieldType<Complex>* const Zijk
, DataFieldType<double>* energy
); );
// Energy:3 ends here // Energy:3 ends here
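For reference, the per-triple contribution that getEnergyDistinct evaluates (read off the scalar CPU branch above, with A..F the possibly conjugated Tijk entries, U..Z the Zijk entries, and f_ij, f_jk the 1/2 factors for coinciding indices) is

\[
v = 3\,(AU + BV + CW + DX + EY + FZ)
  + \big[(U+X+Y) - 2(V+W+Z)\big](A+D+E)
  + \big[(V+W+Z) - 2(U+X+Y)\big](B+C+F),
\qquad
E \mathrel{+}= \frac{2\, f_{jk}\, f_{ij}\, v}{\varepsilon_{abc} - \varepsilon_i - \varepsilon_j - \varepsilon_k},
\]

and getEnergySame uses the analogous v = 3(AU + BV + CW) - (A+B+C)(U+V+W) with the same prefactor and denominator; only the real part of the accumulated sum enters the energy.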
@@ -361,18 +320,26 @@ double getEnergySame
const size_t ijk = i + j*No + k*NoNo; const size_t ijk = i + j*No + k*NoNo;
#ifdef HAVE_CUDA #ifdef HAVE_CUDA
# define GO(__TPH, __VABIJ) \
{ \ #define GO(__TPH, __VABIJ) \
const DataFieldType<F> product \ do { \
= cuda::multiply<DataFieldType<F>>((__TPH), (__VABIJ)); \ const DataFieldType<F> \
cuda::sum_in_place<DataFieldType<F>>(&Zijk[ijk], &product); \ product = acc::prod<DataFieldType<F>>((__TPH), \
} (__VABIJ)); \
acc::sum_in_place<DataFieldType<F>>(&Zijk[ijk], \
&product); \
} while (0)
#else #else
# define GO(__TPH, __VABIJ) Zijk[ijk] += (__TPH) * (__VABIJ);
#define GO(__TPH, __VABIJ) Zijk[ijk] += (__TPH) * (__VABIJ)
#endif #endif
GO(Tph[ a + i * Nv ], VBCij[ j + k * No ])
GO(Tph[ b + j * Nv ], VACij[ i + k * No ]) GO(Tph[ a + i * Nv ], VBCij[ j + k * No ]);
GO(Tph[ c + k * Nv ], VABij[ i + j * No ]) GO(Tph[ b + j * Nv ], VACij[ i + k * No ]);
GO(Tph[ c + k * Nv ], VABij[ i + j * No ]);
#undef GO #undef GO
} // for loop j } // for loop j
} }
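The GO macro here (and MAYBE_CONJ further down) is rewritten as a do { ... } while (0) block; a small standalone illustration of why that wrapper is the safer form for multi-statement macros (f, g, h are placeholder functions, not Atrip code):

void f(int); void g(int); void h();

#define BAD(x)  { f(x); g(x); }               // plain brace block
#define GOOD(x) do { f(x); g(x); } while (0)  // behaves like one statement

void demo(bool cond, int x) {
  // if (cond) BAD(x); else h();   // the ';' after the block detaches the else: syntax error
  if (cond) GOOD(x); else h();     // expands to a single statement and compiles as intended
}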
@@ -434,8 +401,12 @@ double getEnergySame
// -- TIJK // -- TIJK
// , DataPtr<F> Tijk_ // , DataPtr<F> Tijk_
, DataFieldType<F>* Tijk_ , DataFieldType<F>* Tijk_
) { #if defined(HAVE_CUDA)
// -- tmp buffers
, DataFieldType<F>* _t_buffer
, DataFieldType<F>* _vhhh
#endif
) {
const size_t a = abc[0], b = abc[1], c = abc[2] const size_t a = abc[0], b = abc[1], c = abc[2]
, NoNo = No*No , NoNo = No*No
; ;
@@ -444,14 +415,14 @@ double getEnergySame
#if defined(ATRIP_USE_DGEMM) #if defined(ATRIP_USE_DGEMM)
#if defined(HAVE_CUDA) #if defined(HAVE_CUDA)
#define REORDER(__II, __JJ, __KK) \ #define REORDER(__II, __JJ, __KK) \
reorder<<<bs, ths>>>(reorder_proxy< \ reorder<<<bs, ths>>>(reorder_proxy< \
DataFieldType<F>, \ DataFieldType<F>, \
__II ## __JJ ## __KK \ __II ## __JJ ## __KK \
>{}, \ >{}, \
No, \ No, \
Tijk, \ Tijk, \
_t_buffer); _t_buffer)
#define DGEMM_PARTICLES(__A, __B) \ #define DGEMM_PARTICLES(__A, __B) \
atrip::xgemm<F>("T", \ atrip::xgemm<F>("T", \
"N", \ "N", \
@@ -481,11 +452,18 @@ double getEnergySame
_t_buffer, \ _t_buffer, \
(int const*)&NoNo \ (int const*)&NoNo \
) )
#define MAYBE_CONJ(_conj, _buffer) \ #define MAYBE_CONJ(_conj, _buffer) \
cuda::maybeConjugate<<< \ do { \
Atrip::kernelDimensions.ooo.blocks, \ acc::maybeConjugate<<< \
Atrip::kernelDimensions.ooo.threads \ \
>>>((DataFieldType<F>*)_conj, (DataFieldType<F>*)_buffer, NoNoNo); Atrip::kernelDimensions.ooo.blocks, \
\
Atrip::kernelDimensions.ooo.threads \
\
>>>((DataFieldType<F>*)_conj, \
(DataFieldType<F>*)_buffer, \
NoNoNo); \
} while (0)
// END CUDA //////////////////////////////////////////////////////////////////// // END CUDA ////////////////////////////////////////////////////////////////////
@@ -500,7 +478,9 @@ double getEnergySame
#define REORDER(__II, __JJ, __KK) \ #define REORDER(__II, __JJ, __KK) \
reorder(reorder_proxy<DataFieldType<F>, \ reorder(reorder_proxy<DataFieldType<F>, \
__II ## __JJ ## __KK >{}, \ __II ## __JJ ## __KK >{}, \
No, Tijk, _t_buffer); No, \
Tijk, \
_t_buffer)
#define DGEMM_PARTICLES(__A, __B) \ #define DGEMM_PARTICLES(__A, __B) \
atrip::xgemm<F>("T", \ atrip::xgemm<F>("T", \
"N", \ "N", \
@@ -531,29 +511,37 @@ double getEnergySame
_t_buffer, \ _t_buffer, \
(int const*)&NoNo \ (int const*)&NoNo \
) )
#define MAYBE_CONJ(_conj, _buffer) \ #define MAYBE_CONJ(_conj, _buffer) \
for (size_t __i = 0; __i < NoNoNo; ++__i) \ do { \
_conj[__i] = maybeConjugate<F>(_buffer[__i]); for (size_t __i = 0; __i < NoNoNo; ++__i) { \
_conj[__i] \
= maybeConjugate<F>(_buffer[__i]); \
} \
} while (0)
#endif #endif
F one{1.0}, m_one{-1.0}, zero{0.0}; F one{1.0}, m_one{-1.0}, zero{0.0};
const size_t NoNoNo = No*NoNo; const size_t NoNoNo = No*NoNo;
#ifdef HAVE_CUDA #ifdef HAVE_CUDA
DataFieldType<F>* _t_buffer; // DataFieldType<F>* _t_buffer;
DataFieldType<F>* _vhhh; // DataFieldType<F>* _vhhh;
WITH_CHRONO("double:cuda:alloc", // WITH_CHRONO("double:cuda:alloc",
_CHECK_CUDA_SUCCESS("Allocating _t_buffer", // _CHECK_CUDA_SUCCESS("Allocating _t_buffer",
cuMemAlloc((CUdeviceptr*)&_t_buffer, // cuMemAlloc((CUdeviceptr*)&_t_buffer,
NoNoNo * sizeof(DataFieldType<F>))); // NoNoNo * sizeof(DataFieldType<F>)));
_CHECK_CUDA_SUCCESS("Allocating _vhhh", // _CHECK_CUDA_SUCCESS("Allocating _vhhh",
cuMemAlloc((CUdeviceptr*)&_vhhh, // cuMemAlloc((CUdeviceptr*)&_vhhh,
NoNoNo * sizeof(DataFieldType<F>))); // NoNoNo * sizeof(DataFieldType<F>)));
) // )
#if !defined(ATRIP_ONLY_DGEMM)
// we still have to zero this
const size_t const size_t
bs = Atrip::kernelDimensions.ooo.blocks, bs = Atrip::kernelDimensions.ooo.blocks,
ths = Atrip::kernelDimensions.ooo.threads; ths = Atrip::kernelDimensions.ooo.threads;
cuda::zeroing<<<bs, ths>>>((DataFieldType<F>*)_t_buffer, NoNoNo); acc::zeroing<<<bs, ths>>>((DataFieldType<F>*)_t_buffer, NoNoNo);
cuda::zeroing<<<bs, ths>>>((DataFieldType<F>*)_vhhh, NoNoNo); acc::zeroing<<<bs, ths>>>((DataFieldType<F>*)_vhhh, NoNoNo);
#endif
#else #else
DataFieldType<F>* _t_buffer = (DataFieldType<F>*)malloc(NoNoNo * sizeof(F)); DataFieldType<F>* _t_buffer = (DataFieldType<F>*)malloc(NoNoNo * sizeof(F));
DataFieldType<F>* _vhhh = (DataFieldType<F>*)malloc(NoNoNo * sizeof(F)); DataFieldType<F>* _vhhh = (DataFieldType<F>*)malloc(NoNoNo * sizeof(F));
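acc::zeroing is launched above with the precomputed ooo block/thread dimensions, but its body is not part of this diff; a plausible grid-stride sketch of such a kernel (an assumption for illustration, not the actual Atrip implementation) would be

template <typename F>
__global__ void zeroing(F* data, size_t n) {
  // each thread clears a strided subset of the n elements
  for (size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
       idx < n;
       idx += (size_t)gridDim.x * blockDim.x)
    data[idx] = F{0.0};
}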
@@ -565,55 +553,65 @@ double getEnergySame
#endif #endif
// Set Tijk to zero // Set Tijk to zero
#ifdef HAVE_CUDA #if defined(HAVE_CUDA) && !defined(ATRIP_ONLY_DGEMM)
WITH_CHRONO("double:reorder", WITH_CHRONO("double:reorder",
cuda::zeroing<<<bs, ths>>>((DataFieldType<F>*)Tijk, acc::zeroing<<<bs, ths>>>((DataFieldType<F>*)Tijk,
NoNoNo); NoNoNo);
) )
#else #endif
#if !defined(HAVE_CUDA)
WITH_CHRONO("double:reorder", WITH_CHRONO("double:reorder",
for (size_t k = 0; k < NoNoNo; k++) { for (size_t k = 0; k < NoNoNo; k++) {
Tijk[k] = DataFieldType<F>{0.0}; Tijk[k] = DataFieldType<F>{0.0};
}) })
#endif #endif /* !defined(HAVE_CUDA) */
#if defined(ATRIP_ONLY_DGEMM)
#undef MAYBE_CONJ
#undef REORDER
#define MAYBE_CONJ(a, b) do {} while(0)
#define REORDER(i, j, k) do {} while(0)
#endif /* defined(ATRIP_ONLY_DGEMM) */
// HOLES // HOLES
WITH_CHRONO("doubles:holes", WITH_CHRONO("doubles:holes",
{ {
// VhhhC[i + k*No + L*NoNo] * TABhh[L + j*No]; H1 // VhhhC[i + k*No + L*NoNo] * TABhh[L + j*No]; H1
MAYBE_CONJ(_vhhh, VhhhC) MAYBE_CONJ(_vhhh, VhhhC);
WITH_CHRONO("doubles:holes:1", WITH_CHRONO("doubles:holes:1",
DGEMM_HOLES(_vhhh, TABhh, "N"); DGEMM_HOLES(_vhhh, TABhh, "N");
REORDER(I, K, J) REORDER(I, K, J);
) )
// VhhhC[j + k*No + L*NoNo] * TABhh[i + L*No]; H0 // VhhhC[j + k*No + L*NoNo] * TABhh[i + L*No]; H0
WITH_CHRONO("doubles:holes:2", WITH_CHRONO("doubles:holes:2",
DGEMM_HOLES(_vhhh, TABhh, "T"); DGEMM_HOLES(_vhhh, TABhh, "T");
REORDER(J, K, I) REORDER(J, K, I);
) )
// VhhhB[i + j*No + L*NoNo] * TAChh[L + k*No]; H5 // VhhhB[i + j*No + L*NoNo] * TAChh[L + k*No]; H5
MAYBE_CONJ(_vhhh, VhhhB) MAYBE_CONJ(_vhhh, VhhhB);
WITH_CHRONO("doubles:holes:3", WITH_CHRONO("doubles:holes:3",
DGEMM_HOLES(_vhhh, TAChh, "N"); DGEMM_HOLES(_vhhh, TAChh, "N");
REORDER(I, J, K) REORDER(I, J, K);
) )
// VhhhB[k + j*No + L*NoNo] * TAChh[i + L*No]; H3 // VhhhB[k + j*No + L*NoNo] * TAChh[i + L*No]; H3
WITH_CHRONO("doubles:holes:4", WITH_CHRONO("doubles:holes:4",
DGEMM_HOLES(_vhhh, TAChh, "T"); DGEMM_HOLES(_vhhh, TAChh, "T");
REORDER(K, J, I) REORDER(K, J, I);
) )
// VhhhA[j + i*No + L*NoNo] * TBChh[L + k*No]; H1 // VhhhA[j + i*No + L*NoNo] * TBChh[L + k*No]; H1
MAYBE_CONJ(_vhhh, VhhhA) MAYBE_CONJ(_vhhh, VhhhA);
WITH_CHRONO("doubles:holes:5", WITH_CHRONO("doubles:holes:5",
DGEMM_HOLES(_vhhh, TBChh, "N"); DGEMM_HOLES(_vhhh, TBChh, "N");
REORDER(J, I, K) REORDER(J, I, K);
) )
// VhhhA[k + i*No + L*NoNo] * TBChh[j + L*No]; H4 // VhhhA[k + i*No + L*NoNo] * TBChh[j + L*No]; H4
WITH_CHRONO("doubles:holes:6", WITH_CHRONO("doubles:holes:6",
DGEMM_HOLES(_vhhh, TBChh, "T"); DGEMM_HOLES(_vhhh, TBChh, "T");
REORDER(K, I, J) REORDER(K, I, J);
) )
} }
) )
@@ -625,32 +623,32 @@ double getEnergySame
// TAphh[E + i*Nv + j*NoNv] * VBCph[E + k*Nv]; P0 // TAphh[E + i*Nv + j*NoNv] * VBCph[E + k*Nv]; P0
WITH_CHRONO("doubles:particles:1", WITH_CHRONO("doubles:particles:1",
DGEMM_PARTICLES(TAphh, VBCph); DGEMM_PARTICLES(TAphh, VBCph);
REORDER(I, J, K) REORDER(I, J, K);
) )
// TAphh[E + i*Nv + k*NoNv] * VCBph[E + j*Nv]; P3 // TAphh[E + i*Nv + k*NoNv] * VCBph[E + j*Nv]; P3
WITH_CHRONO("doubles:particles:2", WITH_CHRONO("doubles:particles:2",
DGEMM_PARTICLES(TAphh, VCBph); DGEMM_PARTICLES(TAphh, VCBph);
REORDER(I, K, J) REORDER(I, K, J);
) )
// TCphh[E + k*Nv + i*NoNv] * VABph[E + j*Nv]; P5 // TCphh[E + k*Nv + i*NoNv] * VABph[E + j*Nv]; P5
WITH_CHRONO("doubles:particles:3", WITH_CHRONO("doubles:particles:3",
DGEMM_PARTICLES(TCphh, VABph); DGEMM_PARTICLES(TCphh, VABph);
REORDER(K, I, J) REORDER(K, I, J);
) )
// TCphh[E + k*Nv + j*NoNv] * VBAph[E + i*Nv]; P2 // TCphh[E + k*Nv + j*NoNv] * VBAph[E + i*Nv]; P2
WITH_CHRONO("doubles:particles:4", WITH_CHRONO("doubles:particles:4",
DGEMM_PARTICLES(TCphh, VBAph); DGEMM_PARTICLES(TCphh, VBAph);
REORDER(K, J, I) REORDER(K, J, I);
) )
// TBphh[E + j*Nv + i*NoNv] * VACph[E + k*Nv]; P1 // TBphh[E + j*Nv + i*NoNv] * VACph[E + k*Nv]; P1
WITH_CHRONO("doubles:particles:5", WITH_CHRONO("doubles:particles:5",
DGEMM_PARTICLES(TBphh, VACph); DGEMM_PARTICLES(TBphh, VACph);
REORDER(J, I, K) REORDER(J, I, K);
) )
// TBphh[E + j*Nv + k*NoNv] * VCAph[E + i*Nv]; P4 // TBphh[E + j*Nv + k*NoNv] * VCAph[E + i*Nv]; P4
WITH_CHRONO("doubles:particles:6", WITH_CHRONO("doubles:particles:6",
DGEMM_PARTICLES(TBphh, VCAph); DGEMM_PARTICLES(TBphh, VCAph);
REORDER(J, K, I) REORDER(J, K, I);
) )
} }
) )
@@ -659,16 +657,16 @@ double getEnergySame
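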
#ifdef HAVE_CUDA #ifdef HAVE_CUDA
// we need to synchronize here since we need // we need to synchronize here since we need
// the Tijk for next process in the pipeline // the Tijk for next process in the pipeline
_CHECK_CUDA_SUCCESS("Synchronizing", //_CHECK_CUDA_SUCCESS("Synchronizing",
cuCtxSynchronize()); // cuCtxSynchronize());
_CHECK_CUDA_SUCCESS("Freeing _vhhh", //_CHECK_CUDA_SUCCESS("Freeing _vhhh",
cuMemFree((CUdeviceptr)_vhhh)); // cuMemFree((CUdeviceptr)_vhhh));
_CHECK_CUDA_SUCCESS("Freeing _t_buffer", //_CHECK_CUDA_SUCCESS("Freeing _t_buffer",
cuMemFree((CUdeviceptr)_t_buffer)); // cuMemFree((CUdeviceptr)_t_buffer));
#else #else
free(_vhhh); free(_vhhh);
free(_t_buffer); free(_t_buffer);
#endif #endif /* defined(HAVE_CUDA) */
} }
#undef REORDER #undef REORDER
@@ -719,7 +717,7 @@ double getEnergySame
} }
} }
#endif #endif /* defined(ATRIP_USE_DGEMM) */
} }
@@ -751,6 +749,12 @@ double getEnergySame
, DataPtr<double> const TBChh , DataPtr<double> const TBChh
// -- TIJK // -- TIJK
, DataFieldType<double>* Tijk , DataFieldType<double>* Tijk
#if defined(HAVE_CUDA)
// -- tmp buffers
, DataFieldType<double>* _t_buffer
, DataFieldType<double>* _vhhh
#endif
); );
template template
@@ -779,6 +783,12 @@ double getEnergySame
, DataPtr<Complex> const TBChh , DataPtr<Complex> const TBChh
// -- TIJK // -- TIJK
, DataFieldType<Complex>* Tijk , DataFieldType<Complex>* Tijk
#if defined(HAVE_CUDA)
// -- tmp buffers
, DataFieldType<Complex>* _t_buffer
, DataFieldType<Complex>* _vhhh
#endif
); );
// Doubles contribution:2 ends here // Doubles contribution:2 ends here
tools/configure-benches.sh (new executable file, 183 lines)

@@ -0,0 +1,183 @@
#!/usr/bin/env bash
# Copyright (C) 2022 by Alejandro Gallo <aamsgallo@gmail.com>
set -eu
flags=("${@}")
PROJECTS=()
############################################################
#
## Check root directory
#
root_project=$(git rev-parse --show-toplevel)
configure=$root_project/configure
if [[ $(basename $PWD) == $(basename $root_project) ]]; then
cat <<EOF
You are trying to build in the root directory. Create a build folder
and configure from there:
mkdir build
cd build
$(readlink -f $0)
EOF
exit 1
fi
[[ -f $configure ]] || {
cat <<EOF
No configure script at $configure. Create it with bootstrap.sh or
autoreconf -vif
EOF
exit 1
}
############################################################
#
## Create configuration function
#
create_config () {
file=$1
name=$2
PROJECTS=(${PROJECTS[@]} "$name")
mkdir -p $name
cd $name
echo "> creating: $name"
cat <<SH > configure
#!/usr/bin/env bash
# creator: $0
# date: $(date)
$root_project/configure $(cat $file | paste -s) \\
$(for word in "${flags[@]}"; do
printf " \"%s\"" "$word";
done)
exit 0
SH
chmod +x configure
cd - > /dev/null
}
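create_config writes a one-shot wrapper script into each project directory; for instance, for the only-dgemm configuration defined below and a hypothetical extra flag passed to this script, the generated only-dgemm/configure would look roughly like

#!/usr/bin/env bash
# creator: ../tools/configure-benches.sh
# date: Tue Dec  6 20:00:00 CET 2022
/path/to/atrip/configure --disable-slice --enable-only-dgemm \
  "CXX=mpicxx"
exit 0

(the flags read from the temporary file end up on one line because of paste -s; the path, date, and CXX flag above are placeholders).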
############################################################
# begin doc
#
# - default ::
# This configuration uses the CPU code with dgemm
# and does not compute the slices.
#
# end doc
tmp=`mktemp`
cat <<EOF > $tmp
--disable-slice
EOF
create_config $tmp default
rm $tmp
# begin doc
#
# - only-dgemm ::
# This runs only the parts of the computation that involve dgemms.
#
# end doc
tmp=`mktemp`
cat <<EOF > $tmp
--disable-slice
--enable-only-dgemm
EOF
create_config $tmp only-dgemm
rm $tmp
# begin doc
#
# - cuda-only-dgemm ::
# This is the naive CUDA implementation, compiling only the dgemm parts
# of the computation.
#
# end doc
tmp=`mktemp`
cat <<EOF > $tmp
--enable-cuda
--enable-only-dgemm
--disable-slice
EOF
create_config $tmp cuda-only-dgemm
rm $tmp
# begin doc
#
# - cuda-slices-on-gpu-only-dgemm ::
# This configuration keeps the slices entirely on the GPU
# and therefore needs a CUDA-aware MPI implementation.
# It also runs only the routines that involve dgemm.
#
# end doc
tmp=`mktemp`
cat <<EOF > $tmp
--enable-cuda
--enable-sources-in-gpu
--enable-cuda-aware-mpi
--enable-only-dgemm
--disable-slice
EOF
create_config $tmp cuda-slices-on-gpu-only-dgemm
rm $tmp
############################################################
#
## Create makefile
#
cat <<MAKE > Makefile
all: configure do
do: configure
configure: ${PROJECTS[@]/%/\/Makefile}
%/Makefile: %/configure
cd \$* && ./configure
do: ${PROJECTS[@]/%/\/src\/libatrip.a}
%/src/libatrip.a:
cd \$* && \$(MAKE)
.PHONY: configure do all
MAKE
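With the four configurations created above, the ${PROJECTS[@]/%/...} substitutions expand so that the generated Makefile comes out roughly as follows (recipe lines must start with a tab character):

all: configure do
do: configure
configure: default/Makefile only-dgemm/Makefile cuda-only-dgemm/Makefile cuda-slices-on-gpu-only-dgemm/Makefile
%/Makefile: %/configure
	cd $* && ./configure
do: default/src/libatrip.a only-dgemm/src/libatrip.a cuda-only-dgemm/src/libatrip.a cuda-slices-on-gpu-only-dgemm/src/libatrip.a
%/src/libatrip.a:
	cd $* && $(MAKE)
.PHONY: configure do all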
cat <<EOF
Now you can do
make all
or go into one of the directories
${PROJECTS[@]}
and do
./configure
make
EOF
## Emacs stuff
# Local Variables:
# eval: (outline-minor-mode)
# outline-regexp: "############################################################"
# End:
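A typical invocation of this script is from a fresh build directory (not the repository root); any extra arguments are forwarded verbatim to every generated configure (the CXX setting below is only an example):

mkdir build
cd build
../tools/configure-benches.sh CXX=mpicxx
make all          # configures and builds every variant listed above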