This repository contains the artifact for "Bridging Control-Centric and Data-Centric Optimization", submitted to CGO'23.
Running and building the Docker container requires a Docker installation and a running instance of the Docker daemon.
The requirements for a manual setup are listed in the `Dockerfile`.
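To check that both prerequisites are met, the standard Docker CLI commands below can be used (assuming `docker` is on your `PATH`):

```sh
docker --version   # verifies the Docker client is installed
sudo docker info   # reports an error if the Docker daemon is not running
```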
There are three options to run the benchmarks (`sudo` is not always necessary):
Pull the Docker image with:

```sh
sudo docker pull berkeates/dcir-cgo23:latest
```

And run it:

```sh
sudo docker run -it --rm berkeates/dcir-cgo23
```
Alternatively, clone this repository and build the Docker image yourself:

```sh
git clone [--recurse-submodules] --depth 1 --shallow-submodules https://github.com/Berke-Ates/dcir-artifact
cd dcir-artifact
sudo docker build -t dcir-cgo23 .
sudo docker run -it --rm dcir-cgo23
```
For a manual setup, follow the instructions in the `Dockerfile` on your machine.
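As a rough sketch of what the manual setup involves, the build steps recorded in the `Dockerfile` can be listed with standard tools and replicated on the host (the exact steps depend on the current `Dockerfile` contents):

```sh
# Show the commands executed when the image is built; these are the
# steps to reproduce manually on the host machine.
grep -E '^(RUN|ENV|ARG)' Dockerfile
```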
To run all benchmarks, execute the following script inside the Docker container:

```sh
./scripts/run_all.sh <Output directory> <Number of repetitions>
```

The outputs are written as CSV files to the output directory, organized by figure number or result name. The results are also plotted to PDF files.
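As a minimal sketch for inspecting the results afterwards (exact file names depend on the benchmarks that were run):

```sh
ls -R <Output directory>                      # CSVs and plotted PDFs, grouped by figure/result
find <Output directory> -name '*.csv'         # locate the raw CSV data
```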
If running in a container, you can mount a local folder to view the results outside of it. To do so, use the `-v` flag when running the container, as follows:

```sh
sudo docker run -it --rm -v <local folder>:<container folder> berkeates/dcir-cgo23
```
For example:

```sh
$ mkdir results
$ sudo docker run -it --rm -v /path/to/local/results:/home/user/ae berkeates/dcir-cgo23
# ./scripts/run_all.sh ae 10
```

Here `/path/to/local/results` stands for the absolute path of the `results` folder created in the first command; the last command runs inside the container and writes its results to the mounted `ae` folder.
The raw results from the original paper can be found in the `output` subdirectory.
To run a single benchmark, use the runners in the subdirectories:

- `scripts/Polybench`: For Polybench benchmarks
- `scripts/pytorch`: For PyTorch benchmarks
- `scripts/snippets`: For snippet benchmarks
An example command:

```sh
./scripts/Polybench/dcir.sh ./benchmarks/Polybench/2mm/2mm.c <Output directory> <Number of repetitions>
```
The subdirectories also contain a `run_all.sh`, which executes all benchmarks of their group.
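For example, to run only the Polybench group (a sketch assuming the group-level `run_all.sh` takes the same arguments as the top-level script; calling it without arguments prints its exact usage):

```sh
./scripts/Polybench/run_all.sh <Output directory> <Number of repetitions>
```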
The repository is structured as follows:

- `benchmarks`: Folder containing all benchmark files
  - `Polybench`: Folder containing all Polybench benchmarks. The subfolders contain the edited C benchmarks and the generated SDFG files for the DaCe benchmarks.
  - `pytorch`: Folder containing all PyTorch benchmarks.
  - `snippets`: Folder containing all snippet benchmarks. The subfolders contain the adjusted C benchmarks, the C benchmarks with a timer attached (suffix: `-chrono.c`), and the generated SDFG files for the DaCe benchmarks.
- `dace`: Folder containing the DaCe project
- `mlir-dace`: Folder containing the MLIR-DaCe project
- `mlir-hlo`: Folder containing the MLIR-HLO project
- `output`: Raw outputs from the paper
- `polybench-comparator`: Folder containing a Python script to compare Polybench outputs
- `Polygeist`: Folder containing the Polygeist project
- `scripts`: Folder containing all scripts to run the benchmarks. Every script prints its usage if it is called without arguments (see the example after this list).
  - `Polybench`: Folder containing Polybench-specific run scripts
  - `pytorch`: Folder containing PyTorch-specific run scripts
  - `snippets`: Folder containing snippets-specific run scripts
  - `get_sdfg_times.py`: Script retrieving the runtime of a specific SDFG run
  - `mhlo2std.sh`: Script to convert the MHLO dialect to builtin dialects
  - `multi_plot.py`: Script to plot multiple CSV files
  - `opt_sdfg.py`: Script to optimize an SDFG file
  - `run_all.sh`: Script to run all benchmarks and generate the plots
  - `single_plot.py`: Script to plot a single CSV file
- `torch-mlir`: Folder containing the Torch-MLIR project
- `Dockerfile`: Dockerfile to build the image
- `README.md`: This file
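Since every script prints its usage when called without arguments, the expected parameters of any runner can be checked directly, for example:

```sh
./scripts/Polybench/dcir.sh   # prints the usage of the Polybench DCIR runner
./scripts/opt_sdfg.py         # prints the usage of the SDFG optimization script
```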