
Commit

Merge remote-tracking branch 'origin/main' into struct_access_interstate_edge_bug
phschaad committed Oct 29, 2024
2 parents 356af95 + 2070d39 commit dbe1ae2
Showing 216 changed files with 12,214 additions and 4,848 deletions.
9 changes: 6 additions & 3 deletions .github/workflows/fpga-ci.yml
@@ -2,11 +2,14 @@ name: FPGA Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
+env:
+  CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+
 jobs:
   test-fpga:
12 changes: 8 additions & 4 deletions .github/workflows/general-ci.yml
@@ -2,11 +2,11 @@ name: General Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 jobs:
   test:
@@ -85,4 +85,8 @@ jobs:
         ./tests/polybench_test.sh
         ./tests/xform_test.sh
         coverage combine .; coverage report; coverage xml
-        ./codecov
+    - uses: codecov/codecov-action@v4
+      with:
+        token: ${{ secrets.CODECOV_TOKEN }}
+        verbose: true
7 changes: 4 additions & 3 deletions .github/workflows/gpu-ci.yml
@@ -2,15 +2,16 @@ name: GPU Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 env:
   CUDACXX: /usr/local/cuda/bin/nvcc
   MKLROOT: /opt/intel/oneapi/mkl/latest/
+  CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
 
 
 jobs:
7 changes: 4 additions & 3 deletions .github/workflows/heterogeneous-ci.yml
@@ -2,16 +2,17 @@ name: Heterogeneous Tests
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 env:
   CUDA_HOME: /usr/local/cuda
   CUDACXX: nvcc
   MKLROOT: /opt/intel/oneapi/mkl/latest/
+  CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
 
 jobs:
   test-heterogeneous:
6 changes: 3 additions & 3 deletions .github/workflows/pyFV3-ci.yml
@@ -2,11 +2,11 @@ name: NASA/NOAA pyFV3 repository build test
 
 on:
   push:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   pull_request:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
   merge_group:
-    branches: [ master, ci-fix ]
+    branches: [ main, ci-fix ]
 
 defaults:
   run:
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -47,7 +47,7 @@ For automatic styling, we use the [yapf](https://github.com/google/yapf) file fo
 We use [pytest](https://www.pytest.org/) for our testing infrastructure. All tests under the `tests/` folder
 (and any subfolders within) are automatically read and run. The files must be under the right subfolder
 based on the component being tested (e.g., `tests/sdfg/` for IR-related tests), and must have the right
-suffix: either `*_test.py` or `*_cudatest.py`. See [pytest.ini](https://github.com/spcl/dace/blob/master/pytest.ini)
+suffix: either `*_test.py` or `*_cudatest.py`. See [pytest.ini](https://github.com/spcl/dace/blob/main/pytest.ini)
 for more information, and for the markers we use to specify software/hardware requirements.
 
 The structure of the test file must follow `pytest` standards (i.e., free functions called `test_*`), and
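As a sketch of the conventions this diff context describes (free functions named `test_*` in a file with the `*_test.py` suffix), a minimal test module might look as follows. The file name, the function under test, and its values are illustrative, not taken from the repository:

```python
# tests/example_test.py -- hypothetical file; pytest discovers free
# functions named test_* in files matching the configured *_test.py suffix.


def fibonacci(n: int) -> int:
    """Toy function standing in for the component under test."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


def test_fibonacci_base_cases():
    assert fibonacci(0) == 0
    assert fibonacci(1) == 1


def test_fibonacci_growth():
    assert fibonacci(10) == 55


if __name__ == '__main__':
    # Allow running the file directly, in addition to pytest discovery.
    test_fibonacci_base_cases()
    test_fibonacci_growth()
```

Running `pytest tests/example_test.py` would collect and execute both functions; markers (per `pytest.ini`) can then restrict tests to specific software or hardware.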
18 changes: 9 additions & 9 deletions README.md
@@ -3,15 +3,15 @@
 [![FPGA Tests](https://github.com/spcl/dace/actions/workflows/fpga-ci.yml/badge.svg)](https://github.com/spcl/dace/actions/workflows/fpga-ci.yml)
 [![Documentation Status](https://readthedocs.org/projects/spcldace/badge/?version=latest)](https://spcldace.readthedocs.io/en/latest/?badge=latest)
 [![PyPI version](https://badge.fury.io/py/dace.svg)](https://badge.fury.io/py/dace)
-[![codecov](https://codecov.io/gh/spcl/dace/branch/master/graph/badge.svg)](https://codecov.io/gh/spcl/dace)
+[![codecov](https://codecov.io/gh/spcl/dace/branch/main/graph/badge.svg)](https://codecov.io/gh/spcl/dace)
 
 
 ![D](dace.svg)aCe - Data-Centric Parallel Programming
 =====================================================
 
 _Decoupling domain science from performance optimization._
 
-DaCe is a [fast](https://nbviewer.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb) parallel programming
+DaCe is a [fast](https://nbviewer.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb) parallel programming
 framework that takes code in Python/NumPy and other programming languages, and maps it to high-performance
 **CPU, GPU, and FPGA** programs, which can be optimized to achieve state-of-the-art. Internally, DaCe
 uses the Stateful DataFlow multiGraph (SDFG) *data-centric intermediate
@@ -61,13 +61,13 @@ be used in any C ABI compatible language (C/C++, FORTRAN, etc.).
 
 For more information on how to use DaCe, see the [samples](samples) or tutorials below:
 
-* [Getting Started](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/getting_started.ipynb)
-* [Benchmarks, Instrumentation, and Performance Comparison with Other Python Compilers](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/benchmarking.ipynb)
-* [Explicit Dataflow in Python](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/explicit.ipynb)
-* [NumPy API Reference](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/numpy_frontend.ipynb)
-* [SDFG API](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/sdfg_api.ipynb)
-* [Using and Creating Transformations](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/transformations.ipynb)
-* [Extending the Code Generator](https://nbviewer.jupyter.org/github/spcl/dace/blob/master/tutorials/codegen.ipynb)
+* [Getting Started](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/getting_started.ipynb)
+* [Benchmarks, Instrumentation, and Performance Comparison with Other Python Compilers](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/benchmarking.ipynb)
+* [Explicit Dataflow in Python](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/explicit.ipynb)
+* [NumPy API Reference](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/numpy_frontend.ipynb)
+* [SDFG API](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/sdfg_api.ipynb)
+* [Using and Creating Transformations](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/transformations.ipynb)
+* [Extending the Code Generator](https://nbviewer.jupyter.org/github/spcl/dace/blob/main/tutorials/codegen.ipynb)
 
 Publication
 -----------
1 change: 1 addition & 0 deletions dace/__init__.py
@@ -13,6 +13,7 @@
 from .frontend.operations import reduce, elementwise
 
 from . import data, hooks, subsets
+from .codegen.compiled_sdfg import CompiledSDFG
 from .config import Config
 from .sdfg import SDFG, SDFGState, InterstateEdge, nodes
 from .sdfg.propagation import propagate_memlets_sdfg, propagate_memlet
220 changes: 220 additions & 0 deletions dace/cli/sdfg_diff.py
@@ -0,0 +1,220 @@
# Copyright 2019-2024 ETH Zurich and the DaCe authors. All rights reserved.
""" SDFG diff tool. """

import argparse
from hashlib import sha256
import json
import os
import platform
import tempfile
from typing import Dict, Literal, Set, Tuple, Union

import jinja2
import dace
from dace import memlet as mlt
from dace.sdfg import nodes as nd
from dace.sdfg.graph import Edge, MultiConnectorEdge
from dace.sdfg.sdfg import InterstateEdge
from dace.sdfg.state import ControlFlowBlock
import dace.serialize


DiffableT = Union[ControlFlowBlock, nd.Node, MultiConnectorEdge[mlt.Memlet], Edge[InterstateEdge]]
DiffSetsT = Tuple[Set[str], Set[str], Set[str]]


def _print_diff(sdfg_A: dace.SDFG, sdfg_B: dace.SDFG, diff_sets: DiffSetsT) -> None:
    all_id_elements_A: Dict[str, DiffableT] = dict()
    all_id_elements_B: Dict[str, DiffableT] = dict()

    all_id_elements_A[sdfg_A.guid] = sdfg_A
    for n, _ in sdfg_A.all_nodes_recursive():
        all_id_elements_A[n.guid] = n
    for e, _ in sdfg_A.all_edges_recursive():
        all_id_elements_A[e.data.guid] = e

    all_id_elements_B[sdfg_B.guid] = sdfg_B
    for n, _ in sdfg_B.all_nodes_recursive():
        all_id_elements_B[n.guid] = n
    for e, _ in sdfg_B.all_edges_recursive():
        all_id_elements_B[e.data.guid] = e

    no_removed = True
    no_added = True
    no_changed = True
    if len(diff_sets[0]) > 0:
        print('Removed elements:')
        for k in diff_sets[0]:
            print(all_id_elements_A[k])
        no_removed = False
    if len(diff_sets[1]) > 0:
        if not no_removed:
            print('')
        print('Added elements:')
        for k in diff_sets[1]:
            print(all_id_elements_B[k])
        no_added = False
    if len(diff_sets[2]) > 0:
        if not no_removed or not no_added:
            print('')
        print('Changed elements:')
        for k in diff_sets[2]:
            print(all_id_elements_B[k])
        no_changed = False

    if no_removed and no_added and no_changed:
        print('SDFGs are identical')


def _sdfg_diff(sdfg_A: dace.SDFG, sdfg_B: dace.SDFG, eq_strategy: Literal['hash', '=='] = '==') -> DiffSetsT:
    all_id_elements_A: Dict[str, DiffableT] = dict()
    all_id_elements_B: Dict[str, DiffableT] = dict()

    all_id_elements_A[sdfg_A.guid] = sdfg_A
    for n, _ in sdfg_A.all_nodes_recursive():
        all_id_elements_A[n.guid] = n
    for e, _ in sdfg_A.all_edges_recursive():
        all_id_elements_A[e.data.guid] = e

    all_id_elements_B[sdfg_B.guid] = sdfg_B
    for n, _ in sdfg_B.all_nodes_recursive():
        all_id_elements_B[n.guid] = n
    for e, _ in sdfg_B.all_edges_recursive():
        all_id_elements_B[e.data.guid] = e

    a_keys = set(all_id_elements_A.keys())
    b_keys = set(all_id_elements_B.keys())

    added_keys = b_keys - a_keys
    removed_keys = a_keys - b_keys
    changed_keys = set()

    remaining_keys = a_keys - removed_keys
    if remaining_keys != b_keys - added_keys:
        raise RuntimeError(
            'The sets of remaining keys between graphs A and B after accounting for added and removed keys do not match'
        )
    for k in remaining_keys:
        el_a = all_id_elements_A[k]
        el_b = all_id_elements_B[k]

        if eq_strategy == 'hash':
            try:
                if isinstance(el_a, Edge):
                    attr_a = dace.serialize.all_properties_to_json(el_a.data)
                else:
                    attr_a = dace.serialize.all_properties_to_json(el_a)
                hash_a = sha256(json.dumps(attr_a).encode('utf-8')).hexdigest()
            except KeyError:
                hash_a = None
            try:
                if isinstance(el_b, Edge):
                    attr_b = dace.serialize.all_properties_to_json(el_b.data)
                else:
                    attr_b = dace.serialize.all_properties_to_json(el_b)
                hash_b = sha256(json.dumps(attr_b).encode('utf-8')).hexdigest()
            except KeyError:
                hash_b = None

            if hash_a != hash_b:
                changed_keys.add(k)
        else:
            if isinstance(el_a, Edge):
                attr_a = dace.serialize.all_properties_to_json(el_a.data)
            else:
                attr_a = dace.serialize.all_properties_to_json(el_a)
            if isinstance(el_b, Edge):
                attr_b = dace.serialize.all_properties_to_json(el_b.data)
            else:
                attr_b = dace.serialize.all_properties_to_json(el_b)

            if attr_a != attr_b:
                changed_keys.add(k)

    return removed_keys, added_keys, changed_keys


def main():
    # Command line options parser
    parser = argparse.ArgumentParser(description='SDFG diff tool.')

    # Required arguments for the SDFG file paths
    parser.add_argument('sdfg_A_path', help='<PATH TO FIRST SDFG FILE>', type=str)
    parser.add_argument('sdfg_B_path', help='<PATH TO SECOND SDFG FILE>', type=str)

    parser.add_argument('-g',
                        '--graphical',
                        dest='graphical',
                        action='store_true',
                        help="If set, visualize the difference graphically",
                        default=False)
    parser.add_argument('-o',
                        '--output',
                        dest='output',
                        help="The output filename to generate",
                        type=str)
    parser.add_argument('-H',
                        '--hash',
                        dest='hash',
                        action='store_true',
                        help="If set, use the hash of JSON serialized properties for change checks instead of " +
                        "Python's dictionary equivalence checks. This makes changes order sensitive.",
                        default=False)

    args = parser.parse_args()

    if not os.path.isfile(args.sdfg_A_path):
        print('SDFG file', args.sdfg_A_path, 'not found')
        exit(1)

    if not os.path.isfile(args.sdfg_B_path):
        print('SDFG file', args.sdfg_B_path, 'not found')
        exit(1)

    sdfg_A = dace.SDFG.from_file(args.sdfg_A_path)
    sdfg_B = dace.SDFG.from_file(args.sdfg_B_path)

    eq_strategy = 'hash' if args.hash else '=='

    diff_sets = _sdfg_diff(sdfg_A, sdfg_B, eq_strategy)

    if args.graphical:
        basepath = os.path.join(os.path.dirname(os.path.realpath(dace.__file__)), 'viewer')
        template_loader = jinja2.FileSystemLoader(searchpath=os.path.join(basepath, 'templates'))
        template_env = jinja2.Environment(loader=template_loader)
        template = template_env.get_template('sdfv_diff_view.html')

        # if we are serving, the base path should just be root
        html = template.render(sdfgA=json.dumps(dace.serialize.dumps(sdfg_A.to_json())),
                               sdfgB=json.dumps(dace.serialize.dumps(sdfg_B.to_json())),
                               removedKeysList=json.dumps(list(diff_sets[0])),
                               addedKeysList=json.dumps(list(diff_sets[1])),
                               changedKeysList=json.dumps(list(diff_sets[2])),
                               dir=basepath + '/')

        if args.output:
            fd = None
            html_filename = args.output
        else:
            fd, html_filename = tempfile.mkstemp(suffix=".sdfg.html")

        with open(html_filename, 'w') as f:
            f.write(html)

        if fd is not None:
            os.close(fd)

        system = platform.system()

        if system == 'Windows':
            os.system(html_filename)
        elif system == 'Darwin':
            os.system('open %s' % html_filename)
        else:
            os.system('xdg-open %s' % html_filename)
    else:
        _print_diff(sdfg_A, sdfg_B, diff_sets)


if __name__ == '__main__':
    main()
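At its core, `_sdfg_diff` above is a GUID-keyed set comparison: elements present only in A are removed, only in B are added, and shared GUIDs whose serialized properties differ are changed. Stripped of the SDFG-specific types, the same logic can be sketched with plain dictionaries; the GUIDs and property values below are made up for illustration:

```python
import json
from hashlib import sha256
from typing import Dict, Set, Tuple


def dict_diff(a: Dict[str, dict], b: Dict[str, dict],
              use_hash: bool = False) -> Tuple[Set[str], Set[str], Set[str]]:
    """Return (removed, added, changed) keys, mirroring _sdfg_diff's strategy."""
    removed = set(a) - set(b)
    added = set(b) - set(a)
    changed = set()
    for k in set(a) & set(b):
        if use_hash:
            # sha256 of the JSON serialization: order-sensitive, like the -H flag.
            ha = sha256(json.dumps(a[k]).encode('utf-8')).hexdigest()
            hb = sha256(json.dumps(b[k]).encode('utf-8')).hexdigest()
            different = ha != hb
        else:
            # Plain dictionary equivalence: key order does not matter.
            different = a[k] != b[k]
        if different:
            changed.add(k)
    return removed, added, changed


# Two toy "graphs" keyed by element GUID (hypothetical values).
graph_a = {'guid-1': {'label': 'MapEntry'}, 'guid-2': {'label': 'Tasklet'}}
graph_b = {'guid-1': {'label': 'MapExit'}, 'guid-3': {'label': 'AccessNode'}}
print(dict_diff(graph_a, graph_b))  # ({'guid-2'}, {'guid-3'}, {'guid-1'})
```

This also shows why the `-H`/`--hash` flag makes the comparison order sensitive: two dictionaries with the same keys in different insertion order compare equal under `==` but serialize to different JSON strings, hence different hashes.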
3 changes: 3 additions & 0 deletions dace/codegen/__init__.py
@@ -0,0 +1,3 @@
# Copyright 2019-2024 ETH Zurich and the DaCe authors. All rights reserved.

from dace.codegen.compiled_sdfg import CompiledSDFG