Commit

Merge origin/main

tehrengruber committed Jan 4, 2024
2 parents 28a481d + 7a9489f commit ffc44b7
Showing 84 changed files with 1,408 additions and 811 deletions.
2 changes: 1 addition & 1 deletion .gitpod.yml
@@ -5,7 +5,7 @@ image:
tasks:
- name: Setup venv and dev tools
init: |
ln -s /workspace/gt4py/.gitpod/.vscode /workspace/gt4py/.vscode
ln -sfn /workspace/gt4py/.gitpod/.vscode /workspace/gt4py/.vscode
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
38 changes: 38 additions & 0 deletions CODING_GUIDELINES.md
@@ -51,6 +51,44 @@ We deviate from the [Google Python Style Guide][google-style-guide] only in the
- Client code (like tests, doctests and examples) should use the above style for public FieldView API
- Library code should always import the defining module and use qualified names.

### Error messages

Error messages should be written as sentences, starting with a capital letter and ending with a period (avoid exclamation marks). Try to be informative without being verbose. Code objects such as 'ClassNames' and 'function_names' should be enclosed in single quotes, and so should string values used for message interpolation.

Examples:

```python
raise ValueError(f"Invalid argument 'dimension': should be of type 'Dimension', got '{dimension.type}'.")
```

Interpolated integer values do not need quotes when they indicate an amount. Example:

```python
raise ValueError(f"Invalid number of arguments: expected 3 arguments, got {len(args)}.")
```

Quotes can also be dropped when presenting a sequence of values. In this case, rephrase the message so that the sequence is separated from the text by a colon ':'.

```python
raise ValueError(f"Unexpected keyword arguments: {', '.join(set(kwarg_names) - set(expected_kwarg_names))}.")
```

The message should be kept to one sentence if reasonably possible. Ideally, the sentence should be short and avoid unnecessary words. Examples:

```python
# too many sentences
raise ValueError(f"Received an unexpected number of arguments. Should receive 5 arguments, but got {len(args)}. Please provide the correct number of arguments.")
# better
raise ValueError(f"Wrong number of arguments: expected 5, got {len(args)}.")

# less extreme
raise TypeError(f"Wrong argument type. Can only accept 'int's, got '{type(arg)}' instead.")
# but can still be improved
raise TypeError(f"Wrong argument type: 'int' expected, got '{type(arg)}'.")
```

The terseness vs. helpfulness tradeoff should lean toward terseness for internal error messages, and toward helpfulness for `DSLError` and its subclasses, where additional sentences are encouraged if they point out likely hidden sources of the problem or common fixes.
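For illustration, a minimal sketch of a helpful user-facing message. The class names here are invented stand-ins: `DSLError` mimics the real gt4py base class, and `MissingOffsetProviderError` is a hypothetical subclass, not part of the actual API.

```python
# Hypothetical sketch: 'DSLError' stands in for the real gt4py exception
# class, and 'MissingOffsetProviderError' is an invented subclass.
class DSLError(Exception):
    """Stand-in for the gt4py DSL error base class."""


class MissingOffsetProviderError(DSLError):
    def __init__(self, offset_name: str) -> None:
        # First sentence: the terse diagnosis. Second sentence: a likely fix.
        super().__init__(
            f"Unknown offset '{offset_name}'. "
            f"Did you forget to pass it via the 'offset_provider' argument?"
        )


error = MissingOffsetProviderError("E2C")
print(error)
```

The second sentence would be dropped in an internal error, but for a user-facing `DSLError` it points the user at the most common fix.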

### Docstrings

We generate the API documentation automatically from the docstrings using [Sphinx][sphinx] and some extensions such as [Sphinx-autodoc][sphinx-autodoc] and [Sphinx-napoleon][sphinx-napoleon]. These follow the Google Python Style Guide docstring conventions to automatically format the generated documentation. A complete overview can be found here: [Example Google Style Python Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html#example-google).
16 changes: 8 additions & 8 deletions docs/user/next/QuickstartGuide.md
@@ -102,7 +102,7 @@ You can call field operators from [programs](#Programs), other field operators,
result = gtx.as_field([CellDim, KDim], np.zeros(shape=grid_shape))
add(a, b, out=result, offset_provider={})
print("{} + {} = {} ± {}".format(a_value, b_value, np.average(np.asarray(result)), np.std(np.asarray(result))))
print("{} + {} = {} ± {}".format(a_value, b_value, np.average(result.asnumpy()), np.std(result.asnumpy())))
```

#### Programs
@@ -128,7 +128,7 @@ You can execute the program by simply calling it:
result = gtx.as_field([CellDim, KDim], np.zeros(shape=grid_shape))
run_add(a, b, result, offset_provider={})
print("{} + {} = {} ± {}".format(b_value, (a_value + b_value), np.average(np.asarray(result)), np.std(np.asarray(result))))
print("{} + {} = {} ± {}".format(b_value, (a_value + b_value), np.average(result.asnumpy()), np.std(result.asnumpy())))
```

#### Composing field operators and programs
@@ -256,7 +256,7 @@ def run_nearest_cell_to_edge(cell_values: gtx.Field[[CellDim], float64], out : g
run_nearest_cell_to_edge(cell_values, edge_values, offset_provider={"E2C": E2C_offset_provider})
print("0th adjacent cell's value: {}".format(np.asarray(edge_values)))
print("0th adjacent cell's value: {}".format(edge_values.asnumpy()))
```

Running the above snippet results in the following edge field:
@@ -283,7 +283,7 @@ def run_sum_adjacent_cells(cells : gtx.Field[[CellDim], float64], out : gtx.Fiel
run_sum_adjacent_cells(cell_values, edge_values, offset_provider={"E2C": E2C_offset_provider})
print("sum of adjacent cells: {}".format(np.asarray(edge_values)))
print("sum of adjacent cells: {}".format(edge_values.asnumpy()))
```

For the border edges, the results are unchanged compared to the previous example, but the inner edges now contain the sum of the two adjacent cells:
@@ -317,7 +317,7 @@ def conditional(mask: gtx.Field[[CellDim, KDim], bool], a: gtx.Field[[CellDim, K
return where(mask, a, b)
conditional(mask, a, b, out=result_where, offset_provider={})
print("where return: {}".format(np.asarray(result_where)))
print("where return: {}".format(result_where.asnumpy()))
```

**Tuple implementation:**
@@ -340,7 +340,7 @@ result_1: gtx.Field[[CellDim, KDim], float64], result_2: gtx.Field[[CellDim, KDi
_conditional_tuple(mask, a, b, out=(result_1, result_2))
conditional_tuple(mask, a, b, result_1, result_2, offset_provider={})
print("where tuple return: {}".format((np.asarray(result_1), np.asarray(result_2))))
print("where tuple return: {}".format((result_1.asnumpy(), result_2.asnumpy())))
```

The `where` builtin also allows for nesting of tuples. In this scenario, it will first perform an unrolling:
@@ -375,7 +375,7 @@ def conditional_tuple_nested(
_conditional_tuple_nested(mask, a, b, c, d, out=((result_1, result_2), (result_2, result_1)))
conditional_tuple_nested(mask, a, b, c, d, result_1, result_2, offset_provider={})
print("where nested tuple return: {}".format(((np.asarray(result_1), np.asarray(result_2)), (np.asarray(result_2), np.asarray(result_1)))))
print("where nested tuple return: {}".format(((result_1.asnumpy(), result_2.asnumpy()), (result_2.asnumpy(), result_1.asnumpy()))))
```

#### Implementing the pseudo-laplacian
@@ -447,7 +447,7 @@ run_pseudo_laplacian(cell_values,
result_pseudo_lap,
offset_provider={"E2C": E2C_offset_provider, "C2E": C2E_offset_provider})
print("pseudo-laplacian: {}".format(np.asarray(result_pseudo_lap)))
print("pseudo-laplacian: {}".format(result_pseudo_lap.asnumpy()))
```

To close, here is an example of chaining field operators, which is very simple to do when working with fields. The field operator below executes the pseudo-laplacian and then applies the pseudo-laplacian again to the result, in effect calculating the Laplacian of a Laplacian.
1 change: 1 addition & 0 deletions pyproject.toml
@@ -342,6 +342,7 @@ markers = [
'uses_reduction_with_only_sparse_fields: tests that require backend support for reduction with only sparse fields',
'uses_scan_in_field_operator: tests that require backend support for scan in field operator',
'uses_sparse_fields: tests that require backend support for sparse fields',
'uses_sparse_fields_as_output: tests that require backend support for writing sparse fields',
'uses_strided_neighbor_offset: tests that require backend support for strided neighbor offset',
'uses_tuple_args: tests that require backend support for tuple arguments',
'uses_tuple_returns: tests that require backend support for tuple results',
22 changes: 15 additions & 7 deletions src/gt4py/_core/definitions.py
@@ -73,17 +73,23 @@

BoolScalar: TypeAlias = Union[bool_, bool]
BoolT = TypeVar("BoolT", bound=BoolScalar)
BOOL_TYPES: Final[Tuple[type, ...]] = cast(Tuple[type, ...], BoolScalar.__args__) # type: ignore[attr-defined]
BOOL_TYPES: Final[Tuple[type, ...]] = cast(
Tuple[type, ...], BoolScalar.__args__ # type: ignore[attr-defined]
)


IntScalar: TypeAlias = Union[int8, int16, int32, int64, int]
IntT = TypeVar("IntT", bound=IntScalar)
INT_TYPES: Final[Tuple[type, ...]] = cast(Tuple[type, ...], IntScalar.__args__) # type: ignore[attr-defined]
INT_TYPES: Final[Tuple[type, ...]] = cast(
Tuple[type, ...], IntScalar.__args__ # type: ignore[attr-defined]
)


UnsignedIntScalar: TypeAlias = Union[uint8, uint16, uint32, uint64]
UnsignedIntT = TypeVar("UnsignedIntT", bound=UnsignedIntScalar)
UINT_TYPES: Final[Tuple[type, ...]] = cast(Tuple[type, ...], UnsignedIntScalar.__args__) # type: ignore[attr-defined]
UINT_TYPES: Final[Tuple[type, ...]] = cast(
Tuple[type, ...], UnsignedIntScalar.__args__ # type: ignore[attr-defined]
)


IntegralScalar: TypeAlias = Union[IntScalar, UnsignedIntScalar]
@@ -93,7 +99,9 @@

FloatingScalar: TypeAlias = Union[float32, float64, float]
FloatingT = TypeVar("FloatingT", bound=FloatingScalar)
FLOAT_TYPES: Final[Tuple[type, ...]] = cast(Tuple[type, ...], FloatingScalar.__args__) # type: ignore[attr-defined]
FLOAT_TYPES: Final[Tuple[type, ...]] = cast(
Tuple[type, ...], FloatingScalar.__args__ # type: ignore[attr-defined]
)
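As an aside, the `cast` plus `# type: ignore` pattern around `__args__` in the hunk above can often be avoided with the public `typing.get_args` helper, which returns the same tuple of runtime types. A sketch with a stand-in alias (the real `FloatingScalar` also includes the numpy `float32`/`float64` types):

```python
from typing import Final, Tuple, Union, get_args

# Stand-in alias for illustration; not the actual gt4py scalar types.
FloatingScalar = Union[float, complex]

# get_args is public API, so no cast or 'type: ignore' is needed.
FLOAT_TYPES: Final[Tuple[type, ...]] = get_args(FloatingScalar)

print(FLOAT_TYPES)  # (<class 'float'>, <class 'complex'>)
assert isinstance(1.5, FLOAT_TYPES)
```

Whether this applies here may depend on how mypy narrows the alias in the gt4py codebase; the `cast` form shown in the diff makes the intended tuple type explicit.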


#: Type alias for all scalar types supported by GT4Py
@@ -195,7 +203,7 @@ def dtype_kind(sc_type: Type[ScalarT]) -> DTypeKind:
if issubclass(sc_type, numbers.Complex):
return DTypeKind.COMPLEX

raise TypeError("Unknown scalar type kind")
raise TypeError("Unknown scalar type kind.")


@dataclasses.dataclass(frozen=True)
@@ -491,10 +499,10 @@ def __rtruediv__(self, other: Any) -> NDArrayObject:
def __pow__(self, other: NDArrayObject | Scalar) -> NDArrayObject:
...

def __eq__(self, other: NDArrayObject | Scalar) -> NDArrayObject: # type: ignore[override] # mypy want to return `bool`
def __eq__(self, other: NDArrayObject | Scalar) -> NDArrayObject: # type: ignore[override] # mypy wants to return `bool`
...

def __ne__(self, other: NDArrayObject | Scalar) -> NDArrayObject: # type: ignore[override] # mypy want to return `bool`
def __ne__(self, other: NDArrayObject | Scalar) -> NDArrayObject: # type: ignore[override] # mypy wants to return `bool`
...

def __gt__(self, other: NDArrayObject | Scalar) -> NDArrayObject: # type: ignore[misc] # Forward operator is not callable
8 changes: 5 additions & 3 deletions src/gt4py/next/allocators.py
@@ -142,7 +142,9 @@ def get_allocator(
elif not strict or is_field_allocator(default):
return default
else:
raise TypeError(f"Object {obj} is neither a field allocator nor a field allocator factory")
raise TypeError(
f"Object '{obj}' is neither a field allocator nor a field allocator factory."
)


@dataclasses.dataclass(frozen=True)
@@ -331,15 +333,15 @@ def allocate(
"""
if device is None and allocator is None:
raise ValueError("No 'device' or 'allocator' specified")
raise ValueError("No 'device' or 'allocator' specified.")
actual_allocator = get_allocator(allocator)
if actual_allocator is None:
assert device is not None # for mypy
actual_allocator = device_allocators[device.device_type]
elif device is None:
device = core_defs.Device(actual_allocator.__gt_device_type__, 0)
elif device.device_type != actual_allocator.__gt_device_type__:
raise ValueError(f"Device {device} and allocator {actual_allocator} are incompatible")
raise ValueError(f"Device '{device}' and allocator '{actual_allocator}' are incompatible.")

return actual_allocator.__gt_allocate__(
domain=common.domain(domain),
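The device/allocator resolution logic in the `allocate` hunk above can be sketched in isolation. Everything here is a hypothetical stand-in (the `Device`, `Allocator`, and `resolve` names are invented for illustration, not the gt4py API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical stand-ins for gt4py's device and allocator types.
@dataclass(frozen=True)
class Device:
    device_type: str
    device_id: int

@dataclass(frozen=True)
class Allocator:
    device_type: str

device_allocators = {"cpu": Allocator("cpu")}

def resolve(
    device: Optional[Device], allocator: Optional[Allocator]
) -> Tuple[Device, Allocator]:
    # Mirrors the diff: at least one of device/allocator must be given,
    # a missing one is derived from the other, and an explicit pair
    # must agree on the device type.
    if device is None and allocator is None:
        raise ValueError("No 'device' or 'allocator' specified.")
    if allocator is None:
        assert device is not None
        allocator = device_allocators[device.device_type]
    elif device is None:
        device = Device(allocator.device_type, 0)
    elif device.device_type != allocator.device_type:
        raise ValueError(
            f"Device '{device}' and allocator '{allocator}' are incompatible."
        )
    return device, allocator

dev, alloc = resolve(Device("cpu", 0), None)
```

Note how the error messages in the sketch follow the new guideline from this commit: quoted interpolated objects and a trailing period.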
