diff --git a/HISTORY.rst b/HISTORY.rst
index fbb9516d..457019e1 100644
--- a/HISTORY.rst
+++ b/HISTORY.rst
@@ -2,6 +2,30 @@
History
=======
+0.33.0 (2024-12-17)
+-------------------
+
+* Introduce ``v1`` opt-in, providing a more user-friendly experience with significant performance improvements for de-serialization 🎉
+* Add models for ``v1``, imported from ``dataclass_wizard.v1``:
+ * :func:`Alias`
+ * :func:`AliasPath`
+* Add enums for ``v1``, imported from ``dataclass_wizard.v1.enums``:
+ * :class:`KeyCase`
+ * :class:`KeyAction`
+* Add ``Meta`` settings for ``v1``:
+ * ``v1`` – Enable opt-in for the "experimental" major release ``v1`` feature.
+ * ``v1_debug`` – Replaces the deprecated ``debug_enabled`` Meta setting, which will be removed in ``v1``.
+ * ``v1_key_case`` – Specifies the letter case used for matching JSON keys when mapping them to dataclass fields.
+ * ``v1_field_to_alias`` – Custom mapping of dataclass fields to their JSON aliases (keys) for de/serialization.
+ * ``v1_on_unknown_key`` – Defines the action to take when an unknown JSON key is encountered during :meth:`from_dict` or :meth:`from_json` calls.
+ * ``v1_unsafe_parse_dataclass_in_union`` – Unsafe option: Enables parsing of dataclasses in unions without requiring the presence of a :attr:`tag_key`.
+* Require the ``typing-extensions`` library for Python versions up to 3.11 (its main use on Python 3.11 is ``ReadOnly`` for ``TypedDict``).
+* Phase out the ``UnknownJSONKey`` exception class in favor of ``UnknownKeysError``, since ``v1`` now reports *all* unknown keys in the JSON input (not just the first one!).
+* Update benchmarks:
+ * Add benchmark for ``CatchAll``.
+ * Move benchmark dependencies to ``requirements-bench.txt``.
+* Add new test cases.
+
0.32.1 (2024-12-04)
-------------------
diff --git a/README.rst b/README.rst
index 863b2005..11bb974f 100644
--- a/README.rst
+++ b/README.rst
@@ -2,37 +2,30 @@
Dataclass Wizard
================
-Full documentation is available at `Read The Docs`_. (`Installation`_)
-
-.. image:: https://img.shields.io/pypi/v/dataclass-wizard.svg
- :target: https://pypi.org/project/dataclass-wizard
-
-.. image:: https://img.shields.io/conda/vn/conda-forge/dataclass-wizard.svg
- :target: https://anaconda.org/conda-forge/dataclass-wizard
-
-.. image:: https://img.shields.io/pypi/pyversions/dataclass-wizard.svg
- :target: https://pypi.org/project/dataclass-wizard
+Release v\ |version| | 📚 Full docs on `Read the Docs`_ (`Installation`_).
.. image:: https://github.com/rnag/dataclass-wizard/actions/workflows/dev.yml/badge.svg
- :target: https://github.com/rnag/dataclass-wizard/actions/workflows/dev.yml
-
-.. image:: https://readthedocs.org/projects/dataclass-wizard/badge/?version=latest
- :target: https://dataclass-wizard.readthedocs.io/en/latest/?version=latest
- :alt: Documentation Status
+ :target: https://github.com/rnag/dataclass-wizard/actions/workflows/dev.yml
+ :alt: CI Status
+.. image:: https://img.shields.io/pypi/pyversions/dataclass-wizard.svg
+ :target: https://pypi.org/project/dataclass-wizard
+ :alt: Supported Python Versions
-.. image:: https://pyup.io/repos/github/rnag/dataclass-wizard/shield.svg
- :target: https://pyup.io/repos/github/rnag/dataclass-wizard/
- :alt: Updates
-
+.. image:: https://img.shields.io/pypi/l/dataclass-wizard.svg
+ :target: https://pypi.org/project/dataclass-wizard/
+ :alt: License
+.. image:: https://static.pepy.tech/badge/dataclass-wizard/month
+ :target: https://pepy.tech/project/dataclass-wizard
+ :alt: Monthly Downloads
-**Dataclass Wizard** offers simple, elegant, *wizarding* 🪄 tools for
-interacting with Python's ``dataclasses``.
+**Dataclass Wizard** 🪄
+Simple, elegant *wizarding* tools for Python's ``dataclasses``.
- It excels at ⚡️ lightning-fast de/serialization, effortlessly
- converting dataclass instances to/from JSON -- perfect for
- *nested dataclass* models!
+Lightning-fast ⚡, pure Python, and lightweight – effortlessly
+convert dataclass instances to/from JSON, perfect
+for complex, *nested dataclass* models!
-------------------
@@ -71,63 +64,135 @@ interacting with Python's ``dataclasses``.
:local:
:backlinks: none
+``v1`` Opt-In 🚀
+----------------
+
+Early access to **V1** is available! To opt in, simply enable ``v1=True`` in the ``Meta`` settings:
+
+.. code-block:: python3
+
+ from dataclasses import dataclass
+ from dataclass_wizard import JSONPyWizard
+ from dataclass_wizard.v1 import Alias
+
+ @dataclass
+ class A(JSONPyWizard):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+
+ my_str: str
+ version_info: float = Alias(load='v-info')
+
+ # Alternatively, for simple dataclasses that don't subclass `JSONPyWizard`:
+ # LoadMeta(v1=True).bind_to(A)
+
+ a = A.from_dict({'my_str': 'test', 'v-info': '1.0'})
+ assert a.version_info == 1.0
+ assert a.to_dict() == {'my_str': 'test', 'version_info': 1.0}
+
+For more information, see the `Field Guide to V1 Opt-in`_.
+
+.. _`Field Guide to V1 Opt-in`: https://github.com/rnag/dataclass-wizard/wiki/Field-Guide-to-V1-Opt%E2%80%90in
+
+Performance Improvements
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The upcoming **V1** release brings significant performance improvements in de/serialization. Personal benchmarks show that **V1** can make Dataclass Wizard
+approximately **2x faster** than ``pydantic``!
+
+While some features are still being refined and not yet fully supported, **v1** positions Dataclass Wizard alongside other high-performance serialization libraries in Python.
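+
+To experiment with the optimized ``v1`` load path on an existing dataclass, without subclassing ``JSONWizard``, you can bind the ``Meta`` settings with ``LoadMeta`` (a minimal sketch; ``MyClass`` is illustrative):
+
+.. code-block:: python3
+
+    from dataclasses import dataclass
+
+    from dataclass_wizard import LoadMeta, fromdict
+
+    @dataclass
+    class MyClass:
+        my_str: str
+
+    # Opt in to the experimental v1 (optimized) load path
+    LoadMeta(v1=True).bind_to(MyClass)
+
+    obj = fromdict(MyClass, {'my_str': 'hello'})
+    assert obj.my_str == 'hello'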
+
+Why Use Dataclass Wizard?
+-------------------------
+
+Effortlessly handle complex data with one of the *fastest* and most *lightweight* libraries available! Perfect for APIs, JSON wrangling, and more.
+
+- 🚀 **Blazing Fast** – One of the fastest libraries out there!
+- 🪶 **Lightweight** – Pure Python, minimal dependencies
+- 👶 Easy Setup – Intuitive, hassle-free
+- ☑️ **Battle-Tested** – Proven reliability with solid test coverage
+- ⚙️ Highly Customizable – Endless de/serialization options to fit your needs
+- 🎉 Built-in Support – JSON, YAML, TOML, and environment/settings management
+- 📦 **Full Python Type Support** – Powered by type hints with full support for native types and ``typing-extensions``
+- 📝 Auto-Generate Schemas – JSON to Dataclass made easy
+
+Key Features
+------------
+
+- 🔄 Flexible (de)serialization – Marshal dataclasses to/from JSON, TOML, YAML, or ``dict`` with ease (see the quick sketch below).
+- 🌿 Environment Magic – Map env vars and ``.env`` files to strongly-typed class fields effortlessly.
+- 🧑‍💻 Field Properties Made Simple – Add properties with default values to your dataclasses.
+- 🧙‍♂️ JSON-to-Dataclass Wizardry – Auto-generate a dataclass schema from any JSON file or string instantly.
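+
+As a quick taste of the first bullet, here is a minimal JSON round trip (a sketch; the ``Wizard`` class below is illustrative):
+
+.. code-block:: python3
+
+    from dataclasses import dataclass
+
+    from dataclass_wizard import JSONWizard
+
+    @dataclass
+    class Wizard(JSONWizard):
+        name: str
+        level: int
+
+    w = Wizard.from_dict({'name': 'Merlin', 'level': 10})
+    print(w.to_json())
+    #> {"name": "Merlin", "level": 10}
+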
Installation
------------
-Dataclass Wizard is available on `PyPI`_. Install with ``pip``:
+*Dataclass Wizard* is available on `PyPI`_. You can install it with ``pip``:
.. code-block:: console
$ pip install dataclass-wizard
-Also available on `conda`_ via `conda-forge`_. Install with ``conda``:
+Also available on `conda`_ via `conda-forge`_. To install via ``conda``:
.. code-block:: console
$ conda install dataclass-wizard -c conda-forge
-This library supports **Python 3.9** or higher.
+This library supports **Python 3.9+**. Support for Python 3.6 – 3.8 was
+available in earlier releases but is no longer maintained, as those
+versions no longer receive security updates.
+
+For convenience, the table below outlines the last compatible version
+of *Dataclass Wizard* for unsupported Python versions (3.6 – 3.8):
+
+.. list-table::
+ :header-rows: 1
+ :widths: 15 35 15
+
+ * - Python Version
+ - Last Version of ``dataclass-wizard``
+ - Python EOL
+ * - 3.6
+ - 0.26.1_
+ - 2021-12-23
+ * - 3.7
+ - 0.26.1_
+ - 2023-06-27
+ * - 3.8
+ - 0.26.1_
+ - 2024-10-07
+
+.. _0.26.1: https://pypi.org/project/dataclass-wizard/0.26.1/
.. _PyPI: https://pypi.org/project/dataclass-wizard/
.. _conda: https://anaconda.org/conda-forge/dataclass-wizard
.. _conda-forge: https://conda-forge.org/
+.. _Changelog: https://dataclass-wizard.readthedocs.io/en/latest/history.html
-Features
---------
+See the package on `PyPI`_ and the `Changelog`_ in the docs for the latest version details.
-Unlock the full potential of your `dataclasses`_ with these key features:
-
-- *Flexible (de)serialization*: Marshal dataclasses to/from JSON, TOML, YAML, or ``dict`` with ease.
-- *Environment magic*: Map env vars and ``dotenv`` files to strongly-typed class fields effortlessly.
-- *Field properties made simple*: Add properties with default values to your dataclasses.
-- *JSON-to-Dataclass wizardry*: Auto-generate a dataclass schema from any JSON file or string instantly.
+Wizard Mixins ✨
+----------------
-Wizard Mixins
--------------
+In addition to ``JSONWizard``, these `Mixin`_ classes simplify common tasks and make your data handling *spellbindingly* efficient:
-In addition to ``JSONWizard``, these handy Mixin_ classes simplify your workflow:
+- 🪄 `EnvWizard`_ – Load environment variables and ``.env`` files into typed schemas, even supporting secret files (keys as file names).
+- 🎩 `JSONPyWizard`_ – A helper for ``JSONWizard`` that preserves your keys as-is (no camelCase changes).
+- 🔮 `JSONListWizard`_ – Extend ``JSONWizard`` to convert lists into `Container`_ objects.
+- 💼 `JSONFileWizard`_ – Convert dataclass instances to/from local JSON files with ease.
+- 🍳 `TOMLWizard`_ – Map your dataclasses to/from TOML format.
+- 🧙‍♀️ `YAMLWizard`_ – Convert between YAML and dataclass instances using ``PyYAML`` (see the sketch below).
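+
+For instance, a minimal ``YAMLWizard`` round trip might look like this (a sketch; the ``Config`` class is illustrative, and ``PyYAML`` must be installed):
+
+.. code-block:: python3
+
+    from dataclasses import dataclass
+
+    from dataclass_wizard import YAMLWizard
+
+    @dataclass
+    class Config(YAMLWizard):
+        name: str
+        retries: int = 3
+
+    c = Config.from_yaml('name: app\nretries: 5')
+    assert c == Config(name='app', retries=5)
+
+    print(c.to_yaml())
+    #> name: app
+    #> retries: 5
+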
-* `EnvWizard`_ – Seamlessly load env variables and ``.env`` files into typed schemas. Supports secret files (file names as keys).
-* `JSONPyWizard`_ – A ``JSONWizard`` helper to skip *camelCase* and preserve keys as-is.
-* `JSONListWizard`_ – Extends ``JSONWizard`` to return `Container`_ objects instead of *lists* when possible.
-* `JSONFileWizard`_ – Effortlessly convert dataclass instances from/to JSON files on your local drive.
-* `TOMLWizard`_ – Easily map dataclass instances to/from TOML format.
-* `YAMLWizard`_ – Instantly convert dataclass instances to/from YAML, using the default ``PyYAML`` parser.
+Supported Types 🧑‍💻
+--------------------
-Supported Types
----------------
+*Dataclass Wizard* supports:
-The Dataclass Wizard library natively supports standard Python
-collections like ``list``, ``dict``, and ``set``, along with
-popular `typing`_ module Generics such as ``Union`` and ``Any``.
-Additionally, it handles commonly used types like ``Enum``,
-``defaultdict``, and date/time objects (e.g., ``datetime``)
-with ease.
+- 📋 **Collections**: Handle ``list``, ``dict``, and ``set`` effortlessly.
+- 🔢 **Typing Generics**: Manage ``Union``, ``Any``, and other types from the `typing`_ module.
+- 📅 **Advanced Types**: Work with ``Enum``, ``defaultdict``, and ``datetime`` with ease.
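+
+A quick sketch showing all three in action (the ``Event`` model and its fields are illustrative):
+
+.. code-block:: python3
+
+    from collections import defaultdict
+    from dataclasses import dataclass, field
+    from datetime import datetime
+    from enum import Enum
+
+    from dataclass_wizard import JSONWizard
+
+    class Color(Enum):
+        RED = 'red'
+        BLUE = 'blue'
+
+    @dataclass
+    class Event(JSONWizard):
+        when: datetime    # parsed from an ISO 8601 string
+        color: Color      # matched by Enum value
+        tags: defaultdict[str, list[str]] = field(
+            default_factory=lambda: defaultdict(list))
+
+    e = Event.from_dict({'when': '2024-12-17T10:00:00',
+                         'color': 'red',
+                         'tags': {'a': ['x']}})
+    assert e.color is Color.RED
+    assert e.when == datetime(2024, 12, 17, 10)
+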
-For a detailed list of supported types and insights into the
-load/dump process for special types, visit the
-`Supported Types`_ section of the docs.
+For more info, check out the `Supported Types`_ section in the docs for detailed insights into each type and the load/dump process!
Usage and Examples
------------------
@@ -828,18 +893,15 @@ Dataclasses in ``Union`` Types
------------------------------
The ``dataclass-wizard`` library fully supports declaring dataclass models in
-`Union`_ types as field annotations, such as ``list[Wizard | Archer | Barbarian]``.
+`Union`_ types, such as ``list[Wizard | Archer | Barbarian]``.
-As of *v0.19.0*, there is added support to *auto-generate* tags for a dataclass model
--- based on the class name -- as well as to specify a custom *tag key* that will be
-present in the JSON object, which defaults to a special ``__tag__`` key otherwise.
-These two options are controlled by the ``auto_assign_tags`` and ``tag_key``
-attributes (respectively) in the ``Meta`` config.
+Starting from *v0.19.0*, the library introduces two key features:
+
+- **Auto-generated tags** for dataclass models (based on class names).
+- A customizable **tag key** (default: ``__tag__``) that identifies the model in JSON.
-To illustrate a specific example, a JSON object such as
-``{"oneOf": {"type": "A", ...}, ...}`` will now automatically map to a dataclass
-instance ``A``, provided that the ``tag_key`` is correctly set to "type", and
-the field ``one_of`` is annotated as a Union type in the ``A | B`` syntax.
+These options are controlled by the ``auto_assign_tags`` and ``tag_key`` attributes in the ``Meta`` config.
+
+For example, if a JSON object looks like ``{"type": "A", ...}``, you can set ``tag_key = "type"`` to automatically deserialize it into the appropriate class, like ``A``.
Let's start out with an example, which aims to demonstrate the simplest usage of
dataclasses in ``Union`` types. For more info, check out the
@@ -850,7 +912,6 @@ dataclasses in ``Union`` types. For more info, check out the
from __future__ import annotations
from dataclasses import dataclass
-
from dataclass_wizard import JSONWizard
@@ -890,27 +951,71 @@ dataclasses in ``Union`` types. For more info, check out the
]
}
-
c = Container.from_dict(data)
- print(f'{c!r}')
-
- # True
- assert c == Container(objects=[A(my_int=42, my_bool=False),
- C(my_str='hello world'),
- B(my_int=123, my_bool=True),
- A(my_int=321, my_bool=True)])
+ print(repr(c))
+ # Output:
+ # Container(objects=[A(my_int=42, my_bool=False),
+ # C(my_str='hello world'),
+ # B(my_int=123, my_bool=True),
+ # A(my_int=321, my_bool=True)])
print(c.to_dict())
- # prints the following on a single line:
- # {'objects': [{'myInt': 42, 'myBool': False, 'type': 'A'},
- # {'myStr': 'hello world', 'type': 'C'},
- # {'myInt': 123, 'myBool': True, 'type': 'B'},
- # {'myInt': 321, 'myBool': True, 'type': 'A'}]}
# True
assert c == c.from_json(c.to_json())
+Supercharged ``Union`` Parsing
+------------------------------
+
+**What about untagged dataclasses in** ``Union`` **types or** ``|`` **syntax?** With the major release **V1** opt-in, ``dataclass-wizard`` supercharges *Union* parsing, making it intuitive and flexible, even without tags.
+
+This is especially useful for collections like ``list[Wizard]`` or when tags (discriminators) are not feasible.
+
+To enable this feature, opt in to **v1** using the ``Meta`` settings. For details, see the `Field Guide to V1 Opt-in`_.
+
+.. code-block:: python3
+
+ from __future__ import annotations  # Remove in Python 3.10+ (only if `MoreDetails` is defined before `MyClass`)
+
+ from dataclasses import dataclass
+ from typing import Literal
+
+ from dataclass_wizard import JSONWizard
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True # Enable v1 opt-in
+ v1_unsafe_parse_dataclass_in_union = True
+
+ literal_or_float: Literal['Auto'] | float
+ entry: int | MoreDetails
+ collection: list[MoreDetails | int]
+
+ @dataclass
+ class MoreDetails:
+ arg: str
+
+ # OK: Union types work seamlessly
+ c = MyClass.from_dict({
+ "literal_or_float": 1.23,
+ "entry": 123,
+ "collection": [{"arg": "test"}]
+ })
+ print(repr(c))
+ #> MyClass(literal_or_float=1.23, entry=123, collection=[MoreDetails(arg='test')])
+
+ # OK: Handles primitive and dataclass parsing
+ c = MyClass.from_dict({
+ "literal_or_float": "Auto",
+ "entry": {"arg": "example"},
+ "collection": [123]
+ })
+ print(repr(c))
+ #> MyClass(literal_or_float='Auto', entry=MoreDetails(arg='example'), collection=[123])
+
Conditional Field Skipping
--------------------------
@@ -1236,6 +1341,13 @@ refer to the `Using Field Properties`_ section in the documentation.
What's New in v1.0
------------------
+.. admonition:: v1 Opt-in Now Available
+
+ Early opt-in for **v1** is now available with enhanced features, including intuitive ``Union`` parsing and optimized performance. To enable this,
+ set ``v1=True`` in your ``Meta`` settings.
+
+ For more details and migration guidance, see the `Field Guide to V1 Opt-in`_.
+
.. warning::
- **Default Key Transformation Update**
@@ -1243,9 +1355,9 @@ What's New in v1.0
Starting with ``v1.0.0``, the default key transformation for JSON serialization
will change to keep keys *as-is* instead of converting them to `camelCase`.
- *New Default Behavior*: ``key_transform='NONE'`` will be the standard setting.
+ **New Default Behavior**: ``key_transform='NONE'`` will be the standard setting.
- *How to Prepare*: You can enforce this future behavior right now by using the ``JSONPyWizard`` helper:
+ **How to Prepare**: You can enforce this future behavior right now by using the ``JSONPyWizard`` helper:
.. code-block:: python3
@@ -1259,7 +1371,6 @@ What's New in v1.0
print(MyModel(my_field="value").to_dict())
# Output: {'my_field': 'value'}
-
- **Float to Int Conversion Change**
Starting in ``v1.0``, floats or float strings with fractional
@@ -1268,8 +1379,7 @@ What's New in v1.0
However, floats with no fractional parts (e.g., ``3.0``
or ``"3.0"``) will still convert to integers as before.
- *How to Prepare*: To ensure compatibility with the new behavior:
-
+ **How to Prepare**: To ensure compatibility with the new behavior:
+ - Use ``float`` annotations for fields that may include fractional values (see the sketch below).
- Review your data and avoid passing fractional values (e.g., ``123.4``) to fields annotated as ``int``.
- Update tests or logic that rely on the current rounding behavior.
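+
+A minimal sketch of the first recommendation above (the ``Price`` class is illustrative):
+
+.. code-block:: python3
+
+    from dataclasses import dataclass
+
+    from dataclass_wizard import JSONWizard
+
+    @dataclass
+    class Price(JSONWizard):
+        amount: float  # keeps fractional input such as '123.4' intact
+
+    p = Price.from_dict({'amount': '123.4'})
+    assert p.amount == 123.4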
diff --git a/benchmarks/catch_all.png b/benchmarks/catch_all.png
new file mode 100644
index 00000000..7f438796
Binary files /dev/null and b/benchmarks/catch_all.png differ
diff --git a/benchmarks/catch_all.py b/benchmarks/catch_all.py
new file mode 100644
index 00000000..b869ec2b
--- /dev/null
+++ b/benchmarks/catch_all.py
@@ -0,0 +1,105 @@
+import logging
+from dataclasses import dataclass
+from typing import Any
+
+import pytest
+
+from dataclasses_json import (dataclass_json, Undefined, CatchAll as CatchAllDJ)
+from dataclass_wizard import (JSONWizard, CatchAll as CatchAllWizard)
+
+
+log = logging.getLogger(__name__)
+
+
+@dataclass()
+class DontCareAPIDump:
+ endpoint: str
+ data: dict[str, Any]
+
+
+@dataclass_json(undefined=Undefined.INCLUDE)
+@dataclass()
+class DontCareAPIDumpDJ(DontCareAPIDump):
+ unknown_things: CatchAllDJ
+
+
+@dataclass()
+class DontCareAPIDumpWizard(DontCareAPIDump, JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ unknown_things: CatchAllWizard
+
+
+# Fixtures for test data
+@pytest.fixture(scope='session')
+def data():
+ return {"endpoint": "some_api_endpoint",
+ "data": {"foo": 1, "bar": "2"},
+ "undefined_field_name": [1, 2, 3]}
+
+
+@pytest.fixture(scope='session')
+def data_no_extras():
+ return {"endpoint": "some_api_endpoint",
+ "data": {"foo": 1, "bar": "2"}}
+
+
+# Benchmark for deserialization (from_dict)
+@pytest.mark.benchmark(group="deserialization")
+def test_deserialize_wizard(benchmark, data):
+ benchmark(lambda: DontCareAPIDumpWizard.from_dict(data))
+
+
+@pytest.mark.benchmark(group="deserialization")
+def test_deserialize_json(benchmark, data):
+ benchmark(lambda: DontCareAPIDumpDJ.from_dict(data))
+
+
+# Benchmark for deserialization with no extra data
+@pytest.mark.benchmark(group="deserialization_no_extra_data")
+def test_deserialize_wizard_no_extras(benchmark, data_no_extras):
+ benchmark(lambda: DontCareAPIDumpWizard.from_dict(data_no_extras))
+
+
+@pytest.mark.benchmark(group="deserialization_no_extra_data")
+def test_deserialize_json_no_extras(benchmark, data_no_extras):
+ benchmark(lambda: DontCareAPIDumpDJ.from_dict(data_no_extras))
+
+
+# Benchmark for serialization (to_dict)
+@pytest.mark.benchmark(group="serialization")
+def test_serialize_wizard(benchmark, data):
+ dump1 = DontCareAPIDumpWizard.from_dict(data)
+ benchmark(lambda: dump1.to_dict())
+
+
+@pytest.mark.benchmark(group="serialization")
+def test_serialize_json(benchmark, data):
+ dump2 = DontCareAPIDumpDJ.from_dict(data)
+ benchmark(lambda: dump2.to_dict())
+
+
+def test_validate(data, data_no_extras):
+ dump1 = DontCareAPIDumpDJ.from_dict(data_no_extras) # DontCareAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'})
+ dump2 = DontCareAPIDumpWizard.from_dict(data_no_extras) # DontCareAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'})
+
+ assert dump1.endpoint == dump2.endpoint
+ assert dump1.data == dump2.data
+ assert dump1.unknown_things == dump2.unknown_things == {}
+
+ expected = {'endpoint': 'some_api_endpoint', 'data': {'foo': 1, 'bar': '2'}}
+
+ assert dump1.to_dict() == dump2.to_dict() == expected
+
+ dump1 = DontCareAPIDumpDJ.from_dict(data) # DontCareAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'})
+ dump2 = DontCareAPIDumpWizard.from_dict(data) # DontCareAPIDump(endpoint='some_api_endpoint', data={'foo': 1, 'bar': '2'})
+
+ assert dump1.endpoint == dump2.endpoint
+ assert dump1.data == dump2.data
+ assert dump1.unknown_things == dump2.unknown_things
+
+ expected = {'endpoint': 'some_api_endpoint', 'data': {'foo': 1, 'bar': '2'}, 'undefined_field_name': [1, 2, 3]}
+
+ assert dump1.to_dict() == dump2.to_dict() == expected
diff --git a/benchmarks/complex.py b/benchmarks/complex.py
index 199823a7..1295dbef 100644
--- a/benchmarks/complex.py
+++ b/benchmarks/complex.py
@@ -16,7 +16,7 @@
import attr
import mashumaro
-from dataclass_wizard import JSONWizard
+from dataclass_wizard import JSONWizard, LoadMeta
from dataclass_wizard.class_helper import create_new_class
from dataclass_wizard.utils.string_conv import to_snake_case
from dataclass_wizard.utils.type_conv import as_datetime
@@ -135,6 +135,10 @@ class PersonDJ:
attr_dict=vars(MyClass).copy())
+# Enable experimental `v1` mode for optimized de/serialization
+LoadMeta(v1=True).bind_to(MyClassWizard)
+
+
@pytest.fixture(scope='session')
def data():
return {
@@ -214,14 +218,14 @@ def test_load(request, data, data_2, data_dacite, n):
"""
[ RESULTS ON MAC OS X ]
- benchmarks.complex.complex - [INFO] dataclass-wizard 0.800521
- benchmarks.complex.complex - [INFO] dataclass-factory 0.827150
- benchmarks.complex.complex - [INFO] dataclasses-json 37.087781
- benchmarks.complex.complex - [INFO] dacite 9.421210
- benchmarks.complex.complex - [INFO] mashumaro 0.608496
- benchmarks.complex.complex - [INFO] pydantic 1.039472
- benchmarks.complex.complex - [INFO] jsons 39.677698
- benchmarks.complex.complex - [INFO] jsons (strict) 41.592585
+ benchmarks.complex.complex - [INFO] dataclass-wizard 0.373847
+ benchmarks.complex.complex - [INFO] dataclass-factory 0.777164
+ benchmarks.complex.complex - [INFO] dataclasses-json 28.177022
+ benchmarks.complex.complex - [INFO] dacite 6.619898
+ benchmarks.complex.complex - [INFO] mashumaro 0.351623
+ benchmarks.complex.complex - [INFO] pydantic 0.563395
+ benchmarks.complex.complex - [INFO] jsons 30.564242
+ benchmarks.complex.complex - [INFO] jsons (strict) 35.122489
"""
g = globals().copy()
g.update(locals())
diff --git a/benchmarks/nested.py b/benchmarks/nested.py
index 42dfefea..6041e9ea 100644
--- a/benchmarks/nested.py
+++ b/benchmarks/nested.py
@@ -13,7 +13,7 @@
from pydantic import BaseModel
import mashumaro
-from dataclass_wizard import JSONWizard
+from dataclass_wizard import JSONWizard, LoadMeta
from dataclass_wizard.class_helper import create_new_class
from dataclass_wizard.utils.string_conv import to_snake_case
from dataclass_wizard.utils.type_conv import as_datetime, as_date
@@ -141,6 +141,8 @@ class Data2DJ:
JsonsType = TypeVar('JsonsType', Data1, JsonSerializable)
# Model for `dataclasses-json`
DJType = TypeVar('DJType', Data1, DataClassJsonMixin)
+# Model for `mashumaro`
+MashumaroType = TypeVar('MashumaroType', Data1, mashumaro.DataClassDictMixin)
# Factory for `dataclass-factory`
factory = dataclass_factory.Factory()
@@ -150,12 +152,19 @@ class Data2DJ:
MyClassJsons: JsonsType = create_new_class(
Data1, (Data1, JsonSerializable), 'Jsons',
attr_dict=vars(Data1).copy())
+MyClassMashumaroModel: MashumaroType = create_new_class(
+ Data1, (Data1, mashumaro.DataClassDictMixin), 'Mashumaro',
+ attr_dict=vars(Data1).copy())
# Pydantic Model for Benchmarking
MyClassPydanticModel = MyClassPydantic
# Mashumaro Model for Benchmarking
-MyClassMashumaroModel = MyClassMashumaro
+# MyClassMashumaroModel = MyClassMashumaro
+
+
+# Enable experimental `v1` mode for optimized de/serialization
+LoadMeta(v1=True).bind_to(MyClassWizard)
@pytest.fixture(scope='session')
@@ -205,18 +214,20 @@ def test_load(request, data, n):
"""
[ RESULTS ON MAC OS X ]
- benchmarks.nested.nested - [INFO] dataclass-wizard 0.397123
- benchmarks.nested.nested - [INFO] dataclass-factory 0.418530
- benchmarks.nested.nested - [INFO] dataclasses-json 11.443072
- benchmarks.nested.nested - [INFO] mashumaro 0.158189
- benchmarks.nested.nested - [INFO] pydantic 0.346031
- benchmarks.nested.nested - [INFO] jsons 28.124958
- benchmarks.nested.nested - [INFO] jsons (strict) 28.816675
+ benchmarks.nested.nested - [INFO] dataclass-wizard 0.135700
+ benchmarks.nested.nested - [INFO] dataclass-factory 0.412265
+ benchmarks.nested.nested - [INFO] dataclasses-json 11.448704
+ benchmarks.nested.nested - [INFO] mashumaro 0.150680
+ benchmarks.nested.nested - [INFO] pydantic 0.328947
+ benchmarks.nested.nested - [INFO] jsons 25.052287
+ benchmarks.nested.nested - [INFO] jsons (strict) 43.233567
"""
g = globals().copy()
g.update(locals())
+ MyClassWizard.from_dict(data)
+
log.info('dataclass-wizard %f',
timeit('MyClassWizard.from_dict(data)', globals=g, number=n))
diff --git a/benchmarks/simple.py b/benchmarks/simple.py
index 62409a32..73a5819a 100644
--- a/benchmarks/simple.py
+++ b/benchmarks/simple.py
@@ -13,7 +13,7 @@
import attr
import mashumaro
-from dataclass_wizard import JSONWizard
+from dataclass_wizard import JSONWizard, LoadMeta
from dataclass_wizard.class_helper import create_new_class
from dataclass_wizard.utils.string_conv import to_snake_case
@@ -65,6 +65,10 @@ class MyClassMashumaro(mashumaro.DataClassDictMixin):
MyClassDJ: DJType = create_new_class(MyClass, (MyClass, DataClassJsonMixin), "DJ")
MyClassJsons: JsonsType = create_new_class(MyClass, (MyClass, JsonSerializable), "Jsons")
+# Enable experimental `v1` mode for optimized de/serialization
+LoadMeta(v1=True).bind_to(MyClassWizard)
+
+
@pytest.fixture(scope="session")
def data():
return {
@@ -77,7 +81,7 @@ def test_load(data, n):
"""
[ RESULTS ON MAC OS X ]
- benchmarks.simple.simple - [INFO] dataclass-wizard 0.076336
+ benchmarks.simple.simple - [INFO] dataclass-wizard 0.033917
benchmarks.simple.simple - [INFO] dataclass-factory 0.103837
benchmarks.simple.simple - [INFO] dataclasses-json 3.941902
benchmarks.simple.simple - [INFO] jsons 5.636863
diff --git a/dataclass_wizard/__init__.py b/dataclass_wizard/__init__.py
index 52652220..cddd4a1c 100644
--- a/dataclass_wizard/__init__.py
+++ b/dataclass_wizard/__init__.py
@@ -2,8 +2,8 @@
Dataclass Wizard
~~~~~~~~~~~~~~~~
-Marshal dataclasses to/from JSON and Python dict objects. Support properties
-with initial values. Generate a dataclass schema for JSON input.
+Lightning-fast JSON wizardry for Python dataclasses – effortless
+serialization with no external tools required!
Sample Usage:
@@ -120,7 +120,8 @@
from .bases_meta import LoadMeta, DumpMeta, EnvMeta
from .dumpers import DumpMixin, setup_default_dumper, asdict
-from .loaders import LoadMixin, setup_default_loader, fromlist, fromdict
+from .loaders import LoadMixin, setup_default_loader
+from .loader_selection import fromlist, fromdict
from .models import (env_field, json_field, json_key, path_field, skip_if_field,
KeyPath, Container,
Pattern, DatePattern, TimePattern, DateTimePattern,
diff --git a/dataclass_wizard/__version__.py b/dataclass_wizard/__version__.py
index d6abffd3..a19cced8 100644
--- a/dataclass_wizard/__version__.py
+++ b/dataclass_wizard/__version__.py
@@ -3,9 +3,9 @@
"""
__title__ = 'dataclass-wizard'
-__description__ = ('Effortlessly marshal dataclasses to/from JSON. '
- 'Leverage field properties with default values. '
- 'Generate dataclass schemas from JSON input.')
+
+__description__ = ('Lightning-fast JSON wizardry for Python dataclasses – '
+ 'effortless serialization with no external tools required!')
__url__ = 'https://github.com/rnag/dataclass-wizard'
__version__ = '0.32.1'
__author__ = 'Ritvik Nag'
diff --git a/dataclass_wizard/abstractions.py b/dataclass_wizard/abstractions.py
index 3fc7f5e2..4ac7940b 100644
--- a/dataclass_wizard/abstractions.py
+++ b/dataclass_wizard/abstractions.py
@@ -9,6 +9,7 @@
from .bases import META
from .models import Extras
+from .v1.models import TypeInfo
from .type_def import T, TT
@@ -252,3 +253,217 @@ def get_parser_for_annotation(cls, ann_type,
class AbstractDumper(ABC):
__slots__ = ()
+
+
+class AbstractLoaderGenerator(ABC):
+ """
+ Abstract code generator which defines helper methods to generate the
+ code for deserializing an object `o` of a given annotated type into
+ the corresponding dataclass field during dynamic function construction.
+ """
+ __slots__ = ()
+
+ @staticmethod
+ @abstractmethod
+ def transform_json_field(string: str) -> str:
+ """
+ Transform a JSON field name (which will typically be camel-cased)
+ into the conventional format for a dataclass field name
+ (which will ideally be snake-cased).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def default_load_to(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code for the default load function if no other types match.
+ Generally, this will be a stub load method.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_after_type_check(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load an object after confirming its type.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_str(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a string field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_int(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into an integer field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_float(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a float field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bool(_: str, extras: Extras) -> str:
+ """
+ Generate code to load a value into a boolean field.
+ Adds a helper function `as_bool` to the local context.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bytes(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a bytes field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bytearray(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a bytearray field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_none(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a `None` field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_literal(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to confirm a value is equivalent to one
+ of the provided literals.
+ """
+
+ @classmethod
+ @abstractmethod
+ def load_to_union(cls, tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a `Union[X, Y, ...]` field (one of several possible types).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_enum(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into an Enum field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_uuid(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a UUID field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_iterable(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into an iterable field (list, set, etc.).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_tuple(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a tuple field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_named_tuple(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a named tuple field.
+ """
+
+ @classmethod
+ @abstractmethod
+ def load_to_named_tuple_untyped(cls, tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into an untyped named tuple.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_dict(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a dictionary field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_defaultdict(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a defaultdict field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_typed_dict(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a typed dictionary field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_decimal(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a Decimal field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_datetime(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a datetime field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_time(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a time field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_date(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a date field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_timedelta(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a timedelta field.
+ """
+
+ @staticmethod
+ def load_to_dataclass(tp: TypeInfo, extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to load a value into a `dataclass` type field.
+ """
+
+ @classmethod
+ @abstractmethod
+ def get_string_for_annotation(cls,
+ tp: TypeInfo,
+ extras: Extras) -> 'str | TypeInfo':
+ """
+ Generate code to get the parser (dispatcher) for a given annotation type.
+
+ `base_cls` is the original class object, useful when the annotated
+ type is a :class:`typing.ForwardRef` object.
+ """
diff --git a/dataclass_wizard/abstractions.pyi b/dataclass_wizard/abstractions.pyi
index 4850ae89..4f52743c 100644
--- a/dataclass_wizard/abstractions.pyi
+++ b/dataclass_wizard/abstractions.pyi
@@ -11,7 +11,7 @@ from typing import (
Text, Sequence, Iterable, Generic
)
-from .models import Extras
+from .models import Extras, TypeInfo
from .type_def import (
DefFactory, FrozenKeys, ListOfJSONObject, JSONObject, Encoder,
M, N, T, TT, NT, E, U, DD, LSQ
@@ -422,3 +422,221 @@ class AbstractDumper(ABC):
to subclass from DumpMixin.
"""
...
+
+
+class AbstractLoaderGenerator(ABC):
+ """
+ Abstract code generator which defines helper methods to generate the
+ code for deserializing an object `o` of a given annotated type into
+ the corresponding dataclass field during dynamic function construction.
+ """
+ __slots__ = ()
+
+ @staticmethod
+ @abstractmethod
+ def transform_json_field(string: str) -> str:
+ """
+ Transform a JSON field name (which will typically be camel-cased)
+ into the conventional format for a dataclass field name
+ (which will ideally be snake-cased).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def default_load_to(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code for the default load function if no other types match.
+ Generally, this will be a stub load method.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_after_type_check(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load an object after confirming its type.
+
+ :param tp: The type information (including annotation) of the field as a string.
+ :param extras: Additional context or dependencies for code generation.
+ :raises ParseError: If the object type is not as expected.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_str(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a string field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_int(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into an integer field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_float(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a float field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bool(_: str, extras: Extras) -> str:
+ """
+ Generate code to load a value into a boolean field.
+ Adds a helper function `as_bool` to the local context.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bytes(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a bytes field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_bytearray(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a bytearray field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_none(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a `None` field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_literal(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to confirm a value is equivalent to one
+ of the provided literals.
+ """
+
+ @classmethod
+ @abstractmethod
+ def load_to_union(cls, tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a `Union[X, Y, ...]` field (one of several possible types).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_enum(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into an Enum field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_uuid(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a UUID field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_iterable(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into an iterable field (list, set, etc.).
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_tuple(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a tuple field.
+ """
+
+ @classmethod
+ @abstractmethod
+ def load_to_named_tuple(cls, tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a named tuple field.
+ """
+
+ @classmethod
+ @abstractmethod
+ def load_to_named_tuple_untyped(cls, tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into an untyped named tuple.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_dict(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a dictionary field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_defaultdict(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a defaultdict field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_typed_dict(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a typed dictionary field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_decimal(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a Decimal field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_datetime(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a datetime field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_time(tp: TypeInfo, extras: Extras) -> str:
+ """
+ Generate code to load a value into a time field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_date(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a date field.
+ """
+
+ @staticmethod
+ @abstractmethod
+ def load_to_timedelta(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a timedelta field.
+ """
+
+ @staticmethod
+ def load_to_dataclass(tp: TypeInfo, extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to load a value into a `dataclass` type field.
+ """
+
+ @classmethod
+ @abstractmethod
+ def get_string_for_annotation(cls,
+ tp: TypeInfo,
+ extras: Extras) -> str | TypeInfo:
+ """
+ Generate code to get the parser (dispatcher) for a given annotation type.
+
+ `base_cls` is the original class object, useful when the annotated
+ type is a :class:`typing.ForwardRef` object.
+ """
diff --git a/dataclass_wizard/bases.py b/dataclass_wizard/bases.py
index 542cf9fd..e7c38381 100644
--- a/dataclass_wizard/bases.py
+++ b/dataclass_wizard/bases.py
@@ -1,12 +1,12 @@
from abc import ABCMeta, abstractmethod
-from collections.abc import Sequence
-from typing import Callable, Type, Dict, Optional, ClassVar, Union, TypeVar, Sequence
+from typing import Callable, Type, Dict, Optional, ClassVar, Union, TypeVar
from .constants import TAG
from .decorators import cached_class_property
-from .models import Condition
from .enums import DateTimeTo, LetterCase, LetterCasePriority
+from .models import Condition
from .type_def import FrozenKeys, EnvFileType
+from .v1.enums import KeyAction, KeyCase
# Create a generic variable that can be 'AbstractMeta', or any subclass.
@@ -46,10 +46,12 @@ def __or__(cls: META, other: META) -> META:
# defined on the abstract class. Use `other` instead, which
# *will* be a concrete subclass of `AbstractMeta`.
src = other
+ # noinspection PyTypeChecker
for k in src.fields_to_merge:
if k in other_dict:
base_dict[k] = other_dict[k]
else:
+ # noinspection PyTypeChecker
for k in src.fields_to_merge:
if k in src_dict:
base_dict[k] = src_dict[k]
@@ -70,6 +72,7 @@ def __or__(cls: META, other: META) -> META:
# In a reversed MRO, the inheritance tree looks like this:
# |___ object -> AbstractMeta -> BaseJSONWizardMeta -> ...
# So here, we want to choose the third-to-last class in the list.
+ # noinspection PyUnresolvedReferences
src = src.__mro__[-3]
# noinspection PyTypeChecker
@@ -88,6 +91,7 @@ def __and__(cls: META, other: META) -> META:
other_dict = other.__dict__
# Set meta attributes here.
+ # noinspection PyTypeChecker
for k in cls.all_fields:
if k in other_dict:
setattr(cls, k, other_dict[k])
@@ -107,26 +111,26 @@ class AbstractMeta(metaclass=ABCOrAndMeta):
__special_attrs__ = frozenset({
'recursive',
'json_key_to_field',
+ 'v1_field_to_alias',
'tag',
})
# Class attribute which enables us to detect a `JSONWizard.Meta` subclass.
__is_inner_meta__ = False
- # True to enable Debug mode for additional (more verbose) log output.
+ # Enable Debug mode for more verbose log output.
#
- # The value can also be a `str` or `int` which specifies
- # the minimum level for logs in this library to show up.
+ # This setting can be a `bool`, `int`, or `str`:
+ # - `True` enables debug mode with default verbosity.
+ # - A `str` or `int` specifies the minimum log level (e.g., 'DEBUG', 10).
#
- # For example, a message is logged whenever an unknown JSON key is
- # encountered when `from_dict` or `from_json` is called.
+ # Debug mode provides additional helpful log messages, including:
+ # - Logging unknown JSON keys encountered during `from_dict` or `from_json`.
+ # - Detailed error messages for invalid types during unmarshalling.
#
- # This also results in more helpful messages during error handling, which
- # can be useful when debugging the cause when values are an invalid type
- # (i.e. they don't match the annotation for the field) when unmarshalling
- # a JSON object to a dataclass instance.
+ # Note: Enabling Debug mode may have a minor performance impact.
#
- # Note there is a minor performance impact when DEBUG mode is enabled.
+ # @deprecated and will be removed in V1 - Use `v1_debug` instead.
debug_enabled: ClassVar['bool | int | str'] = False
# When enabled, a specified Meta config for the main dataclass (i.e. the
@@ -219,6 +223,68 @@ class AbstractMeta(metaclass=ABCOrAndMeta):
# the :func:`dataclasses.field`) in the serialization process.
skip_defaults_if: ClassVar[Condition] = None
+ # Enable opt-in to the "experimental" major release `v1` feature.
+ # This feature offers optimized performance for de/serialization.
+ # Defaults to False.
+ v1: ClassVar[bool] = False
+
+ # Enable Debug mode for more verbose log output.
+ #
+ # This setting can be a `bool`, `int`, or `str`:
+ # - `True` enables debug mode with default verbosity.
+ # - A `str` or `int` specifies the minimum log level (e.g., 'DEBUG', 10).
+ #
+ # Debug mode provides additional helpful log messages, including:
+ # - Logging unknown JSON keys encountered during `from_dict` or `from_json`.
+ # - Detailed error messages for invalid types during unmarshalling.
+ #
+ # Note: Enabling Debug mode may have a minor performance impact.
+ v1_debug: ClassVar['bool | int | str'] = False
+
+ # Specifies the letter case used to match JSON keys when mapping them
+ # to dataclass fields.
+ #
+ # This setting determines how dataclass fields are transformed to match
+ # the expected case of JSON keys during lookup. It does not affect keys
+ # in `TypedDict` or `NamedTuple` subclasses.
+ #
+ # By default, JSON keys are assumed to be in `snake_case`, and fields
+ # are matched directly without transformation.
+ #
+ # The setting is case-insensitive and supports shorthand assignment,
+ # such as using the string 'C' instead of 'CAMEL'.
+ #
+ # If set to `A` or `AUTO`, all valid key casing transforms are attempted
+ # at runtime, and the result is cached for subsequent lookups.
+ v1_key_case: ClassVar[Union[KeyCase, str]] = None
+
+ # A custom mapping of dataclass fields to their JSON aliases (keys) used
+ # during deserialization (`from_dict` or `from_json`) and serialization
+ # (`to_dict` or `to_json`).
+ #
+ # This mapping overrides default behavior, including implicit field-to-key
+ # transformations (e.g., "my_field" -> "myField").
+ #
+ # By default, the reverse mapping (JSON alias to field) is applied during
+ # serialization, unless explicitly overridden.
+ v1_field_to_alias: ClassVar[Dict[str, str]] = None
+
+ # Defines the action to take when an unknown JSON key is encountered during
+ # `from_dict` or `from_json` calls. An unknown key is one that does not map
+ # to any dataclass field.
+ #
+ # Valid options are:
+ # - `"ignore"` (default): Silently ignore unknown keys.
+ # - `"warn"`: Log a warning for each unknown key. Requires `debug_enabled`
+ # to be `True` and properly configured logging.
+ # - `"raise"`: Raise an `UnknownKeyError` for the first unknown key encountered.
+ v1_on_unknown_key: ClassVar[KeyAction] = None
+
+ # Unsafe: Enables parsing of dataclasses in unions without requiring
+ # the presence of a `tag_key`, i.e., a dictionary key identifying the
+ # tag field in the input. Defaults to False.
+ v1_unsafe_parse_dataclass_in_union: ClassVar[bool] = False
+
# noinspection PyMethodParameters
@cached_class_property
def all_fields(cls) -> FrozenKeys:
@@ -346,11 +412,13 @@ class AbstractEnvMeta(metaclass=ABCOrAndMeta):
# the :func:`dataclasses.field`) in the serialization process.
skip_defaults_if: ClassVar[Condition] = None
+ # noinspection PyMethodParameters
@cached_class_property
def all_fields(cls) -> FrozenKeys:
"""Return a list of all class attributes"""
return frozenset(AbstractEnvMeta.__annotations__)
+ # noinspection PyMethodParameters
@cached_class_property
def fields_to_merge(cls) -> FrozenKeys:
"""Return a list of class attributes, minus `__special_attrs__`"""
diff --git a/dataclass_wizard/bases_meta.py b/dataclass_wizard/bases_meta.py
index 18924a92..3a63e801 100644
--- a/dataclass_wizard/bases_meta.py
+++ b/dataclass_wizard/bases_meta.py
@@ -6,7 +6,6 @@
"""
import logging
from datetime import datetime, date
-from typing import Type, Optional, Dict, Union, Sequence
from .abstractions import AbstractJSONWizard
from .bases import AbstractMeta, META, AbstractEnvMeta
@@ -14,18 +13,17 @@
META_INITIALIZER, _META,
get_outer_class_name, get_class_name, create_new_class,
json_field_to_dataclass_field, dataclass_field_to_json_field,
- field_to_env_var,
+ field_to_env_var, DATACLASS_FIELD_TO_ALIAS_FOR_LOAD,
)
-from .constants import TAG
from .decorators import try_with_load
from .dumpers import get_dumper
from .enums import DateTimeTo, LetterCase, LetterCasePriority
+from .v1.enums import KeyAction, KeyCase
from .environ.loaders import EnvLoader
from .errors import ParseError
-from .loaders import get_loader
+from .loader_selection import get_loader
from .log import LOG
-from .models import Condition
-from .type_def import E, EnvFileType
+from .type_def import E
from .utils.type_conv import date_to_timestamp, as_enum
@@ -53,7 +51,7 @@ def _enable_debug_mode_if_needed(cls_loader, possible_lvl):
load_hooks[typ] = try_with_load(load_hooks[typ])
-def _as_enum_safe(cls: type, name: str, base_type: Type[E]) -> Optional[E]:
+def _as_enum_safe(cls: type, name: str, base_type: type[E]) -> 'E | None':
"""
Attempt to return the value for class attribute :attr:`attr_name` as
a :type:`base_type`.
@@ -114,6 +112,8 @@ def _init_subclass(cls):
setattr(AbstractMeta, attr, getattr(cls, attr, None))
if cls.json_key_to_field:
AbstractMeta.json_key_to_field = cls.json_key_to_field
+ if cls.v1_field_to_alias:
+ AbstractMeta.v1_field_to_alias = cls.v1_field_to_alias
# Create a new class of `Type[W]`, and then pass `create=False` so
# that we don't create new loader / dumper for the class.
@@ -121,12 +121,17 @@ def _init_subclass(cls):
cls.bind_to(new_cls, create=False)
@classmethod
- def bind_to(cls, dataclass: Type, create=True, is_default=True):
+ def bind_to(cls, dataclass: type, create=True, is_default=True,
+ base_loader=None):
- cls_loader = get_loader(dataclass, create=create)
+ cls_loader = get_loader(dataclass, create=create,
+ base_cls=base_loader, v1=cls.v1)
cls_dumper = get_dumper(dataclass, create=create)
- if cls.debug_enabled:
+ if cls.v1_debug:
+ _enable_debug_mode_if_needed(cls_loader, cls.v1_debug)
+
+ elif cls.debug_enabled:
_enable_debug_mode_if_needed(cls_loader, cls.debug_enabled)
if cls.json_key_to_field is not None:
@@ -169,10 +174,30 @@ def bind_to(cls, dataclass: Type, create=True, is_default=True):
cls_loader.transform_json_field = _as_enum_safe(
cls, 'key_transform_with_load', LetterCase)
+ if cls.v1_key_case is not None:
+ cls_loader.transform_json_field = _as_enum_safe(
+ cls, 'v1_key_case', KeyCase)
+
+ if (field_to_alias := cls.v1_field_to_alias) is not None:
+
+ add_for_load = field_to_alias.pop('__load__', True)
+ add_for_dump = field_to_alias.pop('__dump__', True)
+
+ if add_for_load:
+ DATACLASS_FIELD_TO_ALIAS_FOR_LOAD[dataclass].update(
+ field_to_alias)
+
+ if add_for_dump:
+ dataclass_field_to_json_field(dataclass).update(
+ field_to_alias)
+
if cls.key_transform_with_dump is not None:
cls_dumper.transform_dataclass_field = _as_enum_safe(
cls, 'key_transform_with_dump', LetterCase)
+ if cls.v1_on_unknown_key is not None:
+ cls.v1_on_unknown_key = _as_enum_safe(cls, 'v1_on_unknown_key', KeyAction)
+
# Finally, if needed, save the meta config for the outer class. This
# will allow us to access this config as part of the JSON load/dump
# process if needed.
@@ -226,7 +251,7 @@ def _init_subclass(cls):
cls.bind_to(new_cls, create=False)
@classmethod
- def bind_to(cls, env_class: Type, create=True, is_default=True):
+ def bind_to(cls, env_class: type, create=True, is_default=True):
cls_loader = get_loader(env_class, create=create, base_cls=EnvLoader)
cls_dumper = get_dumper(env_class, create=create)
@@ -258,15 +283,7 @@ def bind_to(cls, env_class: Type, create=True, is_default=True):
# noinspection PyPep8Naming
-def LoadMeta(*, debug_enabled: 'bool | int | str' = False,
- recursive: bool = True,
- recursive_classes: bool = False,
- raise_on_unknown_json_key: bool = False,
- json_key_to_field: Dict[str, str] = None,
- key_transform: Union[LetterCase, str] = None,
- tag: str = None,
- tag_key: str = TAG,
- auto_assign_tags: bool = False) -> META:
+def LoadMeta(**kwargs) -> META:
"""
Helper function to setup the ``Meta`` Config for the JSON load
(de-serialization) process, which is intended for use alongside the
@@ -283,20 +300,10 @@ def LoadMeta(*, debug_enabled: 'bool | int | str' = False,
.. _Docs: https://dataclass-wizard.readthedocs.io/en/latest/common_use_cases/meta.html
"""
+ base_dict = kwargs | {'__slots__': ()}
- # Set meta attributes here.
- base_dict = {
- '__slots__': (),
- 'raise_on_unknown_json_key': raise_on_unknown_json_key,
- 'recursive_classes': recursive_classes,
- 'key_transform_with_load': key_transform,
- 'json_key_to_field': json_key_to_field,
- 'debug_enabled': debug_enabled,
- 'recursive': recursive,
- 'tag': tag,
- 'tag_key': tag_key,
- 'auto_assign_tags': auto_assign_tags,
- }
+ if 'key_transform' in kwargs:
+ base_dict['key_transform_with_load'] = base_dict.pop('key_transform')
# Create a new subclass of :class:`AbstractMeta`
# noinspection PyTypeChecker
@@ -304,15 +311,7 @@ def LoadMeta(*, debug_enabled: 'bool | int | str' = False,
# noinspection PyPep8Naming
-def DumpMeta(*, debug_enabled: 'bool | int | str' = False,
- recursive: bool = True,
- marshal_date_time_as: Union[DateTimeTo, str] = None,
- key_transform: Union[LetterCase, str] = None,
- tag: str = None,
- skip_defaults: bool = False,
- skip_if: Condition = None,
- skip_defaults_if: Condition = None,
- ) -> META:
+def DumpMeta(**kwargs) -> META:
"""
Helper function to setup the ``Meta`` Config for the JSON dump
(serialization) process, which is intended for use alongside the
@@ -331,17 +330,10 @@ def DumpMeta(*, debug_enabled: 'bool | int | str' = False,
"""
# Set meta attributes here.
- base_dict = {
- '__slots__': (),
- 'marshal_date_time_as': marshal_date_time_as,
- 'key_transform_with_dump': key_transform,
- 'skip_defaults': skip_defaults,
- 'skip_if': skip_if,
- 'skip_defaults_if': skip_defaults_if,
- 'debug_enabled': debug_enabled,
- 'recursive': recursive,
- 'tag': tag,
- }
+ base_dict = kwargs | {'__slots__': ()}
+
+ if 'key_transform' in kwargs:
+ base_dict['key_transform_with_dump'] = base_dict.pop('key_transform')
# Create a new subclass of :class:`AbstractMeta`
# noinspection PyTypeChecker
@@ -349,18 +341,7 @@ def DumpMeta(*, debug_enabled: 'bool | int | str' = False,
# noinspection PyPep8Naming
-def EnvMeta(*, debug_enabled: 'bool | int | str' = False,
- env_file: EnvFileType = None,
- env_prefix: str = '',
- secrets_dir: 'EnvFileType | Sequence[EnvFileType]' = None,
- field_to_env_var: dict[str, str] = None,
- key_lookup_with_load: Union[LetterCasePriority, str] = LetterCasePriority.SCREAMING_SNAKE,
- key_transform_with_dump: Union[LetterCase, str] = LetterCase.SNAKE,
- # marshal_date_time_as: Union[DateTimeTo, str] = None,
- skip_defaults: bool = False,
- skip_if: Condition = None,
- skip_defaults_if: Condition = None,
- ) -> META:
+def EnvMeta(**kwargs) -> META:
"""
Helper function to setup the ``Meta`` Config for the EnvWizard.
@@ -376,19 +357,7 @@ def EnvMeta(*, debug_enabled: 'bool | int | str' = False,
"""
# Set meta attributes here.
- base_dict = {
- '__slots__': (),
- 'debug_enabled': debug_enabled,
- 'env_file': env_file,
- 'env_prefix': env_prefix,
- 'secrets_dir': secrets_dir,
- 'field_to_env_var': field_to_env_var,
- 'key_lookup_with_load': key_lookup_with_load,
- 'key_transform_with_dump': key_transform_with_dump,
- 'skip_defaults': skip_defaults,
- 'skip_if': skip_if,
- 'skip_defaults_if': skip_defaults_if,
- }
+ base_dict = kwargs | {'__slots__': ()}
# Create a new subclass of :class:`AbstractMeta`
# noinspection PyTypeChecker
diff --git a/dataclass_wizard/bases_meta.pyi b/dataclass_wizard/bases_meta.pyi
new file mode 100644
index 00000000..968965ab
--- /dev/null
+++ b/dataclass_wizard/bases_meta.pyi
@@ -0,0 +1,100 @@
+"""
+Ideally this would live in the `bases` module; however, that would create a
+circular import, since the `loaders` and `dumpers` modules both import
+directly from `bases`.
+
+"""
+from collections.abc import Sequence
+from dataclasses import MISSING
+
+from .bases import AbstractMeta, META, AbstractEnvMeta
+from .constants import TAG
+from .enums import DateTimeTo, LetterCase, LetterCasePriority
+from .v1.enums import KeyAction, KeyCase
+from .models import Condition
+from .type_def import E, EnvFileType
+
+
+# global flag to determine if debug mode was ever enabled
+_debug_was_enabled = False
+
+
+def _enable_debug_mode_if_needed(cls_loader, possible_lvl: bool | int | str):
+ ...
+
+
+def _as_enum_safe(cls: type, name: str, base_type: type[E]) -> E | None:
+ ...
+
+
+class BaseJSONWizardMeta(AbstractMeta):
+
+ __slots__ = ()
+
+ @classmethod
+ def _init_subclass(cls):
+ ...
+
+ @classmethod
+ def bind_to(cls, dataclass: type, create=True, is_default=True,
+ base_loader=None):
+ ...
+
+
+class BaseEnvWizardMeta(AbstractEnvMeta):
+
+ __slots__ = ()
+
+ @classmethod
+ def _init_subclass(cls):
+ ...
+
+ @classmethod
+ def bind_to(cls, env_class: type, create=True, is_default=True):
+ ...
+
+
+# noinspection PyPep8Naming
+def LoadMeta(*, debug_enabled: 'bool | int | str' = MISSING,
+ recursive: bool = True,
+ recursive_classes: bool = MISSING,
+ raise_on_unknown_json_key: bool = MISSING,
+ json_key_to_field: dict[str, str] = MISSING,
+ key_transform: LetterCase | str = MISSING,
+ tag: str = MISSING,
+ tag_key: str = TAG,
+ auto_assign_tags: bool = MISSING,
+ v1: bool = MISSING,
+ v1_key_case: KeyCase | str | None = MISSING,
+ v1_field_to_alias: dict[str, str] = MISSING,
+ v1_on_unknown_key: KeyAction | str | None = KeyAction.IGNORE,
+ v1_unsafe_parse_dataclass_in_union: bool = MISSING) -> META:
+ ...
+
+
+# noinspection PyPep8Naming
+def DumpMeta(*, debug_enabled: 'bool | int | str' = MISSING,
+ recursive: bool = True,
+ marshal_date_time_as: DateTimeTo | str = MISSING,
+ key_transform: LetterCase | str = MISSING,
+ tag: str = MISSING,
+ skip_defaults: bool = MISSING,
+ skip_if: Condition = MISSING,
+ skip_defaults_if: Condition = MISSING,
+ ) -> META:
+ ...
+
+
+# noinspection PyPep8Naming
+def EnvMeta(*, debug_enabled: 'bool | int | str' = MISSING,
+ env_file: EnvFileType = MISSING,
+ env_prefix: str = MISSING,
+ secrets_dir: 'EnvFileType | Sequence[EnvFileType]' = MISSING,
+ field_to_env_var: dict[str, str] = MISSING,
+ key_lookup_with_load: LetterCasePriority | str = LetterCasePriority.SCREAMING_SNAKE,
+ key_transform_with_dump: LetterCase | str = LetterCase.SNAKE,
+ # marshal_date_time_as: DateTimeTo | str = MISSING,
+ skip_defaults: bool = MISSING,
+ skip_if: Condition = MISSING,
+ skip_defaults_if: Condition = MISSING,
+ ) -> META:
+ ...
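The stub above mirrors the new runtime signatures. A short sketch of
opting into ``v1`` (assuming ``'CAMEL'`` is a valid string form for
:class:`KeyCase`, as it is for :class:`LetterCase`)::

    from dataclasses import dataclass

    from dataclass_wizard import LoadMeta, fromdict

    @dataclass
    class User:
        first_name: str

    # Opt in to the experimental v1 engine, and match camelCase JSON
    # keys against snake_case dataclass fields.
    LoadMeta(v1=True, v1_key_case='CAMEL').bind_to(User)

    user = fromdict(User, {'firstName': 'Alice'})
    print(user.first_name)  # -> 'Alice'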
diff --git a/dataclass_wizard/class_helper.py b/dataclass_wizard/class_helper.py
index 265a4cfa..b751045f 100644
--- a/dataclass_wizard/class_helper.py
+++ b/dataclass_wizard/class_helper.py
@@ -10,7 +10,7 @@
from .utils.typing_compat import (
is_annotated, get_args, eval_forward_ref_if_needed
)
-
+from .v1.models import Field
# A cached mapping of dataclass to the list of fields, as returned by
# `dataclasses.fields()`.
@@ -29,6 +29,9 @@
# A mapping of dataclass to its loader.
CLASS_TO_LOADER = {}
+# V1: A mapping of dataclass to its loader.
+CLASS_TO_V1_LOADER = {}
+
# A mapping of dataclass to its dumper.
CLASS_TO_DUMPER = {}
@@ -36,6 +39,11 @@
# and load hook.
FIELD_NAME_TO_LOAD_PARSER = {}
+# Since the load process in V1 doesn't use Parsers currently, we use a
+# sentinel set to track whether we still need to set up the load config
+# for a dataclass on an initial run.
+IS_V1_LOAD_CONFIG_SETUP = set()
+
# Since the dump process doesn't use Parsers currently, we use a sentinel
# mapping to confirm if we need to setup the dump config for a dataclass
# on an initial run.
@@ -47,8 +55,14 @@
# A cached mapping, per dataclass, of instance field name to JSON path
DATACLASS_FIELD_TO_JSON_PATH = defaultdict(dict)
+# V1 Load: A cached mapping, per dataclass, of instance field name to alias path
+DATACLASS_FIELD_TO_ALIAS_PATH_FOR_LOAD = defaultdict(dict)
+
+# V1 Load: A cached mapping, per dataclass, of instance field name to alias
+DATACLASS_FIELD_TO_ALIAS_FOR_LOAD = defaultdict(dict)
+
# A cached mapping, per dataclass, of instance field name to JSON field
-DATACLASS_FIELD_TO_JSON_FIELD = defaultdict(dict)
+DATACLASS_FIELD_TO_ALIAS = defaultdict(dict)
# A cached mapping, per dataclass, of instance field name to `SkipIf` condition
DATACLASS_FIELD_TO_SKIP_IF = defaultdict(dict)
@@ -67,24 +81,19 @@
_META = {}
-def dataclass_to_loader(cls):
-
- return CLASS_TO_LOADER[cls]
-
-
def dataclass_to_dumper(cls):
return CLASS_TO_DUMPER[cls]
-def set_class_loader(class_or_instance, loader):
+def set_class_loader(cls_to_loader, class_or_instance, loader):
cls = get_class(class_or_instance)
loader_cls = get_class(loader)
- CLASS_TO_LOADER[cls] = get_class(loader_cls)
+ cls_to_loader[cls] = loader_cls
- return CLASS_TO_LOADER[cls]
+ return loader_cls
def set_class_dumper(cls, dumper):
@@ -106,7 +115,7 @@ def dataclass_field_to_json_path(cls):
def dataclass_field_to_json_field(cls):
- return DATACLASS_FIELD_TO_JSON_FIELD[cls]
+ return DATACLASS_FIELD_TO_ALIAS[cls]
def dataclass_field_to_skip_if(cls):
@@ -143,6 +152,7 @@ def _setup_load_config_for_cls(cls_loader,
dataclass_field_to_path = DATACLASS_FIELD_TO_JSON_PATH[cls]
set_paths = False if dataclass_field_to_path else True
+ v1_disabled = config is None or not config.v1
name_to_parser = {}
@@ -208,16 +218,20 @@ def _setup_load_config_for_cls(cls_loader,
#
# Changed in v0.31.0: Get the __call__() method as defined
# on `AbstractParser`, if it exists
- name_to_parser[f.name] = getattr(p := cls_loader.get_parser_for_annotation(
- field_type, cls, field_extras
- ), '__call__', p)
+ if v1_disabled:
+ name_to_parser[f.name] = getattr(p := cls_loader.get_parser_for_annotation(
+ field_type, cls, field_extras
+ ), '__call__', p)
- parser_dict = DictWithLowerStore(name_to_parser)
- # only cache the load parser for the class if `save` is enabled
- if save:
- FIELD_NAME_TO_LOAD_PARSER[cls] = parser_dict
+ if v1_disabled:
+ parser_dict = DictWithLowerStore(name_to_parser)
+ # only cache the load parser for the class if `save` is enabled
+ if save:
+ FIELD_NAME_TO_LOAD_PARSER[cls] = parser_dict
- return parser_dict
+ return parser_dict
+
+ return None
def setup_dump_config_for_cls_if_needed(cls):
@@ -225,10 +239,10 @@ def setup_dump_config_for_cls_if_needed(cls):
if cls in IS_DUMP_CONFIG_SETUP:
return
- dataclass_to_json_field = DATACLASS_FIELD_TO_JSON_FIELD[cls]
+ field_to_alias = DATACLASS_FIELD_TO_ALIAS[cls]
- dataclass_field_to_path = DATACLASS_FIELD_TO_JSON_PATH[cls]
- set_paths = False if dataclass_field_to_path else True
+ field_to_path = DATACLASS_FIELD_TO_JSON_PATH[cls]
+ set_paths = False if field_to_path else True
dataclass_field_to_skip_if = DATACLASS_FIELD_TO_SKIP_IF[cls]
@@ -242,15 +256,15 @@ def setup_dump_config_for_cls_if_needed(cls):
# the class-specific mapping of dataclass field name to JSON key.
if isinstance(f, JSONField):
if not f.json.dump:
- dataclass_to_json_field[f.name] = ExplicitNull
+ field_to_alias[f.name] = ExplicitNull
elif f.json.all:
keys = f.json.keys
if f.json.path:
if set_paths:
- dataclass_field_to_path[f.name] = keys
- dataclass_to_json_field[f.name] = ''
+ field_to_path[f.name] = keys
+ field_to_alias[f.name] = ''
else:
- dataclass_to_json_field[f.name] = keys[0]
+ field_to_alias[f.name] = keys[0]
elif f.metadata:
if value := f.metadata.get('__remapping__'):
@@ -258,18 +272,18 @@ def setup_dump_config_for_cls_if_needed(cls):
keys = value.keys
if value.path:
if set_paths:
- dataclass_field_to_path[f.name] = keys
- dataclass_to_json_field[f.name] = ''
+ field_to_path[f.name] = keys
+ field_to_alias[f.name] = ''
else:
- dataclass_to_json_field[f.name] = keys[0]
+ field_to_alias[f.name] = keys[0]
elif value := f.metadata.get('__skip_if__'):
if isinstance(value, Condition):
dataclass_field_to_skip_if[f.name] = value
# Check for a "Catch All" field
if field_type is CatchAll:
- dataclass_to_json_field[f.name] = ExplicitNull
- dataclass_to_json_field[CATCH_ALL] = f.name
+ field_to_alias[f.name] = ExplicitNull
+ field_to_alias[CATCH_ALL] = f.name
# Check if the field annotation is an `Annotated` type. If so,
# look for any `JSON` objects in the arguments; for each object,
@@ -279,15 +293,15 @@ def setup_dump_config_for_cls_if_needed(cls):
for extra in get_args(field_type)[1:]:
if isinstance(extra, JSON):
if not extra.dump:
- dataclass_to_json_field[f.name] = ExplicitNull
+ field_to_alias[f.name] = ExplicitNull
elif extra.all:
keys = extra.keys
if extra.path:
if set_paths:
- dataclass_field_to_path[f.name] = keys
- dataclass_to_json_field[f.name] = ''
+ field_to_path[f.name] = keys
+ field_to_alias[f.name] = ''
else:
- dataclass_to_json_field[f.name] = keys[0]
+ field_to_alias[f.name] = keys[0]
elif isinstance(extra, Condition):
if not getattr(extra, '_wrapped', False):
raise InvalidConditionError(cls, f.name) from None
@@ -298,26 +312,171 @@ def setup_dump_config_for_cls_if_needed(cls):
IS_DUMP_CONFIG_SETUP[cls] = True
-def call_meta_initializer_if_needed(cls):
+def v1_dataclass_field_to_alias(
+ cls,
+ # cls_loader,
+ # config,
+ # save=True
+):
+
+ if cls not in IS_V1_LOAD_CONFIG_SETUP:
+ return _setup_v1_load_config_for_cls(cls)
+
+ return DATACLASS_FIELD_TO_ALIAS_FOR_LOAD[cls]
+
+def _process_field(name: str,
+ f: Field,
+ set_paths: bool,
+ load_dataclass_field_to_path,
+ dump_dataclass_field_to_path,
+ load_dataclass_field_to_alias,
+ dump_dataclass_field_to_alias):
+ """Process a :class:`Field` for a dataclass field."""
+
+ if f.path is not None:
+ if set_paths:
+ if f.load_alias is not ExplicitNull:
+ load_dataclass_field_to_path[name] = f.path
+ if not f.skip and f.dump_alias is not ExplicitNull:
+ dump_dataclass_field_to_path[name] = f.path
+ # TODO I forget why this is needed :o
+ if f.skip:
+ dump_dataclass_field_to_alias[name] = ExplicitNull
+ elif f.dump_alias is not ExplicitNull:
+ dump_dataclass_field_to_alias[name] = ''
+
+ else:
+ if f.load_alias is not None:
+ load_dataclass_field_to_alias[name] = f.load_alias
+ if f.skip:
+ dump_dataclass_field_to_alias[name] = ExplicitNull
+ elif f.dump_alias is not None:
+ dump_dataclass_field_to_alias[name] = f.dump_alias
+
+
+def _setup_v1_load_config_for_cls(
+ cls,
+ # cls_loader,
+ # config,
+ # save=True
+):
+
+ load_dataclass_field_to_alias = DATACLASS_FIELD_TO_ALIAS_FOR_LOAD[cls]
+ dump_dataclass_field_to_alias = DATACLASS_FIELD_TO_ALIAS[cls]
+
+ dataclass_field_to_path = DATACLASS_FIELD_TO_ALIAS_PATH_FOR_LOAD[cls]
+ dump_dataclass_field_to_path = DATACLASS_FIELD_TO_JSON_PATH[cls]
+
+ set_paths = False if dataclass_field_to_path else True
+
+ for f in dataclass_init_fields(cls):
+ # field_extras: Extras = {'config': config}
+
+ field_type = f.type = eval_forward_ref_if_needed(f.type, cls)
+
+ # isinstance(f, Field) == True
+
+        # Check if the field is a known `Field` subclass. If so, update
+        # the class-specific mapping of dataclass field name to alias.
+ if isinstance(f, Field):
+ _process_field(f.name, f, set_paths,
+ dataclass_field_to_path,
+ dump_dataclass_field_to_path,
+ load_dataclass_field_to_alias,
+ dump_dataclass_field_to_alias)
+
+ elif f.metadata:
+ if value := f.metadata.get('__remapping__'):
+ if isinstance(value, Field):
+ _process_field(f.name, value, set_paths,
+ dataclass_field_to_path,
+ dump_dataclass_field_to_path,
+ load_dataclass_field_to_alias,
+ dump_dataclass_field_to_alias)
+
+ # Check for a "Catch All" field
+ if field_type is CatchAll:
+ load_dataclass_field_to_alias[CATCH_ALL] \
+ = dump_dataclass_field_to_alias[CATCH_ALL] \
+ = f'{f.name}{"" if f.default is MISSING else "?"}'
+
+        # Check if the field annotation is an `Annotated` type. If so,
+        # look for any `Field` objects in the arguments; for each one,
+        # update the class-specific mapping of dataclass field name
+        # to alias.
+ elif is_annotated(field_type):
+ ann_type, *extras = get_args(field_type)
+ for extra in extras:
+ if isinstance(extra, Field):
+ _process_field(f.name, extra, set_paths,
+ dataclass_field_to_path,
+ dump_dataclass_field_to_path,
+ load_dataclass_field_to_alias,
+ dump_dataclass_field_to_alias)
+ # elif isinstance(extra, PatternedDT):
+ # field_extras['pattern'] = extra
+
+ IS_V1_LOAD_CONFIG_SETUP.add(cls)
+
+ return load_dataclass_field_to_alias
+
+
+def call_meta_initializer_if_needed(cls, package_name='dataclass_wizard'):
"""
Calls the Meta initializer when the inner :class:`Meta` is sub-classed.
"""
+ # TODO add tests
+
+ # skip classes provided by this library
+ if cls.__module__.startswith(f'{package_name}.'):
+ return
+
cls_name = get_class_name(cls)
if cls_name in META_INITIALIZER:
META_INITIALIZER[cls_name](cls)
+ # Get the last immediate superclass
+ base = cls.__base__
+
+ # skip base `object` and classes provided by this library
+ if (base is not object
+ and not base.__module__.startswith(f'{package_name}.')):
+
+ base_cls_name = get_class_name(base)
+
+ if base_cls_name in META_INITIALIZER:
+ META_INITIALIZER[base_cls_name](cls)
+
def get_meta(cls, base_cls=AbstractMeta):
"""
Retrieves the Meta config for the :class:`AbstractJSONWizard` subclass.
- return _META.get(cls, AbstractMeta)
This config is set when the inner :class:`Meta` is sub-classed.
"""
return _META.get(cls, base_cls)
+def create_meta(cls, cls_name=None, **kwargs):
+ """
+ Sets the Meta config for the :class:`AbstractJSONWizard` subclass.
+
+ WARNING: Only use if the Meta config is undefined,
+    i.e. when `get_meta` for the `cls` returns `base_cls`.
+
+ """
+ from .bases_meta import BaseJSONWizardMeta
+
+ cls_dict = {'__slots__': (), **kwargs}
+
+ meta = type((cls_name or cls.__name__) + 'Meta',
+ (BaseJSONWizardMeta, ),
+ cls_dict)
+
+ _META[cls] = meta
+
+
def dataclass_fields(cls):
if cls not in FIELDS:
@@ -326,9 +485,9 @@ def dataclass_fields(cls):
return FIELDS[cls]
-def dataclass_init_fields(cls):
-
- return tuple(f for f in dataclass_fields(cls) if f.init)
+def dataclass_init_fields(cls, as_list=False):
+ init_fields = [f for f in dataclass_fields(cls) if f.init]
+ return init_fields if as_list else tuple(init_fields)
def dataclass_field_names(cls):
@@ -336,6 +495,11 @@ def dataclass_field_names(cls):
return tuple(f.name for f in dataclass_fields(cls))
+def dataclass_init_field_names(cls):
+
+ return tuple(f.name for f in dataclass_init_fields(cls))
+
+
def dataclass_field_to_default(cls):
if cls not in FIELD_TO_DEFAULT:
@@ -427,4 +591,4 @@ def is_subclass_safe(cls, class_or_tuple):
try:
return issubclass(cls, class_or_tuple)
except TypeError:
- return cls is class_or_tuple
+ return False
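A sketch of the sentinel-set caching above (these are internal helpers,
so the names may change; the dataclass is illustrative)::

    from dataclasses import dataclass

    from dataclass_wizard.class_helper import (
        IS_V1_LOAD_CONFIG_SETUP, v1_dataclass_field_to_alias)

    @dataclass
    class A:
        x: int

    # The first call runs _setup_v1_load_config_for_cls() and records
    # the class in the sentinel set.
    aliases = v1_dataclass_field_to_alias(A)
    assert A in IS_V1_LOAD_CONFIG_SETUP

    # Later calls return the cached per-class mapping directly.
    assert v1_dataclass_field_to_alias(A) is aliases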
diff --git a/dataclass_wizard/class_helper.pyi b/dataclass_wizard/class_helper.pyi
index 9945a299..6c117418 100644
--- a/dataclass_wizard/class_helper.pyi
+++ b/dataclass_wizard/class_helper.pyi
@@ -1,8 +1,8 @@
from collections import defaultdict
from dataclasses import Field
-from typing import Any, Callable
+from typing import Any, Callable, Literal, overload
-from .abstractions import W, AbstractLoader, AbstractDumper, AbstractParser, E
+from .abstractions import W, AbstractLoader, AbstractDumper, AbstractParser, E, AbstractLoaderGenerator
from .bases import META, AbstractMeta
from .models import Condition
from .type_def import ExplicitNullType, T
@@ -27,6 +27,9 @@ CLASS_TO_DUMP_FUNC: dict[type, Any] = {}
# A mapping of dataclass to its loader.
CLASS_TO_LOADER: dict[type, type[AbstractLoader]] = {}
+# V1: A mapping of dataclass to its loader.
+CLASS_TO_V1_LOADER: dict[type, type[AbstractLoaderGenerator]] = {}
+
# A mapping of dataclass to its dumper.
CLASS_TO_DUMPER: dict[type, type[AbstractDumper]] = {}
@@ -34,6 +37,11 @@ CLASS_TO_DUMPER: dict[type, type[AbstractDumper]] = {}
# and load hook.
FIELD_NAME_TO_LOAD_PARSER: dict[type, DictWithLowerStore[str, AbstractParser]] = {}
+# Since the load process in V1 doesn't use Parsers currently, we use a
+# sentinel set to track whether we still need to set up the load config
+# for a dataclass on an initial run.
+IS_V1_LOAD_CONFIG_SETUP: set[type] = set()
+
# Since the dump process doesn't use Parsers currently, we use a sentinel
# mapping to confirm if we need to setup the dump config for a dataclass
# on an initial run.
@@ -45,8 +53,14 @@ JSON_FIELD_TO_DATACLASS_FIELD: dict[type, dict[str, str | ExplicitNullType]] = d
# A cached mapping, per dataclass, of instance field name to JSON path
DATACLASS_FIELD_TO_JSON_PATH: dict[type, dict[str, PathType]] = defaultdict(dict)
+# V1 Load: A cached mapping, per dataclass, of instance field name to alias path
+DATACLASS_FIELD_TO_ALIAS_PATH_FOR_LOAD: dict[type, dict[str, PathType]] = defaultdict(dict)
+
+# V1 Load: A cached mapping, per dataclass, of instance field name to alias
+DATACLASS_FIELD_TO_ALIAS_FOR_LOAD: dict[type, dict[str, str]] = defaultdict(dict)
+
# A cached mapping, per dataclass, of instance field name to JSON field
-DATACLASS_FIELD_TO_JSON_FIELD: dict[type, dict[str, str]] = defaultdict(dict)
+DATACLASS_FIELD_TO_ALIAS: dict[type, dict[str, str]] = defaultdict(dict)
# A cached mapping, per dataclass, of instance field name to `SkipIf` condition
DATACLASS_FIELD_TO_SKIP_IF: dict[type, dict[str, Condition]] = defaultdict(dict)
@@ -64,19 +78,13 @@ META_INITIALIZER: dict[str, Callable[[type[W]], None]] = {}
_META: dict[type, META] = {}
-def dataclass_to_loader(cls: type) -> type[AbstractLoader]:
- """
- Returns the loader for a dataclass.
- """
-
-
def dataclass_to_dumper(cls: type) -> type[AbstractDumper]:
"""
Returns the dumper for a dataclass.
"""
-def set_class_loader(class_or_instance, loader: type[AbstractLoader]):
+def set_class_loader(cls_to_loader, class_or_instance, loader: type[AbstractLoader]):
"""
Set (and return) the loader for a dataclass.
"""
@@ -106,6 +114,12 @@ def dataclass_field_to_json_field(cls: type) -> dict[str, str]:
"""
+def dataclass_field_to_alias_for_load(cls: type) -> dict[str, str]:
+ """
+ V1: Returns a mapping of dataclass field to alias or JSON key.
+ """
+
+
def dataclass_field_to_skip_if(cls: type) -> dict[str, Condition]:
"""
Returns a mapping of dataclass field to SkipIf condition.
@@ -177,7 +191,31 @@ def setup_dump_config_for_cls_if_needed(cls: type) -> None:
"""
-def call_meta_initializer_if_needed(cls: type[W | E]) -> None:
+def v1_dataclass_field_to_alias(cls: type) -> dict[str, str]: ...
+
+def _setup_v1_load_config_for_cls(cls: type):
+ """
+ This function processes a class `cls` on an initial run, and sets up the
+ load process for `cls` by iterating over each dataclass field. For each
+ field, it performs the following tasks:
+
+    * Check if the field is a :class:`Field` subclass (e.g. one created
+      via the :func:`Alias` helper). If so, the class-specific mapping of
+      dataclass field name to alias (JSON key) is updated with the input
+      passed in to this object.
+
+    * Check if the field's annotation is of type ``Annotated``. If so,
+      we iterate over each ``Annotated`` argument and find any special
+      :class:`Field` objects, and update the mapping in the same way.
+
+    * Check for a ``CatchAll`` field, and record it so that unmatched
+      keys can be collected into it during the load process.
+ """
+
+
+def call_meta_initializer_if_needed(cls: type[W | E],
+ package_name='dataclass_wizard') -> None:
"""
Calls the Meta initializer when the inner :class:`Meta` is sub-classed.
"""
@@ -191,6 +229,16 @@ def get_meta(cls: type, base_cls: T = AbstractMeta) -> T | META:
"""
+def create_meta(cls: type, cls_name: str | None = None, **kwargs) -> None:
+ """
+ Sets the Meta config for the :class:`AbstractJSONWizard` subclass.
+
+ WARNING: Only use if the Meta config is undefined,
+    i.e. when `get_meta` for the `cls` returns `base_cls`.
+
+ """
+
+
def dataclass_fields(cls: type) -> tuple[Field, ...]:
"""
Cache the `dataclasses.fields()` call for each class, as overall that
@@ -198,8 +246,13 @@ def dataclass_fields(cls: type) -> tuple[Field, ...]:
"""
+@overload
+def dataclass_init_fields(cls: type, as_list: Literal[True]) -> list[Field]:
+ """Get only the dataclass fields that would be passed into the constructor."""
+
-def dataclass_init_fields(cls: type) -> tuple[Field, ...]:
+@overload
+def dataclass_init_fields(cls: type, as_list: Literal[False] = False) -> tuple[Field, ...]:
"""Get only the dataclass fields that would be passed into the constructor."""
@@ -207,6 +260,10 @@ def dataclass_field_names(cls: type) -> tuple[str, ...]:
"""Get the names of all dataclass fields"""
+def dataclass_init_field_names(cls: type) -> tuple[str, ...]:
+ """Get the names of all __init__() dataclass fields"""
+
+
def dataclass_field_to_default(cls: type) -> dict[str, Any]:
"""Get default values for the (optional) dataclass fields."""
diff --git a/dataclass_wizard/dumpers.py b/dataclass_wizard/dumpers.py
index cf8238c1..0919ec21 100644
--- a/dataclass_wizard/dumpers.py
+++ b/dataclass_wizard/dumpers.py
@@ -27,10 +27,12 @@
dataclass_to_dumper, set_class_dumper,
CLASS_TO_DUMP_FUNC, setup_dump_config_for_cls_if_needed, get_meta,
dataclass_field_to_load_parser, dataclass_field_to_json_path, is_builtin, dataclass_field_to_skip_if,
+ v1_dataclass_field_to_alias,
)
from .constants import _DUMP_HOOKS, TAG, CATCH_ALL
from .decorators import _alias
from .errors import show_deprecation_warning
+from .loader_selection import _get_load_fn_for_dataclass
from .log import LOG
from .models import get_skip_if_condition, finalize_skip_if
from .type_def import (
@@ -287,6 +289,9 @@ def dump_func_for_dataclass(cls: Type[T],
# sub-classes from `DumpMixIn`, these hooks could be customized.
hooks = cls_dumper.__DUMP_HOOKS__
+ # TODO this is temporary
+ if meta.v1:
+ _ = v1_dataclass_field_to_alias(cls)
# Set up the initial dump config for the dataclass.
setup_dump_config_for_cls_if_needed(cls)
@@ -311,11 +316,20 @@ def dump_func_for_dataclass(cls: Type[T],
# we don't process the class annotations here. So instead, generate
# the load parser for each field (if needed), but don't cache the
# result, as it's conceivable we might yet call `LoadMeta` later.
- from .loaders import get_loader
- cls_loader = get_loader(cls)
- # Use the cached result if it exists, but don't cache it ourselves.
- _ = dataclass_field_to_load_parser(
- cls_loader, cls, config, save=False)
+ from .loader_selection import get_loader
+
+ if meta.v1:
+ # TODO there must be a better way to do this,
+ # this is just a temporary workaround.
+ try:
+ _ = _get_load_fn_for_dataclass(cls, v1=True)
+ except Exception:
+ pass
+ else:
+ cls_loader = get_loader(cls, v1=meta.v1)
+ # Use the cached result if it exists, but don't cache it ourselves.
+ _ = dataclass_field_to_load_parser(
+ cls_loader, cls, config, save=False)
# Tag key to populate when a dataclass is in a `Union` with other types.
tag_key = meta.tag_key or TAG
@@ -336,9 +350,7 @@ def dump_func_for_dataclass(cls: Type[T],
'cls_to_asdict': nested_cls_to_dump_func,
}
- _globals = {
- 'T': T,
- }
+ _globals = {}
skip_if_condition = get_skip_if_condition(
meta.skip_if, _locals, '_skip_value')
@@ -351,11 +363,12 @@ def dump_func_for_dataclass(cls: Type[T],
# Code for `cls_asdict`
with fn_gen.function('cls_asdict',
- ['o:T',
+ ['o',
'dict_factory=dict',
"exclude:'list[str]|None'=None",
f'skip_defaults:bool={skip_defaults}'],
- return_type='JSONObject'):
+ 'JSONObject',
+ _locals):
if (
_pre_dict := getattr(cls, '_pre_dict', None)
@@ -485,7 +498,7 @@ def dump_func_for_dataclass(cls: Type[T],
fn_gen.add_line("return dict_factory(result)")
# Compile the code into a dynamic string
- functions = fn_gen.create_functions(locals=_locals, globals=_globals)
+ functions = fn_gen.create_functions(_globals)
cls_asdict = functions['cls_asdict']
diff --git a/dataclass_wizard/environ/dumpers.py b/dataclass_wizard/environ/dumpers.py
index 40cbfc97..c1a87d89 100644
--- a/dataclass_wizard/environ/dumpers.py
+++ b/dataclass_wizard/environ/dumpers.py
@@ -170,7 +170,8 @@ def dump_func_for_dataclass(cls: Type['E'],
'dict_factory=dict',
"exclude:'list[str]|None'=None",
f'skip_defaults:bool={skip_defaults}'],
- return_type='JSONObject'):
+ 'JSONObject',
+ _locals):
if (
_pre_dict := getattr(cls, '_pre_dict', None)
@@ -300,7 +301,7 @@ def dump_func_for_dataclass(cls: Type['E'],
fn_gen.add_line("return dict_factory(result)")
# Compile the code into a dynamic string
- functions = fn_gen.create_functions(locals=_locals, globals=_globals)
+ functions = fn_gen.create_functions(_globals)
cls_asdict = functions['cls_asdict']
diff --git a/dataclass_wizard/environ/wizard.py b/dataclass_wizard/environ/wizard.py
index 3b28e24d..47dc3657 100644
--- a/dataclass_wizard/environ/wizard.py
+++ b/dataclass_wizard/environ/wizard.py
@@ -14,7 +14,7 @@
from ..enums import LetterCase
from ..environ.loaders import EnvLoader
from ..errors import ExtraData, MissingVars, ParseError, type_name
-from ..loaders import get_loader
+from ..loader_selection import get_loader
from ..models import Extras, JSONField
from ..type_def import ExplicitNull, JSONObject, dataclass_transform
from ..utils.function_builder import FunctionBuilder
@@ -213,7 +213,6 @@ def _create_methods(cls):
_meta_env_file = meta.env_file
_locals = {'Env': Env,
- 'MISSING': MISSING,
'ParseError': ParseError,
'field_names': field_names,
'get_env': get_env,
@@ -224,6 +223,7 @@ def _create_methods(cls):
'cls': cls,
'fields_ordered': cls_fields.keys(),
'handle_err': _handle_parse_error,
+ 'MISSING': MISSING,
}
if meta.secrets_dir is None:
@@ -242,7 +242,7 @@ def _create_methods(cls):
fn_gen = FunctionBuilder()
- with fn_gen.function('__init__', init_params, None):
+ with fn_gen.function('__init__', init_params, None, _locals):
# reload cached var names from `os.environ` as needed.
with fn_gen.if_('_reload'):
@@ -333,11 +333,11 @@ def _create_methods(cls):
# with fn_gen.for_('attr in extra_kwargs'):
# fn_gen.add_line('setattr(self, attr, init_kwargs[attr])')
- with fn_gen.function('dict', ['self'], JSONObject):
+ with fn_gen.function('dict', ['self'], JSONObject, _locals):
parts = ','.join([f'{name!r}:self.{name}' for name, f in cls.__fields__.items()])
fn_gen.add_line(f'return {{{parts}}}')
- functions = fn_gen.create_functions(globals=_globals, locals=_locals)
+ functions = fn_gen.create_functions(_globals)
# set the `__init__()` method.
cls.__init__ = functions['__init__']
diff --git a/dataclass_wizard/errors.py b/dataclass_wizard/errors.py
index 8ff8248e..ecda39dc 100644
--- a/dataclass_wizard/errors.py
+++ b/dataclass_wizard/errors.py
@@ -51,6 +51,28 @@ class JSONWizardError(ABC, Exception):
_TEMPLATE: ClassVar[str]
+ @property
+ def class_name(self) -> Optional[str]:
+ return self._class_name or self._default_class_name
+
+ @class_name.setter
+ def class_name(self, cls: Optional[Type]):
+ # Set parent class for errors
+ self.parent_cls = cls
+ # Set class name
+ if getattr(self, '_class_name', None) is None:
+ # noinspection PyAttributeOutsideInit
+ self._class_name = self.name(cls)
+
+ @property
+ def parent_cls(self) -> Optional[type]:
+ return self._parent_cls
+
+ @parent_cls.setter
+ def parent_cls(self, cls: Optional[type]):
+ # noinspection PyAttributeOutsideInit
+ self._parent_cls = cls
+
@staticmethod
def name(obj) -> str:
"""Return the type or class name of an object"""
@@ -103,15 +125,6 @@ def __init__(self, base_err: Exception,
self._json_object = _json_object
self.fields = None
- @property
- def class_name(self) -> Optional[str]:
- return self._class_name or self._default_class_name
-
- @class_name.setter
- def class_name(self, cls: Optional[Type]):
- if self._class_name is None:
- self._class_name = self.name(cls)
-
@property
def field_name(self) -> Optional[str]:
return self._field_name
@@ -200,57 +213,97 @@ class MissingFields(JSONWizardError):
missing arguments)
"""
- _TEMPLATE = ('Failure calling constructor method of class `{cls}`. '
- 'Missing values for required dataclass fields.\n'
- ' have fields: {fields!r}\n'
- ' missing fields: {missing_fields!r}\n'
- ' input JSON object: {json_string}\n'
- ' error: {e!s}')
+ _TEMPLATE = ('`{cls}.__init__()` missing required fields.\n'
+ ' Provided: {fields!r}\n'
+ ' Missing: {missing_fields!r}\n'
+ '{expected_keys}'
+ ' Input JSON: {json_string}'
+ '{e}')
def __init__(self, base_err: Exception,
obj: JSONObject,
cls: Type,
- cls_kwargs: JSONObject,
- cls_fields: Tuple[Field, ...], **kwargs):
+ cls_fields: Tuple[Field, ...],
+ cls_kwargs: 'JSONObject | None' = None,
+ missing_fields: 'Collection[str] | None' = None,
+ missing_keys: 'Collection[str] | None' = None,
+ **kwargs):
super().__init__()
self.obj = obj
- self.fields = list(cls_kwargs.keys())
-
- self.missing_fields = [f.name for f in cls_fields
- if f.name not in self.fields
- and f.default is MISSING
- and f.default_factory is MISSING]
-
- # check if any field names match, and where the key transform could be the cause
- # see https://github.com/rnag/dataclass-wizard/issues/54 for more info
- normalized_json_keys = [normalize(key) for key in obj]
- if next((f for f in self.missing_fields if normalize(f) in normalized_json_keys), None):
- from .enums import LetterCase
- from .loaders import get_loader
-
- key_transform = get_loader(cls).transform_json_field
- if isinstance(key_transform, LetterCase):
- key_transform = key_transform.value.f
-
- kwargs['key transform'] = f'{key_transform.__name__}()'
- kwargs['resolution'] = 'For more details, please see https://github.com/rnag/dataclass-wizard/issues/54'
+ if missing_fields:
+ self.fields = [f.name for f in cls_fields
+ if f.name not in missing_fields
+ and f.default is MISSING
+ and f.default_factory is MISSING]
+ self.missing_fields = missing_fields
+ else:
+ self.fields = list(cls_kwargs.keys())
+ self.missing_fields = [f.name for f in cls_fields
+ if f.name not in self.fields
+ and f.default is MISSING
+ and f.default_factory is MISSING]
self.base_error = base_err
+ self.missing_keys = missing_keys
self.kwargs = kwargs
self.class_name: str = self.name(cls)
+ self.parent_cls = cls
@property
def message(self) -> str:
+ from .class_helper import get_meta
from .utils.json_util import safe_dumps
+        # resolved lazily here, since `class_helper.py` can't be
+        # imported at module level (it would be a circular import)
+ meta = get_meta(self.parent_cls)
+ v1 = meta.v1
+
+ # check if any field names match, and where the key transform could be the cause
+ # see https://github.com/rnag/dataclass-wizard/issues/54 for more info
+
+ normalized_json_keys = [normalize(key) for key in self.obj]
+ if next((f for f in self.missing_fields if normalize(f) in normalized_json_keys), None):
+ from .enums import LetterCase
+ from .v1.enums import KeyCase
+ from .loader_selection import get_loader
+
+ key_transform = get_loader(self.parent_cls).transform_json_field
+ if isinstance(key_transform, (LetterCase, KeyCase)):
+ if key_transform.value is None:
+ key_transform = f'{key_transform.name}'
+ else:
+ key_transform = f'{key_transform.value.f.__name__}()'
+ elif key_transform is not None:
+ key_transform = f'{getattr(key_transform, "__name__", key_transform)}()'
+
+ self.kwargs['Key Transform'] = key_transform
+ self.kwargs['Resolution'] = 'For more details, please see https://github.com/rnag/dataclass-wizard/issues/54'
+
+ if v1:
+ self.kwargs['Resolution'] = ('Ensure that all required fields are provided in the input. '
+ 'For more details, see:\n'
+ ' https://github.com/rnag/dataclass-wizard/discussions/167')
+
+ if self.base_error is not None:
+ e = f'\n error: {self.base_error!s}'
+ else:
+ e = ''
+
+ if self.missing_keys is not None:
+ expected_keys = f' Expected Keys: {self.missing_keys!r}\n'
+ else:
+ expected_keys = ''
+
msg = self._TEMPLATE.format(
cls=self.class_name,
json_string=safe_dumps(self.obj),
- e=self.base_error,
+ e=e,
fields=self.fields,
+ expected_keys=expected_keys,
missing_fields=self.missing_fields)
if self.kwargs:
@@ -261,44 +314,56 @@ def message(self) -> str:
return msg
-class UnknownJSONKey(JSONWizardError):
+class UnknownKeysError(JSONWizardError):
"""
- Error raised when an unknown JSON key is encountered in the JSON load
- process.
+ Error raised when unknown JSON key(s) are
+ encountered in the JSON load process.
Note that this error class is only raised when the
- `raise_on_unknown_json_key` flag is enabled in the :class:`Meta` class.
+ `raise_on_unknown_json_key` flag is enabled in
+ the :class:`Meta` class.
"""
- _TEMPLATE = ('A JSON key is missing from the dataclass schema for class `{cls}`.\n'
- ' unknown key: {json_key!r}\n'
- ' dataclass fields: {fields!r}\n'
- ' input JSON object: {json_string}')
+ _TEMPLATE = ('One or more JSON keys are not mapped to the dataclass schema for class `{cls}`.\n'
+ ' Unknown key{s}: {unknown_keys!r}\n'
+ ' Dataclass fields: {fields!r}\n'
+ ' Input JSON object: {json_string}')
def __init__(self,
- json_key: str,
+ unknown_keys: 'list[str] | str',
obj: JSONObject,
cls: Type,
cls_fields: Tuple[Field, ...], **kwargs):
super().__init__()
- self.json_key = json_key
+ self.unknown_keys = unknown_keys
self.obj = obj
self.fields = [f.name for f in cls_fields]
self.kwargs = kwargs
self.class_name: str = self.name(cls)
- # self.class_name: str = type_name(cls)
+ @property
+ def json_key(self):
+ show_deprecation_warning(
+ UnknownKeysError.json_key.fget,
+ 'use `unknown_keys` instead',
+ )
+ return self.unknown_keys
@property
def message(self) -> str:
from .utils.json_util import safe_dumps
+ if not isinstance(self.unknown_keys, str) and len(self.unknown_keys) > 1:
+ s = 's'
+ else:
+ s = ''
msg = self._TEMPLATE.format(
cls=self.class_name,
+ s=s,
json_string=safe_dumps(self.obj),
fields=self.fields,
- json_key=self.json_key)
+ unknown_keys=self.unknown_keys)
if self.kwargs:
sep = '\n '
@@ -308,6 +373,10 @@ def message(self) -> str:
return msg
+# Alias for backwards-compatibility.
+UnknownJSONKey = UnknownKeysError
+
+
class MissingData(ParseError):
"""
Error raised when unable to create a class instance, as the JSON object
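A sketch of the rename in practice; the legacy path reports a single key,
while ``v1`` collects *all* unknown keys::

    from dataclasses import dataclass

    from dataclass_wizard import LoadMeta, fromdict
    from dataclass_wizard.errors import UnknownJSONKey, UnknownKeysError

    assert UnknownJSONKey is UnknownKeysError  # backwards-compat alias

    @dataclass
    class Point:
        x: int

    LoadMeta(raise_on_unknown_json_key=True).bind_to(Point)

    try:
        fromdict(Point, {'x': 1, 'y': 2})
    except UnknownKeysError as e:
        print(e.unknown_keys)  # `e.json_key` still works, with a warning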
diff --git a/dataclass_wizard/errors.pyi b/dataclass_wizard/errors.pyi
new file mode 100644
index 00000000..db3e5d28
--- /dev/null
+++ b/dataclass_wizard/errors.pyi
@@ -0,0 +1,264 @@
+import warnings
+from abc import ABC, abstractmethod
+from dataclasses import Field
+from typing import (Any, ClassVar, Iterable, Callable, Collection, Sequence)
+
+
+# added as we can't import from `type_def`, as we run into a circular import.
+JSONObject = dict[str, Any]
+
+
+def type_name(obj: type) -> str:
+ """Return the type or class name of an object"""
+
+
+def show_deprecation_warning(
+ fn: Callable,
+ reason: str,
+ fmt: str = "Deprecated function {name} ({reason})."
+) -> None:
+ """
+ Display a deprecation warning for a given function.
+
+ @param fn: Function which is deprecated.
+ @param reason: Reason for the deprecation.
+ @param fmt: Format string for the name/reason.
+ """
+
+
+class JSONWizardError(ABC, Exception):
+ """
+ Base error class, for errors raised by this library.
+ """
+
+ _TEMPLATE: ClassVar[str]
+
+ _parent_cls: type
+ _class_name: str | None
+ _default_class_name: str | None
+
+    @property
+    def class_name(self) -> str | None: ...
+    @class_name.setter
+    def class_name(self, cls: type | None) -> None: ...
+
+    @property
+    def parent_cls(self) -> type | None: ...
+    @parent_cls.setter
+    def parent_cls(self, cls: type | None) -> None: ...
+
+ @staticmethod
+ def name(obj) -> str: ...
+
+ @property
+ @abstractmethod
+ def message(self) -> str:
+ """
+ Format and return an error message.
+ """
+
+ def __str__(self) -> str: ...
+
+
+class ParseError(JSONWizardError):
+ """
+ Base error when an error occurs during the JSON load process.
+ """
+
+ _TEMPLATE: str
+
+ obj: Any
+ obj_type: type
+ ann_type: type | Iterable | None
+ base_error: Exception
+ kwargs: dict[str, Any]
+ _class_name: str | None
+ _default_class_name: str | None
+ _field_name: str | None
+ _json_object: Any | None
+ fields: Collection[Field] | None
+
+ def __init__(self, base_err: Exception,
+ obj: Any,
+ ann_type: type | Iterable | None,
+ _default_class: type | None = None,
+ _field_name: str | None = None,
+ _json_object: Any = None,
+ **kwargs):
+ ...
+
+ @property
+ def field_name(self) -> str | None:
+ ...
+
+ @property
+ def json_object(self):
+ ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class ExtraData(JSONWizardError):
+ """
+ Error raised when extra keyword arguments are passed in to the constructor
+ or `__init__()` method of an `EnvWizard` subclass.
+
+ Note that this error class is raised by default, unless a value for the
+ `extra` field is specified in the :class:`Meta` class.
+ """
+
+ _TEMPLATE: str
+
+ class_name: str
+ extra_kwargs: Collection[str]
+ field_names: Collection[str]
+
+ def __init__(self,
+ cls: type,
+ extra_kwargs: Collection[str],
+ field_names: Collection[str]):
+ ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class MissingFields(JSONWizardError):
+ """
+ Error raised when unable to create a class instance (most likely due to
+ missing arguments)
+ """
+
+ _TEMPLATE: str
+
+ obj: JSONObject
+ fields: list[str]
+ missing_fields: Collection[str]
+ base_error: Exception
+ missing_keys: Collection[str] | None
+ kwargs: dict[str, Any]
+ class_name: str
+ parent_cls: type
+
+ def __init__(self, base_err: Exception,
+ obj: JSONObject,
+ cls: type,
+ cls_fields: tuple[Field, ...],
+ cls_kwargs: JSONObject | None = None,
+ missing_fields: Collection[str] | None = None,
+ missing_keys: Collection[str] | None = None,
+ **kwargs):
+ ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class UnknownKeysError(JSONWizardError):
+ """
+ Error raised when unknown JSON key(s) are
+ encountered in the JSON load process.
+
+ Note that this error class is only raised when the
+ `raise_on_unknown_json_key` flag is enabled in
+ the :class:`Meta` class.
+ """
+
+ _TEMPLATE: str
+
+ unknown_keys: list[str] | str
+ obj: JSONObject
+ fields: list[str]
+ kwargs: dict[str, Any]
+ class_name: str
+
+ def __init__(self,
+ unknown_keys: list[str] | str,
+ obj: JSONObject,
+ cls: type,
+ cls_fields: tuple[Field, ...],
+ **kwargs):
+ ...
+
+ @property
+ @warnings.deprecated('use `unknown_keys` instead')
+ def json_key(self) -> list[str] | str: ...
+
+ @property
+ def message(self) -> str: ...
+
+
+# Alias for backwards-compatibility.
+UnknownJSONKey = UnknownKeysError
+
+
+class MissingData(ParseError):
+ """
+ Error raised when unable to create a class instance, as the JSON object
+ is None.
+ """
+
+ _TEMPLATE: str
+
+ nested_class_name: str
+
+ def __init__(self, nested_cls: type, **kwargs):
+ ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class RecursiveClassError(JSONWizardError):
+ """
+ Error raised when we encounter a `RecursionError` due to cyclic
+ or self-referential dataclasses.
+ """
+
+ _TEMPLATE: str
+
+ class_name: str
+
+ def __init__(self, cls: type): ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class InvalidConditionError(JSONWizardError):
+ """
+ Error raised when a condition is not wrapped in ``SkipIf``.
+ """
+
+ _TEMPLATE: str
+
+ class_name: str
+ field_name: str
+
+ def __init__(self, cls: type, field_name: str):
+ ...
+
+ @property
+ def message(self) -> str: ...
+
+
+class MissingVars(JSONWizardError):
+ """
+    Error raised when unable to create an instance of an `EnvWizard`
+    subclass (most likely due to missing environment variables in the
+    environment).
+
+ """
+ _TEMPLATE: str
+
+ class_name: str
+ fields: str
+ def_resolution: str
+ init_resolution: str
+ prefix: str
+
+ def __init__(self,
+ cls: type,
+ missing_vars: Sequence[tuple[str, str | None, str, Any]]):
+ ...
+
+ @property
+ def message(self) -> str: ...
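A sketch of inspecting the expanded ``MissingFields`` attributes (the
dataclass is illustrative)::

    from dataclasses import dataclass

    from dataclass_wizard import fromdict
    from dataclass_wizard.errors import MissingFields

    @dataclass
    class Pair:
        x: int
        y: int

    try:
        fromdict(Pair, {'x': 1})
    except MissingFields as e:
        print(e.fields)          # fields provided in the input
        print(e.missing_fields)  # -> ['y']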
diff --git a/dataclass_wizard/loader_selection.py b/dataclass_wizard/loader_selection.py
new file mode 100644
index 00000000..296aaff2
--- /dev/null
+++ b/dataclass_wizard/loader_selection.py
@@ -0,0 +1,109 @@
+from typing import Callable, Optional
+
+from .class_helper import (get_meta, CLASS_TO_LOAD_FUNC,
+ CLASS_TO_LOADER, CLASS_TO_V1_LOADER,
+ set_class_loader, create_new_class)
+from .constants import _LOAD_HOOKS
+from .type_def import T, JSONObject
+
+
+def fromdict(cls: type[T], d: JSONObject) -> T:
+ """
+ Converts a Python dictionary object to a dataclass instance.
+
+ Iterates over each dataclass field recursively; lists, dicts, and nested
+ dataclasses will likewise be initialized as expected.
+
+ When directly invoking this function, an optional Meta configuration for
+ the dataclass can be specified via ``LoadMeta``; by default, this will
+ apply recursively to any nested dataclasses. Here's a sample usage of this
+ below::
+
+ >>> LoadMeta(key_transform='CAMEL').bind_to(MyClass)
+ >>> fromdict(MyClass, {"myStr": "value"})
+
+ """
+ try:
+ load = CLASS_TO_LOAD_FUNC[cls]
+ except KeyError:
+ load = _get_load_fn_for_dataclass(cls)
+
+ return load(d)
+
+
+def fromlist(cls: type[T], list_of_dict: list[JSONObject]) -> list[T]:
+ """
+ Converts a Python list object to a list of dataclass instances.
+
+ Iterates over each dataclass field recursively; lists, dicts, and nested
+ dataclasses will likewise be initialized as expected.
+
+ """
+ try:
+ load = CLASS_TO_LOAD_FUNC[cls]
+ except KeyError:
+ load = _get_load_fn_for_dataclass(cls)
+
+ return [load(d) for d in list_of_dict]
+
+
+def _get_load_fn_for_dataclass(cls: type[T], v1=None) -> Callable[[JSONObject], T]:
+ if v1 is None:
+ v1 = getattr(get_meta(cls), 'v1', False)
+
+ if v1:
+ from .v1.loaders import load_func_for_dataclass as V1_load_func_for_dataclass
+ # noinspection PyTypeChecker
+ load = V1_load_func_for_dataclass(cls, {})
+ else:
+ from .loaders import load_func_for_dataclass
+ load = load_func_for_dataclass(cls)
+
+ # noinspection PyTypeChecker
+ return load
+
+
+def get_loader(class_or_instance=None, create=True,
+ base_cls: T = None,
+ v1: Optional[bool] = None) -> type[T]:
+ """
+ Get the loader for the class, using the following logic:
+
+ * Return the class if it's already a sub-class of :class:`LoadMixin`
+ * If `create` is enabled (which is the default), a new sub-class of
+ :class:`LoadMixin` for the class will be generated and cached on the
+ initial run.
+ * Otherwise, we will return the base loader, :class:`LoadMixin`, which
+ can potentially be shared by more than one dataclass.
+
+ """
+ if v1 is None:
+ v1 = getattr(get_meta(class_or_instance), 'v1', False)
+
+ if v1:
+ cls_to_loader = CLASS_TO_V1_LOADER
+ if base_cls is None:
+ from .v1.loaders import LoadMixin as V1_LoadMixin
+ base_cls = V1_LoadMixin
+ else:
+ cls_to_loader = CLASS_TO_LOADER
+ if base_cls is None:
+ from .loaders import LoadMixin
+ base_cls = LoadMixin
+
+ try:
+ return cls_to_loader[class_or_instance]
+
+ except KeyError:
+
+ if hasattr(class_or_instance, _LOAD_HOOKS):
+ return set_class_loader(
+ cls_to_loader, class_or_instance, class_or_instance)
+
+ elif create:
+ cls_loader = create_new_class(class_or_instance, (base_cls, ))
+ return set_class_loader(
+ cls_to_loader, class_or_instance, cls_loader)
+
+ return set_class_loader(
+ cls_to_loader, class_or_instance, base_cls)
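A sketch of the routing performed by the new module (``Item`` is
illustrative; ``v1`` remains experimental)::

    from dataclasses import dataclass

    from dataclass_wizard import LoadMeta
    from dataclass_wizard.loader_selection import fromdict

    @dataclass
    class Item:
        name: str

    # With `v1=True` in the Meta config, `_get_load_fn_for_dataclass`
    # delegates to `dataclass_wizard.v1.loaders.load_func_for_dataclass`;
    # otherwise it falls back to the legacy `loaders` module.
    LoadMeta(v1=True).bind_to(Item)
    print(fromdict(Item, {'name': 'widget'}))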
diff --git a/dataclass_wizard/loaders.py b/dataclass_wizard/loaders.py
index db4d689b..b6fd0596 100644
--- a/dataclass_wizard/loaders.py
+++ b/dataclass_wizard/loaders.py
@@ -1,7 +1,6 @@
-from collections import defaultdict, deque, namedtuple
import collections.abc as abc
-
-from dataclasses import is_dataclass
+from collections import defaultdict, deque, namedtuple
+from dataclasses import is_dataclass, MISSING
from datetime import datetime, time, date, timedelta
from decimal import Decimal
from enum import Enum
@@ -12,22 +11,20 @@
NamedTupleMeta,
SupportsFloat, AnyStr, Text, Callable, Optional
)
-
from uuid import UUID
from .abstractions import AbstractLoader, AbstractParser
from .bases import BaseLoadHook, AbstractMeta, META
from .class_helper import (
- create_new_class,
- dataclass_to_loader, set_class_loader,
dataclass_field_to_load_parser, json_field_to_dataclass_field,
CLASS_TO_LOAD_FUNC, dataclass_fields, get_meta, is_subclass_safe, dataclass_field_to_json_path,
dataclass_init_fields, dataclass_field_to_default,
)
-from .constants import _LOAD_HOOKS, SINGLE_ARG_ALIAS, IDENTITY, CATCH_ALL
+from .constants import SINGLE_ARG_ALIAS, IDENTITY, CATCH_ALL
from .decorators import _alias, _single_arg_alias, resolve_alias_func, _identity
-from .errors import (ParseError, MissingFields, UnknownJSONKey,
+from .errors import (ParseError, MissingFields, UnknownKeysError,
MissingData, RecursiveClassError)
+from .loader_selection import fromdict, get_loader
from .log import LOG
from .models import Extras, PatternedDT
from .parsers import *
@@ -36,9 +33,9 @@
PyRequired, PyNotRequired,
M, N, T, E, U, DD, LSQ, NT
)
-from .utils.function_builder import FunctionBuilder
# noinspection PyProtectedMember
from .utils.dataclass_compat import _set_new_attribute
+from .utils.function_builder import FunctionBuilder
from .utils.object_path import safe_get
from .utils.string_conv import to_snake_case
from .utils.type_conv import (
@@ -544,74 +541,6 @@ def setup_default_loader(cls=LoadMixin):
cls.register_load_hook(timedelta, cls.load_to_timedelta)
-def get_loader(class_or_instance=None, create=True,
- base_cls: T = LoadMixin) -> Type[T]:
- """
- Get the loader for the class, using the following logic:
-
- * Return the class if it's already a sub-class of :class:`LoadMixin`
- * If `create` is enabled (which is the default), a new sub-class of
- :class:`LoadMixin` for the class will be generated and cached on the
- initial run.
- * Otherwise, we will return the base loader, :class:`LoadMixin`, which
- can potentially be shared by more than one dataclass.
-
- """
- try:
- return dataclass_to_loader(class_or_instance)
-
- except KeyError:
-
- if hasattr(class_or_instance, _LOAD_HOOKS):
- return set_class_loader(class_or_instance, class_or_instance)
-
- elif create:
- cls_loader = create_new_class(class_or_instance, (base_cls, ))
- return set_class_loader(class_or_instance, cls_loader)
-
- return set_class_loader(class_or_instance, base_cls)
-
-
-def fromdict(cls: Type[T], d: JSONObject) -> T:
- """
- Converts a Python dictionary object to a dataclass instance.
-
- Iterates over each dataclass field recursively; lists, dicts, and nested
- dataclasses will likewise be initialized as expected.
-
- When directly invoking this function, an optional Meta configuration for
- the dataclass can be specified via ``LoadMeta``; by default, this will
- apply recursively to any nested dataclasses. Here's a sample usage of this
- below::
-
- >>> LoadMeta(key_transform='CAMEL').bind_to(MyClass)
- >>> fromdict(MyClass, {"myStr": "value"})
-
- """
- try:
- load = CLASS_TO_LOAD_FUNC[cls]
- except KeyError:
- load = load_func_for_dataclass(cls)
-
- return load(d)
-
-
-def fromlist(cls: Type[T], list_of_dict: List[JSONObject]) -> List[T]:
- """
- Converts a Python list object to a list of dataclass instances.
-
- Iterates over each dataclass field recursively; lists, dicts, and nested
- dataclasses will likewise be initialized as expected.
-
- """
- try:
- load = CLASS_TO_LOAD_FUNC[cls]
- except KeyError:
- load = load_func_for_dataclass(cls)
-
- return [load(d) for d in list_of_dict]
-
-
def load_func_for_dataclass(
cls: Type[T],
is_main_class: bool = True,
@@ -624,9 +553,8 @@ def load_func_for_dataclass(
# Tuple describing the fields of this dataclass.
cls_fields = dataclass_fields(cls)
-
# Get the loader for the class, or create a new one as needed.
- cls_loader = get_loader(cls, base_cls=loader_cls)
+ cls_loader = get_loader(cls, base_cls=loader_cls, v1=False)
# Get the meta config for the class, or the default config otherwise.
meta = get_meta(cls)
@@ -701,7 +629,7 @@ def load_func_for_dataclass(
else:
loop_over_o = True
- with fn_gen.function('cls_fromdict', ['o']):
+ with fn_gen.function('cls_fromdict', ['o'], MISSING, _locals):
_pre_from_dict_method = getattr(cls, '_pre_from_dict', None)
if _pre_from_dict_method is not None:
@@ -749,7 +677,7 @@ def load_func_for_dataclass(
# Note this logic only runs the initial time, i.e. the first time
# we encounter the key in a JSON object.
#
- # :raises UnknownJSONKey: If there is no resolved field name for the
+ # :raises UnknownKeysError: If there is no resolved field name for the
# JSON key, and`raise_on_unknown_json_key` is enabled in the Meta
# config for the class.
@@ -777,8 +705,8 @@ def load_func_for_dataclass(
# Raise an error here (if needed)
if meta.raise_on_unknown_json_key:
- _globals['UnknownJSONKey'] = UnknownJSONKey
- fn_gen.add_line("raise UnknownJSONKey(json_key, o, cls, cls_fields) from None")
+ _globals['UnknownKeysError'] = UnknownKeysError
+ fn_gen.add_line("raise UnknownKeysError(json_key, o, cls, cls_fields) from None")
# Exclude JSON keys that don't map to any fields.
with fn_gen.if_('field is not ExplicitNull'):
@@ -838,11 +766,9 @@ def load_func_for_dataclass(
fn_gen.add_line("return cls(**init_kwargs)")
with fn_gen.except_(TypeError, 'e'):
- fn_gen.add_line("raise MissingFields(e, o, cls, init_kwargs, cls_fields) from None")
+ fn_gen.add_line("raise MissingFields(e, o, cls, cls_fields, init_kwargs) from None")
- functions = fn_gen.create_functions(
- locals=_locals, globals=_globals
- )
+ functions = fn_gen.create_functions(_globals)
cls_fromdict = functions['cls_fromdict']
diff --git a/dataclass_wizard/models.py b/dataclass_wizard/models.py
index 643c7dfd..8bacf8c3 100644
--- a/dataclass_wizard/models.py
+++ b/dataclass_wizard/models.py
@@ -1,14 +1,14 @@
import json
from dataclasses import MISSING, Field
from datetime import date, datetime, time
-from typing import Generic, Mapping, NewType
+from typing import Generic, Mapping, NewType, Any, TypedDict
from .constants import PY310_OR_ABOVE
from .decorators import cached_property
+from .type_def import T, DT, PyNotRequired
# noinspection PyProtectedMember
from .utils.dataclass_compat import _create_fn
from .utils.object_path import split_object_path
-from .type_def import T, DT, PyTypedDict
from .utils.type_conv import as_datetime, as_time, as_date
@@ -26,10 +26,16 @@
# DT_OR_NONE = Optional[DT]
-class Extras(PyTypedDict):
- # noinspection PyUnresolvedReferences,PyTypedDict
- config: 'META'
- pattern: 'PatternedDT'
+class Extras(TypedDict):
+ """
+ "Extra" config that can be used in the load / dump process.
+ """
+ config: PyNotRequired['META']
+ cls: type
+ cls_name: str
+ fn_gen: 'FunctionBuilder'
+ locals: dict[str, Any]
+ pattern: PyNotRequired['PatternedDT']
# noinspection PyShadowingBuiltins
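A sketch of populating the expanded ``Extras`` config (a ``TypedDict``,
so a plain dict literal type-checks; the values here are illustrative)::

    from dataclass_wizard.models import Extras
    from dataclass_wizard.utils.function_builder import FunctionBuilder

    extras: Extras = {
        'cls': object,
        'cls_name': 'MyClass',
        'fn_gen': FunctionBuilder(),
        'locals': {},
        # 'config' and 'pattern' are NotRequired, so they may be omitted
    }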
diff --git a/dataclass_wizard/models.pyi b/dataclass_wizard/models.pyi
index 50a35159..3a2fd8a0 100644
--- a/dataclass_wizard/models.pyi
+++ b/dataclass_wizard/models.pyi
@@ -1,13 +1,14 @@
-from typing import TypedDict, overload, Any
import json
from dataclasses import MISSING, Field
from datetime import date, datetime, time
from typing import (Collection, Callable,
Generic, Mapping)
+from typing import TypedDict, overload, Any, NotRequired
from .bases import META
from .decorators import cached_property
from .type_def import T, DT, Encoder, FileEncoder
+from .utils.function_builder import FunctionBuilder
from .utils.object_path import PathPart, PathType
@@ -22,8 +23,12 @@ class Extras(TypedDict):
"""
"Extra" config that can be used in the load / dump process.
"""
- config: META
- pattern: PatternedDT
+ config: NotRequired[META]
+ cls: type
+ cls_name: str
+ fn_gen: FunctionBuilder
+ locals: dict[str, Any]
+ pattern: NotRequired[PatternedDT]
def json_key(*keys: str, all=False, dump=True):
diff --git a/dataclass_wizard/serial_json.py b/dataclass_wizard/serial_json.py
index e2e437d1..53e6e9ab 100644
--- a/dataclass_wizard/serial_json.py
+++ b/dataclass_wizard/serial_json.py
@@ -1,15 +1,18 @@
import json
import logging
+from dataclasses import is_dataclass, dataclass
from .abstractions import AbstractJSONWizard
from .bases_meta import BaseJSONWizardMeta, LoadMeta, DumpMeta
from .class_helper import call_meta_initializer_if_needed
from .dumpers import asdict
-from .loaders import fromdict, fromlist
+from .loader_selection import fromdict, fromlist
# noinspection PyProtectedMember
from .utils.dataclass_compat import _create_fn, _set_new_attribute
+from .type_def import dataclass_transform
+@dataclass_transform()
class JSONSerializable(AbstractJSONWizard):
__slots__ = ()
@@ -55,25 +58,45 @@ def list_to_json(cls,
return encoder(list_of_dict, **encoder_kwargs)
# noinspection PyShadowingBuiltins
- def __init_subclass__(cls, str=True, debug=False):
+ def __init_subclass__(cls, str=True, debug=False,
+ key_case=None,
+ _key_transform=None):
super().__init_subclass__()
+ load_meta_kwargs = {}
+
+ # if not is_dataclass(cls) and not cls.__module__.startswith('dataclass_wizard.'):
+ # # Apply the `@dataclass` decorator to the class
+ # # noinspection PyMethodFirstArgAssignment
+ # cls = dataclass(cls)
+
+ if key_case is not None:
+ load_meta_kwargs['v1'] = True
+ load_meta_kwargs['v1_key_case'] = key_case
+
+ if _key_transform is not None:
+ DumpMeta(key_transform=_key_transform).bind_to(cls)
+
if debug:
default_lvl = logging.DEBUG
logging.basicConfig(level=default_lvl)
# minimum logging level for logs by this library
min_level = default_lvl if isinstance(debug, bool) else debug
# set `debug_enabled` flag for the class's Meta
- LoadMeta(debug_enabled=min_level).bind_to(cls)
+ load_meta_kwargs['debug_enabled'] = min_level
# Calls the Meta initializer when inner :class:`Meta` is sub-classed.
call_meta_initializer_if_needed(cls)
+ if load_meta_kwargs:
+ LoadMeta(**load_meta_kwargs).bind_to(cls)
+
# Add a `__str__` method to the subclass, if needed
if str:
_set_new_attribute(cls, '__str__', _str_fn())
+ return cls
def _str_fn():
@@ -82,6 +105,13 @@ def _str_fn():
['return self.to_json(indent=2)'])
+def _str_pprint_fn():
+ from pprint import pformat
+ def __str__(self):
+ return pformat(self, width=70)
+ return __str__
+
+
# A handy alias in case it comes in useful to anyone :)
JSONWizard = JSONSerializable
@@ -89,9 +119,15 @@ def _str_fn():
class JSONPyWizard(JSONWizard):
"""Helper for JSONWizard that ensures dumping to JSON keeps keys as-is."""
- def __init_subclass__(cls, str=True, debug=False):
+ def __init_subclass__(cls, str=True, debug=False,
+ key_case=None,
+ _key_transform=None):
"""Bind child class to DumpMeta with no key transformation."""
- # set `key_transform_with_dump` for the class's Meta
- DumpMeta(key_transform='NONE').bind_to(cls)
+
# Call JSONSerializable.__init_subclass__()
- super().__init_subclass__(str, debug)
+ # set `key_transform_with_dump` for the class's Meta
+ cls = super().__init_subclass__(False, debug, key_case, 'NONE')
+ # Add a `__str__` method to the subclass, if needed
+ if str:
+ _set_new_attribute(cls, '__str__', _str_pprint_fn())
+ return cls
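A sketch of the new subclass arguments (again assuming the string form
``'CAMEL'`` is accepted for ``key_case``)::

    from dataclasses import dataclass

    from dataclass_wizard import JSONWizard

    @dataclass
    class User(JSONWizard, key_case='CAMEL'):
        first_name: str

    # `key_case=...` opts the class into v1 and sets `v1_key_case`,
    # so camelCase JSON keys map onto snake_case fields.
    user = User.from_dict({'firstName': 'Alice'})
    print(user)  # `str=True` adds a __str__ based on to_json(indent=2)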
diff --git a/dataclass_wizard/serial_json.pyi b/dataclass_wizard/serial_json.pyi
index 1ac428e0..d0d87e18 100644
--- a/dataclass_wizard/serial_json.pyi
+++ b/dataclass_wizard/serial_json.pyi
@@ -1,8 +1,10 @@
import json
-from typing import AnyStr, Collection, Callable, Protocol
+from typing import AnyStr, Collection, Callable, Protocol, dataclass_transform
from .abstractions import AbstractJSONWizard, W
from .bases_meta import BaseJSONWizardMeta
+from .enums import LetterCase
+from .v1.enums import KeyCase
from .type_def import Decoder, Encoder, JSONObject, ListOfJSONObject
@@ -71,10 +73,13 @@ class JSONPyWizard(JSONSerializable, SerializerHookMixin):
def __init_subclass__(cls,
str: bool = True,
- debug: bool | str | int = False):
+ debug: bool | str | int = False,
+ key_case: KeyCase | str | None = None,
+ _key_transform: LetterCase | str | None = None):
"""Bind child class to DumpMeta with no key transformation."""
+@dataclass_transform()
class JSONSerializable(AbstractJSONWizard, SerializerHookMixin):
"""
Mixin class to allow a `dataclass` sub-class to be easily converted
@@ -171,7 +176,9 @@ class JSONSerializable(AbstractJSONWizard, SerializerHookMixin):
# noinspection PyShadowingBuiltins
def __init_subclass__(cls,
str: bool = True,
- debug: bool | str | int = False):
+ debug: bool | str | int = False,
+ key_case: KeyCase | str | None = None,
+ _key_transform: LetterCase | str | None = None):
"""
Checks for optional settings and flags that may be passed in by the
sub-class, and calls the Meta initializer when :class:`Meta` is sub-classed.
diff --git a/dataclass_wizard/type_def.py b/dataclass_wizard/type_def.py
index bca5bc68..dbbb45aa 100644
--- a/dataclass_wizard/type_def.py
+++ b/dataclass_wizard/type_def.py
@@ -1,12 +1,13 @@
__all__ = [
'PyForwardRef',
- 'PyLiteral',
'PyProtocol',
'PyDeque',
'PyTypedDict',
'PyTypedDicts',
'PyRequired',
'PyNotRequired',
+ 'PyReadOnly',
+ 'PyLiteralString',
'FrozenKeys',
'DefFactory',
'NoneType',
@@ -49,15 +50,21 @@
Union, NamedTuple, Callable, AnyStr, TextIO, BinaryIO,
Deque as PyDeque,
ForwardRef as PyForwardRef,
- Literal as PyLiteral,
Protocol as PyProtocol,
TypedDict as PyTypedDict, Iterable, Collection,
)
from uuid import UUID
-from .constants import PY311_OR_ABOVE
+from .constants import PY310_OR_ABOVE, PY311_OR_ABOVE, PY313_OR_ABOVE
+# The class of the `None` singleton, cached for re-usability
+if PY310_OR_ABOVE:
+ # https://docs.python.org/3/library/types.html#types.NoneType
+ from types import NoneType
+else:
+ NoneType = type(None)
+
# Type check for numeric types - needed because `bool` is technically
# a Number.
NUMBERS = int, float
@@ -102,9 +109,6 @@
# Default factory type, assuming a no-args constructor
DefFactory = Callable[[], T]
-# The class of the `None` singleton, cached for re-usability
-NoneType = type(None)
-
# For Python 3.8+, we need to use both `TypedDict` implementations (from both
# the `typing` and `typing_extensions` modules). Because it's not clear which
# version users might choose to use. And they might choose to use either, due
@@ -147,15 +151,26 @@
# Python 3.11 introduced `Required` and `NotRequired` wrappers for
# `TypedDict` fields (PEP 655). Python 3.9+ users can import the
# wrappers from `typing_extensions`.
-if PY311_OR_ABOVE: # pragma: no cover
- from typing import Required as PyRequired
- from typing import NotRequired as PyNotRequired
- from typing import dataclass_transform
-else:
- from typing_extensions import Required as PyRequired
- from typing_extensions import NotRequired as PyNotRequired
- from typing_extensions import dataclass_transform
+if PY313_OR_ABOVE: # pragma: no cover
+ from typing import (Required as PyRequired,
+ NotRequired as PyNotRequired,
+ ReadOnly as PyReadOnly,
+ LiteralString as PyLiteralString,
+ dataclass_transform)
+
+elif PY311_OR_ABOVE: # pragma: no cover
+ from typing import (Required as PyRequired,
+ NotRequired as PyNotRequired,
+ LiteralString as PyLiteralString,
+ dataclass_transform)
+ from typing_extensions import ReadOnly as PyReadOnly
+else:
+ from typing_extensions import (Required as PyRequired,
+ NotRequired as PyNotRequired,
+ ReadOnly as PyReadOnly,
+ LiteralString as PyLiteralString,
+ dataclass_transform)
# Forward references can be either strings or explicit `ForwardRef` objects.
# noinspection SpellCheckingInspection
diff --git a/dataclass_wizard/utils/function_builder.py b/dataclass_wizard/utils/function_builder.py
index 4fb5cd06..4de55a98 100644
--- a/dataclass_wizard/utils/function_builder.py
+++ b/dataclass_wizard/utils/function_builder.py
@@ -7,6 +7,7 @@
class FunctionBuilder:
__slots__ = (
'current_function',
+ 'prev_function',
'functions',
'globals',
'indent_level',
@@ -19,6 +20,16 @@ def __init__(self):
self.globals = {}
self.namespace = {}
+ def __ior__(self, other):
+ """
+ Allows `|=` operation for :class:`FunctionBuilder` objects,
+        e.g.::
+
+            my_fn_builder |= other_fn_builder
+
+ """
+ self.functions |= other.functions
+ return self
+
def __enter__(self):
self.indent_level += 1
@@ -28,10 +39,24 @@ def __exit__(self, exc_type, exc_val, exc_tb):
if not indent_lvl:
self.finalize_function()
- def function(self, name: str, args: list, return_type=MISSING) -> 'FunctionBuilder':
+ # noinspection PyAttributeOutsideInit
+ def function(self, name: str, args: list, return_type=MISSING,
+ locals=None) -> 'FunctionBuilder':
"""Start a new function definition with optional return type."""
- # noinspection PyAttributeOutsideInit
- self.current_function = {"name": name, "args": args, "body": [], "return_type": return_type}
+ curr_fn = getattr(self, 'current_function', None)
+ if curr_fn is not None:
+ curr_fn['indent_level'] = self.indent_level
+ self.prev_function = curr_fn
+
+ self.current_function = {
+ "name": name,
+ "args": args,
+ "body": [],
+ "return_type": return_type,
+ "locals": locals if locals is not None else {},
+ }
+
+ self.indent_level = 0
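+        # Note: calling `function()` while another definition is in
+        # progress (e.g. for a nested helper) stashes the in-progress
+        # function and its indent level above; `finalize_function`
+        # restores them once the new function is finalized.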
return self
def _with_new_block(self,
@@ -129,7 +154,8 @@ def try_(self) -> 'FunctionBuilder':
def except_(self,
cls: type[Exception],
- var_name: 'str | None' = None):
+ var_name: 'str | None' = None,
+ *custom_classes: type[Exception]):
"""Equivalent to the `except` block in Python.
Sample Usage:
@@ -147,7 +173,18 @@ def except_(self,
statement = f'{cls_name} as {var_name}' if var_name else cls_name
if not is_builtin_class(cls):
- self.globals[cls_name] = cls
+ if cls_name not in self.globals:
+ # TODO
+ # LOG.debug('Ensuring class in globals, cls=%s', cls_name)
+ self.globals[cls_name] = cls
+
+ if custom_classes:
+ for cls in custom_classes:
+ if not is_builtin_class(cls):
+ cls_name = cls.__name__
+ if cls_name not in self.globals:
+ # LOG.debug('Ensuring class in globals, cls=%s', cls_name)
+ self.globals[cls_name] = cls
return self._with_new_block('except', statement)
@@ -175,19 +212,27 @@ def decrease_indent(self): # pragma: no cover
def finalize_function(self):
"""Finalize the function code and add to the list of functions."""
# Add the function body and don't re-add the function definition
- func_code = '\n'.join(self.current_function["body"])
- self.functions[self.current_function["name"]] = ({"args": self.current_function["args"],
- "return_type": self.current_function["return_type"],
- "code": func_code})
- self.current_function = None # Reset current function
+ curr_fn = self.current_function
+ func_code = '\n'.join(curr_fn["body"])
+ self.functions[curr_fn["name"]] = {
+ "args": curr_fn["args"],
+ "return_type": curr_fn["return_type"],
+ "locals": curr_fn["locals"],
+ "code": func_code
+ }
- def create_functions(self, *, globals=None, locals=None):
+ if (prev_fn := getattr(self, 'prev_function', None)) is not None:
+ self.indent_level = prev_fn.pop('indent_level')
+ self.current_function = prev_fn
+ self.prev_function = None
+ else:
+        self.current_function = None  # Reset current function
+
+ def create_functions(self, _globals=None):
"""Create functions by compiling the code."""
# Note that we may mutate locals. Callers beware!
# The only callers are internal to this module, so no
# worries about external callers.
- if locals is None: # pragma: no cover
- locals = {}
# Compute the text of the entire function.
# txt = f' def {name}({args}){return_annotation}:\n{body}'
@@ -198,42 +243,59 @@ def create_functions(self, *, globals=None, locals=None):
# our purposes. So we put the things we need into locals and introduce a
# scope to allow the function we're creating to close over them.
- name_to_func_code = {}
+ fn_name_locals_and_code = []
for name, func in self.functions.items():
args = ','.join(func['args'])
body = func['code']
return_type = func['return_type']
+ locals = func['locals']
return_annotation = ''
if return_type is not MISSING:
locals[f'__dataclass_{name}_return_type__'] = return_type
return_annotation = f'->__dataclass_{name}_return_type__'
- name_to_func_code[name] = f'def {name}({args}){return_annotation}:\n{body}'
-
- local_vars = ', '.join(locals.keys())
+ fn_name_locals_and_code.append(
+ (name,
+ locals,
+ f'def {name}({args}){return_annotation}:\n{body}')
+ )
txt = '\n'.join([
- f"def __create_{name}_fn__({local_vars}):\n"
+ f"def __create_{name}_fn__({', '.join(locals.keys())}):\n"
f" {code}\n"
f" return {name}"
- for name, code in name_to_func_code.items()
+ for name, locals, code in fn_name_locals_and_code
])
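+    # For example, a single compiled function `f(a)` with one local `x`
+    # yields (roughly) the following `txt`, so that `f` closes over `x`:
+    #
+    #     def __create_f_fn__(x):
+    #       def f(a):
+    #           ...
+    #       return f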
# Print the generated code for debugging
# logging.debug(f"Generated function code:\n{all_func_code}")
- LOG.debug(f"Generated function code:\n{txt}")
+ LOG.debug("Generated function code:\n%s", txt)
ns = {}
- exec(txt, globals | self.globals, ns)
- final_ns = self.namespace = {
- name: ns[f'__create_{name}_fn__'](**locals)
- for name in name_to_func_code
- }
+ # TODO
+ _globals = self.globals if _globals is None else _globals | self.globals
+
+ LOG.debug("Globals before function compilation: %s", _globals)
+
+ exec(txt, _globals, ns)
+
+ # TODO do we need self.namespace?
+ final_ns = self.namespace = {}
+
+ # TODO: add function to dependent function `locals` rather than to `globals`
+
+ for name, locals, _ in fn_name_locals_and_code:
+ _globals[name] = final_ns[name] = ns[f'__create_{name}_fn__'](**locals)
+
+ # final_ns = self.namespace = {
+ # name: ns[f'__create_{name}_fn__'](**locals)
+ # for name, locals, _ in fn_name_locals_and_code
+ # }
# Print namespace for debugging
- LOG.debug(f"Namespace after function compilation: {self.namespace}")
+ LOG.debug("Namespace after function compilation: %s", final_ns)
return final_ns
diff --git a/dataclass_wizard/utils/object_path.py b/dataclass_wizard/utils/object_path.py
index 9be8c8d0..095437a4 100644
--- a/dataclass_wizard/utils/object_path.py
+++ b/dataclass_wizard/utils/object_path.py
@@ -3,7 +3,7 @@
from ..errors import ParseError
-def safe_get(data, path, default=MISSING):
+def safe_get(data, path, default=MISSING, raise_=True):
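+    # With `raise_=False` and no `default`, lookup errors return `MISSING`
+    # rather than raising -- a quick sketch:
+    #     safe_get({'a': {'b': 1}}, ('a', 'x'), raise_=False)  # -> MISSING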
current_data = data
p = path # to avoid "unbound local variable" warnings
@@ -20,7 +20,7 @@ def safe_get(data, path, default=MISSING):
# AttributeError -
# raised when `data` is an invalid type, such as a `None`
except (IndexError, KeyError, AttributeError) as e:
- if default is MISSING:
+ if raise_ and default is MISSING:
raise _format_err(e, current_data, path, p) from None
return default
@@ -28,12 +28,12 @@ def safe_get(data, path, default=MISSING):
# raised when `data` is a `list`, but we try to use it like a `dict`
except TypeError:
e = TypeError('Invalid path')
- raise _format_err(e, current_data, path, p) from None
+ raise _format_err(e, current_data, path, p, True) from None
-def _format_err(e, current_data, path, current_path):
+def _format_err(e, current_data, path, current_path, invalid_path=False):
return ParseError(
- e, current_data, None,
+ e, current_data, dict if invalid_path else None,
path=' => '.join(repr(p) for p in path),
current_path=repr(current_path),
)
diff --git a/dataclass_wizard/utils/string_conv.py b/dataclass_wizard/utils/string_conv.py
index d775e708..432e2ca1 100644
--- a/dataclass_wizard/utils/string_conv.py
+++ b/dataclass_wizard/utils/string_conv.py
@@ -1,4 +1,5 @@
__all__ = ['normalize',
+ 'to_json_key',
'to_camel_case',
'to_pascal_case',
'to_lisp_case',
@@ -8,6 +9,8 @@
import re
from typing import Iterable, Dict, List
+from ..type_def import JSONObject
+
def normalize(string: str) -> str:
"""
@@ -17,6 +20,54 @@ def normalize(string: str) -> str:
return string.replace('-', '').replace('_', '').upper()
+def to_json_key(o: JSONObject,
+ field: str,
+ f2k: 'dict[str, str]') -> 'str | None':
+ """
+ Maps a dataclass field name to its corresponding key in a JSON object.
+
+ This function checks multiple naming conventions (e.g., camelCase,
+ PascalCase, kebab-case, etc.) to find the matching key in the JSON
+ object `o`. It also caches the mapping for future use.
+
+ Args:
+ o (dict[str, Any]): The JSON object to search for the key.
+ field (str): The dataclass field name to map.
+ f2k (dict[str, str]): A dictionary to cache field-to-key mappings.
+
+ Returns:
+ str: The matching JSON key for the given field.
+ None: If no matching key is found in `o`.
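+
+    Example (a sketch of the lookup and caching behavior)::
+
+        >>> f2k = {}
+        >>> to_json_key({'myField': 1}, 'my_field', f2k)
+        'myField'
+        >>> f2k
+        {'my_field': 'myField'}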
+ """
+ # Short path: key matching field name exists in `o`
+ if field in o:
+ key = field
+ # `camelCase`
+ elif (key := to_camel_case(field)) in o:
+ ...
+ # `PascalCase`: same as `camelCase` but first letter is capitalized
+ elif (key := key[0].upper() + key[1:]) in o:
+ ...
+ # `kebab-case`
+ elif (key := to_lisp_case(field)) in o:
+ ...
+ # `Upper-Kebab`: same as `kebab-case`, each word is title-cased
+ elif (key := key.title()) in o:
+ ...
+ # `Upper_Snake`
+ elif (key := key.replace('-', '_')) in o:
+ ...
+ # `snake_case`
+ elif (key := key.lower()) in o:
+ ...
+ else:
+ key = None
+
+ # Cache the result
+ f2k[field] = key
+ return key
+
+
def to_camel_case(string: str) -> str:
"""
Convert a string to Camel Case.
diff --git a/dataclass_wizard/utils/typing_compat.py b/dataclass_wizard/utils/typing_compat.py
index c83153b1..7a42e11a 100644
--- a/dataclass_wizard/utils/typing_compat.py
+++ b/dataclass_wizard/utils/typing_compat.py
@@ -5,6 +5,8 @@
__all__ = [
'is_literal',
'get_origin',
+ 'get_origin_v2',
+ 'is_typed_dict_type_qualifier',
'get_args',
'get_keys_for_typed_dict',
'is_typed_dict',
@@ -16,16 +18,24 @@
import functools
import sys
-import types
import typing
# noinspection PyUnresolvedReferences,PyProtectedMember
-from typing import _AnnotatedAlias
+from typing import Literal, Union, _AnnotatedAlias
from .string_conv import repl_or_with_union
from ..constants import PY310_OR_ABOVE, PY313_OR_ABOVE
-from ..type_def import FREF, PyLiteral, PyTypedDicts, PyForwardRef
+from ..type_def import (FREF,
+ PyRequired,
+ PyNotRequired,
+ PyReadOnly,
+ PyTypedDicts,
+ PyForwardRef)
+_TYPED_DICT_TYPE_QUALIFIERS = frozenset(
+ {PyRequired, PyNotRequired, PyReadOnly}
+)
+
# TODO maybe move this to `type_def` if it makes sense
TypedDictTypes = []
@@ -50,33 +60,55 @@ def _is_annotated(cls):
return isinstance(cls, _AnnotatedAlias)
+# TODO Remove
def is_literal(cls) -> bool:
try:
- return cls.__origin__ is PyLiteral
+ return cls.__origin__ is Literal
except AttributeError:
return False
+# Ref:
+# https://typing.readthedocs.io/en/latest/spec/typeddict.html#required-and-notrequired
+# https://typing.readthedocs.io/en/latest/spec/glossary.html#term-type-qualifier
+def is_typed_dict_type_qualifier(cls) -> bool:
+ return cls in _TYPED_DICT_TYPE_QUALIFIERS
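+
+
+# Note: callers pass in the *origin* type (e.g. ``PyRequired`` for
+# ``Required[int]``), so a quick sketch of the expected behavior is:
+#     is_typed_dict_type_qualifier(PyRequired)  # -> True
+#     is_typed_dict_type_qualifier(int)         # -> False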
+
# Ref:
# https://github.com/python/typing/blob/master/typing_extensions/src_py3/typing_extensions.py#L2111
if PY310_OR_ABOVE: # pragma: no cover
+ from types import GenericAlias, UnionType
_get_args = typing.get_args
_BASE_GENERIC_TYPES = (
typing._GenericAlias,
typing._SpecialForm,
- types.GenericAlias,
- types.UnionType,
+ GenericAlias,
+ UnionType,
)
+ _UNION_TYPES = frozenset({
+ UnionType,
+ Union,
+ })
+
_TYPING_LOCALS = None
def _process_forward_annotation(base_type):
return PyForwardRef(base_type, is_argument=False)
+ def is_union(cls) -> bool:
+ return cls in _UNION_TYPES
+
+ def get_origin_v2(cls):
+ if type(cls) is UnionType:
+ return UnionType
+
+ return getattr(cls, '__origin__', cls)
+
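+    # A quick sketch of these two helpers under Python 3.10+:
+    #     get_origin_v2(int | str)            # -> types.UnionType
+    #     get_origin_v2(list[int])            # -> list
+    #     is_union(get_origin_v2(int | str))  # -> True
+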
def _get_origin(cls, raise_=False):
- if isinstance(cls, types.UnionType):
- return typing.Union
+ if isinstance(cls, UnionType):
+ return Union
try:
return cls.__origin__
@@ -96,12 +128,18 @@ def _get_origin(cls, raise_=False):
# PEP 585 is introduced in Python 3.9
# PEP 604 (Allows writing union types as `X | Y`) is introduced
# in Python 3.10
- _TYPING_LOCALS = {'Union': typing.Union}
+ _TYPING_LOCALS = {'Union': Union}
def _process_forward_annotation(base_type):
return PyForwardRef(
repl_or_with_union(base_type), is_argument=False)
+ def is_union(cls) -> bool:
+ return cls is Union
+
+ def get_origin_v2(cls):
+ return getattr(cls, '__origin__', cls)
+
def _get_origin(cls, raise_=False):
try:
return cls.__origin__
@@ -111,7 +149,7 @@ def _get_origin(cls, raise_=False):
return cls
-def is_typed_dict(cls: typing.Type) -> bool:
+def is_typed_dict(cls: type) -> bool:
"""
Checks if `cls` is a sub-class of ``TypedDict``
"""
@@ -129,52 +167,44 @@ def is_generic(cls):
return isinstance(cls, _BASE_GENERIC_TYPES)
-def get_args(cls):
- """
- Get type arguments with all substitutions performed.
-
- For unions, basic simplifications used by Union constructor are performed.
- Examples::
- get_args(Dict[str, int]) == (str, int)
- get_args(int) == ()
- get_args(Union[int, Union[T, int], str][int]) == (int, str)
- get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int])
- get_args(Callable[[], T][int]) == ([], int)
- """
- return _get_args(cls)
+get_args = _get_args
+get_args.__doc__ = """
+Get type arguments with all substitutions performed.
+For unions, basic simplifications used by Union constructor are performed.
+Examples::
+ get_args(Dict[str, int]) == (str, int)
+ get_args(int) == ()
+ get_args(Union[int, Union[T, int], str][int]) == (int, str)
+ get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int])
+ get_args(Callable[[], T][int]) == ([], int)\
+"""
# TODO refactor to use `typing.get_origin` when time permits.
-def get_origin(cls, raise_=False):
- """
- Get the un-subscripted value of a type. If we're unable to retrieve this
- value, return type `cls` if `raise_` is false.
-
- This supports generic types, Callable, Tuple, Union, Literal, Final and
- ClassVar. Return None for unsupported types.
-
- Examples::
-
- get_origin(Literal[42]) is Literal
- get_origin(int) is int
- get_origin(ClassVar[int]) is ClassVar
- get_origin(Generic) is Generic
- get_origin(Generic[T]) is Generic
- get_origin(Union[T, int]) is Union
- get_origin(List[Tuple[T, T]][int]) == list
-
- :raise AttributeError: When the `raise_` flag is enabled, and we are
- unable to retrieve the un-subscripted value.
-
- """
- return _get_origin(cls, raise_=raise_)
-
+get_origin = _get_origin
+get_origin.__doc__ = """
+Get the un-subscripted value of a type. If we're unable to retrieve this
+value, return type `cls` if `raise_` is false.
+
+This supports generic types, Callable, Tuple, Union, Literal, Final and
+ClassVar. Return None for unsupported types.
+
+Examples::
+
+ get_origin(Literal[42]) is Literal
+ get_origin(int) is int
+ get_origin(ClassVar[int]) is ClassVar
+ get_origin(Generic) is Generic
+ get_origin(Generic[T]) is Generic
+ get_origin(Union[T, int]) is Union
+ get_origin(List[Tuple[T, T]][int]) == list
+
+:raise AttributeError: When the `raise_` flag is enabled, and we are
+ unable to retrieve the un-subscripted value.\
+"""
-def is_annotated(cls):
- """
- Detects a :class:`typing.Annotated` class.
- """
- return _is_annotated(cls)
+is_annotated = _is_annotated
+is_annotated.__doc__ = """Detects a :class:`typing.Annotated` class."""
if PY313_OR_ABOVE:
@@ -186,7 +216,7 @@ def is_annotated(cls):
def eval_forward_ref(base_type: FREF,
- cls: typing.Type):
+ cls: type):
"""
Evaluate a forward reference using the class globals, and return the
underlying type reference.
@@ -201,14 +231,17 @@ def eval_forward_ref(base_type: FREF,
return _eval_type(base_type, base_globals, _TYPING_LOCALS)
-def eval_forward_ref_if_needed(base_type: typing.Union[typing.Type, FREF],
- base_cls: typing.Type):
+_ForwardRefTypes = frozenset(FREF.__constraints__)
+
+
+def eval_forward_ref_if_needed(base_type: Union[type, FREF],
+ base_cls: type):
"""
If needed, evaluate a forward reference using the class globals, and
return the underlying type reference.
"""
- if isinstance(base_type, FREF.__constraints__):
+ if type(base_type) in _ForwardRefTypes:
# Evaluate the forward reference here.
base_type = eval_forward_ref(base_type, base_cls)
diff --git a/dataclass_wizard/v1/__init__.py b/dataclass_wizard/v1/__init__.py
new file mode 100644
index 00000000..50467633
--- /dev/null
+++ b/dataclass_wizard/v1/__init__.py
@@ -0,0 +1,4 @@
+__all__ = ['Alias',
+ 'AliasPath']
+
+from .models import Alias, AliasPath
diff --git a/dataclass_wizard/v1/enums.py b/dataclass_wizard/v1/enums.py
new file mode 100644
index 00000000..fd9e406a
--- /dev/null
+++ b/dataclass_wizard/v1/enums.py
@@ -0,0 +1,44 @@
+from enum import Enum
+
+from ..utils.string_conv import to_camel_case, to_pascal_case, to_lisp_case, to_snake_case
+from ..utils.wrappers import FuncWrapper
+
+
+class KeyAction(Enum):
+ """
+ Defines the action to take when an unknown key is encountered during deserialization.
+ """
+ IGNORE = 0 # Silently skip unknown keys.
+ RAISE = 1 # Raise an exception for the first unknown key.
+ WARN = 2 # Log a warning for each unknown key.
+ # INCLUDE = 3
+
+
+class KeyCase(Enum):
+ """
+    By default (i.e. when no key case is specified), no conversion
+    is performed on strings.
+    ex: `MY_FIELD_NAME` -> `MY_FIELD_NAME`
+
+ """
+ # Converts strings (generally in snake case) to camel case.
+ # ex: `my_field_name` -> `myFieldName`
+ CAMEL = C = FuncWrapper(to_camel_case)
+
+ # Converts strings to "upper" camel case.
+ # ex: `my_field_name` -> `MyFieldName`
+ PASCAL = P = FuncWrapper(to_pascal_case)
+ # Converts strings (generally in camel or snake case) to lisp case.
+ # ex: `myFieldName` -> `my-field-name`
+ KEBAB = K = FuncWrapper(to_lisp_case)
+ # Converts strings (generally in camel case) to snake case.
+ # ex: `myFieldName` -> `my_field_name`
+ SNAKE = S = FuncWrapper(to_snake_case)
+ # Auto-maps JSON keys to dataclass fields.
+ #
+ # All valid key casing transforms are attempted at runtime,
+ # and the result is cached for subsequent lookups.
+ # ex: `My-Field-Name` -> `my_field_name`
+ AUTO = A = None
+
+ def __call__(self, *args):
+ return self.value.f(*args)
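+
+
+# A quick usage sketch -- note `KeyCase.AUTO` holds no transform
+# function (its value is `None`), and is instead handled specially
+# by the `v1` loader:
+#     KeyCase.CAMEL('my_field_name')  # -> 'myFieldName'
+#     KeyCase.KEBAB('my_field_name')  # -> 'my-field-name'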
diff --git a/dataclass_wizard/v1/loaders.py b/dataclass_wizard/v1/loaders.py
new file mode 100644
index 00000000..167bef67
--- /dev/null
+++ b/dataclass_wizard/v1/loaders.py
@@ -0,0 +1,1206 @@
+# TODO cleanup imports
+
+import collections.abc as abc
+from base64 import decodebytes
+from collections import defaultdict, deque
+from dataclasses import is_dataclass, MISSING, Field
+from datetime import datetime, time, date, timedelta
+from decimal import Decimal
+from enum import Enum
+from pathlib import Path
+# noinspection PyUnresolvedReferences,PyProtectedMember
+from typing import (
+ Any, Type, Dict, List, Tuple, Iterable, Sequence, Union,
+ NamedTupleMeta,
+ SupportsFloat, AnyStr, Text, Callable, Optional, cast, Literal, Annotated
+)
+from uuid import UUID
+
+from .models import TypeInfo
+from ..abstractions import AbstractLoaderGenerator
+from ..bases import BaseLoadHook, AbstractMeta
+from ..class_helper import (
+ v1_dataclass_field_to_alias, json_field_to_dataclass_field,
+ CLASS_TO_LOAD_FUNC, dataclass_fields, get_meta, is_subclass_safe, DATACLASS_FIELD_TO_ALIAS_PATH_FOR_LOAD,
+ dataclass_init_fields, dataclass_field_to_default, create_meta, dataclass_init_field_names,
+)
+from ..constants import CATCH_ALL, TAG
+from ..decorators import _identity
+from .enums import KeyAction, KeyCase
+from ..errors import (ParseError, MissingFields, UnknownKeysError,
+ MissingData, JSONWizardError)
+from ..loader_selection import get_loader, fromdict
+from ..log import LOG
+from ..models import Extras
+from ..type_def import (
+ DefFactory, NoneType, JSONObject,
+ PyLiteralString,
+ T
+)
+# noinspection PyProtectedMember
+from ..utils.dataclass_compat import _set_new_attribute
+from ..utils.function_builder import FunctionBuilder
+from ..utils.object_path import safe_get
+from ..utils.string_conv import to_json_key
+from ..utils.type_conv import (
+ as_bool, as_datetime, as_date, as_time, as_int, as_timedelta,
+)
+from ..utils.typing_compat import (
+ is_typed_dict, get_args, is_annotated,
+ eval_forward_ref_if_needed, get_origin_v2, is_union,
+ get_keys_for_typed_dict, is_typed_dict_type_qualifier,
+)
+
+
+# Atomic immutable types which don't require any recursive handling and for which deepcopy
+# returns the same object. We can provide a fast-path for these types in asdict and astuple.
+_SIMPLE_TYPES = (
+ # Common JSON Serializable types
+ NoneType,
+ bool,
+ int,
+ float,
+ str,
+ # Other common types
+ complex,
+ bytes,
+ # TODO support
+ # Other types that are also unaffected by deepcopy
+ # types.EllipsisType,
+ # types.NotImplementedType,
+ # types.CodeType,
+ # types.BuiltinFunctionType,
+ # types.FunctionType,
+ # type,
+ # range,
+ # property,
+)
+
+
+class LoadMixin(AbstractLoaderGenerator, BaseLoadHook):
+ """
+ This Mixin class derives its name from the eponymous `json.loads`
+ function. Essentially it contains helper methods to convert JSON strings
+ (or a Python dictionary object) to a `dataclass` which can often contain
+ complex types such as lists, dicts, or even other dataclasses nested
+ within it.
+
+    Refer to the :class:`AbstractLoaderGenerator` class for documentation on any of the
+ implemented methods.
+
+ """
+ __slots__ = ()
+
+ def __init_subclass__(cls, **kwargs):
+ super().__init_subclass__()
+ setup_default_loader(cls)
+
+ transform_json_field = None
+
+ @staticmethod
+ @_identity
+ def default_load_to(tp: TypeInfo, extras: Extras) -> str:
+ # identity: o
+ return tp.v()
+
+ @staticmethod
+ def load_after_type_check(tp: TypeInfo, extras: Extras) -> str:
+ ...
+        # return f'{tp.v()} if isinstance({tp.v()}, {tp.t()})'
+
+ # if isinstance(o, base_type):
+ # return o
+ #
+ # e = ValueError(f'data type is not a {base_type!s}')
+ # raise ParseError(e, o, base_type)
+
+ @staticmethod
+ def load_to_str(tp: TypeInfo, extras: Extras) -> str:
+ # TODO skip None check if in Optional
+ # return f'{tp.name}({tp.v()})'
+ return f"'' if {(v := tp.v())} is None else {tp.name}({v})"
+
+ @staticmethod
+ def load_to_int(tp: TypeInfo, extras: Extras) -> str:
+ # TODO
+ extras['locals'].setdefault('as_int', as_int)
+
+ # TODO
+ return f"as_int({tp.v()}, {tp.name})"
+
+ @staticmethod
+ def load_to_float(tp: TypeInfo, extras: Extras) -> str:
+ # alias: base_type(o)
+ return f'{tp.name}({tp.v()})'
+
+ @staticmethod
+ def load_to_bool(tp: TypeInfo, extras: Extras) -> str:
+ extras['locals'].setdefault('as_bool', as_bool)
+ return f"as_bool({tp.v()})"
+ # Uncomment for efficiency!
+ # extras['locals']['_T'] = _TRUTHY_VALUES
+ # return f'{tp.v()} if (t := type({tp.v()})) is bool else ({tp.v()}.lower() in _T if t is str else {tp.v()} == 1)'
+
+ @staticmethod
+ def load_to_bytes(tp: TypeInfo, extras: Extras) -> str:
+ extras['locals'].setdefault('decodebytes', decodebytes)
+ return f'decodebytes({tp.v()}.encode())'
+
+ @staticmethod
+ def load_to_bytearray(tp: TypeInfo, extras: Extras) -> str:
+ extras['locals'].setdefault('decodebytes', decodebytes)
+ return f'{tp.name}(decodebytes({tp.v()}.encode()))'
+
+ @staticmethod
+ def load_to_none(tp: TypeInfo, extras: Extras) -> str:
+ return 'None'
+
+ @staticmethod
+ def load_to_enum(tp: TypeInfo, extras: Extras) -> str:
+ # alias: base_type(o)
+ return tp.v()
+
+ # load_to_uuid = load_to_enum
+ @staticmethod
+ def load_to_uuid(tp: TypeInfo, extras: Extras):
+ # alias: base_type(o)
+ return tp.wrap_builtin(tp.v(), extras)
+
+ @classmethod
+ def load_to_iterable(cls, tp: TypeInfo, extras: Extras):
+ v, v_next, i_next = tp.v_and_next()
+ gorg = tp.origin
+
+ try:
+ elem_type = tp.args[0]
+        except (IndexError, TypeError):
+ elem_type = Any
+
+ string = cls.get_string_for_annotation(
+ tp.replace(origin=elem_type, i=i_next), extras)
+
+ # TODO
+ if issubclass(gorg, (set, frozenset)):
+ start_char = '{'
+ end_char = '}'
+ else:
+ start_char = '['
+ end_char = ']'
+
+ result = f'{start_char}{string} for {v_next} in {v}{end_char}'
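+        # e.g. `list[int]` produces (roughly) the code string:
+        #     [as_int(v2, int) for v2 in v1]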
+
+ return tp.wrap(result, extras)
+
+ @classmethod
+ def load_to_tuple(cls, tp: TypeInfo, extras: Extras):
+ args = tp.args
+
+ # Determine the code string for the annotation
+
+ # Check if the `Tuple` appears in the variadic form
+ # i.e. Tuple[str, ...]
+ if args:
+ is_variadic = args[-1] is ...
+ else:
+ args = (Any, ...)
+ is_variadic = True
+
+ if is_variadic:
+ # Parser that handles the variadic form of :class:`Tuple`'s,
+ # i.e. ``Tuple[str, ...]``
+ #
+ # Per `PEP 484`_, only **one** required type is allowed before the
+ # ``Ellipsis``. That is, ``Tuple[int, ...]`` is valid whereas
+ # ``Tuple[int, str, ...]`` would be invalid. `See here`_ for more info.
+ #
+ # .. _PEP 484: https://www.python.org/dev/peps/pep-0484/
+ # .. _See here: https://github.com/python/typing/issues/180
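+            #
+            # e.g. `tuple[int, ...]` generates (roughly):
+            #     tuple([as_int(v2, int) for v2 in v1])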
+ v, v_next, i_next = tp.v_and_next()
+
+ string = cls.get_string_for_annotation(
+ tp.replace(origin=args[0], i=i_next), extras)
+
+ # A one-element tuple containing the parser for the first type
+ # argument.
+ # Given `Tuple[T, ...]`, we only need a parser for `T`
+ # self.first_elem_parser = get_parser(elem_types[0], cls, extras),
+ # Total count should be `Infinity` here, since the variadic form
+ # accepts any number of possible arguments.
+ # self.total_count: N = float('inf')
+ # self.required_count = 0
+
+ result = f'[{string} for {v_next} in {v}]'
+ # Wrap because we need to create a tuple from list comprehension
+ force_wrap = True
+
+ else:
+ string = ', '.join([
+ cls.get_string_for_annotation(
+ tp.replace(origin=arg, index=k),
+ extras)
+ for k, arg in enumerate(args)])
+
+ result = f'({string}, )'
+ force_wrap = False
+
+ return tp.wrap(result, extras, force=force_wrap)
+
+ @classmethod
+ def load_to_named_tuple(cls, tp: TypeInfo, extras: Extras):
+
+ fn_gen = FunctionBuilder()
+
+ extras_cp: Extras = extras.copy()
+ extras_cp['locals'] = _locals = {
+ 'msg': "`dict` input is not supported for NamedTuple, use a dataclass instead."
+ }
+
+ fn_name = f'_load_{extras["cls_name"]}_nt_typed_{tp.name}'
+
+ field_names = []
+ result_list = []
+ num_fields = 0
+ # TODO set __annotations__?
+ for x, y in tp.origin.__annotations__.items():
+ result_list.append(cls.get_string_for_annotation(
+ tp.replace(origin=y, index=num_fields), extras_cp))
+ field_names.append(x)
+ num_fields += 1
+
+ with fn_gen.function(fn_name, ['v1'], None, _locals):
+ fn_gen.add_line('fields = []')
+ with fn_gen.try_():
+ for i, string in enumerate(result_list):
+ fn_gen.add_line(f'fields.append({string})')
+ with fn_gen.except_(IndexError):
+ fn_gen.add_line('pass')
+ with fn_gen.except_(KeyError):
+ # Input object is a `dict`
+ # TODO should we support dict for namedtuple?
+ fn_gen.add_line('raise TypeError(msg) from None')
+ fn_gen.add_line(f'return {tp.wrap("*fields", extras_cp, prefix="nt_")}')
+
+ extras['fn_gen'] |= fn_gen
+
+ return f'{fn_name}({tp.v()})'
+
+ @classmethod
+ def load_to_named_tuple_untyped(cls, tp: TypeInfo, extras: Extras):
+ # Check if input object is `dict` or `list`.
+ #
+ # Assuming `Point` is a `namedtuple`, this performs
+ # the equivalent logic as:
+ # Point(**x) if isinstance(x, dict) else Point(*x)
+ v = tp.v()
+ star, dbl_star = tp.multi_wrap(extras, 'nt_', f'*{v}', f'**{v}')
+ return f'{dbl_star} if isinstance({v}, dict) else {star}'
+
+ @classmethod
+ def _build_dict_comp(cls, tp, v, i_next, k_next, v_next, kt, vt, extras):
+ tp_k_next = tp.replace(origin=kt, i=i_next, prefix='k')
+ string_k = cls.get_string_for_annotation(tp_k_next, extras)
+
+ tp_v_next = tp.replace(origin=vt, i=i_next, prefix='v')
+ string_v = cls.get_string_for_annotation(tp_v_next, extras)
+
+ return f'{{{string_k}: {string_v} for {k_next}, {v_next} in {v}.items()}}'
+
+ @classmethod
+ def load_to_dict(cls, tp: TypeInfo, extras: Extras):
+ v, k_next, v_next, i_next = tp.v_and_next_k_v()
+
+ try:
+ kt, vt = tp.args
+ except ValueError:
+ # TODO
+ kt = vt = Any
+
+ result = cls._build_dict_comp(
+ tp, v, i_next, k_next, v_next, kt, vt, extras)
+
+ # TODO
+ return tp.wrap(result, extras)
+
+ @classmethod
+ def load_to_defaultdict(cls, tp: TypeInfo, extras: Extras):
+ v, k_next, v_next, i_next = tp.v_and_next_k_v()
+ default_factory: DefFactory
+
+ try:
+ kt, vt = tp.args
+ default_factory = getattr(vt, '__origin__', vt)
+ except ValueError:
+ # TODO
+ kt = vt = default_factory = Any
+
+ result = cls._build_dict_comp(
+ tp, v, i_next, k_next, v_next, kt, vt, extras)
+
+ return tp.wrap_dd(default_factory, result, extras)
+
+ @classmethod
+ def load_to_typed_dict(cls, tp: TypeInfo, extras: Extras):
+ fn_gen = FunctionBuilder()
+
+ req_keys, opt_keys = get_keys_for_typed_dict(tp.origin)
+
+ extras_cp: Extras = extras.copy()
+ extras_cp['locals'] = _locals = {}
+
+ fn_name = f'_load_{extras["cls_name"]}_typeddict_{tp.name}'
+
+ result_list = []
+ # TODO set __annotations__?
+ annotations = tp.origin.__annotations__
+
+ # Set required keys for the `TypedDict`
+ for k in req_keys:
+ field_tp = annotations[k]
+ field_name = repr(k)
+ string = cls.get_string_for_annotation(
+ tp.replace(origin=field_tp,
+ index=field_name), extras_cp)
+
+ result_list.append(f'{field_name}: {string}')
+
+ with fn_gen.function(fn_name, ['v1'], None, _locals):
+ with fn_gen.try_():
+ fn_gen.add_lines('result = {',
+ *(f' {r},' for r in result_list),
+ '}')
+
+ # Set optional keys for the `TypedDict` (if they exist)
+ for k in opt_keys:
+ field_tp = annotations[k]
+ field_name = repr(k)
+ string = cls.get_string_for_annotation(
+ tp.replace(origin=field_tp,
+ i=2), extras_cp)
+ with fn_gen.if_(f'(v2 := v1.get({field_name}, MISSING)) is not MISSING'):
+ fn_gen.add_line(f'result[{field_name}] = {string}')
+ fn_gen.add_line('return result')
+ with fn_gen.except_(Exception, 'e'):
+ with fn_gen.if_('type(e) is KeyError'):
+ fn_gen.add_line('name = e.args[0]; e = KeyError(f"Missing required key: {name!r}")')
+ with fn_gen.elif_('not isinstance(v1, dict)'):
+ fn_gen.add_line('e = TypeError("Incorrect type for object")')
+ fn_gen.add_line('raise ParseError(e, v1, {}) from None')
+
+ extras['fn_gen'] |= fn_gen
+
+ return f'{fn_name}({tp.v()})'
+
+ @classmethod
+ def load_to_union(cls, tp: TypeInfo, extras: Extras):
+ fn_gen = FunctionBuilder()
+ config = extras['config']
+
+ tag_key = config.tag_key or TAG
+ auto_assign_tags = config.auto_assign_tags
+
+ fields = f'fields_{tp.field_i}'
+
+ extras_cp: Extras = extras.copy()
+ extras_cp['locals'] = _locals = {
+ fields: tp.args,
+ 'tag_key': tag_key,
+ }
+
+ actual_cls = extras['cls']
+
+ fn_name = f'load_to_{extras["cls_name"]}_union_{tp.field_i}'
+
+ # TODO handle dataclasses in union (tag)
+
+ with fn_gen.function(fn_name, ['v1'], None, _locals):
+
+ dataclass_tag_to_lines: dict[str, list] = {}
+
+ type_checks = []
+ try_parse_at_end = []
+
+ for possible_tp in tp.args:
+
+ possible_tp = eval_forward_ref_if_needed(possible_tp, actual_cls)
+
+ tp_new = TypeInfo(possible_tp, field_i=tp.field_i)
+
+ if possible_tp is NoneType:
+ with fn_gen.if_('v1 is None'):
+ fn_gen.add_line('return None')
+ continue
+
+ if is_dataclass(possible_tp):
+ # we see a dataclass in `Union` declaration
+ meta = get_meta(possible_tp)
+ tag = meta.tag
+ assign_tags_to_cls = auto_assign_tags or meta.auto_assign_tags
+ cls_name = possible_tp.__name__
+
+ if assign_tags_to_cls and not tag:
+ tag = cls_name
+ # We don't want to mutate the base Meta class here
+ if meta is AbstractMeta:
+ create_meta(possible_tp, cls_name, tag=tag)
+ else:
+ meta.tag = cls_name
+
+ if tag:
+ string = cls.get_string_for_annotation(tp_new, extras_cp)
+
+ dataclass_tag_to_lines[tag] = [
+ f'if tag == {tag!r}:',
+ f' return {string}'
+ ]
+ continue
+
+ elif not config.v1_unsafe_parse_dataclass_in_union:
+ e = ValueError(f'Cannot parse dataclass types in a Union without one of the following `Meta` settings:\n\n'
+ ' * `auto_assign_tags = True`\n'
+ f' - Set on class `{extras["cls_name"]}`.\n\n'
+ f' * `tag = "{cls_name}"`\n'
+ f' - Set on class `{possible_tp.__qualname__}`.\n\n'
+ ' * `v1_unsafe_parse_dataclass_in_union = True`\n'
+ f' - Set on class `{extras["cls_name"]}`\n\n'
+ 'For more information, refer to:\n'
+ ' https://dataclass-wizard.readthedocs.io/en/latest/common_use_cases/dataclasses_in_union_types.html')
+ raise e from None
+
+ string = cls.get_string_for_annotation(tp_new, extras_cp)
+
+ try_parse_lines = [
+ 'try:',
+ f' return {string}',
+ 'except Exception:',
+ ' pass',
+ ]
+
+ # TODO disable for dataclasses
+
+ if possible_tp in _SIMPLE_TYPES or is_subclass_safe(get_origin_v2(possible_tp), _SIMPLE_TYPES):
+ tn = tp_new.type_name(extras_cp)
+ type_checks.extend([
+ f'if tp is {tn}:',
+ ' return v1'
+ ])
+ list_to_add = try_parse_at_end
+ else:
+ list_to_add = type_checks
+
+ list_to_add.extend(try_parse_lines)
+
+ if dataclass_tag_to_lines:
+
+ with fn_gen.try_():
+            fn_gen.add_line('tag = v1[tag_key]')
+
+ with fn_gen.except_(Exception):
+ fn_gen.add_line('pass')
+
+ with fn_gen.else_():
+
+ for lines in dataclass_tag_to_lines.values():
+ fn_gen.add_lines(*lines)
+
+ fn_gen.add_line(
+ "raise ParseError("
+ "TypeError('Object with tag was not in any of Union types'),"
+ f"v1,{fields},"
+ "input_tag=tag,"
+ "tag_key=tag_key,"
+ f"valid_tags={list(dataclass_tag_to_lines)})"
+ )
+
+ fn_gen.add_line('tp = type(v1)')
+
+ if type_checks:
+ fn_gen.add_lines(*type_checks)
+
+ if try_parse_at_end:
+ fn_gen.add_lines(*try_parse_at_end)
+
+ # Invalid type for Union
+ fn_gen.add_line("raise ParseError("
+ "TypeError('Object was not in any of Union types'),"
+ f"v1,{fields},"
+ "tag_key=tag_key"
+ ")")
+
+ extras['fn_gen'] |= fn_gen
+
+ return f'{fn_name}({tp.v()})'
+
+ @staticmethod
+ def load_to_literal(tp: TypeInfo, extras: Extras):
+ fn_gen = FunctionBuilder()
+
+ fields = f'fields_{tp.field_i}'
+
+ extras_cp: Extras = extras.copy()
+ extras_cp['locals'] = _locals = {
+ fields: frozenset(tp.args),
+ }
+
+ fn_name = f'load_to_{extras["cls_name"]}_literal_{tp.field_i}'
+
+ with fn_gen.function(fn_name, ['v1'], None, _locals):
+
+ with fn_gen.if_(f'{tp.v()} in {fields}'):
+ fn_gen.add_line('return v1')
+
+ # No such Literal with the value of `o`
+ fn_gen.add_line("e = ValueError('Value not in expected Literal values')")
+ fn_gen.add_line(f'raise ParseError(e, v1, {fields}, '
+ f'allowed_values=list({fields}))')
+
+ # TODO Checks for Literal equivalence, as mentioned here:
+ # https://www.python.org/dev/peps/pep-0586/#equivalence-of-two-literals
+
+ # extras_cp['locals'][fields] = {
+ # a: type(a) for a in tp.args
+ # }
+ #
+ # with fn_gen.function(fn_name, ['v1'], None, _locals):
+ #
+ # with fn_gen.try_():
+ # with fn_gen.if_(f'type({tp.v()}) is {fields}[{tp.v()}]'):
+ # fn_gen.add_line('return v1')
+ #
+ # # The value of `o` is in the ones defined for the Literal, but
+ # # also confirm the type matches the one defined for the Literal.
+ # fn_gen.add_line("e = TypeError('Value did not match expected type for the Literal')")
+ #
+ # fn_gen.add_line('raise ParseError('
+ # f'e, v1, {fields}, '
+ # 'have_type=type(v1), '
+ # f'desired_type={fields}[v1], '
+ # f'desired_value=next(v for v in {fields} if v == v1), '
+ # f'allowed_values=list({fields})'
+ # ')')
+ # with fn_gen.except_(KeyError):
+ # # No such Literal with the value of `o`
+ # fn_gen.add_line("e = ValueError('Value not in expected Literal values')")
+ # fn_gen.add_line('raise ParseError('
+ # f'e, v1, {fields}, allowed_values=list({fields})'
+ # f')')
+
+ extras['fn_gen'] |= fn_gen
+
+ return f'{fn_name}({tp.v()})'
+
+ @staticmethod
+ def load_to_decimal(tp: TypeInfo, extras: Extras):
+ s = f'str({tp.v()}) if isinstance({tp.v()}, float) else {tp.v()}'
+ return tp.wrap_builtin(s, extras)
+
+ # alias: base_type(o)
+ load_to_path = load_to_uuid
+
+ @staticmethod
+ def load_to_datetime(tp: TypeInfo, extras: Extras):
+ # alias: as_datetime
+ tp.ensure_in_locals(extras, as_datetime, datetime)
+ return f'as_datetime({tp.v()}, {tp.name})'
+
+ @staticmethod
+ def load_to_time(tp: TypeInfo, extras: Extras):
+ # alias: as_time
+ tp.ensure_in_locals(extras, as_time, time)
+ return f'as_time({tp.v()}, {tp.name})'
+
+ @staticmethod
+ def load_to_date(tp: TypeInfo, extras: Extras):
+ # alias: as_date
+ tp.ensure_in_locals(extras, as_date, date)
+ return f'as_date({tp.v()}, {tp.name})'
+
+ @staticmethod
+ def load_to_timedelta(tp: TypeInfo, extras: Extras):
+ # alias: as_timedelta
+ tp.ensure_in_locals(extras, as_timedelta, timedelta)
+ return f'as_timedelta({tp.v()}, {tp.name})'
+
+ @staticmethod
+ def load_to_dataclass(tp: TypeInfo, extras: Extras):
+ fn_name = load_func_for_dataclass(
+ tp.origin, extras, False)
+
+ return f'{fn_name}({tp.v()})'
+
+ @classmethod
+ def get_string_for_annotation(cls,
+ tp,
+ extras):
+
+ hooks = cls.__LOAD_HOOKS__
+
+ # type_ann = tp.origin
+ type_ann = eval_forward_ref_if_needed(tp.origin, extras['cls'])
+
+ origin = get_origin_v2(type_ann)
+ name = getattr(origin, '__name__', origin)
+
+ args = None
+ wrap = False
+
+ if is_annotated(type_ann) or is_typed_dict_type_qualifier(origin):
+ # Given `Required[T]` or `NotRequired[T]`, we only need `T`
+ # noinspection PyUnresolvedReferences
+ type_ann = get_args(type_ann)[0]
+ origin = get_origin_v2(type_ann)
+ name = getattr(origin, '__name__', origin)
+ # origin = type_ann.__args__[0]
+
+ # -> Union[x]
+ if is_union(origin):
+ args = get_args(type_ann)
+
+ # Special case for Optional[x], which is actually Union[x, None]
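+            # e.g. `Optional[int]` emits (roughly):
+            #     None if v1 is None else as_int(v1, int)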
+ if NoneType in args and len(args) == 2:
+ string = cls.get_string_for_annotation(
+ tp.replace(origin=args[0], args=None, name=None), extras)
+ return f'None if {tp.v()} is None else {string}'
+
+ load_hook = cls.load_to_union
+
+ # raise NotImplementedError('`Union` support is not yet fully implemented!')
+
+ elif origin is Literal:
+ load_hook = cls.load_to_literal
+ args = get_args(type_ann)
+
+ # TODO maybe add `load_to_literal_string`
+ elif origin is PyLiteralString:
+ load_hook = cls.load_to_str
+ origin = str
+ name = 'str'
+
+ # -> Atomic, immutable types which don't require
+ # any iterative / recursive handling.
+ # TODO use subclass safe
+ elif origin in _SIMPLE_TYPES or is_subclass_safe(origin, _SIMPLE_TYPES):
+ load_hook = hooks.get(origin)
+
+ elif (load_hook := hooks.get(origin)) is not None:
+ # TODO
+ try:
+ args = get_args(type_ann)
+ except ValueError:
+ args = Any,
+
+ # https://stackoverflow.com/questions/76520264/dataclasswizard-after-upgrading-to-python3-11-is-not-working-as-expected
+ elif origin is Any:
+ load_hook = cls.default_load_to
+
+ elif issubclass(origin, tuple) and hasattr(origin, '_fields'):
+
+ if getattr(origin, '__annotations__', None):
+ # Annotated as a `typing.NamedTuple` subtype
+ load_hook = cls.load_to_named_tuple
+
+ # load_hook = hooks.get(NamedTupleMeta)
+ # return NamedTupleParser(
+ # base_cls, extras, base_type, load_hook,
+ # cls.get_parser_for_annotation
+ # )
+ else:
+ # Annotated as a `collections.namedtuple` subtype
+ load_hook = cls.load_to_named_tuple_untyped
+
+ # TODO type(cls)
+ elif is_typed_dict(origin):
+ load_hook = cls.load_to_typed_dict
+
+ elif is_dataclass(origin):
+ # return a dynamically generated `fromdict`
+ # for the `cls` (base_type)
+ load_hook = cls.load_to_dataclass
+
+ elif origin in (abc.Sequence, abc.MutableSequence, abc.Collection):
+ if origin is abc.Sequence:
+ load_hook = cls.load_to_tuple
+ # desired (non-generic) origin type
+ name = 'tuple'
+ origin = tuple
+ # Re-map type arguments to variadic tuple format,
+ # e.g. `Sequence[int]` -> `tuple[int, ...]`
+ try:
+ args = (get_args(type_ann)[0], ...)
+ except (IndexError, ValueError):
+ args = Any,
+ else:
+ load_hook = cls.load_to_iterable
+ # desired (non-generic) origin type
+ name = 'list'
+ origin = list
+ # Get type arguments, e.g. `Sequence[int]` -> `int`
+ try:
+ args = get_args(type_ann)
+ except ValueError:
+ args = Any,
+
+ else:
+
+ # TODO everything should use `get_origin_v2`
+ try:
+ args = get_args(type_ann)
+ except ValueError:
+ args = Any,
+
+ if load_hook is None:
+ # TODO END
+ for t in hooks:
+ if issubclass(origin, (t,)):
+ load_hook = hooks[t]
+ wrap = True
+ break
+ else:
+ wrap = False
+
+ tp.origin = origin
+ tp.args = args
+ tp.name = name
+
+ if load_hook is not None:
+ result = load_hook(tp, extras)
+ # Only wrap result if not already wrapped
+ if wrap:
+ if (wrapped := getattr(result, '_wrapped', None)) is not None:
+ return wrapped
+ return tp.wrap(result, extras)
+ return result
+
+ # No matching hook is found for the type.
+ # TODO do we want to add a `Meta` field to not raise
+ # an error but perform a default action?
+ err = TypeError('Provided type is not currently supported.')
+ pe = ParseError(
+ err, origin, type_ann,
+ resolution='Consider decorating the class with `@dataclass`',
+ unsupported_type=origin
+ )
+ raise pe from None
+
+
+def setup_default_loader(cls=LoadMixin):
+ """
+ Setup the default type hooks to use when converting `str` (json) or a
+ Python `dict` object to a `dataclass` instance.
+
+    Note: `cls` must be :class:`LoadMixin` or a sub-class of it.
+ """
+ # TODO maybe `dict.update` might be better?
+
+ # Technically a complex type, however check this
+ # first, since `StrEnum` and `IntEnum` are subclasses
+ # of `str` and `int`
+ cls.register_load_hook(Enum, cls.load_to_enum)
+ # Simple types
+ cls.register_load_hook(str, cls.load_to_str)
+ cls.register_load_hook(float, cls.load_to_float)
+ cls.register_load_hook(bool, cls.load_to_bool)
+ cls.register_load_hook(int, cls.load_to_int)
+ cls.register_load_hook(bytes, cls.load_to_bytes)
+ cls.register_load_hook(bytearray, cls.load_to_bytearray)
+ cls.register_load_hook(NoneType, cls.load_to_none)
+ # Complex types
+ cls.register_load_hook(UUID, cls.load_to_uuid)
+ cls.register_load_hook(set, cls.load_to_iterable)
+ cls.register_load_hook(frozenset, cls.load_to_iterable)
+ cls.register_load_hook(deque, cls.load_to_iterable)
+ cls.register_load_hook(list, cls.load_to_iterable)
+ cls.register_load_hook(tuple, cls.load_to_tuple)
+ # `typing` Generics
+ # cls.register_load_hook(Literal, cls.load_to_literal)
+ # noinspection PyTypeChecker
+ cls.register_load_hook(defaultdict, cls.load_to_defaultdict)
+ cls.register_load_hook(dict, cls.load_to_dict)
+ cls.register_load_hook(Decimal, cls.load_to_decimal)
+ cls.register_load_hook(Path, cls.load_to_path)
+ # Dates and times
+ cls.register_load_hook(datetime, cls.load_to_datetime)
+ cls.register_load_hook(time, cls.load_to_time)
+ cls.register_load_hook(date, cls.load_to_date)
+ cls.register_load_hook(timedelta, cls.load_to_timedelta)
+
+
+def add_to_missing_fields(missing_fields: 'list[str] | None', field: str):
+ if missing_fields is None:
+ missing_fields = [field]
+ else:
+ missing_fields.append(field)
+ return missing_fields
+
+
+def check_and_raise_missing_fields(
+ _locals, o, cls, fields: tuple[Field, ...]):
+
+ missing_fields = [f.name for f in fields
+ if f.init
+ and f'__{f.name}' not in _locals
+ and (f.default is MISSING
+ and f.default_factory is MISSING)]
+
+ missing_keys = [v1_dataclass_field_to_alias(cls)[field]
+ for field in missing_fields]
+
+ raise MissingFields(
+ None, o, cls, fields, None, missing_fields,
+ missing_keys
+ ) from None
+
+def load_func_for_dataclass(
+ cls: type,
+ extras: Extras,
+ is_main_class: bool = True,
+ loader_cls=LoadMixin,
+ base_meta_cls: type = AbstractMeta,
+) -> Union[Callable[[JSONObject], T], str]:
+
+ # TODO dynamically generate for multiple nested classes at once
+
+ # Tuple describing the fields of this dataclass.
+ fields = dataclass_fields(cls)
+
+ cls_init_fields = dataclass_init_fields(cls, True)
+ cls_init_field_names = dataclass_init_field_names(cls)
+
+ field_to_default = dataclass_field_to_default(cls)
+
+ has_defaults = True if field_to_default else False
+
+ # Get the loader for the class, or create a new one as needed.
+ cls_loader = get_loader(cls, base_cls=loader_cls, v1=True)
+
+ # Get the meta config for the class, or the default config otherwise.
+ meta = get_meta(cls, base_meta_cls)
+
+ if is_main_class: # we are being run for the main dataclass
+ # If the `recursive` flag is enabled and a Meta config is provided,
+ # apply the Meta recursively to any nested classes.
+ #
+ # Else, just use the base `AbstractMeta`.
+ config = meta if meta.recursive else base_meta_cls
+
+ _globals = {
+ 'add': add_to_missing_fields,
+ 're_raise': re_raise,
+ 'ParseError': ParseError,
+ # 'LOG': LOG,
+ 'raise_missing_fields': check_and_raise_missing_fields,
+ 'MISSING': MISSING,
+ }
+
+ # we are being run for a nested dataclass
+ else:
+ # config for nested dataclasses
+ config = extras['config']
+
+ if config is not base_meta_cls:
+ # we want to apply the meta config from the main dataclass
+ # recursively.
+ meta = meta | config
+ meta.bind_to(cls, is_default=False)
+
+    key_case: 'KeyCase | None' = cls_loader.transform_json_field
+
+ field_to_alias = v1_dataclass_field_to_alias(cls)
+ check_aliases = True if field_to_alias else False
+
+ # This contains a mapping of the original field name to the parser for its
+ # annotated type; the item lookup *can* be case-insensitive.
+ # try:
+ # field_to_parser = dataclass_field_to_load_parser(cls_loader, cls, config)
+ # except RecursionError:
+ # if meta.recursive_classes:
+ # # recursion-safe loader is already in use; something else must have gone wrong
+ # raise
+ # else:
+ # raise RecursiveClassError(cls) from None
+
+ field_to_path = DATACLASS_FIELD_TO_ALIAS_PATH_FOR_LOAD[cls]
+ has_alias_paths = True if field_to_path else False
+
+ # Fix for using `auto_assign_tags` and `raise_on_unknown_json_key` together
+ # See https://github.com/rnag/dataclass-wizard/issues/137
+ has_tag_assigned = meta.tag is not None
+ if (has_tag_assigned and
+ # Ensure `tag_key` isn't a dataclass field,
+ # to avoid issues with our logic.
+ # See https://github.com/rnag/dataclass-wizard/issues/148
+ meta.tag_key not in cls_init_field_names):
+ expect_tag_as_unknown_key = True
+ else:
+ expect_tag_as_unknown_key = False
+
+ _locals = {
+ 'cls': cls,
+ 'fields': fields,
+ }
+
+ if key_case is KeyCase.AUTO:
+ _locals['f2k'] = field_to_alias
+ _locals['to_key'] = to_json_key
+
+ on_unknown_key = meta.v1_on_unknown_key
+
+ catch_all_field = field_to_alias.pop(CATCH_ALL, None)
+ has_catch_all = catch_all_field is not None
+
+ if has_catch_all:
+ pre_assign = 'i+=1; '
+ catch_all_field_stripped = catch_all_field.rstrip('?')
+ catch_all_idx = cls_init_field_names.index(catch_all_field_stripped)
+ # remove catch all field from list, so we don't iterate over it
+ del cls_init_fields[catch_all_idx]
+ else:
+ pre_assign = ''
+ catch_all_field_stripped = catch_all_idx = None
+
+ if on_unknown_key is not None:
+ should_raise = on_unknown_key is KeyAction.RAISE
+ should_warn = on_unknown_key is KeyAction.WARN
+ if should_warn or should_raise:
+ pre_assign = 'i+=1; '
+ else:
+ should_raise = should_warn = None
+
+ if has_alias_paths:
+ _locals['safe_get'] = safe_get
+
+ # Initialize the FuncBuilder
+ fn_gen = FunctionBuilder()
+
+ cls_name = cls.__name__
+ # noinspection PyTypeChecker
+ new_extras: Extras = {
+ 'config': config,
+ 'locals': _locals,
+ 'cls': cls,
+ 'cls_name': cls_name,
+ 'fn_gen': fn_gen,
+ }
+
+ fn_name = f'__dataclass_wizard_from_dict_{cls_name}__'
+
+ with fn_gen.function(fn_name, ['o'], MISSING, _locals):
+
+ if (_pre_from_dict := getattr(cls, '_pre_from_dict', None)) is not None:
+ _locals['__pre_from_dict__'] = _pre_from_dict
+ fn_gen.add_line('o = __pre_from_dict__(o)')
+
+ # Need to create a separate dictionary to copy over the constructor
+ # args, as we don't want to mutate the original dictionary object.
+ if has_defaults:
+ fn_gen.add_line('init_kwargs = {}')
+ if pre_assign:
+ fn_gen.add_line('i = 0')
+
+ vars_for_fields = []
+
+ if cls_init_fields:
+
+ with fn_gen.try_():
+
+ if expect_tag_as_unknown_key and pre_assign:
+ with fn_gen.if_(f'{meta.tag_key!r} in o'):
+ fn_gen.add_line('i+=1')
+
+ for i, f in enumerate(cls_init_fields):
+ val = 'v1'
+ name = f.name
+ var = f'__{name}'
+
+ if (check_aliases
+ and (key := field_to_alias.get(name)) is not None
+ and name != key):
+ f_assign = f'field={name!r}; key={key!r}; {val}=o.get(key, MISSING)'
+
+ elif (has_alias_paths
+ and (path := field_to_path.get(name)) is not None):
+
+ if name in field_to_default:
+ f_assign = f'field={name!r}; {val}=safe_get(o, {path!r}, MISSING, False)'
+ else:
+ f_assign = f'field={name!r}; {val}=safe_get(o, {path!r})'
+
+ # TODO raise some useful message like (ex. on IndexError):
+ # Field "my_str" of type tuple[float, str] in A2 has invalid value ['123']
+
+ elif key_case is None:
+ field_to_alias[name] = name
+ f_assign = f'field={name!r}; {val}=o.get(field, MISSING)'
+ elif key_case is KeyCase.AUTO:
+ f_assign = f'field={name!r}; key=f2k.get(field) or to_key(o,field,f2k); {val}=o.get(key, MISSING)'
+ else:
+ field_to_alias[name] = key = key_case(name)
+ f_assign = f'field={name!r}; key={key!r}; {val}=o.get(key, MISSING)'
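+                    # e.g. with `v1_key_case='CAMEL'`, field `my_field`
+                    # generates (roughly):
+                    #     field='my_field'; key='myField'; v1=o.get(key, MISSING)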
+
+ string = generate_field_code(cls_loader, new_extras, f, i)
+
+ if name in field_to_default:
+ fn_gen.add_line(f_assign)
+
+ with fn_gen.if_(f'{val} is not MISSING'):
+ fn_gen.add_line(f'{pre_assign}init_kwargs[field] = {string}')
+
+ else:
+ # TODO confirm this is ok
+ # vars_for_fields.append(f'{name}={var}')
+ vars_for_fields.append(var)
+ fn_gen.add_line(f_assign)
+
+ with fn_gen.if_(f'{val} is not MISSING'):
+ fn_gen.add_line(f'{pre_assign}{var} = {string}')
+
+ # create a broad `except Exception` block, as we will be
+ # re-raising all exception(s) as a custom `ParseError`.
+ with fn_gen.except_(Exception, 'e', ParseError):
+ fn_gen.add_line("re_raise(e, cls, o, fields, field, locals().get('v1'))")
+
+ if has_catch_all:
+ if expect_tag_as_unknown_key:
+ # add an alias for the tag key, so we don't capture it
+ field_to_alias['...'] = meta.tag_key
+
+ if 'f2k' in _locals:
+ # If this is the case, then `AUTO` key transform mode is enabled
+ # line = 'extra_keys = o.keys() - f2k.values()'
+ aliases_var = 'f2k.values()'
+
+ else:
+ aliases_var = 'aliases'
+ _locals['aliases'] = set(field_to_alias.values())
+
+ catch_all_def = f'{{k: o[k] for k in o if k not in {aliases_var}}}'
+
+ if catch_all_field.endswith('?'): # Default value
+ with fn_gen.if_('len(o) != i'):
+ fn_gen.add_line(f'init_kwargs[{catch_all_field_stripped!r}] = {catch_all_def}')
+ else:
+ var = f'__{catch_all_field_stripped}'
+ fn_gen.add_line(f'{var} = {{}} if len(o) == i else {catch_all_def}')
+ vars_for_fields.insert(catch_all_idx, var)
+
+ elif should_warn or should_raise:
+ if expect_tag_as_unknown_key:
+ # add an alias for the tag key, so we don't raise an error when we see it
+ field_to_alias['...'] = meta.tag_key
+
+ if 'f2k' in _locals:
+ # If this is the case, then `AUTO` key transform mode is enabled
+ line = 'extra_keys = o.keys() - f2k.values()'
+ else:
+ _locals['aliases'] = set(field_to_alias.values())
+ line = 'extra_keys = set(o) - aliases'
+
+ with fn_gen.if_('len(o) != i'):
+ fn_gen.add_line(line)
+ if should_raise:
+ # Raise an error here (if needed)
+ _locals['UnknownKeysError'] = UnknownKeysError
+ fn_gen.add_line("raise UnknownKeysError(extra_keys, o, cls, fields) from None")
+ elif should_warn:
+ # Show a warning here
+ _locals['LOG'] = LOG
+ fn_gen.add_line(r"LOG.warning('Found %d unknown keys %r not mapped to the dataclass schema.\n"
+ r" Class: %r\n Dataclass fields: %r', len(extra_keys), extra_keys, cls.__qualname__, [f.name for f in fields])")
+
+ # Now pass the arguments to the constructor method, and return
+ # the new dataclass instance. If there are any missing fields,
+ # we raise them here.
+
+ if has_defaults:
+ vars_for_fields.append('**init_kwargs')
+ init_parts = ', '.join(vars_for_fields)
+ with fn_gen.try_():
+ fn_gen.add_line(f"return cls({init_parts})")
+ with fn_gen.except_(UnboundLocalError):
+ # raise `MissingFields`, as required dataclass fields
+ # are not present in the input object `o`.
+ fn_gen.add_line("raise_missing_fields(locals(), o, cls, fields)")
+
+ # Save the load function for the main dataclass, so we don't need to run
+ # this logic each time.
+ if is_main_class:
+ # noinspection PyUnboundLocalVariable
+ functions = fn_gen.create_functions(_globals)
+
+ cls_fromdict = functions[fn_name]
+
+ # Check if the class has a `from_dict`, and it's
+ # a class method bound to `fromdict`.
+ if ((from_dict := getattr(cls, 'from_dict', None)) is not None
+ and getattr(from_dict, '__func__', None) is fromdict):
+
+ LOG.debug("setattr(%s, 'from_dict', %s)", cls_name, fn_name)
+ _set_new_attribute(cls, 'from_dict', cls_fromdict)
+
+ _set_new_attribute(
+ cls, '__dataclass_wizard_from_dict__', cls_fromdict)
+ LOG.debug(
+ "setattr(%s, '__dataclass_wizard_from_dict__', %s)",
+ cls_name, fn_name)
+
+ # TODO in `v1`, we will use class attribute (set above) instead.
+ CLASS_TO_LOAD_FUNC[cls] = cls_fromdict
+
+ return cls_fromdict
+
+ # Update the FunctionBuilder
+ extras['fn_gen'] |= fn_gen
+
+ return fn_name
+
+def generate_field_code(cls_loader: LoadMixin,
+ extras: Extras,
+ field: Field,
+ field_i: int) -> 'str | TypeInfo':
+
+ cls = extras['cls']
+ field_type = field.type = eval_forward_ref_if_needed(field.type, cls)
+
+ try:
+ return cls_loader.get_string_for_annotation(
+ TypeInfo(field_type, field_i=field_i), extras
+ )
+
+ except ParseError as pe:
+ pe.class_name = cls
+ pe.field_name = field.name
+ raise pe from None
+
+
+def re_raise(e, cls, o, fields, field, value):
+ # If the object `o` is None, then raise an error with
+ # the relevant info included.
+ if o is None:
+ raise MissingData(cls) from None
+
+ # Check if the object `o` is some other type than what we expect -
+ # for example, we could be passed in a `list` type instead.
+ if not isinstance(o, dict):
+ base_err = TypeError('Incorrect type for `from_dict()`')
+ e = ParseError(base_err, o, dict, cls, desired_type=dict)
+
+ add_fields = True
+ if type(e) is not ParseError:
+ if isinstance(e, JSONWizardError):
+ add_fields = False
+ else:
+ tp = getattr(next((f for f in fields if f.name == field), None), 'type', Any)
+ e = ParseError(e, value, tp)
+
+ # We run into a parsing error while loading the field value;
+ # Add additional info on the Exception object before re-raising it.
+ #
+ # First confirm these values are not already set by an
+ # inner dataclass. If so, it likely makes it easier to
+ # debug the cause. Note that this should already be
+ # handled by the `setter` methods.
+ if add_fields:
+ e.class_name, e.fields, e.field_name, e.json_object = cls, fields, field, o
+ else:
+ e.class_name, e.field_name, e.json_object = cls, field, o
+
+ raise e from None
diff --git a/dataclass_wizard/v1/models.py b/dataclass_wizard/v1/models.py
new file mode 100644
index 00000000..3fd6378f
--- /dev/null
+++ b/dataclass_wizard/v1/models.py
@@ -0,0 +1,353 @@
+from dataclasses import MISSING, Field as _Field
+from typing import Any, TypedDict
+
+from ..constants import PY310_OR_ABOVE
+from ..log import LOG
+from ..type_def import DefFactory, ExplicitNull
+# noinspection PyProtectedMember
+from ..utils.object_path import split_object_path
+from ..utils.typing_compat import get_origin_v2, PyNotRequired
+
+
+_BUILTIN_COLLECTION_TYPES = frozenset({
+ list,
+ set,
+ dict,
+ tuple
+})
+
+
+class TypeInfo:
+
+ __slots__ = (
+ # type origin (ex. `List[str]` -> `List`)
+ 'origin',
+ # type arguments (ex. `Dict[str, int]` -> `(str, int)`)
+ 'args',
+ # name of type origin (ex. `List[str]` -> 'list')
+ 'name',
+ # index of iteration, *only* unique within the scope of a field assignment!
+ 'i',
+ # index of field within the dataclass, *guaranteed* to be unique.
+ 'field_i',
+ # prefix of value in assignment (prepended to `i`),
+ # defaults to 'v' if not specified.
+ 'prefix',
+ # index of assignment (ex. `2 -> v1[2]`, *or* a string `"key" -> v4["key"]`)
+ 'index',
+ # optional attribute, that indicates if we should wrap the
+ # assignment with `name` -- ex. `(1, 2)` -> `deque((1, 2))`
+ '_wrapped',
+ )
+
+ def __init__(self, origin,
+ args=None,
+ name=None,
+ i=1,
+ field_i=1,
+ prefix='v',
+ index=None):
+
+ self.name = name
+ self.origin = origin
+ self.args = args
+ self.i = i
+ self.field_i = field_i
+ self.prefix = prefix
+ self.index = index
+
+ def replace(self, **changes):
+ # Validate that `instance` is an instance of the class
+ # if not isinstance(instance, TypeInfo):
+ # raise TypeError(f"Expected an instance of {TypeInfo.__name__}, got {type(instance).__name__}")
+
+ # Extract current values from __slots__
+ current_values = {slot: getattr(self, slot)
+ for slot in TypeInfo.__slots__
+ if not slot.startswith('_')}
+
+ # Apply the changes
+ current_values.update(changes)
+
+ # Create and return a new instance with updated attributes
+ # noinspection PyArgumentList
+ return TypeInfo(**current_values)
+
+ @staticmethod
+ def ensure_in_locals(extras, *types):
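+        # e.g. `TypeInfo.ensure_in_locals(extras, datetime, date)` makes
+        # `datetime` and `date` available by name in the generated
+        # function's local scope.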
+        _locals = extras['locals']
+        for tp in types:
+            _locals.setdefault(tp.__name__, tp)
+
+ def type_name(self, extras):
+ """Return type name as string (useful for `Union` type checks)"""
+ if self.name is None:
+ self.name = get_origin_v2(self.origin).__name__
+
+ return self._wrap_inner(extras, force=True)
+
+ def v(self):
+ return (f'{self.prefix}{self.i}' if (idx := self.index) is None
+ else f'{self.prefix}{self.i}[{idx}]')
+
+ def v_and_next(self):
+ next_i = self.i + 1
+ return self.v(), f'v{next_i}', next_i
+
+ def v_and_next_k_v(self):
+ next_i = self.i + 1
+ return self.v(), f'k{next_i}', f'v{next_i}', next_i
+
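+    # Illustrative examples of the generated variable names
+    # (assuming the default prefix 'v'):
+    #
+    #   TypeInfo(list).v()              -> 'v1'
+    #   TypeInfo(list, index=2).v()     -> 'v1[2]'
+    #   TypeInfo(list).v_and_next()     -> ('v1', 'v2', 2)
+    #   TypeInfo(dict).v_and_next_k_v() -> ('v1', 'k2', 'v2', 2)
+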
+ def wrap_dd(self, default_factory: DefFactory, result: str, extras):
+ tn = self._wrap_inner(extras, is_builtin=True)
+ tn_df = self._wrap_inner(extras, default_factory, 'df_')
+ result = f'{tn}({tn_df}, {result})'
+ setattr(self, '_wrapped', result)
+ return self
+
+ def multi_wrap(self, extras, prefix='', *result, force=False):
+ tn = self._wrap_inner(extras, prefix=prefix, force=force)
+ if tn is not None:
+ result = [f'{tn}({r})' for r in result]
+
+ return result
+
+ def wrap(self, result: str, extras, force=False, prefix=''):
+ if (tn := self._wrap_inner(extras, prefix=prefix, force=force)) is not None:
+ result = f'{tn}({result})'
+
+ setattr(self, '_wrapped', result)
+ return self
+
+ def wrap_builtin(self, result: str, extras):
+ tn = self._wrap_inner(extras, is_builtin=True)
+ result = f'{tn}({result})'
+
+ setattr(self, '_wrapped', result)
+ return self
+
+ def _wrap_inner(self, extras,
+ tp=None,
+ prefix='',
+ is_builtin=False,
+ force=False) -> 'str | None':
+
+ if tp is None:
+ tp = self.origin
+ name = self.name
+ return_name = False
+ else:
+ name = tp.__name__
+ return_name = True
+
+ if force:
+ return_name = True
+
+ if tp not in _BUILTIN_COLLECTION_TYPES:
+ # TODO?
+ if is_builtin or (mod := tp.__module__) == 'collections':
+ tn = name
+                LOG.debug('Ensuring %s=%s', tn, name)
+ extras['locals'].setdefault(tn, tp)
+ elif mod == 'builtins':
+ tn = name
+ else:
+ tn = f'{prefix}{name}_{self.field_i}'
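+                # e.g. a custom type `MyEnum` on field #3 gets the
+                # unique local name 'MyEnum_3'.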
+                LOG.debug('Adding %s=%s', tn, name)
+ extras['locals'][tn] = tp
+
+ return tn
+
+ return name if return_name else None
+
+ def __str__(self):
+ return getattr(self, '_wrapped', '')
+
+ def __repr__(self):
+ items = ', '.join([f'{v}={getattr(self, v)!r}'
+ for v in self.__slots__
+ if not v.startswith('_')])
+
+ return f'{self.__class__.__name__}({items})'
+
+
+class Extras(TypedDict):
+ """
+ "Extra" config that can be used in the load / dump process.
+ """
+ config: PyNotRequired['META']
+ cls: type
+ cls_name: str
+ fn_gen: 'FunctionBuilder'
+ locals: dict[str, Any]
+ pattern: PyNotRequired['PatternedDT']
+
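+# A minimal sketch of what a populated `Extras` mapping might look like
+# during the load process (illustrative values only, not the exact
+# objects used internally):
+#
+#   extras: Extras = {
+#       'cls': MyClass,
+#       'cls_name': 'MyClass',
+#       'fn_gen': FunctionBuilder(),
+#       'locals': {'datetime': datetime},
+#   }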
+
+# Instances of Field are only ever created from within this module,
+# and only from the field() function, although Field instances are
+# exposed externally as (conceptually) read-only objects.
+#
+# name and type are filled in after the fact, not in __init__.
+# They're not known at the time this class is instantiated, but it's
+# convenient if they're available later.
+#
+# When cls._FIELDS is filled in with a list of Field objects, the name
+# and type fields will have been populated.
+
+# In Python 3.10, dataclasses adds a new parameter to the :class:`Field`
+# constructor: `kw_only`
+#
+# Ref: https://docs.python.org/3.10/library/dataclasses.html#dataclasses.dataclass
+if PY310_OR_ABOVE: # pragma: no cover
+
+ # noinspection PyPep8Naming,PyShadowingBuiltins
+ def Alias(all=None, *,
+ load=None,
+ dump=None,
+ skip=False,
+ path=None,
+ default=MISSING,
+ default_factory=MISSING,
+ init=True, repr=True,
+ hash=None, compare=True,
+ metadata=None, kw_only=False):
+
+ if default is not MISSING and default_factory is not MISSING:
+ raise ValueError('cannot specify both default and default_factory')
+
+ if all is not None:
+ load = dump = all
+
+ return Field(load, dump, skip, path, default, default_factory, init, repr,
+ hash, compare, metadata, kw_only)
+
+ # noinspection PyPep8Naming,PyShadowingBuiltins
+ def AliasPath(all=None, *,
+ load=None,
+ dump=None,
+ skip=False,
+ default=MISSING,
+ default_factory=MISSING,
+ init=True, repr=True,
+ hash=None, compare=True,
+ metadata=None, kw_only=False):
+
+ if load is not None:
+ all = load
+ load = None
+ dump = ExplicitNull
+
+ elif dump is not None:
+ all = dump
+ dump = None
+ load = ExplicitNull
+
+ if isinstance(all, str):
+ all = split_object_path(all)
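+    # e.g. 'a.b.c.1' -> ('a', 'b', 'c', 1) -- a pre-split tuple of keys
+    # that can be walked key-by-key during de-serialization.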
+
+ return Field(load, dump, skip, all, default, default_factory, init, repr,
+ hash, compare, metadata, kw_only)
+
+
+ class Field(_Field):
+
+ __slots__ = ('load_alias',
+ 'dump_alias',
+ 'skip',
+ 'path')
+
+ # noinspection PyShadowingBuiltins
+ def __init__(self,
+ load_alias, dump_alias, skip, path,
+ default, default_factory, init, repr, hash, compare,
+ metadata, kw_only):
+
+ super().__init__(default, default_factory, init, repr, hash,
+ compare, metadata, kw_only)
+
+ if path is not None:
+ if isinstance(path, str):
+ path = split_object_path(path) if path else (path, )
+
+ self.load_alias = load_alias
+ self.dump_alias = dump_alias
+ self.skip = skip
+ self.path = path
+
+else: # pragma: no cover
+ # noinspection PyPep8Naming,PyShadowingBuiltins
+ def Alias(all=None, *,
+ load=None,
+ dump=None,
+ skip=False,
+ path=None,
+ default=MISSING,
+ default_factory=MISSING,
+ init=True, repr=True,
+ hash=None, compare=True, metadata=None):
+
+ if default is not MISSING and default_factory is not MISSING:
+ raise ValueError('cannot specify both default and default_factory')
+
+ if all is not None:
+ load = dump = all
+
+ return Field(load, dump, skip, path,
+ default, default_factory, init, repr,
+ hash, compare, metadata)
+
+ # noinspection PyPep8Naming,PyShadowingBuiltins
+ def AliasPath(all=None, *,
+ load=None,
+ dump=None,
+ skip=False,
+ default=MISSING,
+ default_factory=MISSING,
+ init=True, repr=True,
+ hash=None, compare=True,
+ metadata=None):
+
+ if load is not None:
+ all = load
+ load = None
+ dump = ExplicitNull
+
+ elif dump is not None:
+ all = dump
+ dump = None
+ load = ExplicitNull
+
+ if isinstance(all, str):
+ all = split_object_path(all)
+
+ return Field(load, dump, skip, all, default, default_factory, init, repr,
+ hash, compare, metadata)
+
+
+ class Field(_Field):
+
+ __slots__ = ('load_alias',
+ 'dump_alias',
+ 'skip',
+ 'path')
+
+ # noinspection PyArgumentList,PyShadowingBuiltins
+ def __init__(self,
+ load_alias, dump_alias, skip, path,
+ default, default_factory, init, repr, hash, compare,
+ metadata):
+
+ super().__init__(default, default_factory, init, repr, hash,
+ compare, metadata)
+
+ if path is not None:
+ if isinstance(path, str):
+ path = split_object_path(path) if path else (path,)
+
+ self.load_alias = load_alias
+ self.dump_alias = dump_alias
+ self.skip = skip
+ self.path = path
diff --git a/dataclass_wizard/v1/models.pyi b/dataclass_wizard/v1/models.pyi
new file mode 100644
index 00000000..1d57857e
--- /dev/null
+++ b/dataclass_wizard/v1/models.pyi
@@ -0,0 +1,238 @@
+from dataclasses import MISSING, Field as _Field, dataclass
+from typing import (Collection, Callable,
+ Mapping)
+from typing import TypedDict, overload, Any, NotRequired, Self
+
+from ..bases import META
+from ..models import Condition, PatternedDT
+from ..type_def import DefFactory
+from ..utils.function_builder import FunctionBuilder
+from ..utils.object_path import PathType
+
+
+# Define a simple type (alias) for the `CatchAll` field
+CatchAll = Mapping | None
+
+# Type for a string or a collection of strings.
+_STR_COLLECTION = str | Collection[str]
+
+
+@dataclass(order=True)
+class TypeInfo:
+ __slots__ = ...
+ # type origin (ex. `List[str]` -> `List`)
+ origin: type
+ # type arguments (ex. `Dict[str, int]` -> `(str, int)`)
+ args: tuple[type, ...] | None = None
+ # name of type origin (ex. `List[str]` -> 'list')
+ name: str | None = None
+ # index of iteration, *only* unique within the scope of a field assignment!
+ i: int = 1
+ # index of field within the dataclass, *guaranteed* to be unique.
+ field_i: int = 1
+ # prefix of value in assignment (prepended to `i`),
+ # defaults to 'v' if not specified.
+ prefix: str = 'v'
+ # index of assignment (ex. `2 -> v1[2]`, *or* a string `"key" -> v4["key"]`)
+ index: int | None = None
+
+ def replace(self, **changes) -> TypeInfo: ...
+ @staticmethod
+ def ensure_in_locals(extras: dict[str, Any], *types: Callable) -> None: ...
+ def type_name(self, extras: dict[str, Any]) -> str: ...
+ def v(self) -> str: ...
+ def v_and_next(self) -> tuple[str, str, int]: ...
+ def v_and_next_k_v(self) -> tuple[str, str, str, int]: ...
+ def multi_wrap(self, extras, prefix='', *result, force=False) -> list[str]: ...
+ def wrap(self, result: str, extras: Extras, force=False, prefix='') -> Self: ...
+ def wrap_builtin(self, result: str, extras: Extras) -> Self: ...
+ def wrap_dd(self, default_factory: DefFactory, result: str, extras: Extras) -> Self: ...
+ def _wrap_inner(self, extras: Extras,
+ tp: type | DefFactory | None = None,
+ prefix: str = '',
+ is_builtin: bool = False,
+ force=False) -> str | None: ...
+
+class Extras(TypedDict):
+ """
+ "Extra" config that can be used in the load / dump process.
+ """
+ config: NotRequired[META]
+ cls: type
+ cls_name: str
+ fn_gen: FunctionBuilder
+ locals: dict[str, Any]
+ pattern: NotRequired[PatternedDT]
+
+
+# noinspection PyPep8Naming
+def AliasPath(all: PathType | str | None = None, *,
+ load : PathType | str | None = None,
+ dump : PathType | str | None = None,
+ skip: bool = False,
+ default=MISSING,
+ default_factory: Callable[[], MISSING] = MISSING,
+ init=True, repr=True,
+ hash=None, compare=True, metadata=None, kw_only=False):
+ """
+ Creates a dataclass field mapped to one or more nested JSON paths.
+
+ This function is an alias for ``dataclasses.field(...)``, with additional
+ logic for associating a field with one or more JSON key paths, including
+ nested structures. It can be used to specify custom mappings between
+ dataclass fields and complex, nested JSON key names.
+
+ This mapping is **case-sensitive** and applies to the provided JSON keys
+ or nested paths. For example, passing "myField" will not match "myfield"
+ in JSON, and vice versa.
+
+ `all` represents one or more nested JSON keys (as strings or a collection of strings)
+ to associate with the dataclass field. The keys can include paths like `a.b.c`
+ or even more complex nested paths such as `a["nested"]["key"]`.
+
+ Arguments:
+ all (_STR_COLLECTION): The JSON key(s) or nested path(s) to associate with the dataclass field.
+ default (Any): The default value for the field. Mutually exclusive with `default_factory`.
+ default_factory (Callable[[], Any]): A callable to generate the default value.
+ Mutually exclusive with `default`.
+ init (bool): Include the field in the generated `__init__` method. Defaults to True.
+ repr (bool): Include the field in the `__repr__` output. Defaults to True.
+ hash (bool): Include the field in the `__hash__` method. Defaults to None.
+ compare (bool): Include the field in comparison methods. Defaults to True.
+ metadata (dict): Metadata to associate with the field. Defaults to None.
+
+ Returns:
+        Field: A dataclass field with logic for mapping to one or more nested JSON paths.
+
+ Example #1:
+ >>> from dataclasses import dataclass
+        >>> @dataclass
+        ... class Example:
+        ...     my_str: str = AliasPath(['a.b.c.1', 'x.y["-1"].z'], default=42)
+ >>> # Maps nested paths ('a', 'b', 'c', 1) and ('x', 'y', '-1', 'z')
+ >>> # to the `my_str` attribute.
+
+ Example #2:
+
+ >>> from typing import Annotated
+ >>> my_str: Annotated[str, AliasPath('my."7".nested.path.-321')]
+ >>> # where path.keys == ('my', '7', 'nested', 'path', -321)
+ """
+
+
+# noinspection PyPep8Naming
+def Alias(all: str | None = None, *,
+ load: str | None = None,
+ dump: str | None = None,
+ skip: bool = False,
+ path: PathType | str | None = None,
+ default=MISSING,
+ default_factory: Callable[[], MISSING] = MISSING,
+ init=True, repr=True,
+ hash=None, compare=True, metadata=None, kw_only=False):
+ """
+ This is a helper function that sets the same defaults for keyword
+ arguments as the ``dataclasses.field`` function. It can be thought of as
+ an alias to ``dataclasses.field(...)``, but one which also represents
+ a mapping of one or more JSON key names to a dataclass field.
+
+ This is only in *addition* to the default key transform; for example, a
+ JSON key appearing as "myField", "MyField" or "my-field" will already map
+ to a dataclass field "my_field" by default (assuming the key transform
+ converts to snake case).
+
+ The mapping to each JSON key name is case-sensitive, so passing "myfield"
+ will not match a "myField" key in a JSON string or a Python dict object.
+
+    `all` is a string representing a single JSON key (alias) to associate
+    with the dataclass field, for both de-serialization and serialization.
+
+    To set a one-way alias instead, pass `load` (used only when
+    de-serializing, i.e. in ``from_dict`` or ``from_json``) or `dump`
+    (used only when serializing, i.e. in ``to_dict`` or ``to_json``);
+    an alias set this way is used instead of the default key transform.
+
+    When `skip` is passed as True (default is False), this field will be
+    skipped, or excluded, in the serialization process to JSON.
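+
+    Example -- a quick sketch of intended usage (assumes the ``v1`` opt-in
+    is enabled via the ``Meta`` config):
+
+        >>> from dataclasses import dataclass
+        >>> @dataclass
+        ... class Example:
+        ...     my_str: str = Alias('myField')
+        >>> # On `from_dict`, the JSON key 'myField' maps to `my_str`;
+        >>> # on `to_dict`, `my_str` is written back out as 'myField'.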
+ """
+ ...
+
+
+def skip_if_field(condition: Condition, *,
+ default=MISSING,
+ default_factory: Callable[[], MISSING] = MISSING,
+ init=True, repr=True,
+ hash=None, compare=True, metadata=None,
+ kw_only: bool = MISSING):
+ """
+ Defines a dataclass field with a ``SkipIf`` condition.
+
+ This function is a shortcut for ``dataclasses.field(...)``,
+ adding metadata to specify a condition. If the condition
+ evaluates to ``True``, the field is skipped during
+ JSON serialization.
+
+ Arguments:
+        condition (Condition): The condition; if it evaluates to true, the field is skipped during serialization.
+ default (Any): The default value for the field. Mutually exclusive with `default_factory`.
+ default_factory (Callable[[], Any]): A callable to generate the default value.
+ Mutually exclusive with `default`.
+ init (bool): Include the field in the generated `__init__` method. Defaults to True.
+ repr (bool): Include the field in the `__repr__` output. Defaults to True.
+ hash (bool): Include the field in the `__hash__` method. Defaults to None.
+ compare (bool): Include the field in comparison methods. Defaults to True.
+ metadata (dict): Metadata to associate with the field. Defaults to None.
+ kw_only (bool): If true, the field will become a keyword-only parameter to __init__().
+ Returns:
+ Field: A dataclass field with correct metadata set.
+
+ Example:
+ >>> from dataclasses import dataclass
+        >>> @dataclass
+        ... class Example:
+        ...     my_str: str = skip_if_field(IS_NOT(True))
+ >>> # Creates a condition which skips serializing `my_str`
+ >>> # if its value `is not True`.
+ """
+
+
+class Field(_Field):
+ """
+ Alias to a :class:`dataclasses.Field`, but one which also represents a
+ mapping of one or more JSON key names to a dataclass field.
+
+    See the docs on the :func:`Alias` function for more info.
+ """
+ __slots__ = ('load_alias',
+ 'dump_alias',
+ 'skip',
+ 'path')
+
+ load_alias: str | None
+ dump_alias: str | None
+ # keys: tuple[str, ...] | PathType
+ skip: bool
+ path: PathType | None
+
+ # In Python 3.10, dataclasses adds a new parameter to the :class:`Field`
+ # constructor: `kw_only`
+ #
+ # Ref: https://docs.python.org/3.10/library/dataclasses.html#dataclasses.dataclass
+ @overload
+ def __init__(self,
+ load_alias: str | None,
+ dump_alias: str | None,
+ skip: bool,
+ path: PathType, default, default_factory, init, repr, hash, compare,
+ metadata, kw_only):
+ ...
+
+ @overload
+    def __init__(self,
+ load_alias: str | None,
+ dump_alias: str | None,
+ skip: bool,
+ path: PathType, default, default_factory, init, repr, hash, compare,
+ metadata):
+ ...
diff --git a/dataclass_wizard/wizard_mixins.py b/dataclass_wizard/wizard_mixins.py
index f0b6bc6c..752f6aaf 100644
--- a/dataclass_wizard/wizard_mixins.py
+++ b/dataclass_wizard/wizard_mixins.py
@@ -13,7 +13,7 @@
from .dumpers import asdict
from .enums import LetterCase
from .lazy_imports import toml, toml_w, yaml
-from .loaders import fromdict, fromlist
+from .loader_selection import fromdict, fromlist
from .models import Container
from .serial_json import JSONSerializable
diff --git a/docs/_static/dark_mode.css b/docs/_static/dark_mode.css
new file mode 100644
index 00000000..721dec8d
--- /dev/null
+++ b/docs/_static/dark_mode.css
@@ -0,0 +1,44 @@
+/* General dark mode body */
+body.dark-mode {
+ background-color: #1e1e1e;
+ color: #cfcfcf;
+}
+
+/* Main page content */
+body.dark-mode .body {
+ background-color: #1e1e1e;
+ color: #cfcfcf;
+}
+
+/* Fix for the main content on index */
+body.dark-mode .content {
+ background-color: #1e1e1e;
+ color: #cfcfcf;
+}
+
+/* Sidebar elements */
+body.dark-mode .wy-nav-side,
+body.dark-mode .wy-side-nav-search {
+ background-color: #22272e;
+ color: #cfcfcf;
+}
+
+/* Headings */
+body.dark-mode h1,
+body.dark-mode h2,
+body.dark-mode h3,
+body.dark-mode h4 {
+ color: #ffffff;
+}
+
+/* Links */
+body.dark-mode a {
+ color: #79b8ff;
+}
+
+/* Code blocks */
+body.dark-mode pre,
+body.dark-mode code {
+ background-color: #2d333b;
+ color: #f0f0f0;
+}
diff --git a/docs/_static/dark_mode_toggle.js b/docs/_static/dark_mode_toggle.js
new file mode 100644
index 00000000..c44c39b5
--- /dev/null
+++ b/docs/_static/dark_mode_toggle.js
@@ -0,0 +1,27 @@
+document.addEventListener("DOMContentLoaded", function () {
+ const toggleButton = document.createElement("button");
+    toggleButton.innerText = "🌙 Dark Mode";
+ toggleButton.style.cssText = `
+ position: fixed;
+ bottom: 20px;
+ right: 20px;
+ padding: 8px 12px;
+ background-color: #444;
+ color: white;
+ border: none;
+ cursor: pointer;
+ z-index: 1000;
+ `;
+
+ document.body.appendChild(toggleButton);
+
+ toggleButton.addEventListener("click", function () {
+ document.body.classList.toggle("dark-mode");
+ localStorage.setItem("dark-mode", document.body.classList.contains("dark-mode"));
+ });
+
+ // Persist dark mode preference across pages
+ if (localStorage.getItem("dark-mode") === "true") {
+ document.body.classList.add("dark-mode");
+ }
+});
diff --git a/docs/conf.py b/docs/conf.py
index 1d81eaf4..23ea0626 100755
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -125,6 +125,11 @@
html_css_files = [
'custom.css',
+ 'dark_mode.css',
+]
+
+html_js_files = [
+ 'dark_mode_toggle.js',
]
# Custom sidebar templates, maps document names to template names.
diff --git a/docs/dataclass_wizard.rst b/docs/dataclass_wizard.rst
index 6e07e032..613f1f12 100644
--- a/docs/dataclass_wizard.rst
+++ b/docs/dataclass_wizard.rst
@@ -9,6 +9,7 @@ Subpackages
dataclass_wizard.environ
dataclass_wizard.utils
+ dataclass_wizard.v1
dataclass_wizard.wizard_cli
Submodules
@@ -94,6 +95,14 @@ dataclass\_wizard.lazy\_imports module
:undoc-members:
:show-inheritance:
+dataclass\_wizard.loader\_selection module
+------------------------------------------
+
+.. automodule:: dataclass_wizard.loader_selection
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
dataclass\_wizard.loaders module
--------------------------------
diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index 61d0c4d7..c1637054 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -42,7 +42,7 @@ requirements:
- setuptools
run:
- python
- - typing-extensions >=4 # [py<=310]
+ - typing-extensions >=4.9.0 # [py<=312]
test:
imports:
@@ -65,7 +65,7 @@ about:
# (even if the license doesn't require it) using the license_file entry.
# See https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#license-file
license_file: LICENSE
- summary: Marshal dataclasses to/from JSON. Use field properties with initial values. Construct a dataclass schema with JSON input.
+ summary: Lightning-fast JSON wizardry for Python dataclasses โ effortless serialization with no external tools required!
# The remaining entries in this section are optional, but recommended.
description: |
The dataclass-wizard library provides a set of simple, yet
@@ -79,7 +79,7 @@ about:
The dataclass-wizard is pure Python code that relies entirely on
stdlib, with the only added dependency being
`typing-extensions`
- for Python 3.9 and below.
+ for Python 3.12 and below.
doc_url: https://{{ name }}.readthedocs.io/
dev_url: {{ repo_url }}
diff --git a/requirements-bench.txt b/requirements-bench.txt
new file mode 100644
index 00000000..64bf6c05
--- /dev/null
+++ b/requirements-bench.txt
@@ -0,0 +1,9 @@
+# Benchmark tests
+matplotlib
+pytest-benchmark[histogram]
+dataclasses-json==0.6.7
+jsons==1.6.3
+dataclass-factory==2.16 # pyup: ignore
+dacite==1.8.1
+mashumaro==3.15
+pydantic==2.10.3
diff --git a/requirements-test.txt b/requirements-test.txt
index df8dfca7..bd935ae5 100644
--- a/requirements-test.txt
+++ b/requirements-test.txt
@@ -2,10 +2,3 @@ pytest==8.3.4
pytest-mock>=3.6.1
pytest-cov==6.0.0
# pytest-runner==5.3.1
-# Benchmark tests
-dataclasses-json==0.6.7
-jsons==1.6.3
-dataclass-factory==2.16 # pyup: ignore
-dacite==1.8.1
-mashumaro==3.15
-pydantic==2.10.2
diff --git a/run_bench.py b/run_bench.py
new file mode 100644
index 00000000..e7cbe8d5
--- /dev/null
+++ b/run_bench.py
@@ -0,0 +1,94 @@
+import glob
+import json
+import os
+import shutil
+import subprocess
+import matplotlib.pyplot as plt
+
+
+def run_benchmarks():
+ # Ensure the `.benchmarks` folder exists
+ os.makedirs(".benchmarks", exist_ok=True)
+
+ # Run pytest benchmarks and save results
+ print("Running benchmarks...")
+ result = subprocess.run(
+ ["pytest", "benchmarks/catch_all.py", "--benchmark-save=benchmark_results"],
+ capture_output=True,
+ text=True
+ )
+ print(result.stdout)
+
+
+def load_benchmark_results(file_path):
+ """Load the benchmark results from the provided JSON file."""
+ with open(file_path, "r") as f:
+ return json.load(f)
+
+
+def plot_relative_performance(results):
+ """Plot relative performance for different benchmark groups."""
+ benchmarks = results["benchmarks"]
+
+ # Extract and format data
+ names = []
+ ops = []
+ for bm in benchmarks:
+ group = bm.get("group", "")
+ library = "dataclass-wizard" if "wizard" in bm["name"] else "dataclasses-json"
+ formatted_name = f"{group} ({library})"
+ names.append(formatted_name)
+ ops.append(bm["stats"]["ops"])
+
+ # Calculate relative performance (ratio of each ops to the slowest ops)
+ baseline = min(ops)
+ relative_performance = [op / baseline for op in ops]
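+    # e.g. ops = [2000.0, 500.0, 8000.0] -> baseline = 500.0, so
+    # relative_performance = [4.0, 1.0, 16.0] (times faster than the slowest)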
+
+ # Plot bar chart
+ plt.figure(figsize=(10, 6))
+ bars = plt.barh(names, relative_performance, color="skyblue")
+ plt.xlabel("Performance Relative to Slowest (times faster)")
+ plt.title("Catch All: Relative Performance of dataclass-wizard vs dataclasses-json")
+ plt.tight_layout()
+
+ # Add data labels to the bars
+ for bar, rel_perf in zip(bars, relative_performance):
+ plt.text(bar.get_width() + 0.1, bar.get_y() + bar.get_height() / 2,
+ f"{rel_perf:.1f}x", va="center")
+
+ # Save and display the plot
+ plt.savefig("catch_all.png")
+ plt.show()
+
+
+def find_latest_benchmark_file():
+ """Find the most recent benchmark result file."""
+ benchmark_dir = ".benchmarks"
+ pattern = os.path.join(benchmark_dir, "**", "*.json")
+ files = glob.glob(pattern, recursive=True)
+ if not files:
+ raise FileNotFoundError("No benchmark files found.")
+ latest_file = max(files, key=os.path.getctime) # Find the most recently created file
+ return latest_file
+
+
+if __name__ == "__main__":
+ # Step 1: Run benchmarks
+ run_benchmarks()
+
+ # Step 2: Find the latest benchmark results file
+ benchmark_file = find_latest_benchmark_file()
+ print(f"Latest benchmark file: {benchmark_file}")
+
+ # Step 3: Load the benchmark results
+ if os.path.exists(benchmark_file):
+ results = load_benchmark_results(benchmark_file)
+
+ # Step 4: Plot results
+ plot_relative_performance(results)
+
+ else:
+ print(f"Benchmark file not found: {benchmark_file}")
+
+ # Step 5: Move the generated image to docs folder for easy access
+    shutil.copy("catch_all.png", "docs/")
diff --git a/setup.py b/setup.py
index 44359cee..b5bf7e1b 100644
--- a/setup.py
+++ b/setup.py
@@ -13,7 +13,7 @@
packages = find_packages(include=[package_name, f'{package_name}.*'])
requires = [
- 'typing-extensions>=4; python_version == "3.9" or python_version == "3.10"',
+ 'typing-extensions>=4.9.0; python_version <= "3.12"'
]
if (requires_dev_file := here / 'requirements-dev.txt').exists():
@@ -34,6 +34,12 @@
else: # Running on CI
test_requirements = []
+if (requires_bench_file := here / 'requirements-bench.txt').exists():
+ with requires_bench_file.open() as requires_bench_txt:
+ bench_requirements = [str(req) for req in parse_requirements(requires_bench_txt)]
+else: # Running on CI
+ bench_requirements = []
+
# extras_require = {
# 'dotenv': ['python-dotenv>=0.19.0'],
# }
@@ -77,10 +83,11 @@
'Bug Tracker': 'https://github.com/rnag/dataclass-wizard/issues',
},
license=about['__license__'],
- keywords=['dataclasses', 'dataclass', 'wizard', 'json', 'marshal',
- 'json to dataclass', 'json2dataclass', 'dict to dataclass',
- 'property', 'field-property',
- 'serialization', 'deserialization'],
+ keywords=[
+ 'dataclasses', 'wizard', 'json', 'serialization', 'deserialization',
+ 'dataclass serialization', 'type hints', 'performance', 'alias',
+ 'python', 'env', 'dotenv', 'lightweight'
+ ],
classifiers=[
# Ref: https://pypi.org/classifiers/
'Development Status :: 5 - Production/Stable',
@@ -107,7 +114,7 @@
'tomli-w>=1,<2'
],
'yaml': ['PyYAML>=6,<7'],
- 'dev': dev_requires + doc_requires + test_requirements,
+ 'dev': dev_requires + doc_requires + test_requirements + bench_requirements,
},
zip_safe=False
)
diff --git a/tests/conftest.py b/tests/conftest.py
index d53910ae..d54c8b1a 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -1,4 +1,5 @@
__all__ = [
+ 'snake',
'does_not_raise',
'data_file_path',
'PY310_OR_ABOVE',
@@ -14,6 +15,8 @@
from contextlib import nullcontext as does_not_raise
from pathlib import Path
+from dataclass_wizard.utils.string_conv import to_snake_case
+
# Directory for test files
TEST_DATA_DIR = Path(__file__).resolve().parent / 'testdata'
@@ -46,3 +49,13 @@
def data_file_path(name: str) -> str:
"""Returns the full path to a test file."""
return str((TEST_DATA_DIR / name).absolute())
+
+
+def snake(d):
+ """
+ Helper function to snake-case all keys in a dictionary `d`.
+
+ Useful for `v1`, which by default requires a 1:1 mapping of
+ JSON key to dataclass field.
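+
+    For example::
+
+        snake({'myKey': 1, 'MyBoolTest': True})
+        # -> {'my_key': 1, 'my_bool_test': True}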
+ """
+ return {to_snake_case(k): v for k, v in d.items()}
diff --git a/tests/unit/environ/test_loaders.py b/tests/unit/environ/test_loaders.py
index 63f752e2..a70bda81 100644
--- a/tests/unit/environ/test_loaders.py
+++ b/tests/unit/environ/test_loaders.py
@@ -75,7 +75,7 @@ class MyClass(EnvWizard, reload_env=True):
inner_cls_2: Inner2
c = MyClass()
- print(c)
+ # print(c)
assert c.dict() == {
'inner_cls_1': Inner1(my_bool=False,
diff --git a/tests/unit/environ/test_wizard.py b/tests/unit/environ/test_wizard.py
index d481d7a0..6b572a08 100644
--- a/tests/unit/environ/test_wizard.py
+++ b/tests/unit/environ/test_wizard.py
@@ -441,7 +441,7 @@ class _(EnvWizard.Meta):
# assert that the __init__() method declaration is logged
assert mock_debug_log.records[-1].levelname == 'DEBUG'
- assert 'Generated function code' in mock_debug_log.records[-2].message
+ assert 'Generated function code' in mock_debug_log.records[-3].message
# reset global flag for other tests that
# rely on `debug_enabled` functionality
diff --git a/tests/unit/test_load.py b/tests/unit/test_load.py
index e134c292..26f8f584 100644
--- a/tests/unit/test_load.py
+++ b/tests/unit/test_load.py
@@ -18,7 +18,7 @@
from dataclass_wizard import *
from dataclass_wizard.constants import TAG
from dataclass_wizard.errors import (
- ParseError, MissingFields, UnknownJSONKey, MissingData, InvalidConditionError
+ ParseError, MissingFields, UnknownKeysError, MissingData, InvalidConditionError
)
from dataclass_wizard.models import Extras, _PatternBase
from dataclass_wizard.parsers import (
@@ -69,7 +69,7 @@ class MyClass:
# Technically we don't need to pass `load_cfg`, but we'll pass it in as
# that's how we'd typically expect to do it.
- with pytest.raises(UnknownJSONKey) as exc_info:
+ with pytest.raises(UnknownKeysError) as exc_info:
_ = fromdict(MyClass, d)
e = exc_info.value
@@ -524,9 +524,10 @@ class MyClass(JSONSerializable):
assert e.value.fields == ['my_str']
assert e.value.missing_fields == ['MyBool', 'my_int']
+ _ = e.value.message
# optional: these are populated in this case since this can be a somewhat common issue
- assert e.value.kwargs['key transform'] == 'to_snake_case()'
- assert 'resolution' in e.value.kwargs
+ assert e.value.kwargs['Key Transform'] == 'to_snake_case()'
+ assert 'Resolution' in e.value.kwargs
def test_from_dict_key_transform_with_json_field():
@@ -2033,7 +2034,7 @@ def test_from_dict_with_nested_object_key_path():
"""
@dataclass
- class A(JSONWizard, debug=True):
+ class A(JSONWizard):
an_int: int
a_bool: Annotated[bool, KeyPath('x.y.z.0')]
my_str: str = path_field(['a', 'b', 'c', -1], default='xyz')
@@ -2122,7 +2123,7 @@ def test_from_dict_with_nested_object_key_path_with_skip_defaults():
"""
@dataclass
- class A(JSONWizard, debug=True):
+ class A(JSONWizard):
class _(JSONWizard.Meta):
skip_defaults = True
@@ -2266,7 +2267,7 @@ class B:
extra: CatchAll = None
@dataclass
- class Container(JSONWizard, debug=False):
+ class Container(JSONWizard):
obj2: Union[A, B]
extra: CatchAll = None
@@ -2301,7 +2302,7 @@ def test_skip_if():
skip serializing dataclass fields.
"""
@dataclass
- class Example(JSONWizard, debug=True):
+ class Example(JSONWizard):
class _(JSONWizard.Meta):
skip_if = IS_NOT(True)
key_transform_with_dump = 'NONE'
@@ -2321,7 +2322,7 @@ def test_skip_defaults_if():
skip serializing dataclass fields with default values.
"""
@dataclass
- class Example(JSONWizard, debug=True):
+ class Example(JSONWizard):
class _(JSONWizard.Meta):
key_transform_with_dump = 'None'
skip_defaults_if = IS(None)
@@ -2354,7 +2355,7 @@ def test_per_field_skip_if():
``skip_if_field()`` which wraps ``dataclasses.Field``.
"""
@dataclass
- class Example(JSONWizard, debug=True):
+ class Example(JSONWizard):
class _(JSONWizard.Meta):
key_transform_with_dump = 'None'
@@ -2558,3 +2559,24 @@ class Options(JSONWizard):
'ListOfBool': (1, '0', '1')
})
assert opt.list_of_bool == [True, False, True]
+
+
+@pytest.mark.skip('Ran out of time to get this to work')
+def test_dataclass_decorator_is_automatically_applied():
+ """
+ Confirm the `@dataclass` decorator is automatically
+ applied, if not decorated by the user.
+ """
+ class Test(JSONWizard):
+ my_field: str
+ my_bool: bool = False
+
+ t = Test.from_dict({'myField': 'value'})
+ assert t.my_field == 'value'
+
+ t = Test('test', True)
+ assert t.my_field == 'test'
+ assert t.my_bool
+
+    with pytest.raises(TypeError, match=r".*Test\.__init__\(\) missing 1 required positional argument: 'my_field'"):
+ Test()
diff --git a/tests/unit/test_wizard_mixins.py b/tests/unit/test_wizard_mixins.py
index 617f3106..22ffbc7d 100644
--- a/tests/unit/test_wizard_mixins.py
+++ b/tests/unit/test_wizard_mixins.py
@@ -231,7 +231,7 @@ class MyClass(TOMLWizard, key_transform='SNAKE'):
MyClass('testing!', {'333': 'this is a test.'})
])
- print(toml_string)
+ # print(toml_string)
assert toml_string == """\
items = [
diff --git a/tests/unit/v1/__init__.py b/tests/unit/v1/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/tests/unit/v1/test_loaders.py b/tests/unit/v1/test_loaders.py
new file mode 100644
index 00000000..eeb974bf
--- /dev/null
+++ b/tests/unit/v1/test_loaders.py
@@ -0,0 +1,3062 @@
+"""
+Tests for the `loaders` module, but more importantly for the `parsers` module.
+
+Note: I might refactor this into a separate `test_parsers.py` as time permits.
+"""
+import logging
+from abc import ABC
+from collections import namedtuple, defaultdict, deque
+from dataclasses import dataclass, field
+from datetime import datetime, date, time, timedelta
+from typing import (
+ List, Optional, Union, Tuple, Dict, NamedTuple, DefaultDict,
+ Set, FrozenSet, Annotated, Literal, Sequence, MutableSequence, Collection
+)
+
+import pytest
+
+from dataclass_wizard import *
+from dataclass_wizard.constants import TAG
+from dataclass_wizard.errors import (
+ ParseError, MissingFields, UnknownKeysError, MissingData, InvalidConditionError
+)
+from dataclass_wizard.models import _PatternBase
+from dataclass_wizard.v1 import *
+from ..conftest import MyUUIDSubclass
+from ...conftest import *
+
+log = logging.getLogger(__name__)
+
+
+def test_missing_fields_is_raised():
+
+ @dataclass
+ class Test(JSONWizard, debug=True):
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str
+ my_int: int
+ my_bool: bool
+ my_float: float = 1.23
+
+
+ with pytest.raises(MissingFields) as exc_info:
+ _ = Test.from_dict({'my_bool': True})
+
+ e, tp = exc_info.value, exc_info.type
+
+ assert tp is MissingFields
+ assert e.fields == ['my_bool']
+ assert e.missing_fields == ['my_str', 'my_int']
+
+
+def test_auto_key_casing():
+
+ @dataclass
+ class Test(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'AUTO'
+
+ my_str: str
+ my_bool_test: bool
+ my_int: int
+ my_float: float = 1.23
+
+ d = {'My-Str': 'test', 'myBoolTest': True, 'MyInt': 123, 'my_float': 42, }
+
+ assert Test.from_dict(d) == Test(my_str='test', my_bool_test=True, my_int=123, my_float=42.0)
+
+
+def test_alias_mapping():
+
+ @dataclass
+ class Test(JSONPyWizard):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+ v1_field_to_alias = {'my_int': 'MyInt'}
+
+ my_str: str = Alias('a_str')
+ my_bool_test: Annotated[bool, Alias('myBoolTest')]
+ my_int: int
+ my_float: float = 1.23
+
+ d = {'a_str': 'test', 'myBoolTest': True, 'MyInt': 123, 'my_float': 42}
+
+ t = Test.from_dict(d)
+ assert t == Test(my_str='test', my_bool_test=True, my_int=123, my_float=42.0)
+
+ assert t.to_dict() == {'a_str': 'test', 'myBoolTest': True, 'MyInt': 123, 'my_float': 42.0}
+
+
+def test_alias_mapping_with_load_or_dump():
+
+ @dataclass
+ class Test(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+ key_transform_with_dump = 'NONE'
+ v1_field_to_alias = {
+ 'my_int': 'MyInt',
+ '__load__': False,
+ }
+
+ my_str: str = Alias(load='a_str')
+ my_bool_test: Annotated[bool, Alias(dump='myDumpedBool')]
+ my_int: int
+ other_int: int = Alias(dump='DumpedInt')
+ my_float: float = 1.23
+
+ d = {'a_str': 'test',
+ 'myBoolTest': 'T',
+ 'myInt': 123,
+ 'otherInt': 321,
+ 'myFloat': 42}
+
+ t = Test.from_dict(d)
+ assert t == Test(my_str='test',
+ my_bool_test=True,
+ my_int=123,
+ other_int=321,
+ my_float=42.0)
+
+ assert t.to_dict() == {'my_str': 'test',
+ 'MyInt': 123,
+ 'DumpedInt': 321,
+ 'myDumpedBool': True,
+ 'my_float': 42.0}
+
+
+def test_fromdict():
+ """
+ Confirm that Meta settings for `fromdict` are applied as expected.
+ """
+
+ @dataclass
+ class MyClass:
+ my_bool: Optional[bool]
+ myStrOrInt: Union[str, int]
+
+ d = {'myBoolean': 'tRuE', 'myStrOrInt': 123}
+
+ LoadMeta(v1=True,
+ key_transform='CAMEL',
+ v1_field_to_alias={'my_bool': 'myBoolean'}).bind_to(MyClass)
+
+ c = fromdict(MyClass, d)
+
+ assert c.my_bool is True
+ assert isinstance(c.myStrOrInt, int)
+ assert c.myStrOrInt == 123
+
+
+# TODO multiple keys can be raised
+def test_fromdict_raises_on_unknown_json_fields():
+ """
+ Confirm that Meta settings for `fromdict` are applied as expected.
+ """
+
+ @dataclass
+ class MyClass:
+ my_bool: Optional[bool]
+
+ d = {'myBoolean': 'tRuE', 'my_string': 'Hello world!'}
+ LoadMeta(
+ v1=True,
+ v1_field_to_alias={'my_bool': 'myBoolean'},
+ v1_on_unknown_key='Raise').bind_to(MyClass)
+
+ with pytest.raises(UnknownKeysError) as exc_info:
+ _ = fromdict(MyClass, d)
+
+ e = exc_info.value
+
+ assert e.json_key == {'my_string'}
+ assert e.obj == d
+ assert e.fields == ['my_bool']
+
+
+def test_from_dict_raises_on_unknown_json_key_nested():
+
+ @dataclass
+ class Sub(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1_key_case = 'P'
+
+ my_str: str
+
+ @dataclass
+ class Test(JSONWizard, debug=True):
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_on_unknown_key = 'RAISE'
+
+ my_str: str = Alias('a_str')
+ my_bool: bool
+ my_sub: Sub
+
+
+ d = {'a_str': 'test',
+ 'my_bool': True,
+ 'my_sub': {'MyStr': 'test'}}
+ t = Test.from_dict(d)
+ log.debug(repr(t))
+
+ d = {'a_str': 'test',
+ 'my_sub': {'MyStr': 'test'},
+ 'my_bool': 'F',
+ 'my_str': 'test2', 'myBoolTest': True, 'MyInt': 123}
+
+ with pytest.raises(UnknownKeysError) as exc_info:
+ _ = Test.from_dict(d)
+
+ e = exc_info.value
+
+ # TODO
+ assert e.unknown_keys == {'myBoolTest', 'MyInt', 'my_str'}
+ assert e.obj == d
+ assert e.fields == ['my_str', 'my_bool', 'my_sub']
+
+ d = {'a_str': 'test',
+ 'my_bool': True,
+ 'my_sub': {'MyStr': 'test', 'myBoolTest': False}}
+
+ # d = {'a_str': 'test',
+ # 'my_bool': True,
+ # 'my_sub': {'MyStr': 'test', 'my_bool': False, 'myBoolTest': False},
+ # }
+
+ with pytest.raises(UnknownKeysError) as exc_info:
+ _ = Test.from_dict(d)
+
+ e = exc_info.value
+
+ assert e.unknown_keys == {'myBoolTest'}
+ assert e.obj == d['my_sub']
+ assert e.fields == ['my_str']
+
+
+@pytest.mark.xfail(reason='TODO need to support multiple JSON keys for a dataclass field')
+def test_fromdict_with_key_case_auto_expected_failure():
+ """
+ Failure in `fromdict()` because multiple JSON keys are not mapped to single dataclass field.
+ """
+ @dataclass
+ class MyElement:
+ order_index: int
+ status_code: 'int | str'
+
+ @dataclass
+ class Container:
+ id: int
+ my_elements: list[MyElement]
+
+ d = {'id': '123',
+ 'myElements': [
+ {'orderIndex': 111,
+ 'statusCode': '200'},
+ {'order_index': '222',
+ 'status_code': 404}
+ ]}
+
+ LoadMeta(v1=True, v1_key_case='AUTO').bind_to(Container)
+
+ # Failure!
+ c = fromdict(Container, d)
+
+
+def test_fromdict_with_nested_dataclass():
+ """Confirm that `fromdict` works for nested dataclasses as well."""
+
+ @dataclass
+ class Container:
+ id: int
+ submittedDt: datetime
+ myElements: List['MyElement']
+
+ @dataclass
+ class MyElement:
+ order_index: Optional[int]
+ status_code: Union[int, str]
+
+ d = {'id': '123',
+ 'submittedDt': '2021-01-01 05:00:00',
+ 'myElements': [
+ {'orderIndex': 111,
+ 'statusCode': '200'},
+ {'orderIndex': '222',
+ 'statusCode': 404}
+ ]}
+
+ # Fix so the forward reference works (since the class definition is inside
+ # the test case)
+ globals().update(locals())
+
+ LoadMeta(
+ v1=True,
+ recursive=False).bind_to(Container)
+
+ LoadMeta(v1=True, v1_key_case='AUTO').bind_to(MyElement)
+
+ c = fromdict(Container, d)
+
+ assert c.id == 123
+ assert c.submittedDt == datetime(2021, 1, 1, 5, 0)
+    # The key transform only applies to the top-level dataclass,
+    # unfortunately. We need to set up `LoadMeta` for `MyElement`
+    # if we need a different key transform.
+ assert c.myElements == [
+ MyElement(order_index=111, status_code='200'),
+ MyElement(order_index=222, status_code=404)
+ ]
+
+
+def test_invalid_types_with_debug_mode_enabled():
+ """
+ Passing invalid types (i.e. that *can't* be coerced into the annotated
+ field types) raises a formatted error when DEBUG mode is enabled.
+ """
+ @dataclass
+ class InnerClass:
+ my_float: float
+ my_list: List[int] = field(default_factory=list)
+
+ @dataclass
+ class MyClass(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'CAMEL'
+ debug_enabled = True
+
+ my_int: int
+ my_dict: Dict[str, datetime] = field(default_factory=dict)
+ my_inner: Optional[InnerClass] = None
+
+ with pytest.raises(ParseError) as e:
+ _ = MyClass.from_dict({'myInt': '3', 'myDict': 'string'})
+
+ err = e.value
+ assert type(err.base_error) == AttributeError
+ assert "no attribute 'items'" in str(err.base_error)
+ assert err.class_name == MyClass.__qualname__
+ assert err.field_name == 'my_dict'
+ assert (err.ann_type, err.obj_type) == (Dict[str, datetime], str)
+
+ with pytest.raises(ParseError) as e:
+ _ = MyClass.from_dict({'myInt': '1', 'myInner': {'myFloat': '1.A'}})
+
+ err = e.value
+ assert type(err.base_error) == ValueError
+ assert "could not convert" in str(err.base_error)
+ assert err.class_name == InnerClass.__qualname__
+ assert err.field_name == 'my_float'
+ assert (err.ann_type, err.obj_type) == (float, str)
+
+ with pytest.raises(ParseError) as e:
+ _ = MyClass.from_dict({
+ 'myInt': '1',
+ 'myDict': {2: '2021-01-01'},
+ 'myInner': {
+ 'my-float': '1.23',
+ 'myList': [{'key': 'value'}]
+ }
+ })
+
+ err = e.value
+ assert type(err.base_error) == TypeError
+ assert "int()" in str(err.base_error)
+ assert err.class_name == InnerClass.__qualname__
+ assert err.field_name == 'my_list'
+ assert (err.ann_type, err.obj_type) == (List[int], list)
+
+
+def test_from_dict_called_with_incorrect_type():
+ """
+ Calling `from_dict` with a non-`dict` argument should raise a
+ formatted error, i.e. with a :class:`ParseError` object.
+ """
+ @dataclass
+ class MyClass(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str
+
+ with pytest.raises(ParseError) as e:
+ # noinspection PyTypeChecker
+ _ = MyClass.from_dict(['my_str'])
+
+ err = e.value
+ assert e.value.field_name == 'my_str'
+ assert e.value.class_name == MyClass.__qualname__
+ assert e.value.obj == ['my_str']
+ assert 'Incorrect type' in str(e.value.base_error)
+ # basically says we want a `dict`, but were passed in a `list`
+ assert (err.ann_type, err.obj_type) == (dict, list)
+
+
+@pytest.mark.xfail(reason='Need to add support in v1')
+def test_date_times_with_custom_pattern():
+ """
+ Date, time, and datetime objects with a custom date string
+ format that will be passed to the built-in `datetime.strptime` method
+ when de-serializing date strings.
+
+ Note that the serialization format for dates and times still use ISO
+ format, by default.
+ """
+
+ def create_strict_eq(name, bases, cls_dict):
+ """Generate a strict "type" equality method for a class."""
+ cls = type(name, bases, cls_dict)
+ __class__ = cls # provide closure cell for super()
+
+ def __eq__(self, other):
+ if type(other) is not cls: # explicitly check the type
+ return False
+ return super().__eq__(other)
+
+ cls.__eq__ = __eq__
+ return cls
+
+ class MyDate(date, metaclass=create_strict_eq):
+ ...
+
+ class MyTime(time, metaclass=create_strict_eq):
+ def get_hour(self):
+ return self.hour
+
+ class MyDT(datetime, metaclass=create_strict_eq):
+ def get_year(self):
+ return self.year
+
+ @dataclass
+ class MyClass:
+ date_field1: DatePattern['%m-%y']
+ time_field1: TimePattern['%H-%M']
+ dt_field1: DateTimePattern['%d, %b, %Y %I::%M::%S.%f %p']
+ date_field2: Annotated[MyDate, Pattern('%Y/%m/%d')]
+ time_field2: Annotated[List[MyTime], Pattern('%I:%M %p')]
+ dt_field2: Annotated[MyDT, Pattern('%m/%d/%y %H@%M@%S')]
+
+ other_field: str
+
+ data = {'date_field1': '12-22',
+ 'time_field1': '15-20',
+ 'dt_field1': '3, Jan, 2022 11::30::12.123456 pm',
+ 'date_field2': '2021/12/30',
+ 'time_field2': ['1:20 PM', '12:30 am'],
+ 'dt_field2': '01/02/23 02@03@52',
+ 'other_field': 'testing'}
+
+ LoadMeta(v1=True).bind_to(MyClass)
+
+ class_obj = fromdict(MyClass, data)
+
+ # noinspection PyTypeChecker
+ expected_obj = MyClass(date_field1=date(2022, 12, 1),
+ time_field1=time(15, 20),
+ dt_field1=datetime(2022, 1, 3, 23, 30, 12, 123456),
+ date_field2=MyDate(2021, 12, 30),
+ time_field2=[MyTime(13, 20), MyTime(0, 30)],
+ dt_field2=MyDT(2023, 1, 2, 2, 3, 52),
+ other_field='testing')
+
+ log.debug('Deserialized object: %r', class_obj)
+ # Assert that dates / times are correctly de-serialized as expected.
+ assert class_obj == expected_obj
+
+ serialized_dict = asdict(class_obj)
+
+ expected_dict = {'dateField1': '2022-12-01',
+ 'timeField1': '15:20:00',
+ 'dtField1': '2022-01-03T23:30:12.123456',
+ 'dateField2': '2021-12-30',
+ 'timeField2': ['13:20:00', '00:30:00'],
+ 'dtField2': '2023-01-02T02:03:52',
+ 'otherField': 'testing'}
+
+ log.debug('Serialized dict object: %s', serialized_dict)
+ # Assert that dates / times are correctly serialized as expected.
+ assert serialized_dict == expected_dict
+
+ # Assert that de-serializing again, using the serialized date strings
+ # in ISO format, still works.
+ assert fromdict(MyClass, serialized_dict) == expected_obj
+
+
+@pytest.mark.xfail(reason='Need to add support in v1')
+def test_date_times_with_custom_pattern_when_input_is_invalid():
+ """
+ Date, time, and datetime objects with a custom date string
+ format, but the input date string does not match the set pattern.
+ """
+
+ @dataclass
+ class MyClass:
+ date_field: DatePattern['%m-%d-%y']
+
+ data = {'date_field': '12.31.21'}
+
+ LoadMeta(v1=True).bind_to(MyClass)
+
+ with pytest.raises(ParseError):
+ _ = fromdict(MyClass, data)
+
+
+@pytest.mark.skip(reason='Need to add support in v1')
+def test_date_times_with_custom_pattern_when_annotation_is_invalid():
+ """
+ Date, time, and datetime objects with a custom date string
+ format, but the annotated type is not a valid date/time type.
+ """
+ class MyCustomPattern(str, _PatternBase):
+ pass
+
+ @dataclass
+ class MyClass:
+ date_field: MyCustomPattern['%m-%d-%y']
+
+ data = {'date_field': '12-31-21'}
+
+ LoadMeta(v1=True).bind_to(MyClass)
+
+ with pytest.raises(TypeError) as e:
+ _ = fromdict(MyClass, data)
+
+ log.debug('Error details: %r', e.value)
+
+
+def test_tag_field_is_used_in_load_process():
+ """
+ Confirm that the `_TAG` field is used when de-serializing to a dataclass
+ instance (even for nested dataclasses) when a value is set in the
+ `Meta` config for a JSONWizard sub-class.
+ """
+
+ @dataclass
+ class Data(ABC):
+ """ base class for a Member """
+ number: float
+
+ class DataA(Data, JSONWizard):
+ """ A type of Data"""
+ class _(JSONWizard.Meta):
+ """
+ This defines a custom tag that uniquely identifies the dataclass.
+ """
+ tag = 'A'
+
+ class DataB(Data, JSONWizard):
+ """ Another type of Data """
+ class _(JSONWizard.Meta):
+ """
+ This defines a custom tag that uniquely identifies the dataclass.
+ """
+ tag = 'B'
+
+ class DataC(Data):
+ """ A type of Data"""
+
+ @dataclass
+ class Container(JSONWizard):
+ """ container holds a subclass of Data """
+ class _(JSONWizard.Meta):
+ v1 = True
+ tag = 'CONTAINER'
+            # Needed for `DataC`, which doesn't have a tag assigned
+ v1_unsafe_parse_dataclass_in_union = True
+
+ data: Union[DataA, DataB, DataC]
+
+ data = {
+ 'data': {
+ TAG: 'A',
+ 'number': '1.0'
+ }
+ }
+
+ # initialize container with DataA
+ container = Container.from_dict(data)
+
+ # Assert we de-serialize as a DataA object.
+ assert type(container.data) == DataA
+ assert isinstance(container.data.number, float)
+ assert container.data.number == 1.0
+
+ data = {
+ 'data': {
+ TAG: 'B',
+ 'number': 2.0
+ }
+ }
+
+    # initialize container with DataB
+    container = Container.from_dict(data)
+
+    # Assert we de-serialize as a DataB object.
+ assert type(container.data) == DataB
+ assert isinstance(container.data.number, float)
+ assert container.data.number == 2.0
+
+ # Test we receive an error when we provide an invalid tag value
+ data = {
+ 'data': {
+ TAG: 'C',
+ 'number': 2.0
+ }
+ }
+
+ with pytest.raises(ParseError):
+ _ = Container.from_dict(data)
+
+
+def test_e2e_process_with_init_only_fields():
+ """
+ We are able to correctly de-serialize a class instance that excludes some
+ dataclass fields from the constructor, i.e. `field(init=False)`
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_str: str
+ my_float: float = field(default=0.123, init=False)
+ my_int: int = 1
+
+ c = MyClass('testing')
+
+ expected = {'myStr': 'testing', 'myFloat': 0.123, 'myInt': 1}
+
+ out_dict = c.to_dict()
+ assert out_dict == expected
+
+ # Assert we are able to de-serialize the data back as expected
+ assert c.from_dict(out_dict) == c
+
+
+@pytest.mark.parametrize(
+ 'input,expected',
+ [
+ (True, True),
+ ('TrUe', True),
+ ('y', True),
+ ('T', True),
+ (1, True),
+ (False, False),
+ ('False', False),
+ ('testing', False),
+ (0, False),
+ ]
+)
+def test_bool(input, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'P'
+
+ my_bool: bool
+
+ d = {'MyBool': input}
+
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.my_bool == expected
+
+
+def test_from_dict_handles_identical_cased_json_keys():
+ """
+ Calling `from_dict` when required JSON keys have the same casing as
+ dataclass field names, even when the field names are not "snake-cased".
+
+ See https://github.com/rnag/dataclass-wizard/issues/54 for more details.
+ """
+
+ @dataclass
+ class ExtendedFetch(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ comments: dict
+ viewMode: str
+ my_str: str
+ MyBool: bool
+
+ j = '{"viewMode": "regular", "comments": {}, "MyBool": "true", "my_str": "Testing"}'
+
+ c = ExtendedFetch.from_json(j)
+
+ assert c.comments == {}
+ assert c.viewMode == 'regular'
+ assert c.my_str == 'Testing'
+ assert c.MyBool
+
+
+def test_from_dict_with_missing_fields():
+ """
+ Calling `from_dict` when required dataclass field(s) are missing in the
+ JSON object.
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str
+ MyBool1: bool
+ my_int: int
+
+ value = 'Testing'
+ d = {'my_str': value, 'myBool': 'true'}
+
+ with pytest.raises(MissingFields) as e:
+ _ = MyClass.from_dict(d)
+
+ assert e.value.fields == ['my_str']
+ assert e.value.missing_fields == ['MyBool1', 'my_int']
+    assert 'Key Transform' not in e.value.kwargs
+    assert 'Resolution' not in e.value.kwargs
+
+
+def test_from_dict_with_missing_fields_with_resolution():
+ """
+ Calling `from_dict` when required dataclass field(s) are missing in the
+ JSON object, with a more user-friendly message.
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str
+ MyBool: bool
+ my_int: int
+
+ value = 'Testing'
+ d = {'my_str': value, 'myBool': 'true'}
+
+ with pytest.raises(MissingFields) as e:
+ _ = MyClass.from_dict(d)
+
+ assert e.value.fields == ['my_str']
+ assert e.value.missing_fields == ['MyBool', 'my_int']
+ _ = e.value.message
+ # optional: these are populated in this case since this can be a somewhat common issue
+ assert e.value.kwargs['Key Transform'] is None
+ assert 'Resolution' in e.value.kwargs
+
+
+def test_from_dict_key_transform_with_json_field():
+ """
+ Specifying a custom mapping of alias key to dataclass field, via the
+ `Alias` helper function.
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str = Alias('myCustomStr')
+ my_bool: bool = Alias('myTestBool')
+
+ # TODO: currently multiple aliases are not supported
+ # my_bool: bool = Alias(('my_json_bool', 'myTestBool'))
+
+ value = 'Testing'
+ d = {'myCustomStr': value, 'myTestBool': 'true'}
+
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.my_str == value
+ assert result.my_bool is True
+
+
+def test_from_dict_key_transform_with_json_key():
+ """
+    Specifying a custom mapping of JSON key to dataclass field, via the
+    `Alias` helper function used within `Annotated`.
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: Annotated[str, Alias('myCustomStr')]
+ my_bool: Annotated[bool, Alias('myTestBool')]
+
+ value = 'Testing'
+ d = {'myCustomStr': value, 'myTestBool': 'true'}
+
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.my_str == value
+ assert result.my_bool is True
+
+
+@pytest.mark.parametrize(
+ 'input,expected,expectation',
+ [
+ ([1, '2', 3], {1, 2, 3}, does_not_raise()),
+ ('TrUe', True, pytest.raises(ParseError)),
+ ((3.22, 2.11, 1.22), {3, 2, 1}, does_not_raise()),
+ ]
+)
+def test_set(input, expected, expectation):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ num_set: Set[int]
+ any_set: set
+
+ d = {'num_set': input, 'any_set': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert isinstance(result.num_set, set)
+ assert isinstance(result.any_set, set)
+
+ assert result.num_set == expected
+ assert result.any_set == set(input)
+
+
+@pytest.mark.parametrize(
+ 'input,expected,expectation',
+ [
+ ([1, '2', 3], {1, 2, 3}, does_not_raise()),
+ ('TrUe', True, pytest.raises(ParseError)),
+ ((3.22, 2.11, 1.22), {1, 2, 3}, does_not_raise()),
+ ]
+)
+def test_frozenset(input, expected, expectation):
+
+ @dataclass
+ class MyClass(JSONSerializable):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ num_set: FrozenSet[int]
+ any_set: frozenset
+
+ d = {'num_set': input, 'any_set': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert isinstance(result.num_set, frozenset)
+ assert isinstance(result.any_set, frozenset)
+
+ assert result.num_set == expected
+ assert result.any_set == frozenset(input)
+
+
+@pytest.mark.parametrize(
+ 'input,expectation',
+ [
+ ('testing', pytest.raises(ParseError)),
+ ('e1', does_not_raise()),
+ # TODO: currently no type check for Literal
+ # (False, pytest.raises(ParseError)),
+ (0, does_not_raise()),
+ ]
+)
+def test_literal(input, expectation):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1_key_case = 'P'
+ v1 = True
+
+ my_lit: Literal['e1', 'e2', 0]
+
+ d = {'MyLit': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+
+@pytest.mark.parametrize(
+ 'input,expected',
+ [
+ (True, True),
+ (None, None),
+ ('TrUe', True),
+ ('y', True),
+ ('T', True),
+ ('F', False),
+ ('On', True),
+ ('OFF', False),
+ (1, True),
+ (False, False),
+ (0, False),
+ ]
+)
+def test_annotated(input, expected):
+
+ @dataclass(unsafe_hash=True)
+ class MaxLen:
+ length: int
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'Auto'
+
+ bool_or_none: Annotated[Optional[bool], MaxLen(23), "testing", 123]
+
+ d = {'Bool-Or-None': input}
+
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.bool_or_none == expected
+
+
+@pytest.mark.parametrize(
+ 'input',
+ [
+ '12345678-1234-1234-1234-1234567abcde',
+ '{12345678-1234-5678-1234-567812345678}',
+ '12345678123456781234567812345678',
+ 'urn:uuid:12345678-1234-5678-1234-567812345678'
+ ]
+)
+def test_uuid(input):
+
+ @dataclass
+ class MyUUIDTestClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_id: MyUUIDSubclass
+
+ d = {'my_id': input}
+
+ result = MyUUIDTestClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ expected = MyUUIDSubclass(input)
+
+ assert result.my_id == expected
+ assert isinstance(result.my_id, MyUUIDSubclass)
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ ('testing', does_not_raise(), 'testing'),
+ (False, does_not_raise(), 'False'),
+ (0, does_not_raise(), '0'),
+ (None, does_not_raise(), None),
+ ]
+)
+def test_optional(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'P'
+
+ my_str: str
+ my_opt_str: Optional[str]
+
+ d = {'MyStr': input, 'MyOptStr': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.my_opt_str == expected
+ if input is None:
+ assert result.my_str == '', \
+ 'expected `my_str` to be set to an empty string'
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ ('testing', does_not_raise(), 'testing'),
+ # The actual value would end up being 0 (int) if we checked the type
+ # using `isinstance` instead. However, we do an exact `type` check for
+ # :class:`Union` types.
+ (False, does_not_raise(), False),
+ (0, does_not_raise(), 0),
+ (None, does_not_raise(), None),
+ # Since the first type in `Union` is `str`,
+ # the float value is converted to a string.
+ (1.2, does_not_raise(), '1.2')
+ ]
+)
+def test_union(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_opt_str_int_or_bool: Union[str, int, bool, None]
+
+ d = {'myOptStrIntOrBool': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+ assert result.my_opt_str_int_or_bool == expected
+
+
+def test_forward_refs_are_resolved():
+ """
+ Confirm that :class:`typing.ForwardRef` usages, such as `List['B']`,
+ are resolved correctly.
+
+ """
+ @dataclass
+ class A(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ b: List['B']
+ c: 'C'
+
+ @dataclass
+ class B:
+ optional_int: Optional[int] = None
+
+ @dataclass
+ class C:
+ my_str: str
+
+    # This is a trick that allows us to treat classes A, B, and C as if
+    # they were defined at the module level. Otherwise, the forward
+    # references won't resolve as expected.
+ globals().update(locals())
+
+ d = {'b': [{}], 'c': {'my_str': 'testing'}}
+
+ a = A.from_dict(d)
+
+ log.debug(a)
+
+
+@pytest.mark.parametrize(
+ 'input,expectation',
+ [
+ ('testing', pytest.raises(ParseError)),
+ ('2020-01-02T01:02:03Z', does_not_raise()),
+ ('2010-12-31 23:59:59-04:00', does_not_raise()),
+ (123456789, does_not_raise()),
+ (True, pytest.raises(ParseError)),
+ (datetime(2010, 12, 31, 23, 59, 59), does_not_raise()),
+ ]
+)
+def test_datetime(input, expectation):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_dt: datetime
+
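+    # Numeric inputs (e.g. 123456789 above) are presumably treated as
+    # Unix timestamps and converted via `datetime.fromtimestamp`.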
+ d = {'my_dt': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+
+@pytest.mark.parametrize(
+ 'input,expectation',
+ [
+ ('testing', pytest.raises(ParseError)),
+ ('2020-01-02', does_not_raise()),
+ ('2010-12-31', does_not_raise()),
+ (123456789, does_not_raise()),
+ (True, pytest.raises(ParseError)),
+ (date(2010, 12, 31), does_not_raise()),
+ ]
+)
+def test_date(input, expectation):
+
+ @dataclass
+ class MyClass(JSONSerializable):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_d: date
+
+ d = {'my_d': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+
+@pytest.mark.parametrize(
+ 'input,expectation',
+ [
+ ('testing', pytest.raises(ParseError)),
+ ('01:02:03Z', does_not_raise()),
+ ('23:59:59-04:00', does_not_raise()),
+ (123456789, pytest.raises(ParseError)),
+ (True, pytest.raises(ParseError)),
+ (time(23, 59, 59), does_not_raise()),
+ ]
+)
+def test_time(input, expectation):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_t: time
+
+ d = {'my_t': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+
+
+@pytest.mark.parametrize(
+    'input,expectation,base_err',
+ [
+ ('testing', pytest.raises(ParseError), ValueError),
+ ('23:59:59-04:00', pytest.raises(ParseError), ValueError),
+ ('32', does_not_raise(), None),
+ ('32.7', does_not_raise(), None),
+ ('32m', does_not_raise(), None),
+ ('2h32m', does_not_raise(), None),
+ ('4:13', does_not_raise(), None),
+ ('5hr34m56s', does_not_raise(), None),
+ ('1.2 minutes', does_not_raise(), None),
+ (12345, does_not_raise(), None),
+ (True, pytest.raises(ParseError), TypeError),
+ (timedelta(days=1, seconds=2), does_not_raise(), None),
+ ]
+)
+def test_timedelta(input, expectation, base_err):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_td: timedelta
+
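+    # Human-readable strings such as '2h32m' are presumably parsed via
+    # the optional `pytimeparse` dependency (the `timedelta` extra).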
+ d = {'my_td': input}
+
+ with expectation as e:
+ result = MyClass.from_dict(d)
+ log.debug('Parsed object: %r', result)
+ log.debug('timedelta string value: %s', result.my_td)
+
+ if e: # if an error was raised, assert the underlying error type
+        assert type(e.value.base_error) is base_err
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ # For the `int` parser, only do explicit type checks against
+ # `bool` currently (which is a special case) so this is expected
+ # to pass.
+ [{}], does_not_raise(), [0]),
+ (
+            # `bool` is a subclass of `int`, so we explicitly check for
+            # this type.
+ [True, False], pytest.raises(ParseError), None),
+ (
+ ['hello', 'world'], pytest.raises(ParseError), None
+ ),
+ (
+ [1, 'two', 3], pytest.raises(ParseError), None),
+ (
+ [1, '2', 3], does_not_raise(), [1, 2, 3]
+ ),
+ (
+ 'testing', pytest.raises(ParseError), None
+ ),
+ ]
+)
+def test_list(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_list: List[int]
+
+ d = {'my_list': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_list == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ ['hello', 'world'], pytest.raises(ParseError), None
+ ),
+ (
+ [1, '2', 3], does_not_raise(), [1, 2, 3]
+ ),
+ ]
+)
+def test_deque(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_deque: deque[int]
+
+ d = {'my_deque': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+
+ assert isinstance(result.my_deque, deque)
+ assert list(result.my_deque) == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ [{}], does_not_raise(), [{}]),
+ (
+ [True, False], does_not_raise(), [True, False]),
+ (
+ ['hello', 'world'], does_not_raise(), ['hello', 'world']
+ ),
+ (
+ [1, 'two', 3], does_not_raise(), [1, 'two', 3]),
+ (
+ [1, '2', 3], does_not_raise(), [1, '2', 3]
+ ),
+ # TODO maybe we should raise an error in this case?
+ (
+ 'testing', does_not_raise(),
+ ['t', 'e', 's', 't', 'i', 'n', 'g']
+ ),
+ ]
+)
+def test_list_without_type_hinting(input, expectation, expected):
+ """
+ Test case for annotating with a bare `list` (acts as just a pass-through
+ for its elements)
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_list: list
+
+ d = {'my_list': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_list == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ # Wrong number of elements (technically the wrong type)
+ [{}], pytest.raises(ParseError), None),
+ (
+ [True, False, True], pytest.raises(ParseError), None),
+ (
+ [1, 'hello'], pytest.raises(ParseError), None
+ ),
+ (
+ ['1', 'two', True], does_not_raise(), (1, 'two', True)),
+ (
+ 'testing', pytest.raises(ParseError), None
+ ),
+ ]
+)
+def test_tuple(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_tuple: Tuple[int, str, bool]
+
+ d = {'my_tuple': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_tuple == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ # Wrong number of elements (technically the wrong type)
+ [{}], pytest.raises(ParseError), None),
+ (
+ [True, False, True], pytest.raises(ParseError), None),
+ (
+ [1, 'hello'], pytest.raises(ParseError), None
+ ),
+ (
+ ['1', 'two', 'tRuE'], pytest.raises(ParseError), None
+ ),
+ (
+ ['1', 'two', None, 3], does_not_raise(), (1, 'two', None, 3)),
+ (
+ ['1', 'two', 'false', None], does_not_raise(),
+ (1, 'two', False, None)),
+ (
+ 'testing', pytest.raises(ParseError), None
+ ),
+ ]
+)
+def test_tuple_with_optional_args(input, expectation, expected):
+ """
+ Test case when annotated type has any "optional" arguments, such as
+ `Tuple[str, Optional[int]]` or
+ `Tuple[bool, Optional[str], Union[int, None]]`.
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_tuple: Tuple[int, str, Optional[bool], Union[str, int, None]]
+
+ d = {'my_tuple': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_tuple == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ # This is when we don't really specify what elements the tuple is
+ # expected to contain.
+ [{}], does_not_raise(), ({},)),
+ (
+ [True, False, True], does_not_raise(), (True, False, True)),
+ (
+ [1, 'hello'], does_not_raise(), (1, 'hello')
+ ),
+ (
+ ['1', 'two', True], does_not_raise(), ('1', 'two', True)),
+ (
+ 'testing', does_not_raise(),
+ ('t', 'e', 's', 't', 'i', 'n', 'g')
+ ),
+ ]
+)
+def test_tuple_without_type_hinting(input, expectation, expected):
+ """
+ Test case for annotating with a bare `tuple` (acts as just a pass-through
+ for its elements)
+ """
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_tuple: tuple
+
+ d = {'my_tuple': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_tuple == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+            # Technically this is the wrong type (dict != int), however the
+            # conversion to `int` still succeeds. Might need to change this
+            # behavior later.
+ [{}], does_not_raise(), (0, )),
+ (
+ [], does_not_raise(), tuple()),
+ (
+ [True, False, True], pytest.raises(ParseError), None),
+ (
+ # Raises a `ValueError` because `hello` cannot be converted to int
+ [1, 'hello'], pytest.raises(ParseError), None
+ ),
+ (
+ [1], does_not_raise(), (1, )),
+ (
+ ['1', 2, '3'], does_not_raise(), (1, 2, 3)),
+ (
+ ['1', '2', None, '4', 5, 6, '7'], does_not_raise(),
+ (1, 2, 0, 4, 5, 6, 7)),
+ (
+ 'testing', pytest.raises(ParseError), None
+ ),
+ ]
+)
+def test_tuple_with_variadic_args(input, expectation, expected):
+ """
+ Test case when annotated type is in the "variadic" format, i.e.
+ `Tuple[str, ...]`
+ """
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'P'
+
+ my_tuple: Tuple[int, ...]
+
+ d = {'MyTuple': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_tuple == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ None, pytest.raises(ParseError), None
+ ),
+ (
+ {}, does_not_raise(), {}
+ ),
+ (
+ # Wrong types for both key and value
+ {'key': 'value'}, pytest.raises(ParseError), None),
+ (
+ {'1': 'test', '2': 't', '3': 'false'}, does_not_raise(),
+ {1: False, 2: True, 3: False}
+ ),
+ (
+ {2: None}, does_not_raise(), {2: False}
+ ),
+ (
+ # Incorrect type - `list`, but should be a `dict`
+ [{'my_str': 'test', 'my_int': 2, 'my_bool': True}],
+ pytest.raises(ParseError), None
+ )
+ ]
+)
+def test_dict(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_dict: Dict[int, bool]
+
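+    # String keys like '1' are coerced to `int`, and string values to
+    # `bool` -- 't' is a recognized truthy token, while 'test' and
+    # 'false' are not, per the expected results above.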
+ d = {'myDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ None, pytest.raises(ParseError), None
+ ),
+ (
+ {}, does_not_raise(), {}
+ ),
+ (
+ # Wrong types for both key and value
+ {'key': 'value'}, pytest.raises(ParseError), None),
+ (
+ {'1': 'test', '2': 't', '3': ['false']}, does_not_raise(),
+ {1: ['t', 'e', 's', 't'],
+ 2: ['t'],
+ 3: ['false']}
+ ),
+ (
+            # Currently this raises an error, which seems right for now
+            # since we don't want to add `null`s to a list anyway. This
+            # behavior could change later if needed.
+ {2: None}, pytest.raises(ParseError), None
+ ),
+ (
+ # Incorrect type - `list`, but should be a `dict`
+ [{'my_str': 'test', 'my_int': 2, 'my_bool': True}],
+ pytest.raises(ParseError), None
+ )
+ ]
+)
+def test_default_dict(input, expectation, expected):
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_def_dict: DefaultDict[int, list]
+
+ d = {'myDefDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert isinstance(result.my_def_dict, defaultdict)
+ assert result.my_def_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ None, pytest.raises(ParseError), None
+ ),
+ (
+ {}, does_not_raise(), {}
+ ),
+ (
+ # Wrong types for both key and value
+ {'key': 'value'}, does_not_raise(), {'key': 'value'}),
+ (
+ {'1': 'test', '2': 't', '3': 'false'}, does_not_raise(),
+ {'1': 'test', '2': 't', '3': 'false'}
+ ),
+ (
+ {2: None}, does_not_raise(), {2: None}
+ ),
+ (
+ # Incorrect type - `list`, but should be a `dict`
+ [{'my_str': 'test', 'my_int': 2, 'my_bool': True}],
+ pytest.raises(ParseError), None
+ )
+ ]
+)
+def test_dict_without_type_hinting(input, expectation, expected):
+ """
+ Test case for annotating with a bare `dict` (acts as just a pass-through
+ for its key-value pairs)
+ """
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_dict: dict
+
+ d = {'myDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ {}, pytest.raises(ParseError), None
+ ),
+ (
+ {'key': 'value'}, pytest.raises(ParseError), {}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2,
+ 'my_bool': True, 'other_key': 'testing'}, does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ {'my_str': 3}, pytest.raises(ParseError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ pytest.raises(ParseError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ # Incorrect type - `list`, but should be a `dict`
+ [{'my_str': 'test', 'my_int': 2, 'my_bool': True}],
+ pytest.raises(ParseError), None
+ )
+ ]
+)
+def test_typed_dict(input, expectation, expected):
+
+ class MyDict(TypedDict):
+ my_str: str
+ my_bool: bool
+ my_int: int
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_typed_dict: MyDict
+
+ d = {'myTypedDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_typed_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ {}, does_not_raise(), {}
+ ),
+ (
+ {'key': 'value'}, does_not_raise(), {}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2,
+ 'my_bool': True, 'other_key': 'testing'}, does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ {'my_str': 3}, does_not_raise(), {'my_str': '3'}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ pytest.raises(ParseError),
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ )
+ ]
+)
+def test_typed_dict_with_all_fields_optional(input, expectation, expected):
+ """
+ Test case for loading to a TypedDict which has `total=False`, indicating
+ that all fields are optional.
+
+ """
+ class MyDict(TypedDict, total=False):
+ my_str: str
+ my_bool: bool
+ my_int: int
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_typed_dict: MyDict
+
+ d = {'myTypedDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_typed_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ {}, pytest.raises(ParseError), None
+ ),
+ (
+ {'key': 'value'}, pytest.raises(ParseError), {}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2,
+ 'my_bool': True, 'other_key': 'testing'}, does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ {'my_str': 3}, pytest.raises(ParseError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ pytest.raises(ParseError), None,
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ {'my_str': 'test', 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_bool': True}
+ ),
+ (
+ # Incorrect type - `list`, but should be a `dict`
+ [{'my_str': 'test', 'my_int': 2, 'my_bool': True}],
+ pytest.raises(ParseError), None
+ )
+ ]
+)
+def test_typed_dict_with_one_field_not_required(input, expectation, expected):
+ """
+ Test case for loading to a TypedDict whose fields are all mandatory
+ except for one field, whose annotated type is NotRequired.
+
+ """
+ class MyDict(TypedDict):
+ my_str: str
+ my_bool: bool
+ my_int: NotRequired[int]
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_typed_dict: MyDict
+
+ d = {'myTypedDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_typed_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+ {}, pytest.raises(ParseError), None
+ ),
+ (
+ {'my_int': 2}, does_not_raise(), {'my_int': 2}
+ ),
+ (
+ {'key': 'value'}, pytest.raises(ParseError), None
+ ),
+ (
+ {'key': 'value', 'my_int': 2}, does_not_raise(),
+ {'my_int': 2}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2,
+ 'my_bool': True, 'other_key': 'testing'}, does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ (
+ {'my_str': 3}, pytest.raises(ParseError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ pytest.raises(ParseError),
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ )
+ ]
+)
+def test_typed_dict_with_one_field_required(input, expectation, expected):
+ """
+ Test case for loading to a TypedDict whose fields are all optional
+ except for one field, whose annotated type is Required.
+
+ """
+ class MyDict(TypedDict, total=False):
+ my_str: str
+ my_bool: bool
+ my_int: Required[int]
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'C'
+
+ my_typed_dict: MyDict
+
+ d = {'myTypedDict': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ assert result.my_typed_dict == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ (
+            # Types for the last two elements are wrong, so this should
+            # raise a `ParseError`.
+ ['test', 2, True],
+ pytest.raises(ParseError), None
+ ),
+ (
+ ['test', True, 2],
+ does_not_raise(),
+ ('test', True, 2)
+ ),
+ ]
+)
+def test_named_tuple(input, expectation, expected):
+
+ class MyNamedTuple(NamedTuple):
+ my_str: str
+ my_bool: bool
+ my_int: int
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_nt: MyNamedTuple
+
+ d = {'my_nt': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ if isinstance(expected, dict):
+ expected = MyNamedTuple(**expected)
+
+ assert result.my_nt == expected
+
+
+@pytest.mark.skip('Need to add support in v1')
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ # TODO I guess these all technically should raise a ParseError
+ (
+ {}, pytest.raises(TypeError), None
+ ),
+ (
+ {'key': 'value'}, pytest.raises(KeyError), {}
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2,
+ 'my_bool': True, 'other_key': 'testing'},
+ # Unlike a TypedDict, extra arguments to a `NamedTuple` should
+ # result in an error
+ pytest.raises(KeyError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ pytest.raises(ValueError), None
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ ]
+)
+def test_named_tuple_with_input_dict(input, expectation, expected):
+
+ class MyNamedTuple(NamedTuple):
+ my_str: str
+ my_bool: bool
+ my_int: int
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_nt: MyNamedTuple
+
+ d = {'my_nt': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ if isinstance(expected, dict):
+ expected = MyNamedTuple(**expected)
+
+ assert result.my_nt == expected
+
+
+@pytest.mark.parametrize(
+ 'input,expectation,expected',
+ [
+ # TODO I guess these all technically should raise a ParseError
+ # TODO need to add support for parsing dict's
+ # (
+ # {}, pytest.raises(TypeError), None
+ # ),
+ # (
+ # {'key': 'value'}, pytest.raises(TypeError), {}
+ # ),
+ # (
+ # {'my_str': 'test', 'my_int': 2,
+ # 'my_bool': True, 'other_key': 'testing'},
+ # # Unlike a TypedDict, extra arguments to a `namedtuple` should
+ # # result in an error
+ # pytest.raises(TypeError), None
+ # ),
+ # (
+ # {'my_str': 'test', 'my_int': 'test', 'my_bool': True},
+ # does_not_raise(), ('test', True, 'test')
+ # ),
+ (
+ ['test', 2, True],
+ does_not_raise(), ('test', 2, True)
+ ),
+ (
+ ['test', True, 2],
+ does_not_raise(),
+ ('test', True, 2)
+ ),
+ (
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True},
+ does_not_raise(),
+ {'my_str': 'test', 'my_int': 2, 'my_bool': True}
+ ),
+ ]
+)
+def test_named_tuple_without_type_hinting(input, expectation, expected):
+ """
+ Test case for annotating with a bare :class:`collections.namedtuple`. In
+ this case, we lose out on proper type checking and conversion, but at
+    least we still have a check on the parameter names, as well as the
+    number of expected elements.
+
+ """
+ MyNamedTuple = namedtuple('MyNamedTuple', ['my_str', 'my_bool', 'my_int'])
+
+ @dataclass
+ class MyClass(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_nt: MyNamedTuple
+
+ d = {'my_nt': input}
+
+ with expectation:
+ result = MyClass.from_dict(d)
+
+ log.debug('Parsed object: %r', result)
+ if isinstance(expected, dict):
+ expected = MyNamedTuple(**expected)
+
+ assert result.my_nt == expected
+
+
+def test_load_with_inner_model_when_data_is_null():
+ """
+ Test loading JSON data to an inner model dataclass, when the
+ data being de-serialized is a null, and the annotated type for
+ the field is not in the syntax `T | None`.
+ """
+
+ @dataclass
+ class Inner:
+ my_bool: bool
+ my_str: str
+
+ @dataclass
+ class Outer(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ inner: Inner
+
+ json_dict = {'inner': None}
+
+ with pytest.raises(MissingData) as exc_info:
+ _ = Outer.from_dict(json_dict)
+
+ e = exc_info.value
+ assert e.class_name == Outer.__qualname__
+ assert e.nested_class_name == Inner.__qualname__
+ assert e.field_name == 'inner'
+ # the error should mention that we want an Inner, but get a None
+ assert e.ann_type is Inner
+ assert type(None) is e.obj_type
+
+
+def test_load_with_inner_model_when_data_is_wrong_type():
+ """
+ Test loading JSON data to an inner model dataclass, when the
+ data being de-serialized is a wrong type (list).
+ """
+
+ @dataclass
+ class Inner:
+ my_bool: bool
+ my_str: str
+
+ @dataclass
+ class Outer(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'AUTO'
+
+ my_str: str
+ inner: Inner
+
+ json_dict = {
+ 'myStr': 'testing',
+ 'inner': [
+ {
+ 'myStr': '123',
+ 'myBool': 'false',
+ 'my_val': '2',
+ }
+ ]
+ }
+
+ with pytest.raises(ParseError) as exc_info:
+ _ = Outer.from_dict(json_dict)
+
+ e = exc_info.value
+ # TODO - is this right?
+ assert e.class_name == Inner.__qualname__
+ assert e.field_name == 'my_bool'
+ assert e.base_error.__class__ is TypeError
+ # the error should mention that we want a dict, but get a list
+ assert e.ann_type == dict
+ assert e.obj_type == list
+
+
+def test_load_with_python_3_11_regression():
+ """
+ This test case is to confirm intended operation with `typing.Any`
+ (either explicit or implicit in plain `list` or `dict` type
+ annotations).
+
+ Note: I have been unable to reproduce [the issue] posted on GitHub.
+ I've tested this on multiple Python versions on Mac, including
+ 3.10.6, 3.11.0, 3.11.5, 3.11.10.
+
+ See [the issue].
+
+ [the issue]: https://github.com/rnag/dataclass-wizard/issues/89
+ """
+
+ @dataclass
+ class Item(JSONSerializable):
+
+ class _(JSONSerializable.Meta):
+ v1 = True
+
+ a: dict
+ b: Optional[dict]
+ c: Optional[list] = None
+
+ item = Item.from_json('{"a": {}, "b": null}')
+
+ assert item.a == {}
+ assert item.b is item.c is None
+
+
+@pytest.mark.skip(reason='TODO add support in v1')
+def test_with_self_referential_dataclasses_1():
+ """
+ Test loading JSON data, when a dataclass model has cyclic
+ or self-referential dataclasses. For example, A -> A -> A.
+ """
+ @dataclass
+ class A:
+ a: Optional['A'] = None
+
+ # enable support for self-referential / recursive dataclasses
+ LoadMeta(v1=True, recursive_classes=True).bind_to(A)
+
+ # Fix for local test cases so the forward reference works
+ globals().update(locals())
+
+ # assert that `fromdict` with a recursive, self-referential
+ # input `dict` works as expected.
+ a = fromdict(A, {'a': {'a': {'a': None}}})
+ assert a == A(a=A(a=A(a=None)))
+
+
+@pytest.mark.skip(reason='TODO add support in v1')
+def test_with_self_referential_dataclasses_2():
+ """
+ Test loading JSON data, when a dataclass model has cyclic
+ or self-referential dataclasses. For example, A -> B -> A -> B.
+ """
+ @dataclass
+ class A(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ # enable support for self-referential / recursive dataclasses
+ recursive_classes = True
+
+ b: Optional['B'] = None
+
+ @dataclass
+ class B:
+ a: Optional['A'] = None
+
+ # Fix for local test cases so the forward reference works
+ globals().update(locals())
+
+ # assert that `fromdict` with a recursive, self-referential
+ # input `dict` works as expected.
+ a = fromdict(A, {'b': {'a': {'b': {'a': None}}}})
+ assert a == A(b=B(a=A(b=B())))
+
+
+def test_catch_all():
+ """'Catch All' support with no default field value."""
+ @dataclass
+ class MyData(TOMLWizard):
+ my_str: str
+ my_float: float
+ extra: CatchAll
+
+ LoadMeta(v1=True).bind_to(MyData)
+
+ toml_string = '''
+ my_extra_str = "test!"
+ my_str = "test"
+ my_float = 3.14
+ my_bool = true
+ '''
+
+ # Load from TOML string
+ data = MyData.from_toml(toml_string)
+
+ assert data.extra == {'my_extra_str': 'test!', 'my_bool': True}
+
+ # Save to TOML string
+ toml_string = data.to_toml()
+
+ assert toml_string == """\
+my_str = "test"
+my_float = 3.14
+my_extra_str = "test!"
+my_bool = true
+"""
+
+ # Read back from the TOML string
+ new_data = MyData.from_toml(toml_string)
+
+ assert new_data.extra == {'my_extra_str': 'test!', 'my_bool': True}
+
+
+def test_catch_all_with_default():
+ """'Catch All' support with a default field value."""
+
+ @dataclass
+ class MyData(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str
+ my_float: float
+ extra_data: CatchAll = False
+
+ # Case 1: Extra Data is provided
+
+ input_dict = {
+ 'my_str': "test",
+ 'my_float': 3.14,
+ 'my_other_str': "test!",
+ 'my_bool': True
+ }
+
+    # Load from a dict
+ data = MyData.from_dict(input_dict)
+
+ assert data.extra_data == {'my_other_str': 'test!', 'my_bool': True}
+
+    # Serialize to a dict
+ output_dict = data.to_dict()
+
+ assert output_dict == {
+ "myStr": "test",
+ "myFloat": 3.14,
+ "my_other_str": "test!",
+ "my_bool": True
+ }
+
+ new_data = MyData.from_dict(snake(output_dict))
+
+ assert new_data.extra_data == {'my_other_str': 'test!', 'my_bool': True}
+
+ # Case 2: Extra Data is not provided
+
+ input_dict = {
+ 'my_str': "test",
+ 'my_float': 3.14,
+ }
+
+    # Load from a dict
+ data = MyData.from_dict(input_dict)
+
+ assert data.extra_data is False
+
+    # Serialize to a dict
+ output_dict = data.to_dict()
+
+ assert output_dict == {
+ "myStr": "test",
+ "myFloat": 3.14,
+ }
+
+ new_data = MyData.from_dict(snake(output_dict))
+
+ assert new_data.extra_data is False
+
+
+def test_catch_all_with_skip_defaults():
+ """'Catch All' support with a default field value and `skip_defaults`."""
+
+ @dataclass
+ class MyData(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ skip_defaults = True
+
+ my_str: str
+ my_float: float
+ extra_data: CatchAll = False
+
+ # Case 1: Extra Data is provided
+
+ input_dict = {
+ 'my_str': "test",
+ 'my_float': 3.14,
+ 'my_other_str': "test!",
+ 'my_bool': True
+ }
+
+    # Load from a dict
+ data = MyData.from_dict(input_dict)
+
+ assert data.extra_data == {'my_other_str': 'test!', 'my_bool': True}
+
+    # Serialize to a dict
+ output_dict = data.to_dict()
+
+ assert output_dict == {
+ "myStr": "test",
+ "myFloat": 3.14,
+ "my_other_str": "test!",
+ "my_bool": True
+ }
+
+ new_data = MyData.from_dict(snake(output_dict))
+
+ assert new_data.extra_data == {'my_other_str': 'test!', 'my_bool': True}
+
+ # Case 2: Extra Data is not provided
+
+ input_dict = {
+ 'my_str': "test",
+ 'my_float': 3.14,
+ }
+
+    # Load from a dict
+ data = MyData.from_dict(input_dict)
+
+ assert data.extra_data is False
+
+    # Serialize to a dict
+ output_dict = data.to_dict()
+
+ assert output_dict == {
+ "myStr": "test",
+ "myFloat": 3.14,
+ }
+
+ new_data = MyData.from_dict(snake(output_dict))
+
+ assert new_data.extra_data is False
+
+
+def test_catch_all_with_auto_key_case():
+ """'Catch All' with `auto` key case."""
+
+ @dataclass
+ class Options(JSONWizard):
+ class _(JSONWizard.Meta):
+ v1 = True
+ v1_key_case = 'Auto'
+
+ my_extras: CatchAll
+ email: str
+
+ opt = Options.from_dict({
+ 'Email': 'a@b.org',
+ 'token': '',
+ })
+ assert opt == Options(my_extras={'token': ''}, email='a@b.org')
+
+ opt = Options.from_dict({
+ 'Email': 'x@y.org',
+ })
+ assert opt == Options(my_extras={}, email='x@y.org')
+
+
+def test_from_dict_with_nested_object_alias_path():
+ """
+ Specifying a custom mapping of "nested" alias to dataclass field,
+ via the `AliasPath` helper function.
+ """
+
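+    # `AliasPath('x.y.z.0')` traverses d['x']['y']['z'][0]; the list
+    # form ['a', 'b', 'c', -1] is equivalent, with -1 taking the last
+    # element (as the expected results below confirm).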
+ @dataclass
+ class A(JSONPyWizard, debug=True):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+
+ an_int: int
+ a_bool: Annotated[bool, AliasPath('x.y.z.0')]
+ my_str: str = AliasPath(['a', 'b', 'c', -1], default='xyz')
+
+ # Failures
+
+ d = {'my_str': 'test'}
+
+ with pytest.raises(ParseError) as e:
+ _ = A.from_dict(d)
+
+ err = e.value
+ assert err.field_name == 'a_bool'
+ assert err.base_error.args == ('x', )
+ assert err.kwargs['current_path'] == "'x'"
+
+ d = {'a': {'b': {'c': []}},
+ 'x': {'y': {}}, 'an_int': 3}
+
+ with pytest.raises(ParseError) as e:
+ _ = A.from_dict(d)
+
+ err = e.value
+ assert err.field_name == 'a_bool'
+ assert err.base_error.args == ('z', )
+ assert err.kwargs['current_path'] == "'z'"
+
+ # Successes
+
+ # Case 1
+ d = {'a': {'b': {'c': [1, 5, 7]}},
+ 'x': {'y': {'z': [False]}}, 'an_int': 3}
+
+ a = A.from_dict(d)
+ assert repr(a).endswith("A(an_int=3, a_bool=False, my_str='7')")
+
+ d = a.to_dict()
+
+ assert d == {
+ 'x': {
+ 'y': {
+ 'z': { 0: False }
+ }
+ },
+ 'a': {
+ 'b': {
+ 'c': { -1: '7' }
+ }
+ },
+ 'an_int': 3
+ }
+
+ a = A.from_dict(d)
+ assert repr(a).endswith("A(an_int=3, a_bool=False, my_str='7')")
+
+ # Case 2
+ d = {'a': {'b': {}},
+ 'x': {'y': {'z': [True, False]}}, 'an_int': 5}
+
+ a = A.from_dict(d)
+ assert repr(a).endswith("A(an_int=5, a_bool=True, my_str='xyz')")
+
+ d = a.to_dict()
+
+ assert d == {
+ 'x': {
+ 'y': {
+ 'z': { 0: True }
+ }
+ },
+ 'a': {
+ 'b': {
+ 'c': { -1: 'xyz' }
+ }
+ },
+ 'an_int': 5
+ }
+
+
+def test_from_dict_with_nested_object_alias_path_with_skip_defaults():
+ """
+ Specifying a custom mapping of "nested" alias to dataclass field,
+ via the `AliasPath` helper function.
+
+ Test with `skip_defaults=True`, `load_alias`, and `skip=True`.
+ """
+
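+    # Path grammar (as exercised here): dot-separated segments, with
+    # double-quoted keys for names containing spaces (e.g. "test value")
+    # and bracketed keys/indices such as [here!][0].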
+ @dataclass
+ class A(JSONWizard, debug=True):
+ class _(JSONWizard.Meta):
+ v1 = True
+ skip_defaults = True
+
+ an_int: Annotated[int, AliasPath('my."test value"[here!][0]')]
+
+ a_bool: Annotated[bool, AliasPath(load='x.y.z.-1')]
+ my_str: Annotated[str, AliasPath(['a', 'b', 'c', -1], skip=True)] = 'xyz1'
+
+ other_bool: bool = AliasPath('x.y."z z"', default=True)
+
+ # Failures
+
+ d = {'my_str': 'test'}
+
+ with pytest.raises(ParseError) as e:
+ _ = A.from_dict(d)
+
+ err = e.value
+ assert err.field_name == 'an_int'
+ assert err.base_error.args == ('my', )
+ assert err.kwargs['current_path'] == "'my'"
+
+ d = {
+ 'my': {'test value': {'here!': [1, 2, 3]}},
+ 'a': {'b': {'c': []}},
+ 'x': {'y': {}}, 'an_int': 3}
+
+ with pytest.raises(ParseError) as e:
+ _ = A.from_dict(d)
+
+ err = e.value
+ assert err.field_name == 'a_bool'
+ assert err.base_error.args == ('z', )
+ assert err.kwargs['current_path'] == "'z'"
+
+ # Successes
+
+ # Case 1
+ d = {
+ 'my': {'test value': {'here!': [1, 2, 3]}},
+ 'a': {'b': {'c': [1, 5, 7]}},
+ 'x': {'y': {'z': [False]}}, 'an_int': 3
+ }
+
+ a = A.from_dict(d)
+ assert repr(a).endswith("A(an_int=1, a_bool=False, my_str='7', other_bool=True)")
+
+ d = a.to_dict()
+
+ assert d == {
+ 'aBool': False,
+ 'my': {'test value': {'here!': {0: 1}}},
+ }
+
+ with pytest.raises(ParseError):
+ _ = A.from_dict(d)
+
+ # Case 2
+ d = {
+ 'my': {'test value': {'here!': [1, 2, 3]}},
+ 'a': {'b': {}},
+ 'x': {'y': {
+ 'z': [],
+ 'z z': False,
+ }},
+ }
+
+ with pytest.raises(ParseError) as e:
+ _ = A.from_dict(d)
+
+ err = e.value
+ assert err.field_name == 'a_bool'
+ assert repr(err.base_error) == "IndexError('list index out of range')"
+
+ # Case 3
+ d = {
+ 'my': {'test value': {'here!': [1, 2, 3]}},
+ 'a': {'b': {}},
+ 'x': {'y': {
+ 'z': [True, False],
+ 'z z': False,
+ }},
+ }
+
+ a = A.from_dict(d)
+ assert repr(a).endswith("A(an_int=1, a_bool=False, my_str='xyz1', other_bool=False)")
+
+ d = a.to_dict()
+
+ assert d == {
+ 'aBool': False,
+ 'my': {'test value': {'here!': {0: 1}}},
+ 'x': {
+ 'y': {
+ 'z z': False,
+ }
+ },
+ }
+
+
+def test_from_dict_with_nested_object_alias_path_with_dump_alias_and_skip():
+ """
+ Test nested object `AliasPath` with dump='...' and skip=True,
+ along with `Alias` with `skip=True`,
+ added for branch coverage.
+ """
+ @dataclass
+ class A(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: str = AliasPath(dump='a.b.c[0]')
+ my_bool: bool = AliasPath('x.y."Z 1"', skip=True)
+ my_int: int = Alias('my Integer', skip=True)
+
+ d = {'a': {'b': {'c': [1, 2, 3]}},
+ 'x': {'y': {'Z 1': 'f'}},}
+
+ with pytest.raises(MissingFields) as exc_info:
+ _ = A.from_dict(d)
+
+ e = exc_info.value
+ assert e.fields == ['my_bool']
+ assert e.missing_fields == ['my_str', 'my_int']
+
+ d = {'my_str': 'test',
+ 'my Integer': '123',
+ 'x': {'y': {'Z 1': 'f'}},}
+
+ a = A.from_dict(d)
+
+ assert a.my_str == 'test'
+ assert a.my_int == 123
+ assert a.my_bool is False
+
+ serialized = a.to_dict()
+ assert serialized == {
+ 'a': {'b': {'c': {0: 'test'}}},
+ }
+
+
+def test_auto_assign_tags_and_raise_on_unknown_json_key():
+
+ @dataclass
+ class A:
+ mynumber: int
+
+ @dataclass
+ class B:
+ mystring: str
+
+ @dataclass
+ class Container(JSONWizard):
+ obj2: Union[A, B]
+
+ class _(JSONWizard.Meta):
+ auto_assign_tags = True
+ v1 = True
+ v1_on_unknown_key = 'RAISE'
+
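+    # With `auto_assign_tags`, each dataclass in the `Union` is tagged
+    # with its class name under `tag_key` (default: '__tag__').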
+ c = Container(obj2=B("bar"))
+
+ output_dict = c.to_dict()
+
+ assert output_dict == {
+ "obj2": {
+ "mystring": "bar",
+ "__tag__": "B",
+ }
+ }
+
+ assert c == Container.from_dict(output_dict)
+
+ input_dict = {
+ "obj2": {
+ "mystring": "bar",
+ "__tag__": "B",
+ "__extra__": "C",
+ }
+ }
+
+ with pytest.raises(UnknownKeysError) as exc_info:
+ _ = Container.from_dict(input_dict)
+
+ e = exc_info.value
+
+ assert e.unknown_keys == {'__extra__'}
+
+
+def test_auto_assign_tags_and_catch_all():
+ """Using both `auto_assign_tags` and `CatchAll` does not save tag key in `CatchAll`."""
+ @dataclass
+ class A:
+ mynumber: int
+ extra: CatchAll = None
+
+ @dataclass
+ class B:
+ mystring: str
+ extra: CatchAll = None
+
+ @dataclass
+ class Container(JSONWizard, debug=False):
+ obj2: Union[A, B]
+ extra: CatchAll = None
+
+ class _(JSONWizard.Meta):
+ auto_assign_tags = True
+ v1 = True
+ tag_key = 'type'
+
+ c = Container(obj2=B("bar"))
+
+ output_dict = c.to_dict()
+
+ assert output_dict == {
+ "obj2": {
+ "mystring": "bar",
+ "type": "B"
+ }
+ }
+
+ c2 = Container.from_dict(output_dict)
+ assert c2 == c == Container(obj2=B(mystring='bar', extra=None), extra=None)
+
+ assert c2.to_dict() == {
+ "obj2": {
+ "mystring": "bar", "type": "B"
+ }
+ }
+
+
+def test_skip_if():
+ """
+ Using Meta config `skip_if` to conditionally
+ skip serializing dataclass fields.
+ """
+ @dataclass
+ class Example(JSONPyWizard, debug=True):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+ skip_if = IS_NOT(True)
+
+ my_str: 'str | None'
+ my_bool: bool
+ other_bool: bool = False
+
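+    # `skip_if = IS_NOT(True)` skips any field whose value is not `True`,
+    # so only `my_bool` should survive serialization below.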
+ ex = Example(my_str=None, my_bool=True)
+
+ assert ex.to_dict() == {'my_bool': True}
+
+
+def test_skip_defaults_if():
+ """
+ Using Meta config `skip_defaults_if` to conditionally
+ skip serializing dataclass fields with default values.
+ """
+ @dataclass
+ class Example(JSONPyWizard):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+ skip_defaults_if = IS(None)
+
+ my_str: 'str | None'
+ other_str: 'str | None' = None
+ third_str: 'str | None' = None
+ my_bool: bool = False
+
+ ex = Example(my_str=None, other_str='')
+
+ assert ex.to_dict() == {
+ 'my_str': None,
+ 'other_str': '',
+ 'my_bool': False
+ }
+
+ ex = Example('testing', other_str='', third_str='')
+ assert ex.to_dict() == {'my_str': 'testing', 'other_str': '',
+ 'third_str': '', 'my_bool': False}
+
+ ex = Example(None, my_bool=None)
+ assert ex.to_dict() == {'my_str': None}
+
+
+def test_per_field_skip_if():
+ """
+ Test per-field `skip_if` functionality, with the ``SkipIf``
+ condition in type annotation, and also specified in
+ ``skip_if_field()`` which wraps ``dataclasses.Field``.
+ """
+ @dataclass
+ class Example(JSONPyWizard):
+ class _(JSONPyWizard.Meta):
+ v1 = True
+
+ my_str: Annotated['str | None', SkipIfNone]
+ other_str: 'str | None' = None
+ third_str: 'str | None' = skip_if_field(EQ(''), default=None)
+ my_bool: bool = False
+ other_bool: Annotated[bool, SkipIf(IS(True))] = True
+
+ ex = Example(my_str='test')
+ assert ex.to_dict() == {
+ 'my_str': 'test',
+ 'other_str': None,
+ 'third_str': None,
+ 'my_bool': False
+ }
+
+ ex = Example(None, other_str='', third_str='', my_bool=True, other_bool=False)
+ assert ex.to_dict() == {'other_str': '',
+ 'my_bool': True,
+ 'other_bool': False}
+
+ ex = Example('None', other_str='test', third_str='None', my_bool=None, other_bool=True)
+ assert ex.to_dict() == {'my_str': 'None', 'other_str': 'test',
+ 'third_str': 'None', 'my_bool': None}
+
+
+def test_is_truthy_and_is_falsy_conditions():
+ """
+ Test both IS_TRUTHY and IS_FALSY conditions within a single test case.
+ """
+
+ # Define the Example class within the test case and apply the conditions
+ @dataclass
+ class Example(JSONPyWizard):
+
+ class _(JSONPyWizard.Meta):
+ v1 = True
+
+ my_str: Annotated['str | None', SkipIf(IS_TRUTHY())] # Skip if truthy
+ my_bool: bool = skip_if_field(IS_FALSY()) # Skip if falsy
+ my_int: Annotated['int | None', SkipIf(IS_FALSY())] = None # Skip if falsy
+
+ # Test IS_TRUTHY condition (field will be skipped if truthy)
+ obj = Example(my_str="Hello", my_bool=True, my_int=5)
+ assert obj.to_dict() == {'my_bool': True, 'my_int': 5} # `my_str` is skipped because it is truthy
+
+ # Test IS_FALSY condition (field will be skipped if falsy)
+ obj = Example(my_str=None, my_bool=False, my_int=0)
+ assert obj.to_dict() == {'my_str': None} # `my_str` is None (falsy), so it is not skipped
+
+ # Test a mix of truthy and falsy values
+ obj = Example(my_str="Not None", my_bool=True, my_int=None)
+ assert obj.to_dict() == {'my_bool': True} # `my_str` is truthy, so it is skipped, `my_int` is falsy and skipped
+
+    # TODO: add a case with both IS_TRUTHY and IS_FALSY applied
+    #   (where both `my_bool` and `my_int` would be skipped)
+
+
+def test_skip_if_truthy_or_falsy():
+ """
+ Test skip if condition is truthy or falsy for individual fields.
+ """
+
+ # Use of SkipIf with IS_TRUTHY
+ @dataclass
+ class SkipExample(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_str: Annotated['str | None', SkipIf(IS_TRUTHY())]
+ my_bool: bool = skip_if_field(IS_FALSY())
+
+ # Test with truthy `my_str` and falsy `my_bool` should be skipped
+ obj = SkipExample(my_str="Test", my_bool=False)
+ assert obj.to_dict() == {}
+
+ # Test with truthy `my_str` and `my_bool` should include the field
+ obj = SkipExample(my_str="", my_bool=True)
+ assert obj.to_dict() == {'myStr': '', 'myBool': True}
+
+
+def test_invalid_condition_annotation_raises_error():
+ """
+ Test that using a Condition (e.g., LT) directly as a field annotation
+ without wrapping it in SkipIf() raises an InvalidConditionError.
+ """
+    with pytest.raises(InvalidConditionError, match=r"Wrap conditions inside SkipIf\(\)"):
+
+ @dataclass
+ class Example(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ debug_enabled = True
+
+ my_field: Annotated[int, LT(5)] # Invalid: LT is not wrapped in SkipIf.
+
+ # Attempt to serialize an instance, which should raise the error.
+ Example(my_field=3).to_dict()
+
+
+def test_dataclass_in_union_when_tag_key_is_field():
+ """
+ Test case for dataclasses in `Union` when the `Meta.tag_key` is a dataclass field.
+ """
+ @dataclass
+ class DataType(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ id: int
+ type: str
+
+ @dataclass
+ class XML(DataType):
+ class _(JSONWizard.Meta):
+ tag = "xml"
+
+ field_type_1: str
+
+ @dataclass
+ class HTML(DataType):
+ class _(JSONWizard.Meta):
+ tag = "html"
+
+ field_type_2: str
+
+ @dataclass
+ class Result(JSONWizard):
+ class _(JSONWizard.Meta):
+ tag_key = "type"
+
+ data: Union[XML, HTML]
+
+ t1 = Result.from_dict({"data": {"id": 1, "type": "xml", "field_type_1": "value"}})
+ assert t1 == Result(data=XML(id=1, type='xml', field_type_1='value'))
+
+
+def test_sequence_and_mutable_sequence_are_supported():
+ """
+ Confirm `Collection`, `Sequence`, and `MutableSequence` -- imported
+ from either `typing` or `collections.abc` -- are supported.
+ """
+ @dataclass
+ class IssueFields:
+ name: str
+
+ @dataclass
+ class Options(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ email: str = ""
+ token: str = ""
+ fields: Sequence[IssueFields] = (
+ IssueFields('A'),
+ IssueFields('B'),
+ IssueFields('C'),
+ )
+        fields_tup: tuple[IssueFields] = (IssueFields('A'), )
+        fields_var_tup: tuple[IssueFields, ...] = (IssueFields('A'), )
+ list_of_int: MutableSequence[int] = field(default_factory=list)
+ list_of_bool: Collection[bool] = field(default_factory=list)
+
+ # initialize with defaults
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ })
+ assert opt == Options(
+ email='a@b.org', token='',
+ fields=(IssueFields(name='A'), IssueFields(name='B'), IssueFields(name='C')),
+ )
+
+ # check annotated `Sequence` maps to `tuple`
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'fields': [{'name': 'X'}, {'name': 'Y'}, {'name': 'Z'}]
+ })
+ assert opt.fields == (IssueFields('X'), IssueFields('Y'), IssueFields('Z'))
+
+ # does not raise error
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'fields_tup': [{'name': 'X'}]
+ })
+ assert opt.fields_tup == (IssueFields('X'), )
+
+    # TODO: ought to raise an error -- maybe support a `strict` mode?
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'fields_tup': [{'name': 'X'}, {'name': 'Y'}]
+ })
+
+ assert opt.fields_tup == (IssueFields('X'), )
+
+ # does not raise error
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'fields_var_tup': [{'name': 'X'}, {'name': 'Y'}]
+ })
+ assert opt.fields_var_tup == (IssueFields('X'), IssueFields('Y'))
+
+ # check annotated `MutableSequence` maps to `list`
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'list_of_int': (1, '2', 3.0)
+ })
+ assert opt.list_of_int == [1, 2, 3]
+
+ # check annotated `Collection` maps to `list`
+ opt = Options.from_dict({
+ 'email': 'a@b.org',
+ 'token': '',
+ 'list_of_bool': (1, '0', '1')
+ })
+ assert opt.list_of_bool == [True, False, True]
+
+
+@pytest.mark.skip('Ran out of time to get this to work')
+def test_dataclass_decorator_is_automatically_applied():
+ """
+ Confirm the `@dataclass` decorator is automatically
+ applied, if not decorated by the user.
+ """
+ class Test(JSONWizard):
+
+ class _(JSONWizard.Meta):
+ v1 = True
+
+ my_field: str
+ my_bool: bool = False
+
+ t = Test.from_dict({'myField': 'value'})
+ assert t.my_field == 'value'
+
+ t = Test('test', True)
+ assert t.my_field == 'test'
+ assert t.my_bool
+
+ with pytest.raises(TypeError, match=".*Test\.__init__\(\) missing 1 required positional argument: 'my_field'"):
+ Test()