diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 00000000..03be2c8f --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,32 @@ +name: Documentation + +on: + pull_request: + push: + branches: + - master + - develop + +jobs: + build: + runs-on: ubuntu-latest + env: + MPLBACKEND: svg + PYDEVD_DISABLE_FILE_VALIDATION: 1 + strategy: + fail-fast: false + + steps: + - uses: actions/checkout@v3 + - name: Set up Python 3.11 + uses: actions/setup-python@v4 + with: + python-version: 3.11 + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install -U setuptools setuptools_scm wheel + pip install -e .[all,docs] + sudo apt-get install pandoc + - name: Build docs + run: sphinx-build -b html docs _build/html diff --git a/.github/workflows/pytest.yml b/.github/workflows/pytest.yml index 61c6be9c..0eeb5ac5 100644 --- a/.github/workflows/pytest.yml +++ b/.github/workflows/pytest.yml @@ -12,7 +12,6 @@ jobs: runs-on: ${{ matrix.os }} env: MPLBACKEND: svg - PYTENSOR_FLAGS: cxx="",exception_verbosity=high strategy: fail-fast: false matrix: diff --git a/CHANGELOG.md b/CHANGELOG.md index 164a8a0e..d1f43d72 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,24 @@ In this file noteworthy changes of new releases of pyLife are documented since 2.0.0. +## since pylife-2.0.4 + +### New features + +* History output for `odbclient` + +* `WoehlerCurve.miner_original()` + + +### Breaking changes + +* Non-destructive miner modifiers of `WoehlerCurve` + + The methods `WoehlerCurve.miner_elementary()` and + `WoehlerCurve.miner_haibach()` now return modified copies of the original + WoehlerCurve object, rather than modifying the original. + + ## pylife-2.0.4 Minor bugfix release diff --git a/CODINGSTYLE.md b/CODINGSTYLE.md deleted file mode 100644 index 48392002..00000000 --- a/CODINGSTYLE.md +++ /dev/null @@ -1,301 +0,0 @@ -# pyLife coding style guidelines - -## Introduction - -One crucial quality criteria of program code is maintainability. In order to -maintain code, the code has to be written clearly so that it is easily readable -to someone who has not written it. Therefore it is helpful to have a consistent -coding style with consistent naming conventions. However, the coding style -rules are not supposed to be strict rules. They can be disobeyed if there are -good reasons to do so. - -As we are programming in Python we vastly stick to the [PEP8 coding style -guide][1]. That document is generally recommendable to python programmers. This -document therefore covers only things that go beyond the PEP8. So please read -PEP8 for the general recommendations on python programming. - -### Clean code - -The notion of code quality that keeps software maintainable, makes it easier to -find and fix bugs and so on is nowadays referred to by the expression *Clean -Code*. - -The iconic figure behind that notion is [Robert C. Martin][] aka Uncle Bob. For -the full story about clean code you can read his books *Clean Code* and *Clean -Coders*. Some of his lectures about Clean Code are available on Youtube. - - -## Use a linter and let your editor help you - -A linter is a tool that scans your code and shows you where you are not -following the coding style guidelines. The anaconda environment of -`environment.yml` comes with flake8 and pep8-naming, which warns about a lot of -things. Best is to configure your editor in a way that it shows you the linter -warnings as you type. - -Many editors have some other useful helpers. 
For example whitespace cleanup, -i.e. delete any trailing whitespace as soon as you save the file. - - -## Line lengths - -Lines should not often exceed the 90 characters. Exceeding it sometimes by a -bit is ok, though. Please do *never* exceed 125 characters because that's the -width of the GitHub code viewer. - - -## Naming conventions - -By naming conventions the programmer can give some indications to the reader of -the program, what an identifier is supposed to be or what it is referring -to. Therefore some consistency guidelines. - - -### Mandatory names throughout the pyLife code base - -For variables representing physical quantities, we have a dedicated document in -the documentation. Please follow the points discussed there. - -### Module names - -For module names, try to find one word names like `rainflow`, `gradient`. If -you by all means need word separation in a module name, use -`snake_case`. *Never* use dashes (`-`) and capital letters in module -names. They lead to all kinds of problems. - -### Class names - -Class names are usually short and a single or compound noun. For these short -names we use the so called `CamelCase` style: -```python -class DataObjectReader: - ... -``` - -### Function names - -Function and variable names can be longer than class names. Especially function -names tend to be actual sentences like: -```python -def calc_all_data_from_scratch(): - ... -``` -These are way more readable in the so called `lowercase_with_underscores` -style. - -### Variable names - -Variable names can be shorter as long as they are local. For example when you -store the result of a function in a variable that the function is finally to -return, don't call it `result_to_be_returned` but only `res`. A rule of thumb -is that the name of a variable needs to be descriptive, if the code part in -which the variable is used, exceeds the area that you can capture with one eye -glimpse. - -### Class method names - -There are a couple of conventions that make it easier to understand an API of a -class. - -To access the data items of a class we used to use getter and setter -functions. A better and more modern way is python's `@property` decorator. -```python -class ExampleClass: - def __init__(self): - self._foo = 23 - self._bar = 42 - self._sum = None - - @property - def foo(self): - ''' getter functions have the name of the accessed data item - ''' - return self._foo - - @foo.setter - def foo(self, v): - ''' setter functions have the name of the accessed data item prefixed - with `set_` - ''' - if v < 0: # sanity check - raise Exception("Value for foo must be >= 0") - self._foo = v - - def calc_sum_of_foo_and_bar(self): - ''' class methods whose name does not imply that they return data - should not return anything. - ''' - self._sum = self._foo + self._bar -``` - -The old style getter and setter function like `set_foo(self, new_foo)`are still -tolerable but should be avoided in new code. Before major releases we might dig -to the code and replace them with `@property` where feasible. - - -## Structuring of the code - -### Data encapsulation - -One big advantage for object oriented programming is the so called data -encapsulation. That means that items of a class that is intended only for -internal use can be made inaccessible from outside of the class. 
Python does -not strictly enforce that concept, but in order to make it clear to the reader -of the code, we mark every class method and every class member variable that is -not meant to be accessed from outside the class with a leading underscore `_` -like: -```python -class Foo: - - def __init__(self): - self.public_variable = 'bar' - self._private_variable = 'baz' - - def public_method(self): - ... - - def _private_method(self): -``` - -### Object orientation - -Usually it makes sense to compound data structures and the functions using -these data structures into classes. The data structures then become class -members and the functions become class methods. This object oriented way of -doing things is recommendable but not always necessary. Sets of simple utility -routines can also be autonomous functions. - -As a rule of thumb: If the user of some functionality needs to keep around a -data structure for a longer time and make several different function calls that -deal with the same data structure, it is probably a good idea to put everything -into a class. - -Do not just put functions into a class because they belong semantically -together. That is what python modules are there for. - -### Functions and methods - -Functions are not only there for sharing code but also to divide code into -easily manageable pieces. Therefore functions should be short and sweet and do -just one thing. If a function does not fit into your editor window, you should -consider to split it into smaller pieces. Even more so, if you need to scroll -in order to find out, where a loop or an if statement begins and ends. Ideally -a function should be as short, that it is no longer *possible* to extract a -piece of it. - -### Commenting - -Programmers are taught in the basic programming lessons that comments are -important. However, a more modern point of view is, that comments are only the -last resort, if the code is so obscure that the reader needs the comment to -understand it. Generally it would be better to write the code in a way that it -speaks for itself. That's why keeping functions short is so -important. Extracting a code block of a function into another function makes -the code more readable, because the new function has a name. - -*Bad* example: - -```python -def some_function(data, parameters): - ... # a bunch of code - ... # over several lines - ... # hard to figure out - ... # what it is doing - if parameters['use_method_1']: - ... # a bunch of code - ... # over several lines - ... # hard to figure out - ... # what it is doing - else: - ... # a bunch of code - ... # over several lines - ... # hard to figure out - ... # what it is doing - ... # a bunch of code - ... # over several lines - ... # hard to figure out - ... # what it is doing -``` - -*Good* example - -```python -def prepare(data, parameters): - ... # a bunch of code - ... # over several lines - ... # easily understandable - ... # by the function's name - -def cleanup(data, parameters): - ... # a bunch of code - ... # over several lines - ... # easily understandable - ... # by the function's name - -def method_1(data): - ... # a bunch of code - ... # over several lines - ... # easily understandable - ... # by the function's name - -def other_method(data): - ... # a bunch of code - ... # over several lines - ... # easily understandable - ... 
# by the function's name
-
-def some_function(data, parameters):
-    prepare(data, parameters)
-    if parameters['use_method_1']:
-        method_1(data)
-    else:
-        other_method(data)
-    cleanup(data, parameters)
-```
-
-
-Ideally the only comments that you need are docstrings that document the public
-interface of your functions and classes.
-
-Compare the following functions:
-
-*Bad* example:
-```python
-def hypot(triangle):
-
-    # reading in a
-    a = triangle.get_a()
-
-    # reading in b
-    b = triangle.get_b()
-
-    # reading in gamma
-    gamma = triangle.get_gamma()
-
-    # calculate c
-    c = np.sqrt(a*a + b*b - 2*a*b*np.cos(gamma))
-
-    # return result
-    return c
-```
-
-Everyone sees that you read in some parameter `a`. Everyone sees that you read
-in some parameter `b` and `gamma`. Everyone sees that you calculate and return
-some value `c`. But what is it that you are doing?
-
-Now the *good* example:
-```python
-def hypot(triangle):
-    ''' Calculates the hypotenuse of a triangle using the law of cosines
-
-    https://en.wikipedia.org/wiki/Law_of_cosines
-    '''
-    a = triangle.a
-    b = triangle.b
-    gamma = triangle.gamma
-
-    return np.sqrt(a*a + b*b - 2*a*b*np.cos(gamma))
-```
-
-[pep8]: https://www.python.org/dev/peps/pep-0008/
-[Robert C. Martin]: https://en.wikipedia.org/wiki/Robert_C._Martin
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 0d760520..9909ba2b 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -34,9 +34,10 @@ the test coverage.
 
 ## Coding style
 
-Please do consult the [CODINGSTYLE](CODINGSTYLE.md) file for codingstyle guide
-lines. In order to have your contribution merged to main line following guide
-lines should be met.
+Please consult the
+[coding style guidelines](https://pylife.readthedocs.io/en/stable/CODINGSTYLE.html)
+for details. In order to have your contribution merged to the main line, the
+following guidelines should be met.
 
 ### Docstrings
diff --git a/INSTALLATION.md b/INSTALLATION.md
index bbdcc21f..6352fc3c 100644
--- a/INSTALLATION.md
+++ b/INSTALLATION.md
@@ -15,16 +15,17 @@
 You need a python installation e.g. a virtual environment with `pip` a recent
 (brand new ones might not work) python versions installed. There are several
 ways to achieve that.
 
-#### Using anaconda
+#### Using miniconda or anaconda
 
-Install anaconda or miniconda [http://anaconda.com] on your computer and create
-a virtual environment with the package `pip` installed. See the [conda
+Install [miniconda](https://conda.io/miniconda.html) or
+[anaconda](http://anaconda.com) on your computer and create a virtual
+environment with python installed. See the [conda
 documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)
 on how to do that. The newly created environment must be activated.
 The following command lines should do it
 ```
-conda create -n pylife-env python=3.9 pip --yes
+conda create -n pylife-env python=3.11 --yes
 conda activate pylife-env
 ```
 
@@ -50,7 +51,6 @@ That installs pyLife with all the dependencies to use pyLife in python
 programs. You might want to install some further packages like `jupyter` in
 order to work with jupyter notebooks.
 
-
 There is no conda package as of now, unfortunately.
 
 
@@ -75,7 +75,7 @@ Create an environment – usually a good idea to use a prefixed environment in
 your pyLife working directory and activate it.
``` -conda create -p .venv python=3.9 pip --yes +conda create -p .venv python=3.11 pip --yes conda activate ./.venv ``` diff --git a/README.md b/README.md index 31f3cdb7..f7297f02 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ [![Documentation Status](https://readthedocs.org/projects/pylife/badge/?version=latest)](https://pylife.readthedocs.io/en/latest/?badge=latest) [![PyPI](https://img.shields.io/pypi/v/pylife)](https://pypi.org/project/pylife/) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pylife) -[![testsuite](https://github.com/boschresearch/pylife/workflows/testsuite/badge.svg)](https://github.com/boschresearch/pylife/actions?query=workflow%3Atestsuite) +[![Testsuite](https://github.com/boschresearch/pylife/actions/workflows/pytest.yml/badge.svg)](https://github.com/boschresearch/pylife/actions/workflows/pytest.yml) pyLife is an Open Source Python library for state of the art algorithms used in lifetime assessment of mechanical components subject to fatigue load. diff --git a/binder/environment.yml b/binder/environment.yml index 384f1f73..d8eb7577 100644 --- a/binder/environment.yml +++ b/binder/environment.yml @@ -4,8 +4,8 @@ dependencies: - pip - pip: - pylife[all] - - pyvista - - panel + - pyvista[all,jupyter] + - trame<3 - plotly - xvfbwrapper - ipyvtklink diff --git a/demos/hotspot_beam.ipynb b/demos/hotspot_beam.ipynb index ca106cae..6ee4bb25 100644 --- a/demos/hotspot_beam.ipynb +++ b/demos/hotspot_beam.ipynb @@ -32,10 +32,7 @@ "import pylife.mesh.hotspot\n", "import pylife.vmap\n", "\n", - "import pyvista as pv\n", - "\n", - "pv.set_plot_theme('document')\n", - "pv.set_jupyter_backend('panel')" + "import pyvista as pv" ] }, { @@ -102,7 +99,7 @@ "outputs": [], "source": [ "grid = pv.UnstructuredGrid(*mesh.mesh.vtk_data())\n", - "plotter = pv.Plotter(window_size=[1920, 1080])\n", + "plotter = pv.Plotter()\n", "plotter.add_mesh(grid, scalars=mesh.groupby('element_id')['hotspot'].first().to_numpy(), show_edges=True)\n", "plotter.add_scalar_bar()\n", "plotter.show()" @@ -169,9 +166,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.9" + "version": "3.11.0" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/demos/lifetime_calc.ipynb b/demos/lifetime_calc.ipynb index a74839c0..50b00792 100644 --- a/demos/lifetime_calc.ipynb +++ b/demos/lifetime_calc.ipynb @@ -61,10 +61,7 @@ "# mpl.style.use('seaborn')\n", "# mpl.style.use('seaborn-notebook')\n", "mpl.style.use('bmh')\n", - "%matplotlib inline\n", - "\n", - "pv.set_plot_theme('document')\n", - "pv.set_jupyter_backend('panel')" + "%matplotlib inline" ] }, { @@ -255,7 +252,10 @@ { "cell_type": "markdown", "metadata": { - "collapsed": true + "collapsed": true, + "jupyter": { + "outputs_hidden": true + } }, "source": [ "#### Plot the damage vs collectives" @@ -433,9 +433,8 @@ "outputs": [], "source": [ "grid = pv.UnstructuredGrid(*pyLife_mesh.mesh.vtk_data())\n", - "plotter = pv.Plotter(window_size=[1920, 1400])\n", - "plotter.add_mesh(grid, scalars=damage.to_numpy(), log_scale=True,\n", - " show_edges=True, cmap='jet')\n", + "plotter = pv.Plotter()\n", + "plotter.add_mesh(grid, scalars=damage.to_numpy(), log_scale=True, show_edges=True, cmap='jet')\n", "plotter.add_scalar_bar()\n", "plotter.show()" ] @@ -516,7 +515,8 @@ "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", - "pygments_lexer": "ipython3" + "pygments_lexer": "ipython3", + "version": "3.11.0" } }, "nbformat": 4, diff --git 
a/demos/local_stress_with_FE.ipynb b/demos/local_stress_with_FE.ipynb index 23ae55fd..8bc6ecde 100644 --- a/demos/local_stress_with_FE.ipynb +++ b/demos/local_stress_with_FE.ipynb @@ -32,11 +32,7 @@ "import pylife.stress\n", "import pylife.strength.fatigue\n", "import pylife.utils.histogram as psh\n", - "import pyvista as pv\n", - "\n", - "# from ansys.dpf import post\n", - "pv.set_plot_theme('document')\n", - "pv.set_jupyter_backend('panel')" + "import pyvista as pv\n" ] }, { diff --git a/demos/stress_gradient.ipynb b/demos/stress_gradient.ipynb index 46844b32..e32cc75a 100644 --- a/demos/stress_gradient.ipynb +++ b/demos/stress_gradient.ipynb @@ -24,10 +24,7 @@ "import pylife.mesh.gradient\n", "import pylife.vmap\n", "\n", - "import pyvista as pv\n", - "\n", - "pv.set_plot_theme('document')\n", - "pv.set_jupyter_backend('panel')" + "import pyvista as pv" ] }, { @@ -44,7 +41,7 @@ "outputs": [], "source": [ "vm_mesh = pylife.vmap.VMAPImport(\"plate_with_hole.vmap\")\n", - "pyLife_mesh = (vm_mesh.make_mesh('1', 'STATE-2')\n", + "mesh = (vm_mesh.make_mesh('1', 'STATE-2')\n", " .join_coordinates()\n", " .join_variable('STRESS_CAUCHY')\n", " .to_frame())" @@ -63,7 +60,7 @@ "metadata": {}, "outputs": [], "source": [ - "pyLife_mesh['mises'] = pyLife_mesh.equistress.mises()" + "mesh['mises'] = mesh.equistress.mises()" ] }, { @@ -79,7 +76,10 @@ "metadata": {}, "outputs": [], "source": [ - "grad = pyLife_mesh.gradient.gradient_of('mises')" + "grad = mesh.gradient.gradient_of('mises')\n", + "grad[\"abs_grad\"] = np.linalg.norm(grad, axis = 1)\n", + "mesh = mesh.join(grad)\n", + "display(mesh)\n" ] }, { @@ -88,9 +88,11 @@ "metadata": {}, "outputs": [], "source": [ - "grad[\"abs_grad\"] = np.linalg.norm(grad, axis = 1)\n", - "pyLife_mesh = pyLife_mesh.join(grad)\n", - "display(pyLife_mesh)\n" + "grid = pv.UnstructuredGrid(*mesh.mesh.vtk_data())\n", + "plotter = pv.Plotter()\n", + "plotter.add_mesh(grid, scalars=mesh.groupby('element_id')[\"abs_grad\"].mean().to_numpy(), show_edges=True, cmap='jet')\n", + "plotter.add_scalar_bar()\n", + "plotter.show()" ] }, { @@ -99,12 +101,7 @@ "metadata": {}, "outputs": [], "source": [ - "grid = pv.UnstructuredGrid(*pyLife_mesh.mesh.vtk_data())\n", - "plotter = pv.Plotter(window_size=[1920, 1080])\n", - "plotter.add_mesh(grid, scalars=pyLife_mesh.groupby('element_id')[\"abs_grad\"].mean().to_numpy(),\n", - " show_edges=True, cmap='jet')\n", - "plotter.add_scalar_bar()\n", - "plotter.show()" + "\"That's it\"" ] } ], @@ -123,10 +120,11 @@ "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", - "pygments_lexer": "ipython3" + "pygments_lexer": "ipython3", + "version": "3.11.0" }, "name": "stress_gradient.ipynb" }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/demos/stress_strength.ipynb b/demos/stress_strength.ipynb index e0697a28..48e16043 100644 --- a/demos/stress_strength.ipynb +++ b/demos/stress_strength.ipynb @@ -338,7 +338,8 @@ "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", - "pygments_lexer": "ipython3" + "pygments_lexer": "ipython3", + "version": "3.11.0" } }, "nbformat": 4, diff --git a/docs/3rd-party-licenses.md b/docs/3rd-party-licenses.md deleted file mode 100644 index 5a39b9ae..00000000 --- a/docs/3rd-party-licenses.md +++ /dev/null @@ -1,5 +0,0 @@ -# 3rd Party Licenses - -```{literalinclude} ../3rd-party-licenses.txt -:language: text -``` diff --git a/docs/3rd-party-licenses.txt b/docs/3rd-party-licenses.txt new file mode 100644 index 00000000..2cb0ad01 --- /dev/null +++ 
b/docs/3rd-party-licenses.txt
@@ -0,0 +1,5 @@
+3rd Party Licenses
+==================
+
+.. literalinclude:: ../3rd-party-licenses.txt
+   :language: text
diff --git a/docs/CODINGSTYLE.md b/docs/CODINGSTYLE.md
deleted file mode 100644
index 4a2989a6..00000000
--- a/docs/CODINGSTYLE.md
+++ /dev/null
@@ -1,2 +0,0 @@
-```{include} ../CODINGSTYLE.md
-```
diff --git a/docs/CODINGSTYLE.rst b/docs/CODINGSTYLE.rst
new file mode 100644
index 00000000..100396f6
--- /dev/null
+++ b/docs/CODINGSTYLE.rst
@@ -0,0 +1,328 @@
+
+pyLife coding style guidelines
+==============================
+
+Introduction
+------------
+
+One crucial quality criterion of program code is maintainability. In order to
+maintain code, the code has to be written clearly so that it is easily readable
+to someone who has not written it. Therefore it is helpful to have a consistent
+coding style with consistent naming conventions. However, the coding style
+rules are not supposed to be strict rules. They can be disobeyed if there are
+good reasons to do so.
+
+As we are programming in Python we largely stick to the `PEP8 coding style
+guide <https://www.python.org/dev/peps/pep-0008/>`_. That document is generally
+recommended reading for python programmers. This document therefore covers only
+things that go beyond the PEP8. So please read PEP8 for the general
+recommendations on python programming.
+
+Clean code
+^^^^^^^^^^
+
+The notion of code quality that keeps software maintainable, makes it easier to
+find and fix bugs and so on is nowadays referred to by the expression *Clean
+Code*.
+
+The iconic figure behind that notion is `Robert C. Martin
+<https://en.wikipedia.org/wiki/Robert_C._Martin>`_ aka Uncle Bob. For the full
+story about clean code you can read his books *Clean Code* and *Clean Coders*.
+Some of his lectures about Clean Code are available on Youtube.
+
+Use a linter and let your editor help you
+-----------------------------------------
+
+A linter is a tool that scans your code and shows you where you are not
+following the coding style guidelines. The anaconda environment of
+``environment.yml`` comes with flake8 and pep8-naming, which warn about a lot
+of things. It is best to configure your editor so that it shows you the linter
+warnings as you type.
+
+Many editors have some other useful helpers. For example whitespace cleanup,
+i.e. deleting any trailing whitespace as soon as you save the file.
+
+Line lengths
+------------
+
+Lines should usually not exceed 90 characters. Exceeding that by a bit once in
+a while is ok, though. Please do *never* exceed 125 characters because that's
+the width of the GitHub code viewer.
+
+Naming conventions
+------------------
+
+By naming conventions the programmer can give the reader of the program some
+indication of what an identifier is supposed to be or what it is referring to.
+Hence some consistency guidelines.
+
+Mandatory names throughout the pyLife code base
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For variables representing physical quantities, we have a dedicated document in
+the documentation. Please follow the points discussed there.
+
+Module names
+^^^^^^^^^^^^
+
+For module names, try to find one-word names like ``rainflow``, ``gradient``.
+If you absolutely need word separation in a module name, use
+``snake_case``. *Never* use dashes (``-``) and capital letters in module
+names. They lead to all kinds of problems.
+
+Class names
+^^^^^^^^^^^
+
+Class names are usually short and a single or compound noun. For these short
+names we use the so called ``CamelCase`` style:
+
+.. code-block:: python
+
+   class DataObjectReader:
+       ...
+
+Function names
+^^^^^^^^^^^^^^
+
+Function and variable names can be longer than class names. Especially function
+names tend to be actual sentences like:
+
+.. code-block:: python
+
+   def calc_all_data_from_scratch():
+       ...
+
+These are way more readable in the so called ``lowercase_with_underscores``
+style.
+
+Variable names
+^^^^^^^^^^^^^^
+
+Variable names can be shorter as long as they are local. For example when you
+store the result of a function in a variable that the function finally returns,
+don't call it ``result_to_be_returned`` but only ``res``. A rule of thumb is
+that the name of a variable needs to be descriptive if the code part in which
+the variable is used exceeds what you can capture at a glance.
+
+Class method names
+^^^^^^^^^^^^^^^^^^
+
+There are a couple of conventions that make it easier to understand the API of
+a class.
+
+To access the data items of a class we used to use getter and setter
+functions. A better and more modern way is python's ``@property`` decorator.
+
+.. tip::
+
+   Think twice before you implement setters. Oftentimes you simply don't need
+   them as you usually create an object, use it and then throw it away without
+   ever changing it. So don't implement setters unless you *really* need
+   them. Even if you really need them it is often better to create a modified
+   copy of a data structure rather than modifying the original. Modifying
+   objects can have side effects which can lead to subtle bugs.
+
+.. code-block:: python
+
+   class ExampleClass:
+       def __init__(self):
+           self._foo = 23
+           self._bar = 42
+           self._sum = None
+
+       @property
+       def foo(self):
+           """ getter functions have the name of the accessed data item
+           """
+           return self._foo
+
+       @foo.setter
+       def foo(self, v):
+           """ setter functions also have the name of the accessed data item
+           and are marked with the ``@<name>.setter`` decorator
+           """
+           if v < 0:  # sanity check
+               raise Exception("Value for foo must be >= 0")
+           self._foo = v
+
+       def calc_sum_of_foo_and_bar(self):
+           """ class methods whose name does not imply that they return data
+           should not return anything.
+           """
+           self._sum = self._foo + self._bar
+
+The old style getter and setter functions like ``set_foo(self, new_foo)`` are
+still tolerable but should be avoided in new code. Before major releases we
+might dig through the code and replace them with ``@property`` where feasible.
+
+Structuring of the code
+-----------------------
+
+Data encapsulation
+^^^^^^^^^^^^^^^^^^
+
+One big advantage of object oriented programming is the so called data
+encapsulation. That means that items of a class that are intended only for
+internal use can be made inaccessible from outside of the class. Python does
+not strictly enforce that concept, but in order to make it clear to the reader
+of the code, we mark every class method and every class member variable that is
+not meant to be accessed from outside the class with a leading underscore ``_``
+like:
+
+.. code-block:: python
+
+   class Foo:
+
+       def __init__(self):
+           self.public_variable = 'bar'
+           self._private_variable = 'baz'
+
+       def public_method(self):
+           ...
+
+       def _private_method(self):
+           ...
+
+Object orientation
+^^^^^^^^^^^^^^^^^^
+
+Usually it makes sense to compound data structures and the functions using
+these data structures into classes. The data structures then become class
+members and the functions become class methods. This object oriented way of
+doing things is recommended but not always necessary. Sets of simple utility
+routines can also be autonomous functions, as the following sketch illustrates.
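+A minimal sketch of the distinction (hypothetical names, reusing the triangle
+example from the commenting section below):
+
+.. code-block:: python
+
+   import numpy as np
+
+   # data (the sides and the angle) and the methods working on them
+   # compound into one class
+   class Triangle:
+       def __init__(self, a, b, gamma):
+           self.a = a
+           self.b = b
+           self.gamma = gamma
+
+       def hypot(self):
+           return np.sqrt(self.a*self.a + self.b*self.b
+                          - 2*self.a*self.b*np.cos(self.gamma))
+
+   # a simple utility routine stays an autonomous function in a module
+   def deg_to_rad(angle):
+       return angle * np.pi / 180.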
+
+As a rule of thumb: If the user of some functionality needs to keep around a
+data structure for a longer time and make several different function calls that
+deal with the same data structure, it is probably a good idea to put everything
+into a class.
+
+Do not just put functions into a class because they belong semantically
+together. That is what python modules are there for.
+
+Functions and methods
+^^^^^^^^^^^^^^^^^^^^^
+
+Functions are not only there for sharing code but also to divide code into
+easily manageable pieces. Therefore functions should be short and sweet and do
+just one thing. If a function does not fit into your editor window, you should
+consider splitting it into smaller pieces. Even more so if you need to scroll
+in order to find out where a loop or an if statement begins and ends. Ideally
+a function should be so short that it is no longer *possible* to extract a
+piece of it.
+
+Commenting
+^^^^^^^^^^
+
+Programmers are taught in basic programming lessons that comments are
+important. However, a more modern point of view is that comments are only the
+last resort if the code is so obscure that the reader needs the comment to
+understand it. Generally it would be better to write the code in a way that it
+speaks for itself. That's why keeping functions short is so
+important. Extracting a code block of a function into another function makes
+the code more readable, because the new function has a name.
+
+*Bad* example:
+
+.. code-block:: python
+
+   def some_function(data, parameters):
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # hard to figure out
+       ...  # what it is doing
+       if parameters['use_method_1']:
+           ...  # a bunch of code
+           ...  # over several lines
+           ...  # hard to figure out
+           ...  # what it is doing
+       else:
+           ...  # a bunch of code
+           ...  # over several lines
+           ...  # hard to figure out
+           ...  # what it is doing
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # hard to figure out
+       ...  # what it is doing
+
+*Good* example:
+
+.. code-block:: python
+
+   def prepare(data, parameters):
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # easily understandable
+       ...  # by the function's name
+
+   def cleanup(data, parameters):
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # easily understandable
+       ...  # by the function's name
+
+   def method_1(data):
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # easily understandable
+       ...  # by the function's name
+
+   def other_method(data):
+       ...  # a bunch of code
+       ...  # over several lines
+       ...  # easily understandable
+       ...  # by the function's name
+
+   def some_function(data, parameters):
+       prepare(data, parameters)
+       if parameters['use_method_1']:
+           method_1(data)
+       else:
+           other_method(data)
+       cleanup(data, parameters)
+
+Ideally the only comments that you need are docstrings that document the public
+interface of your functions and classes.
+
+Compare the following functions:
+
+*Bad* example:
+
+.. code-block:: python
+
+   def hypot(triangle):
+
+       # reading in a
+       a = triangle.get_a()
+
+       # reading in b
+       b = triangle.get_b()
+
+       # reading in gamma
+       gamma = triangle.get_gamma()
+
+       # calculate c
+       c = np.sqrt(a*a + b*b - 2*a*b*np.cos(gamma))
+
+       # return result
+       return c
+
+Everyone sees that you read in some parameter ``a``. Everyone sees that you read
+in some parameter ``b`` and ``gamma``. Everyone sees that you calculate and return
+some value ``c``. But what is it that you are doing?
+
+Now the *good* example:
+
+.. code-block:: python
+
+   def hypot(triangle):
+       """Calculate the hypotenuse of a triangle using the law of cosines
+
+       https://en.wikipedia.org/wiki/Law_of_cosines
+       """
+       a = triangle.a
+       b = triangle.b
+       gamma = triangle.gamma
+
+       return np.sqrt(a*a + b*b - 2*a*b*np.cos(gamma))
diff --git a/docs/conf.py b/docs/conf.py
index b68191a0..ce3306e7 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,6 +28,8 @@
 vdisplay = Xvfb()
 vdisplay.start()
 
+os.environ['PYDEVD_DISABLE_FILE_VALIDATION'] = "1"
+
 ipython_dir = os.path.join(__projectdir__, "_build", "ipythondir")
 os.environ['IPYTHONDIR'] = ipython_dir
@@ -47,9 +49,31 @@
     ep.preprocess(nb, {'metadata': {'path': os.path.dirname(fn)}})
 
+# Inject code to set pyvista jupyter backend into relevant notebooks
+#
+import nbsphinx
+original_from_notebook_node = nbsphinx.Exporter.from_notebook_node
+
+from nbconvert.preprocessors import Preprocessor
+
+def from_notebook_node(self, nb, resources, **kwargs):
+
+    class InjectPyvistaBackendPreprocessor(Preprocessor):
+        def preprocess(self, nb, resources):
+            for cell in nb.cells:
+                if cell['cell_type'] == 'code' and 'import pyvista as pv' in cell['source']:
+                    cell['source'] += "\npv.set_jupyter_backend('html')\n"
+                    break
+            return nb, resources
+
+    nb, resources = InjectPyvistaBackendPreprocessor().preprocess(nb, resources=resources)
+    return original_from_notebook_node(self, nb, resources, **kwargs)
+
+nbsphinx.Exporter.from_notebook_node = from_notebook_node
+
+
 # Don't try to use C-code for the pytensor stuff when building the docs.
 # Otherwise the demo notebooks fail on readthedocs
-os.environ['PYTENSOR_FLAGS'] = 'cxx=""'
 os.environ['PYTHONWARNINGS'] = 'ignore'
 
 import asyncio
@@ -129,7 +153,11 @@
 templates_path = ["_templates"]
 
 # The suffix of source filenames.
-source_suffix = [".rst", ".md", ".txt", ".ipynb"]
+source_suffix = {
+    '.rst': 'restructuredtext',
+    '.txt': 'restructuredtext',
+    '.md': 'markdown',
+}
 
 # The encoding of source files.
 # source_encoding = 'utf-8-sig'
diff --git a/docs/reference.rst b/docs/reference.rst
index f35fc2b7..eafa380e 100644
--- a/docs/reference.rst
+++ b/docs/reference.rst
@@ -4,14 +4,14 @@ pyLife Reference
 General
 -------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    general/signal
 
 Stress
 ------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    stress/index
    stress/equistress
@@ -25,7 +25,7 @@ Stress
 Strength
 --------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    strength/fatigue
    strength/meanstress
@@ -35,7 +35,7 @@ Strength
 Materiallaws
 ------------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    materiallaws/hookeslaw
    materiallaws/rambgood
@@ -45,7 +45,7 @@ Materiallaws
 Materialdata
 ------------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    materialdata/woehler
 
@@ -53,7 +53,7 @@ Materialdata
 Mesh utilities
 --------------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    mesh/meshsignal
    mesh/hotspot
@@ -63,7 +63,7 @@ Mesh utilities
 VMAP Interface
 --------------
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    vmap/vmap.rst
    vmap/vmap_import.rst
@@ -72,7 +72,7 @@ VMAP Interface
 Utils
 -----
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    utils/functions
    utils/histogram
diff --git a/setup.cfg b/setup.cfg
index c70dcd50..4c376fb2 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -47,9 +47,9 @@
 python_requires = >=3.8
 
 # new major versions. This works if the required packages follow Semantic Versioning.
 # For more information, check out https://semver.org/.
 install_requires =
-    numpy==1.23.5
+    numpy>=1.23.5
     scipy
-    pandas>=1.4.0,<2.1
+    pandas>=1.4.0,!=2.1
     h5py!=3.7.0
     matplotlib
     cython
@@ -70,11 +70,11 @@ testing =
     pytest-cov
     hypothesis
     pyvista
-    panel
     xvfbwrapper
     testbook
     ipykernel
     ipywidgets
+    ipython<8.17.0  # https://github.com/ipython/ipython/issues/14235
 
 docs =
     sphinx
@@ -84,8 +84,11 @@ docs =
     plotly
     jupyter_sphinx
     myst_parser
-    panel
-    pyvista
+    pyvista[all,jupyter]
+    trame<3
+    ipywidgets
+    ipython<8.17.0  # https://github.com/ipython/ipython/issues/14235
+    ipykernel
     xvfbwrapper
 
@@ -95,11 +98,11 @@ analysis =
 
 tsfresh =
     tsfresh
-    numpy>=1.23
 
 pymc =
     pymc
     bambi
+    numpy<1.26  # https://github.com/pymc-devs/pymc/issues/7043#issuecomment-1837141888
 
 all =
     %(tsfresh)s
     %(pymc)s
diff --git a/src/pylife/materiallaws/woehlercurve.py b/src/pylife/materiallaws/woehlercurve.py
index dee366b1..e734f1b4 100644
--- a/src/pylife/materiallaws/woehlercurve.py
+++ b/src/pylife/materiallaws/woehlercurve.py
@@ -129,25 +129,38 @@ def transform_to_failure_probability(self, failure_probability):
         return WoehlerCurve(transformed)
 
+    def miner_original(self):
+        """Set k_2 to inf according to the Miner Original method (k_2 = inf).
+
+        Returns
+        -------
+        WoehlerCurve
+            A modified copy of self
+        """
+        new = self._obj.copy()
+        new['k_2'] = np.inf
+        return self.__class__(new)
+
     def miner_elementary(self):
         """Set k_2 to k_1 according Miner Elementary method (k_2 = k_1).
 
         Returns
         -------
-        self
+        WoehlerCurve
+            A modified copy of self
         """
-        self._obj['k_2'] = self._obj.k_1
-        return self
+        new = self._obj.copy()
+        new['k_2'] = self._obj.k_1
+        return self.__class__(new)
 
     def miner_haibach(self):
         """Set k_2 to value according Miner Haibach method (k_2 = 2 * k_1 - 1).
 
         Returns
        -------
-        self
+        WoehlerCurve
+            A modified copy of self
         """
-        self._obj['k_2'] = 2. * self._obj.k_1 - 1.
-        return self
+        new = self._obj.copy()
+        new['k_2'] = 2. * self._obj.k_1 - 1.
+        return self.__class__(new)
 
     def cycles(self, load, failure_probability=0.5):
         """Calculate the cycles numbers from loads.
@@ -214,16 +227,14 @@ def basquin_cycles(self, load, failure_probability=0.5):
         cycles : numpy.ndarray
             The cycle numbers at which the component fails for the given `load` values
         """
-        transformed = self.transform_to_failure_probability(failure_probability)
+        def ensure_float_to_prevent_int_overflow(load):
+            if isinstance(load, pd.Series):
+                return pd.Series(load, dtype=np.float64)
+            return np.asarray(load, dtype=np.float64)
 
-        if hasattr(load, '__iter__'):
-            if hasattr(load, 'astype'):
-                load = load.astype('float64')
-            else:
-                load = np.array(load).astype('float64')
-        else:
-            load = float(load)
+        transformed = self.transform_to_failure_probability(failure_probability)
+        load = ensure_float_to_prevent_int_overflow(load)
 
         ld, wc = transformed.broadcast(load)
         cycles = np.full_like(ld, np.inf)
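Since the behavioral change of the miner modifiers is subtle, here is a short
usage sketch (hypothetical values, mirroring the updated tests below):

```python
import numpy as np
import pandas as pd
from pylife.materiallaws import WoehlerCurve  # registers the .woehler accessor

wc_data = pd.Series({'k_1': 7.0, 'ND': 1e6, 'SD': 300.0})

# miner_haibach() now returns a modified copy ...
assert wc_data.woehler.miner_haibach().k_2 == 13.0   # 2 * k_1 - 1

# ... while the original data stays untouched
# (k_2 defaults to np.inf, i.e. Miner Original)
assert wc_data.woehler.k_2 == np.inf
```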
diff --git a/tests/materiallaws/test_woehlercurve.py b/tests/materiallaws/test_woehlercurve.py
index f8617a00..34916894 100644
--- a/tests/materiallaws/test_woehlercurve.py
+++ b/tests/materiallaws/test_woehlercurve.py
@@ -26,7 +26,9 @@
 from pylife.materiallaws import WoehlerCurve
 
-wc_data = pd.Series({
+@pytest.fixture
+def wc_data():
+    return pd.Series({
     'k_1': 7.,
     'TN': 1.75,
     'ND': 1e6,
     'SD': 300.0
 })
 
 
-def test_woehler_accessor():
+def test_woehler_accessor(wc_data):
     wc = wc_data.drop('TN')
 
     for key in wc.index:
@@ -129,7 +131,7 @@ def test_woehler_transform_probability_SD_0():
     pd.testing.assert_series_equal(transformed_back, wc_50)
 
-def test_woehler_basquin_cycles_50_single_load_single_wc():
+def test_woehler_basquin_cycles_50_single_load_single_wc(wc_data):
     load = 500.
 
     cycles = wc_data.woehler.basquin_cycles(load)
@@ -138,7 +140,7 @@
     np.testing.assert_allclose(cycles, expected_cycles, rtol=1e-4)
 
-def test_woehler_basquin_cycles_50_multiple_load_single_wc():
+def test_woehler_basquin_cycles_50_multiple_load_single_wc(wc_data):
     load = [200., 300., 400., 500.]
 
     cycles = wc_data.woehler.basquin_cycles(load)
@@ -150,13 +152,13 @@
 def test_woehler_basquin_cycles_50_single_load_multiple_wc():
     load = 400.
 
-    wc_data = pd.DataFrame({
+    wc = pd.DataFrame({
         'k_1': [1., 2., 2.],
         'SD': [300., 400., 500.],
         'ND': [1e6, 1e6, 1e6]
     })
 
-    cycles = wc_data.woehler.basquin_cycles(load)
+    cycles = wc.woehler.basquin_cycles(load)
 
     expected_cycles = [7.5e5, 1e6, np.inf]
     np.testing.assert_allclose(cycles, expected_cycles, rtol=1e-4)
@@ -165,13 +167,13 @@ def test_woehler_basquin_cycles_50_single_load_multiple_wc():
 def test_woehler_basquin_cycles_50_multiple_load_multiple_wc():
     load = [3000., 400., 500.]
 
-    wc_data = pd.DataFrame({
+    wc = pd.DataFrame({
         'k_1': [1., 2., 2.],
         'SD': [300., 400., 500.],
         'ND': [1e6, 1e6, 1e6]
     })
 
-    cycles = wc_data.woehler.basquin_cycles(load)
+    cycles = wc.woehler.basquin_cycles(load)
 
     expected_cycles = [1e5, 1e6, 1e6]
     np.testing.assert_allclose(cycles, expected_cycles, rtol=1e-4)
@@ -181,13 +183,13 @@ def test_woehler_basquin_cycles_50_multiple_load_multiple_wc_aligned_index():
     index = pd.Index([1, 2, 3], name='element_id')
     load = pd.Series([3000., 400., 500.], index=index)
 
-    wc_data = pd.DataFrame({
+    wc = pd.DataFrame({
         'k_1': [1., 2., 2.],
         'SD': [300., 400., 500.],
         'ND': [1e6, 1e6, 1e6]
     }, index=index)
 
-    cycles = wc_data.woehler.basquin_cycles(load)
+    cycles = wc.woehler.basquin_cycles(load)
 
     expected_cycles = pd.Series([1e5, 1e6, 1e6], index=index)
     pd.testing.assert_series_equal(cycles, expected_cycles, rtol=1e-4)
@@ -196,13 +198,13 @@ def test_woehler_basquin_cycles_50_multiple_load_multiple_wc_cross_index():
     load = pd.Series([3000., 400., 500.], index=pd.Index([1, 2, 3], name='scenario'))
 
-    wc_data = pd.DataFrame({
+    wc = pd.DataFrame({
         'k_1': [1., 2., 2.],
         'SD': [300., 400., 500.],
         'ND': [1e6, 1e6, 1e6]
     }, pd.Index([1, 2, 3], name='element_id'))
 
-    cycles = wc_data.woehler.basquin_cycles(load)
+    cycles = wc.woehler.basquin_cycles(load)
 
     expected_index = pd.MultiIndex.from_tuples([
         (1, 1), (1, 2), (1, 3),
         (2, 1), (2, 2), (2, 3),
@@ -212,7 +214,7 @@
     pd.testing.assert_index_equal(cycles.index, expected_index)
 
-def test_woehler_basquin_cycles_50_same_k():
+def test_woehler_basquin_cycles_50_same_k(wc_data):
     load = [200., 300., 400., 500.]
 
     wc = wc_data.copy()
@@ -223,7 +225,7 @@
     np.testing.assert_approx_equal(calculated_k, wc.k_1)
 
-def test_woehler_basquin_cycles_10_90():
+def test_woehler_basquin_cycles_10_90(wc_data):
     load = [200., 300., 400., 500.]
cycles_10 = wc_data.woehler.basquin_cycles(load, 0.1)[1:] @@ -233,7 +235,7 @@ def test_woehler_basquin_cycles_10_90(): np.testing.assert_allclose(cycles_90/cycles_10, expected) -def test_woehler_basquin_load_50_single_cycles_single_wc(): +def test_woehler_basquin_load_50_single_cycles_single_wc(wc_data): cycles = 27994 load = wc_data.woehler.basquin_load(cycles) @@ -242,7 +244,7 @@ def test_woehler_basquin_load_50_single_cycles_single_wc(): np.testing.assert_allclose(load, expected_load, rtol=1e-4) -def test_woehler_basquin_load_50_multiple_cycles_single_wc(): +def test_woehler_basquin_load_50_multiple_cycles_single_wc(wc_data): cycles = [np.inf, 1e6, 133484, 27994] load = wc_data.woehler.basquin_load(cycles) @@ -253,26 +255,26 @@ def test_woehler_basquin_load_50_multiple_cycles_single_wc(): def test_woehler_basquin_load_single_cycles_multiple_wc(): cycles = 1e6 - wc_data = pd.DataFrame({ + wc = pd.DataFrame({ 'k_1': [1., 2., 2.], 'SD': [300., 400., 500.], 'ND': [1e5, 1e6, 1e6] }) - load = wc_data.woehler.basquin_load(cycles) + load = wc.woehler.basquin_load(cycles) expected_load = [300., 400., 500.] np.testing.assert_allclose(load, expected_load) def test_woehler_basquin_load_multiple_cycles_multiple_wc(): cycles = [1e5, 1e6, 1e7] - wc_data = pd.DataFrame({ + wc = pd.DataFrame({ 'k_1': [1., 2., 2.], 'SD': [300., 400., 500.], 'ND': [1e6, 1e6, 1e6] }) - load = wc_data.woehler.basquin_load(cycles) + load = wc.woehler.basquin_load(cycles) expected_load = [3000., 400., 500.] np.testing.assert_allclose(load, expected_load) @@ -281,14 +283,14 @@ def test_woehler_basquin_load_multiple_cycles_multiple_wc_aligned_index(): index = pd.Index([1, 2, 3], name='element_id') cycles = pd.Series([1e5, 1e6, 1e7], index=index) - wc_data = pd.DataFrame({ + wc = pd.DataFrame({ 'k_1': [1., 2., 2.], 'SD': [300., 400., 500.], 'ND': [1e6, 1e6, 1e6], }, index=index) expected_load = pd.Series([3000., 400., 500.], index=cycles.index) - load = wc_data.woehler.basquin_load(cycles) + load = wc.woehler.basquin_load(cycles) pd.testing.assert_series_equal(load, expected_load) @@ -296,7 +298,7 @@ def test_woehler_basquin_load_multiple_cycles_multiple_wc_aligned_index(): def test_woehler_basquin_load_multiple_cycles_multiple_wc_cross_index(): cycles = pd.Series([1e5, 1e6, 1e7], index=pd.Index([1, 2, 3], name='scenario')) - wc_data = pd.DataFrame({ + wc = pd.DataFrame({ 'k_1': [1., 2., 2.], 'SD': [300., 400., 500.], 'ND': [1e6, 1e6, 1e6], @@ -308,12 +310,12 @@ def test_woehler_basquin_load_multiple_cycles_multiple_wc_cross_index(): (3, 1), (3, 2), (3, 3), ], names=['element_id', 'scenario']) - load = wc_data.woehler.basquin_load(cycles) + load = wc.woehler.basquin_load(cycles) pd.testing.assert_index_equal(load.index, expected_index) -def test_woehler_basquin_load_50_same_k(): +def test_woehler_basquin_load_50_same_k(wc_data): cycles = [1e7, 1e6, 1e5, 1e4] wc = wc_data.copy() @@ -324,7 +326,7 @@ def test_woehler_basquin_load_50_same_k(): np.testing.assert_approx_equal(calculated_k, wc.k_1) -def test_woehler_basquin_load_10_90(): +def test_woehler_basquin_load_10_90(wc_data): cycles = [1e2, 1e7] load_10 = wc_data.woehler.basquin_load(cycles, 0.1) @@ -335,45 +337,47 @@ def test_woehler_basquin_load_10_90(): np.testing.assert_allclose(load_90/load_10, expected, rtol=1e-4) -def test_woehler_basquin_load_integer_cycles(): +def test_woehler_basquin_load_integer_cycles(wc_data): wc_data.woehler.basquin_load(1000) -def test_woehler_basquin_cycles_integer_load(): +def test_woehler_basquin_cycles_integer_load(wc_data): 
wc_data.woehler.basquin_cycles(200) -wc_int_overflow = pd.Series({ - 'k_1': 4., - 'SD': 100., - 'ND': 1e6 -}) +@pytest.fixture +def wc_int_overflow(): + return pd.Series({ + 'k_1': 4., + 'SD': 100., + 'ND': 1e6 + }) -def test_woehler_integer_overflow_scalar(): +def test_woehler_integer_overflow_scalar(wc_int_overflow): assert wc_int_overflow.woehler.basquin_cycles(50) > 0.0 -def test_woehler_integer_overflow_list(): +def test_woehler_integer_overflow_list(wc_int_overflow): assert (wc_int_overflow.woehler.basquin_cycles([50, 50]) > 0.0).all() -def test_woehler_integer_overflow_series(): +def test_woehler_integer_overflow_series(wc_int_overflow): load = pd.Series([50, 50], index=pd.Index(['foo', 'bar'])) cycles = wc_int_overflow.woehler.basquin_cycles(load) assert (cycles > 0.0).all() pd.testing.assert_index_equal(cycles.index, load.index) -def test_woehler_ND(): +def test_woehler_ND(wc_data): assert wc_data.woehler.ND == 1e6 -def test_woehler_SD(): +def test_woehler_SD(wc_data): assert wc_data.woehler.SD == 300 -def test_woehler_k_1(): +def test_woehler_k_1(wc_data): assert wc_data.woehler.k_1 == 7. @@ -387,7 +391,7 @@ def test_woehler_TS_and_TN_guessed(): assert wc.woehler.TS == 1.0 -def test_woehler_TS_guessed(): +def test_woehler_TS_guessed(wc_data): wc = wc_data.copy() wc['k_1'] = 0.5 wc['TN'] = 1.5 @@ -406,43 +410,63 @@ def test_woehler_TN_guessed(): assert wc.woehler.TN == 1.5 -def test_woehler_TS_given(): +def test_woehler_TS_given(wc_data): wc_full = wc_data.copy() wc_full['TS'] = 1.25 assert wc_full.woehler.TS == 1.25 -def test_woehler_TN_given(): +def test_woehler_TN_given(wc_data): wc_full = wc_data.copy() wc_full['TN'] = 1.75 assert wc_full.woehler.TN == 1.75 -def test_woehler_pf_guessed(): +def test_woehler_pf_guessed(wc_data): assert wc_data.woehler.failure_probability == 0.5 -def test_woehler_pf_given(): +def test_woehler_pf_given(wc_data): wc = wc_data.copy() wc['failure_probability'] = 0.1 assert wc.woehler.failure_probability == 0.1 -def test_woehler_miner_original(): +def test_woehler_miner_original_as_default(wc_data): assert wc_data.woehler.k_2 == np.inf -def test_woehler_miner(): +def test_woehler_miner_original_as_request(wc_data): + wc_data['k_2'] = wc_data.k_1 + assert wc_data.woehler.miner_original().k_2 == np.inf + + +def test_woehler_miner_original_new_object(wc_data): + orig = wc_data.woehler + assert orig.miner_original() is not orig + + +def test_woehler_miner_elementary(wc_data): assert wc_data.woehler.miner_elementary().k_2 == wc_data.k_1 assert wc_data.woehler.miner_elementary().to_pandas().k_2 == wc_data.k_1 -def test_woehler_miner_haibach(): +def test_woehler_miner_elementary_new_object(wc_data): + orig = wc_data.woehler + assert orig.miner_elementary() is not orig + + +def test_woehler_miner_haibach(wc_data): assert wc_data.woehler.miner_haibach().k_2 == 13.0 assert wc_data.woehler.miner_haibach().to_pandas().k_2 == 13.0 -def test_woehler_to_pandas(): +def test_woehler_miner_haibach_new_object(wc_data): + orig = wc_data.woehler + assert orig.miner_haibach() is not orig + + +def test_woehler_to_pandas(wc_data): expected = pd.Series({ 'k_1': 0.5, 'k_2': np.inf, @@ -470,13 +494,19 @@ def test_woehler_to_pandas(): @pytest.mark.parametrize('pf', [0.1, 0.5, 0.9]) def test_woehler_miner_original_homogenious_load(pf): cycles = np.logspace(3., 7., 50) - load = wc_data.woehler.basquin_load(cycles, failure_probability=pf) - assert (np.diff(load) < 0.).all() + wc = pd.Series({ + 'k_1': 7., + 'TN': 1.75, + 'ND': 1e6, + 'SD': 300.0 + }) + load = 
wc.woehler.basquin_load(cycles, failure_probability=pf)
+    assert (np.diff(load) <= 0.).all()
     assert (np.diff(np.diff(load)) >= 0.).all()
 
 
 @pytest.mark.parametrize('pf', [0.1, 0.5, 0.9])
-def test_woehler_miner_original_homogenious_cycles(pf):
+def test_woehler_miner_original_homogenious_cycles(pf, wc_data):
     load = np.logspace(3., 2., 50)
     cycles = wc_data.woehler.basquin_cycles(load, failure_probability=pf)
     assert (np.diff(cycles[np.isfinite(cycles)]) > 0.).all()
diff --git a/tools/odbclient/setup.cfg b/tools/odbclient/setup.cfg
index 3e41bd05..728ac6f1 100644
--- a/tools/odbclient/setup.cfg
+++ b/tools/odbclient/setup.cfg
@@ -42,7 +42,7 @@ package_dir =
 install_requires =
     importlib-metadata; python_version<"3.8"
     pandas
-    numpy==1.23.5
+    numpy
 
 [options.packages.find]
 where = src
diff --git a/tools/odbclient/src/odbclient/odbclient.py b/tools/odbclient/src/odbclient/odbclient.py
index 42dce924..8f37e65e 100644
--- a/tools/odbclient/src/odbclient/odbclient.py
+++ b/tools/odbclient/src/odbclient/odbclient.py
@@ -374,6 +374,88 @@ def variable(self, variable_name, instance_name, step_name, frame_id, nset_name=
         column_names = _ascii(_decode, labels)
         return pd.DataFrame(values, index=index, columns=column_names)
 
+    def history_regions(self, step_name):
+        """Query the history regions of a given step.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+
+        Returns
+        -------
+        history_regions : list of strings
+            The names of the history regions in the given step.
+        """
+        return self._query('get_history_regions', step_name)
+
+    def history_outputs(self, step_name, history_region_name):
+        """Query the history outputs of a given step and history region.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+
+        history_region_name : string
+            The name of the history region
+
+        Returns
+        -------
+        history_outputs : list of strings
+            The names of the history outputs in the given step and history region.
+        """
+        return self._query("get_history_outputs", (step_name, history_region_name))
+
+    def history_output_values(self, step_name, history_region_name, historyoutput_name):
+        """Query the values of a given history output.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+
+        history_region_name : string
+            The name of the history region
+
+        historyoutput_name : string
+            The name of the history output
+
+        Returns
+        -------
+        history_output_values : pandas.Series
+            The values of the history output, indexed by the step time.
+        """
+        hisoutput_valuesx, hisoutput_valuesy = self._query("get_history_output_values", (step_name, history_region_name, historyoutput_name))
+        history_region_description = self._query("get_history_region_description", (step_name, history_region_name))
+        return pd.Series(hisoutput_valuesy, index=hisoutput_valuesx, name=history_region_description + ": " + historyoutput_name)
+
+    def history_region_description(self, step_name, history_region_name):
+        """Query the description of a history region of a given step.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+
+        history_region_name : string
+            The name of the history region
+
+        Returns
+        -------
+        history_region_description : string
+            The description of the history region.
+        """
+        return self._query("get_history_region_description", (step_name, history_region_name))
+
+    def history_info(self):
+        """Query all the information about the history outputs in a given odb.
+
+        Returns
+        -------
+        history_info : dict
+            A dictionary containing the information about the history outputs.
+        """
+        return _decode(self._query("get_history_info"))
+
     def _query(self, command, args=None):
         args = _ascii(_encode, args)
         self._send_command(command, args)
@@ -436,8 +518,13 @@ def _encode(arg):
 
 def _decode(arg):
-    return arg.decode('ascii') if isinstance(arg, bytes) else arg
-
+    if isinstance(arg, bytes):
+        return arg.decode('ascii')
+    if isinstance(arg, dict):
+        return {_decode(key): _decode(value) for key, value in arg.items()}
+    if isinstance(arg, list):
+        return [_decode(element) for element in arg]
+    return arg
+
 
 def _guess_abaqus_bin():
     if sys.platform == 'win32':
@@ -455,7 +542,7 @@ def _guess_abaqus_bin_windows():
     for guess in guesses:
         if os.path.exists(guess):
             return guess
-    return None
+    raise OSError("Could not guess the abaqus binary path. Please supply it explicitly via the abaqus_bin parameter.")
 
 def _guess_pythonpath(python_env_path):
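To make the intended use of the new client-side history API concrete, a short
hypothetical session (the step, region and output names are the ones from the
test odb added below):

```python
import odbclient

client = odbclient.OdbClient('tests/history_output_test.odb')

# Discover what history data the odb contains ...
regions = client.history_regions('Step-1')   # e.g. ['Assembly ASSEMBLY', 'Element ASSEMBLY.1', ...]
outputs = client.history_outputs('Step-1', 'Element ASSEMBLY.1')

# ... and fetch one history output as a pandas Series indexed by step time.
ctf1 = client.history_output_values('Step-1', 'Element ASSEMBLY.1', 'CTF1')
print(ctf1.head())
```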
"Step-2"]}, "Output at assembly ASSEMBLY instance ASSEMBLY node 2 region SET-5": {"History Outputs": ["RF1", "RF2", "RF3", "RM1", "RM2", "RM3", "U1", "U2", "U3", "UR1", "UR2", "UR3"], "History Region": "Node ASSEMBLY.2", "Steps ": ["Step-1", "Step-2"]}, "Output at assembly ASSEMBLY": {"History Outputs": ["ALLAE", "ALLCCDW", "ALLCCE", "ALLCCEN", "ALLCCET", "ALLCCSD", "ALLCCSDN", "ALLCCSDT", "ALLCD", "ALLDMD", "ALLDTI", "ALLEE", "ALLFD", "ALLIE", "ALLJD", "ALLKE", "ALLKL", "ALLPD", "ALLQB", "ALLSD", "ALLSE", "ALLVD", "ALLWK", "ETOTAL"], "History Region": "Assembly ASSEMBLY", "Steps ": ["Step-1", "Step-2"]}} + """) + assert client_history.history_info() == expected diff --git a/tools/odbserver/odbserver/__main__.py b/tools/odbserver/odbserver/__main__.py index dd42d109..0798f83a 100644 --- a/tools/odbserver/odbserver/__main__.py +++ b/tools/odbserver/odbserver/__main__.py @@ -41,7 +41,12 @@ def __init__(self, odbfile): "get_node_set": self.node_set, "get_element_set": self.element_set, "get_variable_names": self.variable_names, - "get_variable": self.variable + "get_variable": self.variable, + "get_history_regions": self.history_regions, + "get_history_outputs": self.history_outputs, + "get_history_output_values": self.history_output_values, + "get_history_region_description": self.history_region_description, + "get_history_info": self.history_info } def instances(self, _args): @@ -102,6 +107,30 @@ def variable(self, args): else: _send_response(variable) + def history_regions(self, step_name): + _send_response(self._odb.history_regions(str(step_name))) + + def history_outputs(self, args): + step_name, historyregion_name = args + step_name = str(step_name) + historyregion_name = str(historyregion_name) + _send_response(self._odb.history_outputs(step_name, historyregion_name)) + + def history_output_values(self, args): + step_name, historyregion_name, historyoutput_name = args + step_name = str(step_name) + historyregion_name = str(historyregion_name) + historyoutput_name = str(historyoutput_name) + _send_response(self._odb.history_output_values(step_name, historyregion_name, historyoutput_name)) + + def history_region_description(self, args): + step_name, historyregion_name = args + step_name = str(step_name) + historyregion_name = str(historyregion_name) + _send_response(self._odb.history_region_description(step_name, historyregion_name)) + + def history_info(self, args): + _send_response(self._odb.history_info()) def _send_response(pickle_data, numpy_arrays=None): numpy_arrays = numpy_arrays or [] diff --git a/tools/odbserver/odbserver/interface.py b/tools/odbserver/odbserver/interface.py index 8b59b2e3..b165a1f9 100644 --- a/tools/odbserver/odbserver/interface.py +++ b/tools/odbserver/odbserver/interface.py @@ -20,6 +20,7 @@ import sys import numpy as np import odbAccess as ODB +import json class OdbInterface: @@ -245,6 +246,172 @@ def _instance_or_rootasm(self, instance_name): except Exception as e: return e + def history_regions(self, step_name): + """Get history regions, which belongs to the given step. + + Parameters + ---------- + step_name : Abaqus steps + It is always required. + + Returns + ------- + histRegions : history regions, which belong to the given step. In case of error it gives an error message. 
+        """
+        try:
+            required_step = self._odb.steps[step_name]
+            histRegions = required_step.historyRegions.keys()
+
+            return histRegions
+
+        except Exception as e:
+            return e
+
+    def history_outputs(self, step_name, historyregion_name):
+        """Get the history outputs that belong to the given step and history region.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+        historyregion_name : string
+            The name of the history region
+
+        Returns
+        -------
+        history_data : list of strings
+            The history outputs that belong to the given step and history region.
+            In case of an error the exception is returned instead.
+        """
+        try:
+            required_step = self._odb.steps[step_name]
+
+            history_data = required_step.historyRegions[historyregion_name].historyOutputs.keys()
+
+            return history_data
+
+        except Exception as e:
+            return e
+
+    def history_output_values(self, step_name, historyregion_name, historyoutput_name):
+        """Get the values of the given history output of the given step and history region.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+        historyregion_name : string
+            The name of the history region
+        historyoutput_name : string
+            The name of the history output
+
+        Returns
+        -------
+        x : numpy.ndarray
+            The time values of the history output.
+        y : numpy.ndarray
+            The values of the history output.
+            In case of an error the exception is returned instead.
+        """
+        try:
+            required_step = self._odb.steps[step_name]
+
+            history_data = required_step.historyRegions[historyregion_name].historyOutputs[historyoutput_name].data
+
+            step_time = required_step.totalTime
+
+            xdata = []
+            ydata = []
+            for ith in history_data:
+                xdata.append(ith[0] + step_time)
+                ydata.append(ith[1])
+
+            x = np.array(xdata)
+            y = np.array(ydata)
+            return x, y
+
+        except Exception as e:
+            return e
+
+    def history_region_description(self, step_name, historyregion_name):
+        """Get the description of the given history region of the given step.
+
+        Parameters
+        ----------
+        step_name : string
+            The name of the step
+        historyregion_name : string
+            The name of the history region
+
+        Returns
+        -------
+        history_description : string
+            The description of the history region as visible in Abaqus.
+            In case of an error the exception is returned instead.
+        """
+        try:
+            required_step = self._odb.steps[step_name]
+
+            history_description = required_step.historyRegions[historyregion_name].description
+
+            return history_description
+
+        except Exception as e:
+            return e
+
+    def history_info(self):
+        """Collect the steps, history regions and history outputs of the odb into a dictionary.
+
+        Returns
+        -------
+        history_info : dict
+            A dictionary containing the information about the history of the
+            given odb file. In case of an error the exception is returned instead.
+        """
+        info = {}
+        try:
+            steps = self._odb.steps.keys()
+
+            for istep in steps:
+                regions = self.history_regions(step_name=istep)
+
+                for iregion in regions:
+
+                    idescription = self.history_region_description(step_name=istep, historyregion_name=iregion)
+
+                    outputs = self.history_outputs(step_name=istep, historyregion_name=iregion)
+
+                    # filter damaged keys into a new list rather than removing
+                    # elements from the list while iterating over it, which
+                    # would skip entries
+                    outputs = [output for output in outputs if "Repeated: key" not in output]
+
+                    steplist = []
+                    for istep2 in steps:
+                        try:
+                            self._odb.steps[istep2].historyRegions[iregion].description
+                            steplist.append(istep2)
+                        except Exception:
+                            continue
+
+                    # the trailing space in the "Steps " key is expected by the
+                    # client side tests
+                    info[idescription] = {
+                        "History Region": iregion,
+                        "History Outputs": outputs,
+                        "Steps ": steplist
+                    }
+
+            return info
+        except Exception as e:
+            return e
+
+
 def _set_position(field, user_request=None):
     """Translate string to symbolic constant and define default behavior.
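Putting the server side together with the client, a sketch of how the
aggregated history information is meant to be consumed (hypothetical script,
using the test odb from above; note the trailing space in the `'Steps '` key):

```python
import odbclient

client = odbclient.OdbClient('tests/history_output_test.odb')

# history_info() aggregates steps, history regions and history outputs
# server-side and ships them over as one nested dictionary
info = client.history_info()
for description, entry in info.items():
    print(description)
    print('  region: ', entry['History Region'])
    print('  outputs:', entry['History Outputs'])
    print('  steps:  ', entry['Steps '])
```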